jackkuo committed on
Commit 050e3f7 · verified · 1 Parent(s): 38949c4

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50)
  1. -9E1T4oBgHgl3EQfCwJj/content/tmp_files/2301.02868v1.pdf.txt +773 -0
  2. -9E1T4oBgHgl3EQfCwJj/content/tmp_files/load_file.txt +0 -0
  3. -9FLT4oBgHgl3EQfDS7k/content/2301.11979v1.pdf +3 -0
  4. -9FLT4oBgHgl3EQfDS7k/vector_store/index.pkl +3 -0
  5. -tE1T4oBgHgl3EQfCwIW/content/2301.02867v1.pdf +3 -0
  6. -tE1T4oBgHgl3EQfCwIW/vector_store/index.faiss +3 -0
  7. -tE1T4oBgHgl3EQfCwIW/vector_store/index.pkl +3 -0
  8. .gitattributes +71 -0
  9. 0NE1T4oBgHgl3EQf4wUX/content/2301.03503v1.pdf +3 -0
  10. 0NE1T4oBgHgl3EQf4wUX/vector_store/index.faiss +3 -0
  11. 0NE1T4oBgHgl3EQf4wUX/vector_store/index.pkl +3 -0
  12. 0dAyT4oBgHgl3EQf1PkS/content/tmp_files/2301.00730v1.pdf.txt +1718 -0
  13. 0dAyT4oBgHgl3EQf1PkS/content/tmp_files/load_file.txt +0 -0
  14. 0tFQT4oBgHgl3EQf0TbP/content/tmp_files/2301.13416v1.pdf.txt +1018 -0
  15. 0tFQT4oBgHgl3EQf0TbP/content/tmp_files/load_file.txt +0 -0
  16. 29AyT4oBgHgl3EQf1vl1/content/tmp_files/2301.00740v1.pdf.txt +1514 -0
  17. 29AyT4oBgHgl3EQf1vl1/content/tmp_files/load_file.txt +0 -0
  18. 2NFAT4oBgHgl3EQfkh06/content/2301.08611v1.pdf +3 -0
  19. 2NFAT4oBgHgl3EQfkh06/vector_store/index.faiss +3 -0
  20. 2NFAT4oBgHgl3EQfkh06/vector_store/index.pkl +3 -0
  21. 2dE0T4oBgHgl3EQfugHS/content/tmp_files/2301.02607v1.pdf.txt +736 -0
  22. 2dE0T4oBgHgl3EQfugHS/content/tmp_files/load_file.txt +424 -0
  23. 39A0T4oBgHgl3EQfNf8t/vector_store/index.pkl +3 -0
  24. 3NE1T4oBgHgl3EQfSQNJ/vector_store/index.faiss +3 -0
  25. 4tE2T4oBgHgl3EQf6ghP/content/2301.04200v1.pdf +3 -0
  26. 4tE2T4oBgHgl3EQf6ghP/vector_store/index.pkl +3 -0
  27. 59E5T4oBgHgl3EQfPQ7C/content/tmp_files/2301.05504v1.pdf.txt +2254 -0
  28. 59E5T4oBgHgl3EQfPQ7C/content/tmp_files/load_file.txt +0 -0
  29. 6dE3T4oBgHgl3EQfpwoB/content/tmp_files/2301.04644v1.pdf.txt +2684 -0
  30. 6dE3T4oBgHgl3EQfpwoB/content/tmp_files/load_file.txt +0 -0
  31. 6tFJT4oBgHgl3EQflyzE/content/2301.11585v1.pdf +3 -0
  32. 6tFJT4oBgHgl3EQflyzE/vector_store/index.pkl +3 -0
  33. 8tA0T4oBgHgl3EQfOv9y/content/2301.02165v1.pdf +3 -0
  34. 8tA0T4oBgHgl3EQfOv9y/vector_store/index.faiss +3 -0
  35. 8tA0T4oBgHgl3EQfOv9y/vector_store/index.pkl +3 -0
  36. 8tAyT4oBgHgl3EQfqPgO/content/2301.00537v1.pdf +3 -0
  37. 8tAyT4oBgHgl3EQfqPgO/vector_store/index.faiss +3 -0
  38. 8tAyT4oBgHgl3EQfqPgO/vector_store/index.pkl +3 -0
  39. 8tAzT4oBgHgl3EQfE_rp/content/tmp_files/2301.01005v1.pdf.txt +1589 -0
  40. 8tAzT4oBgHgl3EQfE_rp/content/tmp_files/load_file.txt +0 -0
  41. 8tFLT4oBgHgl3EQfBi4b/content/tmp_files/2301.11970v1.pdf.txt +1601 -0
  42. 8tFLT4oBgHgl3EQfBi4b/content/tmp_files/load_file.txt +0 -0
  43. 9dE1T4oBgHgl3EQfCQJ2/content/tmp_files/2301.02862v1.pdf.txt +1049 -0
  44. 9dE1T4oBgHgl3EQfCQJ2/content/tmp_files/load_file.txt +0 -0
  45. A9E0T4oBgHgl3EQfxwJo/vector_store/index.faiss +3 -0
  46. AdAzT4oBgHgl3EQf_v_f/content/2301.01954v1.pdf +3 -0
  47. AdAzT4oBgHgl3EQf_v_f/vector_store/index.faiss +3 -0
  48. AdAzT4oBgHgl3EQf_v_f/vector_store/index.pkl +3 -0
  49. AdFIT4oBgHgl3EQf-yxb/content/2301.11412v1.pdf +3 -0
  50. AdFIT4oBgHgl3EQf-yxb/vector_store/index.faiss +3 -0
-9E1T4oBgHgl3EQfCwJj/content/tmp_files/2301.02868v1.pdf.txt ADDED
@@ -0,0 +1,773 @@
1
+ Crucial role of Fe in determining the hard magnetic properties of Nd2Fe14B
2
+ Juba Bouaziz∗
3
+ Department of Physics, University of Warwick, Coventry CV4 7AL, UK and
4
+ Peter Gr¨unberg Institut and Institute for Advanced Simulation,
5
+ Forschungszentrum J¨ulich & JARA, D-52425 J¨ulich, Germany
6
+ Christopher E. Patrick
7
+ Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH, UK
8
+ Julie B. Staunton†
9
+ Department of Physics, University of Warwick, Coventry CV4 7AL, UK
10
+ (Dated: January 10, 2023)
11
+ Nd2Fe14B’s unsurpassed, hard magnetic properties for a wide range of temperatures result from
12
+ a combination of a large volume magnetization from Fe and a strong single-ion anisotropy from
13
+ Nd. Here, using finite temperature first-principles calculations, we focus on the other crucial roles
14
+ played by the Fe atoms in maintaining the magnetic order on the Nd sublattices, and hence the large
15
+ magnetic anisotropy, and directly generating significant uniaxial anisotropy at high temperatures.
16
+ We identify effective spins for atomistic modelling from the material’s interacting electrons and
17
+ quantify pairwise and higher order, non-pairwise magnetic interactions among them. We find the
18
+ Nd spins couple most strongly to spins on sites belonging to two specific Fe sublattices, 8j1, 8j2.
19
+ Moreover the Fe 8j1 sublattice also provides the electronic origin of the unusual, nonmonotonic
20
+ temperature dependence of the anisotropy of Y2Fe14B. Our work provides atomic-level resolution
21
+ of the properties of this fascinating magnetic material.
22
+ The elemental lanthanides show remarkable magnetic
23
+ properties deriving from their partially-filled shells of
24
+ atomic-like 4f electrons.
25
+ However, direct exploitation
26
+ of these properties is hindered by low magnetic order-
27
+ ing temperatures.
28
+ No elemental lanthanide retains its
29
+ magnetism at room temperature, with the highest Curie
30
+ temperature Tc being 292 K for Gd [1].
31
+ Combining
32
+ the lanthanides with other elements can strengthen the
33
+ magnetic interactions and allow ordering to persist to
34
+ higher temperatures.
35
+ The most successful example of
36
+ this paradigm is the rare-earth/transition-metal (RE-
37
+ TM) family of permanent magnets [2]. Specifically, Nd-
38
+ Fe-B demonstrates exceptional magnetic strength over a
39
+ wide range of temperatures. Having revolutionized com-
40
+ puter hard disk technology in the last century, Nd-Fe-B is
41
+ again under intense investigation owing to its use in elec-
42
+ tric vehicle motors and renewable energy turbines [3].
43
+ The RE-TM magnetic interactions are most simply de-
44
+ scribed in terms of the exchange field Bexch.
45
+ In this
46
+ picture, the TM-3d electrons produce an effective mag-
47
+ netic field which couples to the spin magnetic moments
48
+ of the RE ions. A minimal model to describe the RE ions
49
+ and the high magnetic anisotropy they generate combines
50
+ this exchange field with the interaction with an external
51
+ field Bext and the crystal field ˆVCF, which describes the
52
+ (predominantly) electrostatic interaction of the 4f charge
53
+ cloud with its environment [4–7]:
54
+ HRE = 2µB ˆS · Bexch + µB (ˆL + 2ˆS) · Bext + ˆVCF. (1)
55
+ ˆS and ˆL are the total spin and orbital angular mo-
56
+ mentum operators.
57
+ Values of the exchange field can
58
+ be extracted by fitting Eq. 1 to experimental data ob-
59
+ tained in inelastic neutron scattering (INS) or magneti-
60
+ zation measurements. Experimental estimates of Bexch
61
+ are far stronger than fields achievable in the laboratory
62
+ (µBBexch/kB >∼ 300 K [8], i.e. Bexch >∼ 450 T) as required
63
+ to maintain magnetic order above room temperature.
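A quick unit check of this scale (a minimal sketch; the constants are standard values and the 300 K figure is the one quoted above):

    # Convert an exchange splitting quoted in kelvin into the equivalent field in tesla,
    # using mu_B * B = k_B * T_split.
    MU_B = 9.2740100783e-24   # Bohr magneton, J/T
    K_B = 1.380649e-23        # Boltzmann constant, J/K

    def exchange_field_tesla(splitting_kelvin):
        """Field whose Zeeman energy mu_B*B matches k_B times the quoted splitting."""
        return splitting_kelvin * K_B / MU_B

    print(exchange_field_tesla(300.0))  # ~447 T, consistent with the ~450 T quoted above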
64
+ Going beyond a phenomenological understanding of
65
+ RE ordering requires an atomistic picture of the mag-
66
+ netic interactions among effective spins. Nd2Fe14B has a
67
+ tetragonal crystal structure with 68 atoms per unit cell
68
+ [8, 9]. The RE atoms occupy two crystallographically
69
+ distinct sites (RE4f and RE4g), which (together with Fe4c
70
+ and B4g atoms) form planes encapsulating the remain-
71
+ ing 5 Fe sublattices (4e, 8j1, 8j2, 16k1, 16k2). For the
72
+ Nd sites the spins come from the localized f-electrons
73
+ but for the TM sites the local effective spins, or local
74
+ moments, emerge from the material’s itinerant electron
75
+ fluid [10].
76
+ Spin-polarized regions at atomic sites form
77
+ from co-operative behavior of the valence electrons and
78
+ at finite temperatures their orientations fluctuate on rel-
79
+ atively long time scales compared to the remaining elec-
80
+ tronic degrees of freedom. These local magnetic moments
81
+ are the pertinent, effective spins for the TM aspect of the
82
+ atomistic modelling.
83
+ A conceptually simple model assumes interactions only
84
+ between pairs of spins (ij) according to the classical
85
+ Heisenberg model, −Jij ˆSi · ˆSj, where ˆSi represents an
87
+ effective spin. Previous works [12, 13] calculate such Jij
88
+ parameters from first principles within density-functional
89
+ theory (DFT) [12, 13], and use them directly in atomistic
90
+ spin dynamics simulations. With the TM magnetocrys-
91
+ talline anisotropy (MCA) modelled as a sum of single
92
+ arXiv:2301.02868v1 [cond-mat.mtrl-sci] 7 Jan 2023
93
+ [Figure 1: panels (a)-(c); curve legend: Nd spin, Nd+Fe spin, Nd orb, Fe orb, Exp]
104
+ FIG. 1. (a) Nd2Fe14B’s magnetization versus (T/Tc) from DLM-DFT calculations compared to experiment [11]. (b) Sub-lattice
105
+ resolved magnetic order parameters and (c) Weiss fields. The dots indicate the full DLM-DFT results, dashed lines from a
106
+ pair-wise interaction model and continuous lines from a fit of the DLM-DFT results to the model discussed in the text Eq. (3).
107
+ ion-like terms, assumed to be substantial, and RE crystal
108
+ field coefficients taken from experiment, the simulations
109
+ can reproduce the magnetization behavior of Nd2Fe14B,
110
+ including the spin reorientation transition at low tem-
111
+ perature, and represent the current state-of-the-art in
112
+ modelling these magnets [14]. Although such a pair-wise
113
+ Heisenberg model is computationally straightforward to
114
+ implement, it is nonetheless a clear presumption for a
115
+ magnetic metal like Nd-Fe-B. Despite the huge technical
116
+ importance of the material, the role of “beyond Heisen-
117
+ berg” itinerant electron spin features has yet to be elu-
118
+ cidated for Nd-Fe-B. Moreover the MCA from the spin-
119
+ orbit coupling of the itinerant d-electrons is also not guar-
120
+ anteed to be single-ion like [15]. In this letter we quantify
121
+ the significance of both these aspects and propose ways
122
+ to improve atomistic spin modelling.
123
+ The disordered local moment (DLM) picture imple-
124
+ mented within DFT provides an appropriate ab initio
125
+ framework [10, 15, 16]. The approach combines statisti-
126
+ cal mechanics of the effective spins (local moments, {ei})
127
+ and DFT, to describe the complementary evolution of
128
+ electronic and magnetic structure as a material’s tem-
129
+ perature is varied. Strongly correlated 4f-electron effects
130
+ are treated with a parameter free, self interaction correc-
131
+ tion (SIC) approach [17, 18] which incorporates Hund’s
132
+ rules naturally [19].
133
+ As such DLM-DFT can describe
134
+ temperature-dependent magnetic properties of perma-
135
+ nent magnets as shown recently for the RECo5 fam-
136
+ ily [7]. The crucial RE contribution to the anisotropy is
137
+ accounted for by crystal field theory, calculating the CF coef-
138
+ ficients within DFT using a robust numerical method [16]
139
+ so that the modelling is independent of any prior fit of
140
+ phenomenological parameters.
141
+ Here we investigate the nature of magnetic order in
142
+ Nd2Fe14B, sublattice-resolved, and describe the mag-
143
+ netic interactions among the effective spins associated
144
+ with both RE and TM sites. We show that the interac-
145
+ tions among the TM spins are influenced by the global
146
+ magnetic order and its impact and link with the spin-
147
+ polarised electrons of the system. This is in essence a
148
+ multi-spin coupling effect. We find significant diversity
149
+ in the behavior of the Fe local moments depending on
150
+ their location in the unit cell. While most TM spins are
151
+ ferromagnetically-coupled, some interact antiferromag-
152
+ netically with each other. This leads to some frustration
153
+ and a peculiar strong suppression of magnetic order on
154
+ the 8j1 sites which are located roughly midway between
155
+ the Nd-containing layers. We also find that the Nd spins
156
+ couple most strongly to spins on sites belonging to this
157
+ Fe sublattice along with those on another (8j2). Further-
158
+ more we discover a link between this 8j1 sublattice and
159
+ the unusual non-monotonic temperature dependence of
160
+ the non-RE MCA of the isostructural material Y2Fe14B,
161
+ resolving a longstanding puzzle [11, 20]. Finally, our
162
+ calculation of the anisotropy field of Nd2Fe14B across a
163
+ range of temperatures agrees well with experiment and
164
+ confirms the vital role played by the Fe spins for the
165
+ functionality of this champion magnet.
166
+ Apart from the local moments themselves, the cen-
167
+ tral quantities in DLM-DFT theory are Weiss fields {hi}
168
+ which, analogously to the exchange field of Eq.1, drive
169
+ the ordering of the local moments.
170
+ However, unlike
171
+ Bexch, the Weiss fields are not phenomenological, but
172
+ instead are rigorously defined by thermal averages over
173
+ the local moment orientational configurations {ei} of the
174
+ magnetic energy Ω{ei} [10, 15], i.e.
175
+ hi = −(3/4π) ∫ ⟨Ω⟩ei;T ei dei .   (2)
180
+ where ⟨X⟩ei denotes the average of X with the restric-
181
+ tion that the orientation of the moment on site i is fixed
182
+ as ei and the order parameters of the local moments,
183
+ {mi} are the averages {⟨ei⟩} [10, 15]. ⟨Ω⟩ei;T is calcu-
184
+ lated from DFT [10]. Crucially no prior prescription is
185
+ assumed for the form of the magnetic interactions inher-
186
+ ent in the first-principles Ω. For a pairwise Heisenberg
187
+ model, the magnetic energy Ω{ei} = −(1/2) Σij Jij ei · ej,
+ with Weiss fields linear in the {mi}: hi = Σj Jij mj.
191
+ [Figure 2 axis labels: sublattices 16k1, 16k2, 8j1, 8j2, f, g, 4e, 4c]
198
+ FIG. 2. The relative strengths of interactions between sites in
199
+ the unit cell (boron sites not included), Jij, (Eq. 3) highlight-
200
+ ing the RE-TM ones (sites 48–51 and 52–55 correspond to 4f
201
+ and 4g respectively). Numerical values in meV are given in [9]
202
+ along with specific site coordinates. Red/blue color indicates
203
+ FM/AF interactions.
204
+ Consequently beyond-pairwise terms are clearly identi-
205
+ fied in DLM-DFT theory from the non-linear dependence
206
+ of the Weiss fields on the {mi} [21–25]. The {mi} order
207
+ parameters, describing an equilibrium state at a temper-
208
+ ature T, are given by the self-consistent solution of Eq.2
209
+ and mi = (−1/βhi + coth βhi), the Langevin function,
210
+ (β = 1/kBT).
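For orientation, a minimal collinear sketch of this self-consistency for the purely pairwise case (hypothetical couplings; not the DLM-DFT implementation itself):

    import numpy as np

    def langevin(x):
        # Langevin function L(x) = coth(x) - 1/x, with its small-x limit x/3.
        x = np.asarray(x, dtype=float)
        out = np.empty_like(x)
        small = np.abs(x) < 1e-6
        out[small] = x[small] / 3.0
        out[~small] = 1.0 / np.tanh(x[~small]) - 1.0 / x[~small]
        return out

    def solve_order_parameters(J, T, n_iter=1000, mix=0.5, k_B=1.0):
        """Iterate m_i = L(beta*h_i) with h_i = sum_j J_ij m_j (pairwise, collinear sketch).
        J is an (N, N) array of couplings in units where k_B = 1."""
        beta = 1.0 / (k_B * T)
        m = np.full(J.shape[0], 0.9)   # start from a nearly ordered state
        for _ in range(n_iter):
            h = J @ m                  # Weiss fields, linear in {m_i} for this model
            m = (1.0 - mix) * m + mix * langevin(beta * h)
        return m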
211
+ Figure 1(a) shows the magnetization as a function of
212
+ T compared to experiment and resolved into the RE and
213
+ TM spin and orbital components.
214
+ The magnetization
215
+ is directed along θ = 45◦ in the (xz)-plane.
216
+ Full cal-
217
+ culational details are given in the Supplemental Mate-
218
+ rial [9] and references [16, 26–28]. The contribution from
219
+ a particular site i is found by multiplying its local mo-
220
+ ment magnitude, µi, by the order parameter mi(T). The
221
+ Fe and Nd spin moments interact antiferromagnetically
222
+ (AF) and order in an anti-parallel alignment in a fer-
223
+ rimagnetic state, but the large orbital moment of Nd,
224
+ pointing opposite to its spin, leads to overall ferromag-
225
+ netic (FM) order.
226
+ The Fe orbital moments are small
227
+ (∼ 0.05µB/atom). The calculated Tc is 1058 K, which,
228
+ although an overestimate of 473 K in comparison to the
229
+ experiment [11], is reasonable for a first-principles theory
230
+ which uses a mean field approximation for the statistical
231
+ mechanics of the effective spins [28].
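For context, a standard mean-field estimate of Tc for such a pairwise model (a sketch of the textbook result, not necessarily the procedure used here) takes the largest eigenvalue of the sublattice-summed coupling matrix divided by 3kB:

    import numpy as np

    def mean_field_tc(J_sub, k_B=8.617333262e-5):
        """Mean-field Curie temperature from the linearized condition m_a = (beta/3) sum_b J_ab m_b:
        k_B*Tc = lambda_max(J_sub)/3.  J_sub[a, b] is the coupling of one site on sublattice a
        to all sites of sublattice b (assumed here to be given in eV)."""
        lam_max = np.max(np.linalg.eigvals(np.asarray(J_sub, dtype=float)).real)
        return lam_max / (3.0 * k_B)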
232
+ On each of the six Fe and two Nd sublattices [8, 9]
233
+ the magnetic order varies from complete, {mi = 1},
234
+ at T = 0K to zero above Tc, {mi = 0}.
235
+ Figure 1(b)
236
+ shows how the temperature dependence of magnetic or-
237
+ der varies across the sublattices. The Nd sublattices dis-
238
+ order more quickly than all the Fe sublattices except
239
+ the 8j1 one.
240
+ Complementary information in Fig. 1(c)
241
+ shows that Weiss fields, {hi}, promote strong ordering
242
+ when large and have considerable sublattice variation,
243
+ notably the factor ∼4 difference between the 8j1 and
244
+ 8j2 sites. Analysis of {hi}, Eq.2, reveals the presence
245
+ and importance of interactions that fall outside those of
246
+ a Heisenberg-like model. For such a pairwise model the
247
+ Jij interactions (Fig. 2), directly obtained from the Weiss
248
+ fields for small values of the {mi}, are used to construct
249
+ the model’s Weiss fields and {mi} at all T (dashed lines
250
+ in Fig. 1(c)). There are large discrepancies from the full
251
+ ab initio DLM-DFT data away from Tc, leading us to
252
+ propose a more realistic representation of the interac-
253
+ tions which is straightforward to incorporate into atom-
254
+ istic spin modelling of the magnet’s properties. It leads
255
+ to a magnetic energy per unit cell
256
+ Ω̄ = −(1/2) Σij Jij mi · mj − (1/4) Σi BI (mi · M)² ,   (3)
266
+ where i, j run over the sites in the unit cell, I denotes one
267
+ of the 8 sub-lattices to which the site i belongs and M
268
+ is the total magnetization per unit cell, M = �
269
+ i µimi
270
+ where the order parameters on the RE sites are anti-
271
+ parallel to the TM sites for the ferrimagnetic state. The
272
+ second, higher order term captures the effect of the over-
273
+ all spin-polarization of the electronic structure on the
274
+ effective interactions between the local moments. Com-
275
+ puting Weiss fields from this expression fits the DLM-
276
+ DFT calculations very well as shown by the full curves in
277
+ Fig. 1(c) and ¯Ω closely approximates ⟨Ω⟩T . Table I lists
278
+ the BI parameters that measure the sublattice-dependent
279
+ size of these higher order, multi-spin terms.
280
+ System    |   4c   |  4e   |  8j1  |  8j2  | 16k1 | 16k2 |  Rf   |  Rg
+ Nd2Fe14B  | -15.42 | 14.31 | -5.06 | -1.38 | 3.82 | 4.44 | -2.53 | -1.41
+ Y2Fe14B   | -13.68 |  9.91 | -4.07 |  1.27 | 4.25 | 4.07 |  0.0  |  0.0
+ TABLE I. Effective, multi-spin interaction constants BI (in µeV) for Nd2Fe14B and Y2Fe14B.
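To make the bookkeeping of Eq. (3) concrete, a small illustrative sketch (an assumed assembly of the terms, with the Table I constants supplied per site; not the code used for the calculations reported here):

    import numpy as np

    def omega_bar(J, m, mu, B_I):
        """Eq. (3): Omega_bar = -1/2 sum_ij J_ij m_i.m_j - 1/4 sum_i B_I(i) (m_i.M)^2,
        with M = sum_i mu_i m_i.  Shapes: J (N, N), m (N, 3), mu (N,), and B_I (N,)
        holding each site's sublattice constant in the same units as J."""
        M = (mu[:, None] * m).sum(axis=0)               # total magnetization per unit cell
        pair = -0.5 * np.einsum('ij,ia,ja->', J, m, m)  # pairwise term
        multi = -0.25 * np.sum(B_I * (m @ M) ** 2)      # higher-order, multi-spin term
        return pair + multi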
294
+ Fig. 2 shows the relative strengths of the Jij interac-
295
+ tions between pairs of sites. They are represented on a
296
+ 64 × 64 grid (56 Fe sites and 8 RE sites and arranged
297
+ according to sublattice). Numerical values are given as
298
+ Supplemental Material [9]. Assuming a range less than
299
+ roughly 5˚A, they can be directly used in atomistic spin
300
+ simulations together with the terms from Table I. The
301
+ figure illustrates the vital importance on the RE mag-
302
+ netic ordering of the hexagonal nets of Fe atoms [8, 9]
303
+ from the k1, k2 and notably sites on the j1 and j2 sublat-
304
+ tices. Indeed the largest contributions to the Weiss fields
305
+ at the RE sites originate from the j1 and j2 sublattices.
306
+ The TM-TM interactions are particularly varied rang-
307
+ ing from FM (red) for the majority to AF (blue). The
308
+ j1 sites have AF interactions with e, c and RE sites and
309
+ strong FM ones with j2 sites.
310
+ This frustration drives
311
+ this sublattice’s aversion to magnetic order. The diver-
312
+ sity of the interactions stems from the profound effect
313
+ [Figure 2 color map: axes are site indices i and j; color scale spans roughly −10 to 60]
328
+ that atom coordination and spacing of Fe atoms in a
329
+ metallic material has on its magnetism. The archetype
330
+ for this quintessentially itinerant electron effect is fcc
331
+ Fe where squeezing the lattice turns ferromagnetic order
332
+ anti-ferromagnetic and then destroys it [29, 30].
333
+ The diverse nature of the magnetic order on the six
334
+ Fe sub-lattices also has an impact on the intrinsic MCA
335
+ generated by the system’s itinerant valence electrons. As
336
+ found for other TM metal magnets [15, 26], a simple fit in
337
+ terms of a single ion model is unsatisfactory and, as found
338
+ for other itinerant electron magnets, two-ion type terms
339
+ should be included in the model [15, 31, 32]. Further-
340
+ more, on general grounds, modelling the MCA as a sum
341
+ of single ion anisotropy terms must be done extremely
342
+ carefully. The various Fe sites have different crystallo-
343
+ graphic point symmetries, and their unique symmetry
344
+ axes do not necessarily point along the c direction [33].
345
+ There are important implications for atomistic spin dy-
346
+ namics simulations [14, 31, 34] where it is not correct
347
+ to assign a single ion anisotropy to each Fe atom with
348
+ the same angular dependence and same symmetry axes.
349
+ Rather a simpler and more rigorous alternative would
350
+ be to compute an anisotropy energy based on the vector
351
+ sum of all moments in the same sublattice so that a single
352
+ symmetry axis is recovered [5].
353
+ While significantly smaller than from the f-electrons of
354
+ the Nd atoms, the primarily TM component of the MCA
355
+ grows in importance with rising T.
356
+ As the RE MCA
357
+ drops swiftly along with the magnetic order
358
+ [35, 36],
359
+ the TM contribution can actually increase as shown ex-
360
+ plicitly in measurements on Y2Fe14B [11, 20].
361
+ Such
362
+ non-monotonic temperature variation is puzzling, and
363
+ has been attributed to a magnetostructural effect from
364
+ an anisotropic expansion of the crystal lattice, compet-
365
+ ing single-ion-like contributions [20] or competing single
366
+ and two-ion MCA using atomistic spin dynamics simula-
367
+ tions [31, 32]. Since fully relativistic effects such as spin-
368
+ orbit coupling are included in our DLM-DFT theory we
369
+ investigate the MCA temperature dependence directly
370
+ and show our results in Fig. 3 for Y2Fe14B.
371
+ Using the highly accurate, full potential (FP) KKR
372
+ code [37, 38], we first calculate the MCA at T = 0K to
373
+ be ≈ 0.9 meV/formula unit (FU) which agrees well with
374
+ experimental values [11]. The same rapid loss of magnetic
375
+ order with increasing temperature which we find for the
376
+ Fe 8j1 sites in Nd2Fe14B (Fig. 1(b)) is also evident in
377
+ Y2Fe14B [9] and this points to a significant role for this
378
+ sublattice in the anomalous MCA T-dependence.
379
+ We
380
+ therefore carry out further FP MCA calculations where
381
+ now the Fe 8j1 sites are constrained to be magnetically
382
+ disordered (m8j1 = 0) via an equal weighting of local
383
+ moments on each of these sites along the ±x and ±z
384
+ directions. The effect on the computed MCA is striking
385
+ - it increases greatly to ≈ 1.7 meV/FU - and we infer
386
+ that the much faster decrease of 8j1 magnetic order with
387
+ temperature relative to that on the other Fe sublattices
388
+ is the key driver for the MCA T-dependence.
389
+ To test this proposition, we calibrate DLM-DFT MCA
390
+ values against our T = 0K FP MCA calculations, given
391
+ the current implementation with an atomic sphere ap-
392
+ proximation (ASA). Although the ASA values are smaller
393
+ than the FP ones, the same large increase of the value
394
+ when the Fe 8j1 sites are magnetically disordered is
395
+ found. In Fig. 3 we show the DLM-DFT temperature de-
396
+ pendent MCA both using the ASA (red curve) and also
397
+ scaled by the ratio between the FP and our ASA T = 0K
398
+ values (green). The increase with temperature is evident,
399
+ peaking at T/Tc = 50% in line with experiment [11] con-
400
+ firming our proposition. Since the calculations are for a
401
+ fixed lattice structure, we can exclude thermal expansion
402
+ as a cause of the non-monotonic behavior. We also show
403
+ the effect on the MCA of forcing the Fe8j1 sublattice to
404
+ remain magnetically disordered at all T, i.e. m8j1 = 0.
405
+ The resulting unscaled MCA, shown in black in Fig. 3, is
406
+ dramatically altered - the peak has gone and the MCA
407
+ decays linearly with temperature and the T = 0K value
408
+ is enhanced significantly. Clearly, establishment of mag-
409
+ netic order on the Fe8j1 sublattice correlates with a sub-
410
+ stantial drop in the (uniaxial) MCA.
411
+ [Figure 3 axes: T/TC versus K1 (meV/FU); curves: Exp, Th, Th-8j1, Th-scl]
428
+ FIG. 3. The T-dependence of the leading anisotropy constant,
429
+ K1, of Y2Fe14B from DLM-DFT theory (red curve) and ex-
430
+ periment (blue) [11]. The green curve shows the theory values,
431
+ scaled to account for the difference between FP [37] and ASA
432
+ ([9]) at T = 0K. The black curve shows K1 (unscaled) if the
433
+ Fe 8j1 sublattice is constrained to be disordered magnetically.
434
+ Our ultimate goal is to describe Nd2Fe14B’s large mag-
435
+ netic anisotropy and its temperature variation. So to the
436
+ TM MCA we add the dominant RE components. These
437
+ are calculated [6, 39] from the solution of Eq. 1 where the
438
+ crystal field coefficients [9] are determined from first prin-
439
+ ciples [16], and exchange field, Bexch provided directly by
440
+ the DLM-DFT Weiss field for each Nd site (Fig. 1) di-
441
+ vided by the computed Nd spin moment of 3.66 µB.
442
+ Our calculated exchange fields of 699 and 725 T for
443
+ the RE f and g sites respectively are somewhat larger
444
+ than those used in fits of experimental magnetization
445
+ [Figure 4 axes: T/Tc versus µ0HA (T); curves: Computed, Computed-scl, Exp (Hirosawa 1986), Exp (Grossinger 1986)]
460
+ FIG. 4. Evolution of the anisotropy field, HA, versus T/Tc
461
+ from theory compared to experimental measurements (from
462
+ Refs. [42], black curve,
463
+ [11], blue curve). The agreement is
464
+ good above Tc/2. The red and green curves use the non-RE
465
+ MCA taken from the red and green curves of Fig. 3.
466
+ data (450–520 T [40]), but as pointed out in Ref. [8],
467
+ the large number of parameters in Eq. 1 introduce sig-
468
+ nificant uncertainties. In principle, INS data would pro-
469
+ vide a direct measure of the exchange fields but are not
470
+ available for Nd-Fe-B. Our proposed values are, however,
471
+ supported by the good agreement between INS experi-
472
+ ments [41] and our DLM-DFT calculations for the re-
473
+ lated Gd2Fe14B magnet (324 T vs 307/319 T). The Gd
474
+ exchange fields are substantially smaller than those cal-
475
+ culated for Nd. The relative difference (∼2) mirrors that
476
+ of the spin moments (7.46 vs. 3.66 µB) and reflects the
477
+ similar Weiss fields we calculate for the two materials.
478
+ Using the method of Ref. [39] we calculate effective
479
+ anisotropy constants K1(T), K2(T) and anisotropy field,
480
+ µ0HA = 2K1/M ab initio which is shown in Figure 4.
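As a units-only sketch of this relation (the inputs below are generic placeholder values, not results of this work), K1 in J/m^3 and M in A/m give µ0HA directly in tesla:

    def anisotropy_field_tesla(K1, M):
        """mu_0 * H_A = 2*K1/M; with K1 in J/m^3 and M in A/m the result is in tesla."""
        return 2.0 * K1 / M

    print(anisotropy_field_tesla(5.0e6, 1.3e6))  # placeholder inputs -> roughly 7.7 T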
481
+ The red/green curves show µ0HA which includes the non-
482
+ RE contribution to the MCA of the red/green plots in
483
+ Fig. 3. Fig. 4 also shows the experimental measurements
484
+ from Refs. [11, 42]. Below T/Tc ∼ 0.5 there is some dis-
485
+ crepancy between the two sets of experimental data, but
486
+ above there is consistency between both the experiments
487
+ and our calculations.
488
+ The calculations show the clear
489
+ importance of the Fe-dominated MCA to the anisotropy
490
+ field at high temperatures - the red curve is over 1 T less
491
+ than the green one over a range of temperatures despite
492
+ the contributions from the non-RE MCAs differing by
493
+ less than 30 µeV per Fe atom.
494
+ Nd2Fe14B’s spin reorientation transition (SRT) at
495
+ 135K [8, 14, 43] is not described by our calculations owing
496
+ to an underestimate of the high order crystal field coeffi-
497
+ cients [14, 44, 45]. This shortcoming exemplifies a more
498
+ general challenge for theory modelling of low T strongly
499
+ correlated f-electron effects to construct a robust way to
500
+ significantly enhance the value of these coefficients [46].
501
+ Around room temperature and above, however, the ef-
502
+ fects on the MCA from these high order terms are small.
503
+ This is also the temperature regime where the tenets of
504
+ our DLM-DFT theory are valid.
505
+ Nd2Fe14B and the RE-TM permanent magnet family
506
+ to which it belongs have a compelling set of attributes.
507
+ Their technological value is enormous and growing and
508
+ their magnetic properties, at a fundamental level, come
509
+ from a rich and subtle combination of RE, localized, and
510
+ TM, itinerant electron, effects.
511
+ To enhance magnetic
512
+ functionality and extract pointers for the development of
513
+ even better materials, multiple interrelated aspects have
514
+ to be accounted for. Our ab initio DLM-DFT modelling
515
+ has shown the importance of describing accurately the
516
+ rich and complex itinerant electron magnetism associ-
517
+ ated with the Fe sites and valence electrons generally for
518
+ the production of the robust exchange field acting on the
519
+ RE atoms, the higher order effective spin interactions
520
+ and the nature of the non-f electron MCA. The modifi-
521
+ cations proposed here should be incorporated into future
522
+ atomistic, effective spin and micromagnetic modelling to
523
+ correctly describe these phenomena.
524
+ The work was supported by EPSRC (UK) Grant No.
525
+ EP/M028941/1 (J.B. and J.B.S.) and Royal Society Re-
526
+ search Grant RGS\R1\201151 (C.E.P.).
527
528
529
+ [1] R. J. Elliott, in Magnetic Properties of Rare Earth Metals
530
+ (Plenum Press, London and New York, 1972) p. 2.
531
+ [2] J. M. D. Coey, Hard Magnetic Materials: A Perspective,
532
+ IEEE Trans. Magn. 47, 4671 (2011).
533
+ [3] S. Hirosawa, Preface to the viewpoint set on: Permanent
534
+ magnets and underlining material science, Scripta Mat.
535
+ 154, 245 (2018).
536
+ [4] M. Richter, Band structure theory of magnetism in 3d-4f
537
+ compounds, Journal of Physics D: Applied Physics 31,
538
+ 1017 (1998).
539
+ [5] M. D. Kuz’min and A. M. Tishin, Chapter three theory
540
+ of crystal-field effects in 3d-4f intermetallic compounds,
541
+ Handbook of Magnetic Materials 17, 149 (2007).
542
+ [6] C. E. Patrick, M. Matsumoto, and J. B. Staunton,
543
+ First-principles calculations of the magnetocrystalline
544
+ anisotropy of the prototype 2:17 cell boundary phase
545
+ Y(Co1−x−yFexCuy)5, Journal of Magnetism and Mag-
546
+ netic Materials 477, 147 (2019).
547
+ [7] C. E. Patrick and J. B. Staunton, Temperature-dependent magnetocrystalline anisotropy of rare
+ earth/transition metal permanent magnets from first principles: The light RCo5 (R = Y, La-Gd) inter-
+ metallics, Phys. Rev. Materials 3, 101401 (2019).
566
+ [8] J. F. Herbst, R2Fe14B materials: Intrinsic properties and
567
+ technological aspects, Rev. Mod. Phys. 63, 819 (1991).
568
+ [9] See Supplemental Material for a picture of the crys-
569
+ tal structure, site-resolved local, electronic spin-polarized
570
+ densities of states, further information about multi-
571
+ spin interactions, Y2Fe14B magnetic properties, the 4f-
572
+ atomic Hamiltonian and numerical values of magnetic in-
575
+ teractions between pairs of sites (see, also, references [47–
576
+ 53] therein).
577
+ [10] B. L. Gyorffy, A. J. Pindor, J. Staunton, G. M. Stocks,
578
+ and H. Winter, A first-principles theory of ferromagnetic
579
+ phase transitions in metals, Journal of Physics F: Metal
580
+ Physics 15, 1337 (1985).
581
+ [11] S. Hirosawa, Y. Matsuura, H. Yamamoto, S. Fujimura,
582
+ M. Sagawa, and H. Yamauchi, Magnetization and mag-
583
+ netic anisotropy of R2Fe14B measured on single crystals,
584
+ Journal of applied physics 59, 873 (1986).
585
+ [12] A. Liechtenstein, M. Katsnelson, and V. Gubanov, Ex-
586
+ change interactions and spin-wave stiffness in ferromag-
587
+ netic metals, Journal of Physics F: Metal Physics 14,
588
+ L125 (1984).
589
+ [13] A. I. Liechtenstein, M. Katsnelson, V. Antropov, and
590
+ V. Gubanov, Local spin density functional approach to
591
+ the theory of exchange interactions in ferromagnetic met-
592
+ als and alloys, Journal of Magnetism and Magnetic Ma-
593
+ terials 67, 65 (1987).
594
+ [14] Y. Toga, M. Matsumoto, S. Miyashita, H. Akai, S. Doi,
595
+ T. Miyake, and A. Sakuma, Monte carlo analysis for
596
+ finite-temperature magnetism of Nd2Fe14B permanent
597
+ magnet, Phys. Rev. B 94, 174433 (2016).
598
+ [15] J. B. Staunton, L. Szunyogh, A. Buruzs, B. L. Gyorffy,
599
+ S. Ostanin, and L. Udvardi, Temperature dependence of
600
+ magnetic anisotropy: An ab initio approach, Phys. Rev.
601
+ B 74, 144411 (2006).
602
+ [16] C. E. Patrick and J. B. Staunton, Crystal field coef-
603
+ ficients for yttrium analogues of rare-earth/transition-
604
+ metal magnets using density-functional theory in the
605
+ projector-augmented wave formalism, Journal of Physics:
606
+ Condensed Matter 31, 305901 (2019).
607
+ [17] J. P. Perdew and A. Zunger, Self-interaction correction
608
+ to density-functional approximations for many-electron
609
+ systems, Phys. Rev. B 23, 5048 (1981).
610
+ [18] M. L¨uders, A. Ernst, M. D¨ane, Z. Szotek, A. Svane,
611
+ D. K¨odderitzsch, W. Hergert, B. L. Gy¨orffy, and W. M.
612
+ Temmerman, Self-interaction correction in multiple scat-
613
+ tering theory, Phys. Rev. B 71, 205109 (2005).
614
+ [19] C. E. Patrick and J. B. Staunton, Rare-earth/transition-
615
+ metal magnets at finite temperature:
616
+ Self-interaction-
617
+ corrected relativistic density functional theory in the dis-
618
+ ordered local moment picture, Phys. Rev. B 97, 224415
619
+ (2018).
620
+ [20] J. Cadogan and H.-S. Li, Analysis of the unusual tem-
621
+ perature dependence of the anisotropy constant K1 of
622
+ Y2Fe14B, Journal of magnetism and magnetic materials
623
+ 110, L15 (1992).
624
+ [21] J. B. Staunton, R. Banerjee, M. d. S. Dias, A. Deak, and
625
+ L. Szunyogh, Fluctuating local moments, itinerant elec-
626
+ trons, and the magnetocaloric effect: Compositional hy-
627
+ persensitivity of FeRh, Phys. Rev. B 89, 054427 (2014).
628
+ [22] E. Mendive-Tapia and J. B. Staunton, Ab initio theory
629
+ of the gibbs free energy and a hierarchy of local moment
630
+ correlation functions in itinerant electron systems: The
631
+ magnetism of the mn3a materials class, Phys. Rev. B 99,
632
+ 144424 (2019).
633
+ [23] D. Boldrin, E. Mendive-Tapia, J. Zemen, J. B. Staunton,
634
+ T. Hansen, A. Aznar, J.-L. Tamarit, M. Barrio,
642
+ P. Lloveras, J. Kim, X. Moya, and L. F. Cohen, Multi-
643
+ site exchange-enhanced barocaloric response in Mn3NiN,
644
+ Phys. Rev. X 8, 041035 (2018).
645
+ [24] E. Mendive-Tapia and J. B. Staunton, Theory of mag-
646
+ netic ordering in the heavy rare earths: Ab initio elec-
647
+ tronic origin of pair- and four-spin interactions, Phys.
648
+ Rev. Lett. 118, 197202 (2017).
649
+ [25] E. Mendive-Tapia, D. Paudyal, L. Petit, and J. B.
650
+ Staunton, First-order ferromagnetic transitions of lan-
651
+ thanide local moments in divalent compounds: An itin-
652
+ erant electron positive feedback mechanism and fermi
653
+ surface topological change, Phys. Rev. B 101, 174437
654
+ (2020).
655
+ [26] M. Matsumoto, R. Banerjee, and J. B. Staunton, Im-
656
+ provement of magnetic hardness at finite temperatures:
657
+ Ab initio disordered local-moment approach for YCo5,
658
+ Phys. Rev. B 90, 054421 (2014).
659
+ [27] C. E. Patrick, S. Kumar, G. Balakrishnan, R. S. Ed-
660
+ wards, M. R. Lees, E. Mendive-Tapia, L. Petit, and J. B.
661
+ Staunton, Rare-earth/transition-metal magnetic interac-
662
+ tions in pristine and (Ni,Fe)-doped YCo5 and GdCo5,
663
+ Phys. Rev. Materials 1, 024411 (2017).
664
+ [28] C. E. Patrick and J. B. Staunton, MARMOT: mag-
665
+ netism, anisotropy, and more, using the relativistic disor-
666
+ dered local moment picture at finite temperature, Elec-
667
+ tronic Structure (2022).
668
+ [29] J. Kuebler, magnetic moments of ferromagnetic and an-
669
+ tiferromagnetic bcc and fcc iron, Physics Letters A 81,
670
+ 81 (1981).
671
+ [30] F. J. Pinski, J. Staunton, B. Gyorffy, D. D. John-
672
+ son, and G. M. Stocks, Ferromagnetism versus anti-
673
+ ferromagnetism in face-centered cubic iron, Phys. Rev.
674
+ B 56, 2096 (1986).
675
+ [31] R. Cuadrado, R. F. Evans, T. Shoji, M. Yano, A. Kato,
676
+ M. Ito, G. Hrkac, T. Schrefl, and R. W. Chantrell,
677
+ First principles and atomistic calculation of the magnetic
678
+ anisotropy of Y2Fe14B, Journal of Applied Physics 130,
679
+ 023901 (2021).
680
+ [32] R. F. Evans, L. R´ozsa, S. Jenkins, and U. Atxitia, Tem-
681
+ perature scaling of two-ion anisotropy in pure and mixed
682
+ anisotropy systems, Physical Review B 102, 020412
683
+ (2020).
684
+ [33] Y. Miura, H. Tsuchiura, and T. Yoshioka, Magnetocrys-
685
+ talline anisotropy of the fe-sublattice in Y2Fe14B sys-
686
+ tems, Journal of Applied Physics 115, 17A765 (2014).
687
+ [34] Q. Gong, M. Yi, R. F. L. Evans, B.-X. Xu, and O. Gut-
688
+ fleisch, Calculating temperature-dependent properties of
689
+ Nd2Fe14B permanent magnets by atomistic spin model
690
+ simulations, Phys. Rev. B 99, 214409 (2019).
691
+ [35] H. B. Callen and E. Callen, The present status of the tem-
692
+ perature dependence of magnetocrystalline anisotropy,
693
+ and the l(l+1)/2 power law, Journal of Physics and
694
+ Chemistry of Solids 27, 1271 (1966).
695
+ [36] C. E. Patrick, G. A. Marchant, and J. B. Staunton, Spin
696
+ orientation and magnetostriction of Tb1−xDyxFe2 from
697
+ first principles, Phys. Rev. Applied 14, 014091 (2020).
698
+ [37] The jukkr developers, The J¨ulich KKR Codes (2020).
699
+ [38] N. Papanikolaou, R. Zeller, and P. H. Dederichs, Con-
700
+ ceptual improvements of the KKR method, Journal of
701
+ Physics: Condensed Matter 14, 2799 (2002).
702
+ [39] C. E. Patrick, S. Kumar, G. Balakrishnan, R. S. Ed-
703
+ wards, M. R. Lees, L. Petit, and J. B. Staunton, Calcu-
704
+ lating the magnetic anisotropy of rare-earth–transition-
705
+ metal ferrimagnets, Phys. Rev. Lett. 120, 097202 (2018).
706
+ [40] M. Loewenhaupt, I. Sosnowska, and B. Frick, Ground-
707
+ state multiplet of rare-earth 3+ ions in R2Fe14B investi-
708
+ gated by inelastic neutron scattering, Phys. Rev. B 42,
709
+ 3866 (1990).
710
+ [41] M. Loewenhaupt and I. Sosnowska, Exchange and crystal
713
+ fields in R2Fe14B studied by inelastic neutron scattering
714
+ (invited), Journal of Applied Physics 70, 5967 (1991).
715
+ [42] R. Gr¨ossinger, R. Krewenka, X. Sun, R. Eibler, H. Kirch-
716
+ mayr, and K. Buschow, Magnetic phase transitions
717
+ and magnetic anisotropy in Nd2Fe14−xCoxB compounds,
718
+ Journal of the Less Common Metals 124, 165 (1986).
719
+ [43] M. Yamada, H. Kato, H. Yamamoto, and Y. Nakagawa,
720
+ Crystal-field analysis of the magnetization process in a
721
+ series of Nd2Fe14B-type compounds, Phys. Rev. B 38,
722
+ 620 (1988).
723
+ [44] K. Hummler and M. F¨ahnle, Full-potential linear-muffin-
724
+ tin-orbital calculations of the magnetic properties of
725
+ rare-earth–transition-metal intermetallics. ii. Nd2Fe14B,
726
+ Phys. Rev. B 53, 3290 (1996).
727
+ [45] Y. Tatetsu, Y. Harashima, T. Miyake, and Y. Gohda,
728
+ Role of typical elements in Nd2Fe14X (X = B, C, N, O,
729
+ F), Phys. Rev. Materials 2, 074410 (2018).
730
+ [46] L. V. Pourovskii, J. Boust, R. Ballou, G. G. Eslava, and
731
+ D. Givord, Higher-order crystal field and rare-earth mag-
732
+ netism in rare-earth–Co5 intermetallics, Phys. Rev. B
733
+ 101, 214433 (2020).
734
+ [47] O. Isnard, W. B. Yelon, S. Miraglia, and D. Fruchart,
735
+ Neutron-diffraction study of the insertion scheme of hy-
736
+ drogen in Nd2Fe14B, Journal of Applied Physics 78, 1892
737
+ (1995).
738
+ [48] Y.-K. Huang, C. Wu, Y. Chuang, F.-M. Yang, and
739
+ F. De Boer, First-order magnetic transition in (nd, pr)
740
+ 2fe14b, Journal of the Less Common Metals 132, 317
741
+ (1987).
742
+ [49] F. Bolzoni, F. Leccabue, O. Moze, L. Pareti, M. Solzi,
743
+ and A. Deriu, 3d and 4f magnetism in Nd2Fe14−xCoxB
+ and Y2Fe14−xCoxB compounds, Journal of applied
745
+ physics 61, 5369 (1987).
746
+ [50] K. Stevens, Matrix elements and operator equivalents
747
+ connected with the magnetic properties of rare earth ions,
748
+ Proceedings of the Physical Society. Section A 65, 209
749
+ (1952).
750
+ [51] J. Enkovaara, C. Rostgaard, J. J. Mortensen, J. Chen,
751
+ M. Dułak, L. Ferrighi, J. Gavnholt, C. Glinsvad,
755
+ V. Haikola, H. Hansen, et al., Electronic structure cal-
756
+ culations with gpaw: a real-space implementation of the
757
+ projector augmented-wave method, Journal of physics:
758
+ Condensed matter 22, 253202 (2010).
759
+ [52] D. S. G. Bauer, Development of a relativistic full-
760
+ potential first-principles multiple scattering Green
765
+ function method applied to complex magnetic textures
766
+ of nanostructures at surfaces, Forschungszentrum J¨ulich
767
+ http://publications.rwth-aachen.de/record/229375
768
+ (2014).
769
+ [53] S. H. Vosko, L. Wilk, and M. Nusair, Accurate spin-
770
+ dependent electron liquid correlation energies for local
771
+ spin density calculations: a critical analysis, Canadian
772
+ Journal of Physics 58, 1200 (1980).
773
+
-9E1T4oBgHgl3EQfCwJj/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
-9FLT4oBgHgl3EQfDS7k/content/2301.11979v1.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5455c1808cdfb8fec225470ddb4617f179b4e05cfcda43d11feafc0b68fbd2ac
3
+ size 311763
-9FLT4oBgHgl3EQfDS7k/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2dbdb23029796d4ae354ec401c4e3439902b44030fc18e362f7bcdff41a537b7
3
+ size 129546
-tE1T4oBgHgl3EQfCwIW/content/2301.02867v1.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4c9a8da6cca8cf6c145d02cdfd6311791fd6e684880200ce3a86b372be7fb279
3
+ size 195982
-tE1T4oBgHgl3EQfCwIW/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8793bc9fa8b178dca15833958fa82b8535f283da46d10171972cff59bfc07f62
3
+ size 2490413
-tE1T4oBgHgl3EQfCwIW/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6ee81d787d10567ac91bb40dab60c59e0df372645d9416a1078448795345b3a6
3
+ size 92799
.gitattributes CHANGED
@@ -3087,3 +3087,74 @@ TtAyT4oBgHgl3EQfVvcM/content/2301.00147v1.pdf filter=lfs diff=lfs merge=lfs -tex
3087
  VdE2T4oBgHgl3EQftwjJ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3088
  y9E2T4oBgHgl3EQf4Ain/content/2301.04177v1.pdf filter=lfs diff=lfs merge=lfs -text
3089
  ltFRT4oBgHgl3EQfZTcN/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3090
+ J9AyT4oBgHgl3EQfTffD/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3091
+ wNAzT4oBgHgl3EQfCPpJ/content/2301.00955v1.pdf filter=lfs diff=lfs merge=lfs -text
3092
+ AdAzT4oBgHgl3EQf_v_f/content/2301.01954v1.pdf filter=lfs diff=lfs merge=lfs -text
3093
+ TtAyT4oBgHgl3EQfVvcM/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3094
+ BtFKT4oBgHgl3EQfXS78/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3095
+ AtAyT4oBgHgl3EQfRvdA/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3096
+ ztAyT4oBgHgl3EQfbPfN/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3097
+ -tE1T4oBgHgl3EQfCwIW/content/2301.02867v1.pdf filter=lfs diff=lfs merge=lfs -text
3098
+ ctE2T4oBgHgl3EQfagfL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3099
+ FdAzT4oBgHgl3EQfHPsU/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3100
+ ltFRT4oBgHgl3EQfZTcN/content/2301.13552v1.pdf filter=lfs diff=lfs merge=lfs -text
3101
+ gdE1T4oBgHgl3EQffAR-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3102
+ -tE1T4oBgHgl3EQfCwIW/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3103
+ 6tFJT4oBgHgl3EQflyzE/content/2301.11585v1.pdf filter=lfs diff=lfs merge=lfs -text
3104
+ CdE2T4oBgHgl3EQfoAgi/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3105
+ wNAzT4oBgHgl3EQfCPpJ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3106
+ pdFPT4oBgHgl3EQf7jXR/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3107
+ pdFQT4oBgHgl3EQfsDaJ/content/2301.13386v1.pdf filter=lfs diff=lfs merge=lfs -text
3108
+ CdE2T4oBgHgl3EQfoAgi/content/2301.04013v1.pdf filter=lfs diff=lfs merge=lfs -text
3109
+ GtFJT4oBgHgl3EQfEiwz/content/2301.11437v1.pdf filter=lfs diff=lfs merge=lfs -text
3110
+ FdAzT4oBgHgl3EQfHPsU/content/2301.01040v1.pdf filter=lfs diff=lfs merge=lfs -text
3111
+ X9E0T4oBgHgl3EQfmgGg/content/2301.02500v1.pdf filter=lfs diff=lfs merge=lfs -text
3112
+ atFJT4oBgHgl3EQf8S2B/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3113
+ ptAyT4oBgHgl3EQfzflz/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3114
+ bdE3T4oBgHgl3EQfdgpw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3115
+ NdFRT4oBgHgl3EQfGjdL/content/2301.13485v1.pdf filter=lfs diff=lfs merge=lfs -text
3116
+ 2NFAT4oBgHgl3EQfkh06/content/2301.08611v1.pdf filter=lfs diff=lfs merge=lfs -text
3117
+ DdFRT4oBgHgl3EQfxTg4/content/2301.13641v1.pdf filter=lfs diff=lfs merge=lfs -text
3118
+ 2NFAT4oBgHgl3EQfkh06/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3119
+ XdE1T4oBgHgl3EQfJQP0/content/2301.02951v1.pdf filter=lfs diff=lfs merge=lfs -text
3120
+ qtA0T4oBgHgl3EQfKv9k/content/2301.02108v1.pdf filter=lfs diff=lfs merge=lfs -text
3121
+ NdFRT4oBgHgl3EQfGjdL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3122
+ GtFJT4oBgHgl3EQfEiwz/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3123
+ mtE3T4oBgHgl3EQf6gu4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3124
+ v9E3T4oBgHgl3EQf-wsu/content/2301.04828v1.pdf filter=lfs diff=lfs merge=lfs -text
3125
+ y9E2T4oBgHgl3EQf4Ain/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3126
+ oNAyT4oBgHgl3EQfy_n2/content/2301.00696v1.pdf filter=lfs diff=lfs merge=lfs -text
3127
+ ntAyT4oBgHgl3EQflfh4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3128
+ DdFRT4oBgHgl3EQfxTg4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3129
+ 0NE1T4oBgHgl3EQf4wUX/content/2301.03503v1.pdf filter=lfs diff=lfs merge=lfs -text
3130
+ A9E0T4oBgHgl3EQfxwJo/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3131
+ x9AzT4oBgHgl3EQf7_4P/content/2301.01896v1.pdf filter=lfs diff=lfs merge=lfs -text
3132
+ 3NE1T4oBgHgl3EQfSQNJ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3133
+ pdFQT4oBgHgl3EQfsDaJ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3134
+ XdE1T4oBgHgl3EQfJQP0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3135
+ S9E5T4oBgHgl3EQfAA7y/content/2301.05376v1.pdf filter=lfs diff=lfs merge=lfs -text
3136
+ 0NE1T4oBgHgl3EQf4wUX/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3137
+ W9FJT4oBgHgl3EQfPiyI/content/2301.11487v1.pdf filter=lfs diff=lfs merge=lfs -text
3138
+ wtE3T4oBgHgl3EQflQqM/content/2301.04605v1.pdf filter=lfs diff=lfs merge=lfs -text
3139
+ X9E0T4oBgHgl3EQfmgGg/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3140
+ AdFIT4oBgHgl3EQf-yxb/content/2301.11412v1.pdf filter=lfs diff=lfs merge=lfs -text
3141
+ AdAzT4oBgHgl3EQf_v_f/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3142
+ I9AyT4oBgHgl3EQffvhT/content/2301.00345v1.pdf filter=lfs diff=lfs merge=lfs -text
3143
+ oNAyT4oBgHgl3EQfy_n2/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3144
+ H9E4T4oBgHgl3EQfgw3C/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3145
+ gtE5T4oBgHgl3EQfhw_s/content/2301.05644v1.pdf filter=lfs diff=lfs merge=lfs -text
3146
+ 8tA0T4oBgHgl3EQfOv9y/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3147
+ WdE3T4oBgHgl3EQfbQr_/content/2301.04515v1.pdf filter=lfs diff=lfs merge=lfs -text
3148
+ 8tAyT4oBgHgl3EQfqPgO/content/2301.00537v1.pdf filter=lfs diff=lfs merge=lfs -text
3149
+ gtE5T4oBgHgl3EQfhw_s/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3150
+ AdFIT4oBgHgl3EQf-yxb/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3151
+ 8tA0T4oBgHgl3EQfOv9y/content/2301.02165v1.pdf filter=lfs diff=lfs merge=lfs -text
3152
+ S9E5T4oBgHgl3EQfAA7y/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3153
+ wtE3T4oBgHgl3EQflQqM/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3154
+ hNE_T4oBgHgl3EQf3Rzj/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3155
+ UdAyT4oBgHgl3EQfhfgJ/content/2301.00376v1.pdf filter=lfs diff=lfs merge=lfs -text
3156
+ 4tE2T4oBgHgl3EQf6ghP/content/2301.04200v1.pdf filter=lfs diff=lfs merge=lfs -text
3157
+ 8tAyT4oBgHgl3EQfqPgO/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3158
+ WdE3T4oBgHgl3EQfbQr_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3159
+ SdE0T4oBgHgl3EQfUgDE/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3160
+ -9FLT4oBgHgl3EQfDS7k/content/2301.11979v1.pdf filter=lfs diff=lfs merge=lfs -text
0NE1T4oBgHgl3EQf4wUX/content/2301.03503v1.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ffcf3e16c1db0aefda273dca43ed32ffeeec13b14b2980c55037929eab68fef6
3
+ size 285862
0NE1T4oBgHgl3EQf4wUX/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:24baca3f3069903b6e5237c04a8eb85f2a7106b70deb207c1120eaf9351ed7c2
3
+ size 2818093
0NE1T4oBgHgl3EQf4wUX/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:89847a08d507fc5198db79f69e69f5c5dc82e546d6c84f620e7ae7dee393bc0e
3
+ size 126368
0dAyT4oBgHgl3EQf1PkS/content/tmp_files/2301.00730v1.pdf.txt ADDED
@@ -0,0 +1,1718 @@
1
+ Lifting-wing Quadcopter Modeling and Unified Control
2
+ Quan Quan∗, Shuai Wang, and Wenhan Gao
3
+ Hybrid unmanned aerial vehicles (UAVs) integrate the efficient forward flight of fixed-
4
+ wing and vertical takeoff and landing (VTOL) capabilities of multicopter UAVs. This paper
5
+ presents the modeling, control and simulation of a new type of hybrid micro-small UAVs,
6
+ coined as lifting-wing quadcopters.
7
+ The airframe orientation of the lifting wing needs to
8
+ tilt by a specific angle, often within 45 degrees, neither nearly 90 degrees nor approximately 0 degrees.
9
+ Compared with some convertiplane and tail-sitter UAVs, the lifting-wing quadcopter has a
10
+ highly reliable structure, robust wind resistance, low cruise speed and reliable transition flight,
11
+ making it well suited to fully autonomous operation outdoors or in some confined indoor airspace. In the
12
+ modeling part, forces and moments generated by both lifting wing and rotors are considered.
13
+ Based on the established model, a unified controller for the full flight phase is designed. The
14
+ controller has the capability of uniformly treating the hovering and forward flight, and enables
15
+ a continuous transition between two modes, depending on the velocity command. What is more,
16
+ by taking rotor thrust and aerodynamic force under consideration simultaneously, a control
17
+ allocation based on optimization is utilized to realize cooperative control for energy saving.
18
+ Finally, comprehensive Hardware-In-the-Loop (HIL) simulations are performed to verify the
19
+ advantages of the designed aircraft and the proposed controller.
20
+ Nomenclature
21
𝑜e𝑥e𝑦e𝑧e = Earth-Fixed Coordinate Frame (eF)
𝑜b𝑥b𝑦b𝑧b = Quadcopter-Body Coordinate Frame (bF)
𝑜l𝑥l𝑦l𝑧l = Lifting-Wing Coordinate Frame (lF)
𝑜w𝑥w𝑦w𝑧w = Wind Coordinate Frame (wF)
ep = Position in eF
ev = Velocity in eF
bva, lva = Airspeed vector in bF and lF, respectively
evw = Wind velocity in eF
𝑉a = Airspeed
𝜙, 𝜃, 𝜓 = Euler angles in bF
𝜔𝑥b, 𝜔𝑦b, 𝜔𝑧b = Angular velocity in bF
𝛼 = Angle of attack in lF
𝛽 = Sideslip angle in lF
𝐶𝐿 = Aerodynamic lift coefficient
𝐶𝐷 = Aerodynamic drag coefficient
𝐶𝑚 = Aerodynamic pitch moment coefficient
𝐶𝑌 = Aerodynamic lateral force coefficient
𝐶𝑙 = Aerodynamic roll moment coefficient
𝐶𝑛 = Aerodynamic yaw moment coefficient
𝜅 = Installation angle of the lifting wing
𝜂 = Installation angle of the motor
𝑐 = Mean chord of the lifting wing
𝑏 = Wingspan of the lifting wing
𝑆 = Area of the lifting wing
∗Corresponding Author, is with the School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China. Email: [email protected].
arXiv:2301.00730v1 [cs.RO] 2 Jan 2023
97
+ I. Introduction
98
+ A. Why Lifting-wing Quadcopter
99
+ Unmanned aerial vehicles (UAVs) have attracted lots of recent attention due to their outstanding performances in
100
+ many fields, such as aerial photography, precision farming, and unmanned cargo. According to [1], UAV platforms are
101
+ currently dominated by three types: fixed-wing UAV, rotorcraft UAV, and their hybrid that integrates the advantages of
102
+ the first two. The hybrid UAVs have the capability of Vertical Take-off and Landing (VTOL), which enables more
103
+ accessible grounding or holding by hovering. This might be mandated by authorities in high traffic areas such as lower
104
+ altitudes in the urban airspace. Furthermore, hybrid UAVs are categorized into two types: convertiplane and tail-sitter.
105
+ A convertiplane maintains its airframe orientation in all flight modes, but a tail-sitter is an aircraft that takes off and
106
+ lands vertically on its tail, and the entire airframe needs to tilt nearly 90◦ to accomplish forward flight [1, 2].
107
In March 2015, Google announced that the tail-sitter UAV for its package delivery service had been scrapped because, according to the conclusion reached by the project leader, it was still too difficult to control in a reliable and robust manner [3].
109
+ Some studies try to remedy this by newly designed controllers [4]. However, unlike this way, we will study a new type
110
+ of hybrid UAV, coined as the lifting-wing quadcopter [5, 6], to overcome the difficulty Google’s Project Wing faced. A
111
+ lifting-wing quadcopter is a quadcopter [7, 8] with a lifting wing installed at a specific mounting angle. During the
112
flight, the quadcopter component provides thrust upward and forward simultaneously, and the lifting wing also contributes part of the lift.
Fig. 1  Prototypes of lifting-wing quadcopters: a) VertiKUL 2, b) Vespertilio, c) RflyLW2, d) Prime Air.
122
As shown in Fig. 1, as far as we know, several prototypes of lifting-wing quadcopters can be found in public, such as VertiKUL 2 by the University of Leuven (Fig. 1(a), Sept. 2015) [9], Vespertilio by the VOLITATION company (Fig. 1(b)) [10], the latest version of the Prime Air delivery drone unveiled by Amazon (Fig. 1(d), Jun. 2019) [11], and RflyLW2 by us (Fig. 1(c)) [5, 6].
126
+ The lifting-wing quadcopter is a new type of hybrid UAV because convertiplane and tail-sitter UAVs in their cruise
127
+ phase work as fixed-wing UAVs, so the airframe orientation is defined as the head of the fixed-wing. But, the lifting-wing
128
+ quadcopter is more like a quadcopter. The airframe orientation of the lifting wing needs to tilt a specific angle often
129
+ within 45◦, neither nearly 90◦ (corresponding to tail-sitter UAVs) nor approximately 0◦(corresponding to convertiplane
130
+ UAVs). Fig. 2 shows the full flight phase of the three VTOL UAVs. The design and performance evaluation of
131
+ lifting-wing quadcopters have been studied extensively in [5, 6]. In order to make this paper self-contained, we briefly
132
+ introduce the advantages of the lifting-wing quadcopter compared with some convertiplane and tail-sitter UAVs.
133
+ • Highly reliable structure. It does not require extra transition actuators. This is a reliable structure by eliminating
134
+ the need for complicated control.
135
+ • Robust wind resistance. It has a shorter lifting wing compared with the corresponding fixed wing of convertiplanes
136
+ and tail-sitter UAVs, because rotors can share the lift. Moreover, as shown in Fig. 4, it does not have a vertical
137
rudder. Instead, this function is replaced by the yaw control of the quadcopter component.
Fig. 2  Different flight modes of some VTOL UAVs: a) tail-sitter UAV, b) convertiplane UAV, c) lifting-wing quadcopter (① vertical take-off, ② transition, ③ level/forward flight, ④ vertical landing).
In order to improve the yaw control ability, the axes of rotors do not point only upward anymore, as shown in Fig. 4(a). This implies that
189
+ the thrust component by rotors can change the yaw directly rather than merely counting on the reaction torque of
190
+ rotors. From the above, the wind interference is significantly reduced on the one hand; on the other hand, the yaw
191
+ control ability is improved. As a result, it has better maneuverability and hover control to resist the disturbance of
192
+ wind than those by tail-sitter and convertiplane UAVs.
193
+ • Low cruise speed. It can make a cruise at a lower speed than that by convertiplanes and tail-sitter UAVs,
194
+ meanwhile saving energy compared with corresponding quadcopters. This is very useful when a UAV flies in
195
+ confined airspace such as a tunnel, where the high speed is very dangerous. Although current hybrid UAVs can
196
+ have a big or long wing for low cruise speed, they cannot work in many confined airspace due to their long
197
+ wingspan. However, the lifting-wing quadcopter can be small.
198
+ • Reliable transition flight. When a tail-sitter UAV performs transition flight, the velocity and angle of attack will
199
+ change dramatically, leading to complicated aerodynamics even stall. Unlike tail-sitter UAVs, the lifting-wing
200
+ quadcopter only has to tilt a specific angle often smaller than 45◦ rather than 90◦. The airflow on the lifting wing
201
+ is stable, and lift and drag are changed linearly with the angle of attack. These will avoid great difficulty (or say
202
+ danger) in controlling within the full flight envelope.
203
With the four features above, the lifting-wing quadcopter has the potential to work fully autonomously outdoors, or in some confined indoor airspace in place of a corresponding quadcopter. Further comparisons with the multicopter, multicopter tilt-rotor/wing convertiplane, multicopter dual-system convertiplane, and multicopter tail-sitter [2] are summarized in Tab. 1 and Fig. 3. As shown, the lifting-wing quadcopter possesses features between those of current hybrid UAVs and quadcopters, and it is more like a quadcopter.
Table 1  Comparison of different VTOL UAVs
UAV type                                    Endurance   Reliability   Wind Resistance at Hover   Flight Range
Multicopter tilt-rotor/wing convertiplane       4            2                    2                   4
Multicopter dual-system convertiplane           3            4                    2                   3
Multicopter tail-sitter                         4            4                    1                   4
Lifting-wing multicopter                        2            4                    4                   2
Multicopter                                     1            5                    5                   1
Note: bigger number implies better.
Fig. 3  Comparison of different VTOL UAVs.
257
+ B. Control of Current Hybrid UAVs
258
+ The control of the lifting-wing quadcopter has the following two distinguishing features.
259
+ • Unified control for full flight phases. Hybrid UAVs often have three different flight modes, including the hover,
260
the transition flight, and the forward flight. Taking the multicopter tilt-rotor/wing convertiplane, the multicopter dual-system convertiplane and the multicopter tail-sitter as examples, their take-off and landing are controlled only by
262
+ the quadcopter component, while the forward flight is controlled like a fixed-wing aircraft. The two control ways
263
+ are very different, so the transition flight is challenging due to the nonlinearities and uncertainties. However, a full
264
+ flight phase of the lifting-wing quadcopter always involves thrust by the quadcopter and aerodynamic force by the
265
lifting wing. Therefore, the lifting-wing quadcopter can be considered under only the transition flight mode in the
274
+ full flight phase (hover control here also will take the aerodynamic force into consideration due to wind on the
275
+ lifting wing). As a result, a unified control is needed. Fortunately, the lifting-wing quadcopter only needs to tilt a
276
+ specific angle often smaller than 45◦, rather than 90◦ like tail sitter UAVs. This reduces the possibility of having a
277
+ stall.
278
+ • Cooperative control for energy saving. The transition flight for current hybrid UAVs is very short, so not
279
too much attention needs to be paid to energy consumption in practice. However, it should be considered for the
280
+ lifting-wing quadcopter as it is under the transition flight mode in the full flight phase. Cooperative control for
281
+ energy saving is feasible. For example, roll control can be performed by both the quadcopter component and the
282
+ ailerons by the lifting wing. Obviously, the aileron control is more energy-saving.
283
+ Among the control phase of a hybrid UAV, the transition control is the most challenging issue [12], especially for
284
+ tail-sitter UAVs. Since the actuators of tail-sitter UAVs are like those of lifting-wing quadcopters, the existing transition
285
+ control of tail-sitter UAVs can be used for reference.
286
+ (i) Trajectory open-loop control for transition flight.
287
+ The open-loop trajectory tracking control is very straightforward. The principle is to make the UAV enter into
288
+ another mode’s condition by focusing on controlling some variables like altitude other than the trajectory,
289
+ then switch to the controller of the next mode. For example, increasing thrust and reducing its pitch angle at
290
+ the same time can make a tail-sitter UAV enter into forwarding flight [13, 14]. The aim is to keep the altitude
291
+ the same [14]. Because the transition time for tail-sitter UAVs is short, the trajectory will not change too
292
+ much. Obviously, this method is inapplicable to lifting-wing quadcopters.
293
+ (ii) Trajectory closed-loop control for transition flight.
294
+ • Linearization method based on optimization. According to the trajectory and the model, the reference
295
+ state and feedforward are derived by optimization in advance [15, 16]. Based on them, the linearization
296
+ can be performed. With the resulting linear model, existing controllers against uncertainties and
297
+ disturbance are designed [17, 18]. As for the lifting-wing quadcopter, this method is applicable when
298
+ the model and transition trajectory are known prior. Furthermore, cooperative control of the quadcopter
299
+ component or the ailerons of the lifting wing can be performed by taking energy-saving into optimization.
300
+ However, in practice, the model is often uncertain as the payload is often changed, such as parcel
301
+ delivery. Also, this method is not very flexible due to that the trajectory has to be known a priori.
302
+ • Nonlinear control method. One way is to take all aerodynamic forces as disturbances, and only the
303
+ quadcopter component works for the flight transition [19, 20]. This requires that the quadcopter has
304
+ a strong control ability to reject the aerodynamic force. Another way takes the aerodynamic force
305
into consideration explicitly to generate a proper attitude [21, 22]. To our knowledge, how to cooperatively control the quadcopter component and the fixed-wing actuators has not been addressed so far. This is because, we guess,
309
+ the transition flight is often short, and more attention is paid to making the UAV stable by reducing the
310
+ possibility of stall rather than optimization.
311
As shown above, the linearization method based on optimization is somewhat inflexible, although cooperative control for energy saving can be performed in an open-loop manner. The nonlinear control method is flexible but does not consider how to control cooperatively for energy saving.
314
+ C. Our Work and Contributions
315
+ In this paper, we will consider designing a unified controller for the full flight phase of a lifting-wing quadcopter.
316
+ What is more, the quadcopter component and the ailerons of the lifting wing work cooperatively to save energy. First,
317
+ we build the model of the lifting-wing quadcopter. Unlike the tail-sitter UAV, it does not have a rudder, and its tilted
318
+ rotors will generate force components on the XY-plane in the quadcopter-body coordinate frame (traditional quadcopters
319
+ do not have the force component on the XY-plane). Because of this, the translational dynamic involves five control
320
+ variables, namely three-dimensional force in the quadcopter-body coordinate frame and two Euler angles (pitch and roll
321
+ angles), further by considering the aerodynamic force determined by Euler angles. However, it is difficult and a bit
322
+ too early to determine the five control variables according to the three-dimensional desired acceleration, because it is
323
+ hard to obtain the bounds of these control variables. An improper choice may not be realized by actuators. To this
324
+ end, we only choose the 𝑜b𝑧b force (the main force component) in the quadcopter-body coordinate frame and two Euler
325
+ angles (pitch and roll angles) to determine the desired acceleration uniquely, leaving the other two force components as
326
+ a lumped disturbance. This adopts the controlling idea of quadcopters [7, 23], but the computation method is different
327
+ due to the existence of the aerodynamic force. With the determined Euler angles, moments are further determined in the
328
+ lifting wing coordinate frame. So far, the unified control for the full flight phase is accomplished. Finally, we will utilize
329
+ the control allocation to realize cooperative control for energy saving. The 𝑜b𝑧b force and three-dimensional moments
330
+ will be realized by four rotors and two ailerons. This is why we have the freedom to optimize the allocation for saving
331
+ energy. The principle behind this is to make the aerodynamic force (two ailerons) undertake the control task as much as
332
+ possible because aileron control is more energy-saving than rotor control. As a result, cooperative control for energy
333
+ saving is accomplished.
334
+ The contributions of this paper are: (i) establish the model of a lifting-wing quadcopter for the first time; (ii) a
335
+ unified controller design for the full flight phase of the lifting-wing quadcopter; (iii) control allocation for energy-saving
336
+ performance. Comprehensive HIL simulation experiments are performed to show (i) the proposed lifting-wing
337
+ quadcopter is more energy-saving with aileron; (ii) synthesizing the angular rate command from the coordinated turn in
338
+ high-speed flight can reduce sideslip; (iii) the transition phase of the proposed lifting-wing quadcopter is significantly
339
better than that of tail-sitter UAVs.
340
Fig. 4  Coordinate frames and nomenclatures: a) front view, b) left view, c) top view, d) 3D view.
388
+ II. Coordinate Frame
389
+ A lifting-wing quadcopter is divided into two components, the lifting-wing component and the quadcopter component
390
+ as shown in Fig. 4. According to these, the following coordinate frames are defined.
391
+ A. Earth-Fixed Coordinate Frame (eF )
392
+ The earth-fixed coordinate frame 𝑜e𝑥e𝑦e𝑧e is an inertial frame. The 𝑜e𝑧e axis points perpendicularly to the ground,
393
+ and the 𝑜e𝑥e axis points to a certain direction in the horizontal plane. Then, the 𝑜e𝑦e axis is determined according to the
394
+ right-hand rule. This frame is fixed, the initial position of the lifting-wing quadcopter or the center of the Earth is often
395
+ set as the coordinate origin 𝑜e.
396
+ B. Quadcopter-Body Coordinate Frame (bF )
397
+ The quadcopter-body coordinate frame 𝑜b𝑥b𝑦b𝑧b is fixed to the quadcopter component of a lifting-wing quadcopter.
398
+ The Center of Gravity (CoG) of the lifting-wing quadcopter is chosen as the origin 𝑜b of bF . The 𝑜b𝑥b axis points to
399
+ the nose direction in the symmetric plane of the quadcopter. The 𝑜b𝑧b axis is in the symmetric plane of the quadcopter,
400
+ pointing downward, perpendicular to the 𝑜b𝑥b axis. The 𝑜b𝑦b axis is determined according to the right-hand rule.
401
403
+ C. Lifting-Wing Coordinate Frame (lF )
404
+ The lifting-wing coordinate frame 𝑜l𝑥l𝑦l𝑧l is fixed to the lifting-wing component. The origin 𝑜l of lF is also set at
405
+ the CoG of the lifting-wing quadcopter. The 𝑜l𝑥l axis is in the symmetric plane pointing to the nose of the lifting wing.
406
+ The 𝑜l𝑧l axis is in the symmetric plane of the lifting wing, pointing downward, perpendicular to the 𝑜l𝑥l axis, and the
407
+ 𝑜l𝑦l axis is determined according to the right-hand rule. The installation angle of the lifting wing, that is the angle
408
+ between the 𝑜l𝑥l axis and the 𝑜l𝑥l𝑦l plane, is denoted by 𝜅 ∈ R as shown in Fig. 4(b).
409
+ D. Wind Coordinate Frame (wF )
410
+ The origin 𝑜w of the wind coordinate frame 𝑜w𝑥w𝑦w𝑧w is also at the CoG of the lifting-wing quadcopter. The 𝑜w𝑥w
411
+ axis is aligned with the airspeed vector. The 𝑜w𝑧w axis is in the symmetric plane of the lifting wing, pointing downward,
412
+ perpendicular to the 𝑜w𝑥w axis, and the 𝑜w𝑦w axis is determined according to the right-hand rule. The angle of attack
413
+ (AoA), denoted by 𝛼 ∈ R, is defined as the angle between the projection of the airspeed vector on the 𝑜l𝑥l𝑧l plane and
414
+ the 𝑜l𝑥l as shown in Fig. 4(b). The sideslip angle, denoted by 𝛽 ∈ R, is defined as the angle between the airspeed vector
415
+ and the 𝑜l𝑥l𝑧l plane as shown in Fig. 4(c).
416
+ To convert the aerodynamic forces and moments acting on frame wF and lF to bF respectively, two rotation
417
matrices are defined as follows:
$$
\mathbf{R}^{\mathrm{b}}_{\mathrm{w}}(\lambda)=
\begin{bmatrix}
\cos\lambda\cos\beta & -\cos\lambda\sin\beta & \sin\lambda \\
\sin\beta & \cos\beta & 0 \\
-\sin\lambda\cos\beta & \sin\lambda\sin\beta & \cos\lambda
\end{bmatrix},\qquad
\mathbf{R}^{\mathrm{b}}_{\mathrm{l}}=
\begin{bmatrix}
\cos\kappa & 0 & \sin\kappa \\
0 & 1 & 0 \\
-\sin\kappa & 0 & \cos\kappa
\end{bmatrix},
$$
where 𝜆 = 𝜅 − 𝛼. The rotation matrix $\mathbf{R}^{\mathrm{e}}_{\mathrm{b}}$ maps a vector from frame bF to eF and, for the ZXY rotation sequence used here, is defined by
$$
\mathbf{R}^{\mathrm{e}}_{\mathrm{b}}=
\begin{bmatrix}
\cos\theta\cos\psi-\sin\theta\sin\phi\sin\psi & -\sin\psi\cos\phi & \cos\psi\sin\theta+\cos\theta\sin\phi\sin\psi \\
\sin\theta\sin\phi\cos\psi+\cos\theta\sin\psi & \cos\phi\cos\psi & \sin\psi\sin\theta-\cos\psi\sin\phi\cos\theta \\
-\cos\phi\sin\theta & \sin\phi & \cos\phi\cos\theta
\end{bmatrix}.
$$
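For illustration only, the short Python sketch below (our own code, not from the paper; the ZXY factorization is our reading of the stated rotation sequence) builds the three rotation matrices numerically and checks that they are orthonormal.

```python
import numpy as np

def R_b_w(lam, beta):
    """R^b_w(lambda): wind frame -> quadcopter-body frame, with lambda = kappa - alpha."""
    cl, sl, cb, sb = np.cos(lam), np.sin(lam), np.cos(beta), np.sin(beta)
    return np.array([[cl*cb, -cl*sb, sl],
                     [sb,     cb,    0.0],
                     [-sl*cb, sl*sb, cl]])

def R_b_l(kappa):
    """R^b_l: lifting-wing frame -> quadcopter-body frame (constant tilt by kappa)."""
    ck, sk = np.cos(kappa), np.sin(kappa)
    return np.array([[ck, 0.0, sk], [0.0, 1.0, 0.0], [-sk, 0.0, ck]])

def R_e_b(phi, theta, psi):
    """R^e_b assuming the ZXY sequence: Rz(psi) @ Rx(phi) @ Ry(theta)."""
    Rz = np.array([[np.cos(psi), -np.sin(psi), 0], [np.sin(psi), np.cos(psi), 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, np.cos(phi), -np.sin(phi)], [0, np.sin(phi), np.cos(phi)]])
    Ry = np.array([[np.cos(theta), 0, np.sin(theta)], [0, 1, 0], [-np.sin(theta), 0, np.cos(theta)]])
    return Rz @ Rx @ Ry

if __name__ == "__main__":
    kappa = np.deg2rad(34.0)                       # installation angle from Table 2
    R = R_e_b(0.1, -0.3, 0.5) @ R_b_l(kappa)       # composite body-to-earth map of the wing frame
    assert np.allclose(R @ R.T, np.eye(3), atol=1e-12)  # rotation matrices stay orthonormal
```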
461
+ III. MODELING
462
+ A. Assumptions
463
+ For the sake of model simplicity, the following assumptions are made:
464
+ Assumption 1. The body structure is rigid and symmetric about the 𝑜l𝑥l𝑦l plane.
465
+ Assumption 2. The mass and the moments of inertia are constant.
466
+ Assumption 3. The geometric center of the lifting-wing quadcopter is the same as the CoG.
467
+ Assumption 4. The aircraft is only subjected to gravity, aerodynamic forces, and the forces generated by rotors.
468
470
+ B. Flight Control Rigid Model
471
By Assumptions 1-2, Newton's equation of motion is applied to get the translational motion as follows:
$$
{}^{\mathrm{e}}\dot{\mathbf{p}} = {}^{\mathrm{e}}\mathbf{v},\qquad
{}^{\mathrm{e}}\dot{\mathbf{v}} = \mathbf{R}^{\mathrm{e}}_{\mathrm{b}}\,\frac{{}^{\mathrm{b}}\mathbf{f}}{m} \tag{1}
$$
where ${}^{\mathrm{e}}\mathbf{p} = [\,p_{x\mathrm{e}}\ p_{y\mathrm{e}}\ p_{z\mathrm{e}}\,]^{\mathrm{T}}$ and ${}^{\mathrm{e}}\mathbf{v} = [\,v_{x\mathrm{e}}\ v_{y\mathrm{e}}\ v_{z\mathrm{e}}\,]^{\mathrm{T}}$ are the position and velocity expressed in frame eF, respectively; $m$ is the mass, and ${}^{\mathrm{b}}\mathbf{f}$ is the total force acting on the airframe expressed in frame bF.
To facilitate attitude control and combine the control characteristics of the rotor and lifting wing, the rotational dynamics is carried out in frame lF. It is given by Euler's equation of motion as
$$
\dot{\mathbf{R}}^{\mathrm{e}}_{\mathrm{l}} = \mathbf{R}^{\mathrm{e}}_{\mathrm{l}}\,[{}^{\mathrm{l}}\boldsymbol{\omega}]_{\times},\qquad
\mathbf{J}\,{}^{\mathrm{l}}\dot{\boldsymbol{\omega}} = {}^{\mathrm{l}}\mathbf{m} - {}^{\mathrm{l}}\boldsymbol{\omega}\times(\mathbf{J}\,{}^{\mathrm{l}}\boldsymbol{\omega}) \tag{2}
$$
where the rotation matrix is derived by $\mathbf{R}^{\mathrm{e}}_{\mathrm{l}} = \mathbf{R}^{\mathrm{e}}_{\mathrm{b}}\mathbf{R}^{\mathrm{b}}_{\mathrm{l}}$, with $\mathbf{R}^{\mathrm{b}}_{\mathrm{l}}$ a constant matrix; ${}^{\mathrm{l}}\mathbf{m}$ is the total moment acting on the airframe expressed in frame lF, ${}^{\mathrm{l}}\boldsymbol{\omega} = [\,\omega_{x\mathrm{l}}\ \omega_{y\mathrm{l}}\ \omega_{z\mathrm{l}}\,]^{\mathrm{T}}$ is the angular velocity in frame lF, and $[{}^{\mathrm{l}}\boldsymbol{\omega}]_{\times}$ denotes the skew-symmetric matrix
$$
[{}^{\mathrm{l}}\boldsymbol{\omega}]_{\times} =
\begin{bmatrix} 0 & -\omega_{z\mathrm{l}} & \omega_{y\mathrm{l}} \\ \omega_{z\mathrm{l}} & 0 & -\omega_{x\mathrm{l}} \\ -\omega_{y\mathrm{l}} & \omega_{x\mathrm{l}} & 0 \end{bmatrix},
$$
and $\mathbf{J}\in\mathbb{R}^{3\times 3}$ is the inertia matrix given by
$$
\mathbf{J} =
\begin{bmatrix} J_{x} & 0 & -J_{xz} \\ 0 & J_{y} & 0 \\ -J_{xz} & 0 & J_{z} \end{bmatrix}.
$$
538
+ C. Forces and Moments
539
+ By Assumptions 3-4, the total forces and moments acting on the UAV are decomposed into three parts: the
540
+ aerodynamic forces and moments acting on the airframe (fa and ma), the forces and moments generated by rotors (fr and
541
+ mr), and the gravitational forces f𝑔, where f𝑔 = [0 0 𝑔]T, 𝑔 is the gravitational acceleration. The front two types of
542
forces and moments will be described in detail in the following two subsections.
543
Fig. 5  Forces acting on a lifting-wing quadcopter (rotor thrusts 𝑇1–𝑇4, lift 𝑓𝐿, drag 𝑓𝐷, lateral force 𝑓𝑌, and the airspeed components in lF).
569
+ 1. Forces and Moments in the Quadcopter Component
570
+ In the quadcopter part, the thrust and torque produced by one rotor are given by
571
$$
T_i = K_f\,\varpi_i^2,\qquad M_i = K_m\,\varpi_i^2 = \frac{K_m}{K_f}\,T_i \tag{3}
$$
where 𝐾𝑓 > 0 is the lift force coefficient, 𝐾𝑚 > 0 is the drag torque coefficient, and 𝜛𝑖 is the angular rate of the 𝑖th rotor, 𝑖 = 1, 2, 3, 4. In order to improve the controllability during performing yaw, an installation angle 𝜂 is set as shown in Fig. 4(a). The left motors tilt to the positive left and the right motors tilt to the positive right.
Because of the installation angle 𝜂, the forces and moments produced by the rotors are expressed by the thrust on each propeller 𝑇𝑖 as
$$
\begin{bmatrix} f_{ry}\\ f_{rz}\\ m_{rx}\\ m_{ry}\\ m_{rz} \end{bmatrix}
=
\begin{bmatrix}
\sin\eta & -\sin\eta & -\sin\eta & \sin\eta \\
-\cos\eta & -\cos\eta & -\cos\eta & -\cos\eta \\
-d_y\cos\eta & d_y\cos\eta & d_y\cos\eta & -d_y\cos\eta \\
d_x\cos\eta & -d_x\cos\eta & d_x\cos\eta & -d_x\cos\eta \\
K_1 & K_1 & -K_1 & -K_1
\end{bmatrix}
\begin{bmatrix} T_1\\ T_2\\ T_3\\ T_4 \end{bmatrix} \tag{4}
$$
where $K_1 = K_m/K_f + d_x\sin\eta$, and 𝑑𝑥 and 𝑑𝑦 are the components of the distance from the center of the lifting-wing quadcopter to a propeller on the 𝑜b𝑥b𝑦b plane, as shown in Fig. 4(c).
622
+ The aerodynamic forces and moments acting on the lifting wing are mainly generated by the lifting wing itself and
623
+ the ailerons at the trailing edge, as shown in Fig. 5.
624
+ 11
625
+
626
+ Let evw be the wind velocity in eF . Then
627
+ bva = bv − (Re
628
+ b)T · evw.
629
+ (5)
630
+ Thus, the airspeed vector lva =
631
+
632
+ 𝑣a𝑥 𝑣a𝑦 𝑣a𝑧
633
+ �T and airspeed 𝑉𝑎 are defined as
634
+ lva = (Rb
635
+ l )T · bva,
636
+ (6)
637
+ 𝑉a =
638
+ √︃
639
+ 𝑣2a𝑥 + 𝑣2a𝑦 + 𝑣2a𝑧.
640
+ (7)
641
+ The aerodynamic angles 𝛼 and 𝛽 are defined as
642
+ 𝛼 = tan−1( 𝑣a𝑧
643
+ 𝑣a𝑥
644
+ ), 𝛽 = sin−1(
645
+ 𝑣a𝑦
646
+ 𝑉a
647
+ ).
648
+ (8)
649
+ In the longitudinal plane, lift, drag, and pitching moment acting on the lifting-wing body are given by
650
+ 𝑓𝐿 = 𝑄𝑆(𝐶𝐿 + 𝐶𝐿 𝛿𝑒𝛿𝑒)
651
+ 𝑓𝐷 = 𝑄𝑆(𝐶𝐷 + 𝐶𝐷 𝛿𝑒𝛿𝑒)
652
+ 𝑚 = 𝑄𝑆𝑐(𝐶𝑚 + 𝐶𝑚𝛿𝑒𝛿𝑒).
653
+ (9)
654
+ The lateral force and the roll and yaw moments acting on the lifting-wing body are given by
655
+ 𝑓𝑌 = 𝑄𝑆(𝐶𝑌 + 𝐶𝑌 𝛿𝑎𝛿𝑎)
656
+ 𝑙 = 𝑄𝑆𝑏(𝐶𝑙 + 𝐶𝑙 𝛿𝑎𝛿𝑎)
657
+ 𝑛 = 𝑄𝑆𝑏(𝐶𝑛 + 𝐶𝑛𝛿𝑎𝛿𝑎)
658
+ (10)
659
+ where 𝑄 = 1
660
+ 2 𝜌𝑉2
661
+ 𝑎; 𝐶𝐿, 𝐶𝐷, 𝐶𝑚,𝐶𝑌 , 𝐶𝑙 and 𝐶𝑛 are nondimensional aerodynamic coefficients, 𝐶𝐿 𝛿𝑒, 𝐶𝑚𝛿𝑒, 𝐶𝐷 𝛿𝑒,𝐶𝑌 𝛿𝑎,
662
+ 𝐶𝑛𝛿𝑎 and 𝐶𝑙𝛿𝑎 are control derivative; 𝑆 is the area of the lifting wing, 𝑐 is the mean chord of the lifting wing, 𝑏 is the
663
+ wingspan of the lifting-wing aircraft, 𝛿𝑒 and 𝛿𝑎 are calculated using the right and the left aileron (𝛿𝑎𝑟 and 𝛿𝑎𝑙, as shown
664
+ in Fig. 4(d)) as
665
+
666
+ 𝛿𝑒
667
+ 𝛿𝑎
668
+
669
+ =
670
+
671
+ 1
672
+ 1
673
+ −1
674
+ 1
675
+ � �
676
+ 𝛿𝑎𝑟
677
+ 𝛿𝑎𝑙
678
+
679
+ .
680
+ (11)
681
+ The external forces and moments are summarized as
682
+ 12
683
+
684
+ Table 2
685
+ Lifting-wing quadcopter structure parameters, lift force and drag torque coefficients
686
+ 𝑚
687
+ Aircraft mass
688
+ 1.92 kg
689
+ 𝜅
690
+ Installation angle of lifting wing
691
+ 34 deg
692
+ 𝜂
693
+ Installation angle of motor
694
+ 10 deg
695
+ 𝑑𝑥
696
+ The distance from 𝑜𝑏𝑥𝑏 to a propeller
697
+ 0.25 m
698
+ 𝑑𝑦
699
+ The distance from 𝑜𝑏𝑦𝑏 to a propeller
700
+ 0.2125 m
701
+ [𝐽𝑥𝑥 𝐽𝑦𝑦 𝐽𝑧𝑧]
702
+ Moment of inertia
703
+ [5.12 5.54 7.6] × 10−2 kg · m2
704
+ 𝑏
705
+ Wingspan of the lifting-wing aircraft
706
+ 0.94 m
707
+ 𝑐
708
+ Mean chord of the lifting wing
709
+ 0.17 m
710
+ 𝐾𝑚
711
+ Drag moment coefficient
712
+ 5.875e-07 kg · m2
713
+ 𝐾 𝑓
714
+ Lift force coefficient
715
+ 2.824e-05 kg · m2
716
+ bf =
717
+ ��������
718
+ 0
719
+ 𝑓𝑟𝑦
720
+ 𝑓𝑟𝑧
721
+ ��������
722
+ + 𝑄𝑆Rb
723
+ w
724
+ ��������
725
+ −(𝐶𝐷 + 𝐶𝐷 𝛿𝑒𝛿𝑒)
726
+ (𝐶𝑌 + 𝐶𝑌 𝛿𝑎𝛿𝑎)
727
+ −(𝐶𝐿 + 𝐶𝐿 𝛿𝑒𝛿𝑒)
728
+ ��������
729
+ + 𝑚Rb
730
+ ef𝑔
731
+ (12)
732
+ lm = Rl
733
+ b
734
+ ��������
735
+ 𝑚𝑟𝑥
736
+ 𝑚𝑟𝑦
737
+ 𝑚𝑟𝑧
738
+ ��������
739
+ + 𝑄𝑆
740
+ ��������
741
+ 𝑏(𝐶𝑙 + 𝐶𝑙 𝛿𝑎𝛿𝑎)
742
+ 𝑐(𝐶𝑚 + 𝐶𝑚𝛿𝑒𝛿𝑒)
743
+ 𝑏(𝐶𝑛 + 𝐶𝑛𝛿𝑎𝛿𝑎)
744
+ ��������
745
+ .
746
+ (13)
747
+ The structure parameters, lift force and drag torque coefficients of the lifting-wing quadcopter are given in Tab.2.
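To make the bookkeeping of Eqs. (12)-(13) concrete, the following sketch assembles the total body-frame force and lifting-wing-frame moment (our own illustrative code; the `coef` dictionary keys are placeholder names for the coefficients and control derivatives evaluated at the current flight condition):

```python
import numpy as np

def body_force_lw_moment(f_ry, f_rz, m_r, R_b_w, R_l_b, R_b_e, Q, S, b, c, coef, de, da, m, g=9.81):
    """Total force in bF (Eq. (12)) and total moment in lF (Eq. (13)).
    R_b_e maps earth-frame vectors into the body frame; m_r = [m_rx, m_ry, m_rz] from Eq. (4)."""
    f_aero_w = np.array([-(coef["CD"] + coef["CDde"] * de),
                          (coef["CY"] + coef["CYda"] * da),
                         -(coef["CL"] + coef["CLde"] * de)])
    f_g_e = np.array([0.0, 0.0, g])                                  # gravity in the earth frame
    f_b = np.array([0.0, f_ry, f_rz]) + Q * S * (R_b_w @ f_aero_w) + m * (R_b_e @ f_g_e)
    m_aero_l = Q * S * np.array([b * (coef["Cl"] + coef["Clda"] * da),
                                 c * (coef["Cm"] + coef["Cmde"] * de),
                                 b * (coef["Cn"] + coef["Cnda"] * da)])
    m_l = R_l_b @ np.asarray(m_r) + m_aero_l
    return f_b, m_l
```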
748
+ IV. CONTROLLER DESIGN
749
+ The successive loop closure is a common control architecture for UAVs [24], which consists of an outer-loop
750
+ controlling the position and an inner-loop for attitude, as illustrated in Fig. 6. The basic idea behind successive
751
+ loop closure is to close several simple feedback loops in succession around the open-loop plant dynamics rather than
752
+ designing a single control system. The position controller receives the desired position and then computes the desired
753
+ acceleration. Then the desired acceleration is mapped to the collective thrust and attitude. The attitude command is sent
754
+ to the inner-loop, while the thrust command skips directly to the control allocation. To facilitate the control experiment
755
+ step by step, the attitude can also be commanded by the pilot. Furthermore, the attitude controller receives the desired
756
+ attitude and generates the desired moment. Finally, the control allocation algorithm distributes the moment command
757
+ from the inner-loop and the direct force command from the outer-loop to corresponding ailerons and rotors.
758
Fig. 6  Control structure: the position controller produces the attitude command and direct force command, the attitude controller produces the desired moment, and the control allocation drives the lifting-wing quadcopter dynamics; manual inputs and signal distribution provide an alternative attitude command source.
796
+ Since 𝛼, 𝛽 are not easy to obtain, we consider that 𝛼 ≈ 𝜅 + 𝜃 and 𝛽 ≈ 0. The translational dynamic involves five
797
+ control variables, namely three-dimensional forces in frame bF and two Euler angles (pitch and roll angles). However, it
798
+ is a bit too early to determine the five control variables according to the three-dimensional desired acceleration because
799
+ it is hard to obtain the bounds of these control variables. An improper choice may not be realized by ailerons. To
800
+ this end, we only choose 𝑓𝑧 (the main force component) in frame bF and two Euler angles (pitch and roll angles) to
801
+ determine the desired acceleration uniquely, and the desired yaw angle is directly specified, leaving 𝑓𝑦 as a disturbance.
802
+ This adopts the controlling idea of quadcopters[7, 23], but the computation method is different due to the existence of
803
+ the aerodynamic force. According to the idea above, we rewrite the system Eqs. (1) and (2) in the form as
804
+ e �p = ev
805
+ e�v = u + g + d1
806
+ �Re
807
+ l = Re
808
+ l
809
+ �l𝝎
810
+
811
+ ×
812
+ l �𝝎 = J−1 · lm + d2
813
+ .
814
+ (14)
815
+ Here
816
+ u =
817
+ Re
818
+ b
819
+ 𝑚
820
+ ����
821
+
822
+ ��������
823
+ 0
824
+ 0
825
+ − 𝑓𝑧
826
+ ��������
827
+ + 𝑄𝑆Rb
828
+ w
829
+ ��������
830
+ −𝐶𝐷
831
+ 0
832
+ −𝐶𝐿
833
+ ��������
834
+ ����
835
+
836
+ ,
837
+ and d1, d2 are disturbances, where
838
+ d1 =
839
+ Re
840
+ b
841
+ 𝑚
842
+ ����
843
+
844
+ ��������
845
+ 0
846
+ 𝑓𝑦
847
+ 0
848
+ ��������
849
+ + 𝑄𝑆Rb
850
+ w
851
+ ��������
852
+ −𝐶𝐷 𝛿𝑒𝛿𝑒
853
+ (𝐶𝑌 + 𝐶𝑌 𝛿𝑎𝛿𝑎)
854
+ −𝐶𝐿 𝛿𝑒𝛿𝑒
855
+ ��������
856
+ ����
857
+
858
+ d2 = −J−1 · l𝝎 × (J · l𝝎).
859
+ 14
860
+
861
+ B. Position Control
862
Given a twice differentiable trajectory $\mathbf{p}_{\mathrm{d}}(t)$, in order to satisfy $\lim_{t\to\infty}\lVert {}^{\mathrm{e}}\mathbf{p}(t) - \mathbf{p}_{\mathrm{d}}(t)\rVert = 0$, the desired $\mathbf{u}_{\mathrm{d}}$ for Eq. (14) can be designed as a PID controller in the form
$$
\mathbf{u}_{\mathrm{d}} = -\mathbf{g} + \ddot{\mathbf{p}}_{\mathrm{d}}
- \mathbf{K}_{\mathrm{Pd}}({}^{\mathrm{e}}\mathbf{v} - \dot{\mathbf{p}}_{\mathrm{d}})
- \mathbf{K}_{\mathrm{Pp}}({}^{\mathrm{e}}\mathbf{p} - \mathbf{p}_{\mathrm{d}})
- \mathbf{K}_{\mathrm{Pi}}\!\int ({}^{\mathrm{e}}\mathbf{p} - \mathbf{p}_{\mathrm{d}})\,\mathrm{d}s \tag{15}
$$
where $\mathbf{K}_{\mathrm{Pp}}, \mathbf{K}_{\mathrm{Pi}}, \mathbf{K}_{\mathrm{Pd}} \in \mathbb{R}^{3\times 3}$ are diagonal matrices acting as control gains. The remaining work is to determine the desired rotor thrust $f_{\mathrm{d}} \in \Omega_f$ and $\theta_{\mathrm{d}}, \phi_{\mathrm{d}} \in \Omega_a$ such that
$$
(f_{\mathrm{d}}, \theta_{\mathrm{d}}, \phi_{\mathrm{d}}) =
\arg\min_{f_z \in \Omega_f,\ \theta,\phi \in \Omega_a} \lVert \mathbf{u}(f_z, \theta, \phi) - \mathbf{u}_{\mathrm{d}} \rVert \tag{16}
$$
where $\Omega_f$ is a set to confine the force, and $\Omega_a$ is a set to confine the pitch and roll. In order to reduce drag, the vehicle's nose should be consistent with the current direction of the vehicle velocity, that is
$$
\psi_{\mathrm{d}} = \tan^{-1}\!\left(\frac{v_{y\mathrm{e}}}{v_{x\mathrm{e}}}\right). \tag{17}
$$
884
+ The attitude command can also be given by the pilot, in case the position controller fails with GPS denied, as shown
885
+ in Fig. 6. Finally, the desired attitude is given as 𝚯d = [𝜙d 𝜃d 𝜓d]T.
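A compact sketch of this outer loop is given below (our own code, not the autopilot implementation; `u_model` is an assumed user-supplied callable that evaluates $\mathbf{u}(f_z,\theta,\phi)$ from Eq. (14), and a generic bounded optimizer stands in for whatever solver is used onboard for Eq. (16)):

```python
import numpy as np
from scipy.optimize import minimize

def position_pid(p, v, p_d, v_d, a_d, integ, Kp, Ki, Kd, g=np.array([0.0, 0.0, 9.81])):
    """Desired acceleration command u_d per Eq. (15); `integ` is the accumulated position error."""
    return -g + a_d - Kd @ (v - v_d) - Kp @ (p - p_d) - Ki @ integ

def solve_fz_attitude(u_d, u_model, f_bounds, ang_bounds, x0):
    """Numerically solve Eq. (16) for (f_z, theta, phi) within the sets Omega_f and Omega_a."""
    cost = lambda x: np.linalg.norm(u_model(x[0], x[1], x[2]) - u_d)
    res = minimize(cost, x0, bounds=[f_bounds, ang_bounds, ang_bounds])
    f_d, theta_d, phi_d = res.x
    return f_d, theta_d, phi_d
```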
886
+ C. Attitude Control
887
+ The attitude controller generates the desired moment from the output of the position controller or the attitude given
888
+ by the pilot, as shown in Fig. 6. As far as we know, studies about the hybrid UAV hardly consider the lateral control,
889
+ such as turning right or left. As for the considered UAV, the control on yaw is quite different between the multicopter
890
+ mode and the fixed-wing mode. To establish a unified control, the attitude control is performed on the lifting-wing
891
+ frame lF , so l𝚯d = [𝜙d 𝜃d + 𝜅 𝜓d]T.
892
+ 1. Basic Attitude Control
893
+ The attitude error is presented in the form of quaternion based on which the corresponding controller is designed.
894
+ This can guarantee a uniform and good convergence rate for all initial attitude errors [25]
895
$$
\mathbf{q}_{\mathrm{e}} = \mathbf{q}_{\mathrm{d}}^{*} \otimes \mathbf{q}^{\mathrm{e}}_{\mathrm{l}}
= [\,q_{\mathrm{e}0}\ q_{\mathrm{e}1}\ q_{\mathrm{e}2}\ q_{\mathrm{e}3}\,]^{\mathrm{T}}, \tag{18}
$$
where $\mathbf{q}_{\mathrm{d}}$ is transformed from ${}^{\mathrm{l}}\boldsymbol{\Theta}_{\mathrm{d}}$ with the 'ZXY' rotational sequence, $(\cdot)^{*}$ is the conjugate of a quaternion, and $\otimes$ is the quaternion product. Then $\mathbf{q}_{\mathrm{e}}$ is transformed into the axis-angle form $\mathbf{q}_{\mathrm{e}} = [\,\cos\tfrac{\vartheta}{2}\ \ \boldsymbol{\xi}_{\mathrm{e}}^{\mathrm{T}}\sin\tfrac{\vartheta}{2}\,]^{\mathrm{T}}$ by
$$
\vartheta = \mathrm{wrap}_{\pi}\!\bigl(2\,\mathrm{acos}(q_{\mathrm{e}0})\bigr),\qquad
\boldsymbol{\xi}_{\mathrm{e}} =
\begin{cases}
[\,0\ 0\ 0\,]^{\mathrm{T}}, & \vartheta = 0 \\[4pt]
\dfrac{\mathrm{sign}(q_{\mathrm{e}0})\,\vartheta}{\sin(\vartheta/2)}\,[\,q_{\mathrm{e}1}\ q_{\mathrm{e}2}\ q_{\mathrm{e}3}\,]^{\mathrm{T}}, & \vartheta \neq 0.
\end{cases} \tag{19}
$$
927
+ The function wrap𝜋(𝜗) constrains the 𝜗 in [−𝜋 𝜋] to ensure the shortest rotation path. To eliminate the attitude error,
928
+ the attitude control is designed as
929
$$
{}^{\mathrm{l}}\boldsymbol{\omega}_{\mathrm{ac}} = \mathrm{sat}\!\left(\mathbf{K}_{\boldsymbol{\Theta}\mathrm{p}}\,\boldsymbol{\xi}_{\mathrm{e}},\ \boldsymbol{\omega}_{\min},\ \boldsymbol{\omega}_{\max}\right) \tag{20}
$$
where $\mathbf{K}_{\boldsymbol{\Theta}\mathrm{p}} \in \mathbb{R}^{3\times 3}$ is a diagonal matrix acting as the control gain, $\boldsymbol{\omega}_{\min}, \boldsymbol{\omega}_{\max} \in \mathbb{R}^{3}$ are the minimum and maximum angular control rates, and the function $\mathrm{sat}(\mathbf{x}, \mathbf{x}_{\min}, \mathbf{x}_{\max})$ is defined element-wise as
$$
\mathrm{sat}(\mathbf{x}, \mathbf{x}_{\min}, \mathbf{x}_{\max}) \triangleq
\begin{bmatrix}
\mathrm{sat}(x_1, x_{1,\min}, x_{1,\max}) \\ \vdots \\ \mathrm{sat}(x_n, x_{n,\min}, x_{n,\max})
\end{bmatrix},\qquad
\mathrm{sat}(x_k, x_{k,\min}, x_{k,\max}) \triangleq
\begin{cases}
x_{k,\min}, & x_k < x_{k,\min} \\
x_{k,\max}, & x_k > x_{k,\max} \\
x_k, & \text{else.}
\end{cases} \tag{21}
$$
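A short Python sketch of the basic attitude loop, mirroring Eqs. (18)-(20), is given below (our own code; quaternions are in scalar-first form, and the small-angle guard is an implementation detail we added):

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two scalar-first quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def attitude_rate_cmd(q_d, q, K_theta_p, w_min, w_max):
    """Angular-rate command from the quaternion attitude error, Eqs. (18)-(20)."""
    q_e = quat_mul(np.array([q_d[0], -q_d[1], -q_d[2], -q_d[3]]), q)   # q_d^* (x) q, Eq. (18)
    theta = 2.0 * np.arccos(np.clip(q_e[0], -1.0, 1.0))
    theta = (theta + np.pi) % (2.0 * np.pi) - np.pi                     # wrap to [-pi, pi)
    if abs(np.sin(theta / 2.0)) < 1e-9:
        xi_e = np.zeros(3)                                              # Eq. (19), zero-error case
    else:
        xi_e = np.sign(q_e[0]) * theta / np.sin(theta / 2.0) * q_e[1:]  # Eq. (19)
    return np.clip(K_theta_p @ xi_e, w_min, w_max)                      # Eq. (20)
```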
954
+ 2. Lateral Compensation
955
+ When the UAV is in high-speed flight, a roll command given to track a specified trajectory will cause a lateral skid.
956
+ In order to reduce the sideslip angle when the UAV turns at a high speed, the coordinated turn should be considered. In
957
+ lF frame, if it is assumed that there is no wind, the coordinated turn equation is expressed as [24]
958
$$
\dot{\psi}_{\mathrm{d}} = \frac{g\,\tan\phi}{V_a}. \tag{22}
$$
It should be noted that the Euler angles here are the attitude representation associated with frame lF. So the desired yaw rate generated by the coordinated turn is
$$
{}^{\mathrm{l}}\omega_{\mathrm{ct}} = \dot{\psi}_{\mathrm{d}} \cos\theta \cos\phi. \tag{23}
$$
966
+ 3. Angular Rate Command Synthesis
967
+ In the lifting-wing coordinated turn, 𝑉𝑎 being zero makes no sense. In addition, considering that coordinated turn
968
+ should not be used when airspeed is small, a weight coefficient related to airspeed is added. So the desired angular rates
969
+ are rewritten as
970
$$
{}^{\mathrm{l}}\boldsymbol{\omega}_{\mathrm{d}} =
\bigl[\,\omega_{\mathrm{dac},x}\ \ \omega_{\mathrm{dac},y}\ \ \omega_{\mathrm{dac},z} + w\,{}^{\mathrm{l}}\omega_{\mathrm{ct}}\,\bigr]^{\mathrm{T}} \tag{24}
$$
where $w = \mathrm{sat}\!\left(\dfrac{V_a - V_{\min}}{V_{\max} - V_{\min}},\ 0,\ 1\right)$. When the airspeed is slower than $V_{\min}$, the desired yaw rate is completely decided by
985
+ the basic attitude controller. In contrast, when l𝜔d𝑧 = 0 and the airspeed reaches the specified value 𝑉max, the desired
986
+ yaw rate is completely decided by the coordinated turn.
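The blending of Eqs. (22)-(24) can be sketched as follows (our own illustrative code; the 6 m/s and 10 m/s thresholds are assumptions consistent with the 6-10 m/s band annotated in Fig. 11, and the airspeed guard is ours):

```python
import numpy as np

def yaw_rate_with_coordinated_turn(w_ac, phi, theta, Va, V_min=6.0, V_max=10.0, g=9.81):
    """Blend the attitude-loop rate command with the coordinated-turn feedforward, Eqs. (22)-(24)."""
    psi_dot_d = g * np.tan(phi) / max(Va, 0.1)          # Eq. (22), guarded against Va ~ 0
    w_ct = psi_dot_d * np.cos(theta) * np.cos(phi)      # Eq. (23)
    w = np.clip((Va - V_min) / (V_max - V_min), 0.0, 1.0)  # airspeed-dependent weight
    return np.array([w_ac[0], w_ac[1], w_ac[2] + w * w_ct])  # Eq. (24)
```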
987
+ 4. Attitude Rate Control
988
+ To eliminate the attitude rate error, the controller is designed as
989
$$
{}^{\mathrm{l}}\mathbf{m}_{\mathrm{d}} = \mathrm{sat}\!\Bigl(
\mathbf{J}\bigl(-\mathbf{K}_{\boldsymbol{\omega}\mathrm{p}}({}^{\mathrm{l}}\boldsymbol{\omega} - {}^{\mathrm{l}}\boldsymbol{\omega}_{\mathrm{d}})
- \mathbf{m}_{\mathrm{d,I}}
- \mathbf{K}_{\boldsymbol{\omega}\mathrm{d}}({}^{\mathrm{l}}\dot{\boldsymbol{\omega}} - {}^{\mathrm{l}}\dot{\boldsymbol{\omega}}_{\mathrm{d}})\bigr),\ -\mathbf{m}_{\mathrm{d,max}},\ \mathbf{m}_{\mathrm{d,max}}\Bigr) \tag{25}
$$
where $\mathbf{m}_{\mathrm{d,I}} = \mathrm{sat}\!\left(\mathbf{K}_{\boldsymbol{\omega}\mathrm{i}}\!\int({}^{\mathrm{l}}\boldsymbol{\omega} - {}^{\mathrm{l}}\boldsymbol{\omega}_{\mathrm{d}})\,\mathrm{d}s,\ -\mathbf{m}_{\mathrm{d,Imax}},\ \mathbf{m}_{\mathrm{d,Imax}}\right)$, and $\mathbf{K}_{\boldsymbol{\omega}\mathrm{p}}, \mathbf{K}_{\boldsymbol{\omega}\mathrm{i}}, \mathbf{K}_{\boldsymbol{\omega}\mathrm{d}} \in \mathbb{R}^{3\times 3}$ are diagonal matrices acting as
1001
+ control gains, md,Imax is maximum amplitude of integral action, md,max is the maximum moment generated by actuators.
1002
+ So far, we have obtained 𝑓d and lmd, which will be further realized by 𝑇1, · · · , 𝑇4, 𝛿𝑎𝑟, 𝛿𝑎𝑙.
1003
+ D. Control Allocation
1004
+ The lifting-wing quadcopter is an over-actuated aircraft, which provides six independent control inputs, namely
1005
+ 𝑇1, · · · , 𝑇4, 𝛿𝑎𝑟, 𝛿𝑎𝑙 to meet a specific thrust 𝑓d ∈ R and moment demand lmd ∈ R3. A method of control allocation
1006
+ based on optimization is proposed. Recalling the control in translational dynamic, if we determined the three-dimensional
1007
+ force in bF and two Euler angles (pitch and roll angles) by an optimization before, then the six control variables (plus
1008
+ desired yaw angle) will be determined by six actuators uniquely. If so, however, the optimization in the control of the
1009
+ translational dynamic is not related to energy-saving directly. This is why we only choose the 𝑜b𝑧b force (the main force
1010
+ component) and two Euler angles (pitch and roll angles) to determine the desired acceleration as in Eq.(16).
1011
First, Eqs. (12) and (13) are rearranged as
$$
\underbrace{\begin{bmatrix} f_z \\ {}^{\mathrm{l}}m_x - QSb\,C_l \\ {}^{\mathrm{l}}m_y - QSc\,C_m \\ {}^{\mathrm{l}}m_z - QSb\,C_n \end{bmatrix}}_{\mathbf{u}_{\mathrm{v}}}
=
\underbrace{\begin{bmatrix}
-\cos\eta & -\cos\eta & -\cos\eta & -\cos\eta & 0 & 0 \\
-K_2 & K_3 & K_2 & -K_3 & -QSb\,C_{l\delta_a} & QSb\,C_{l\delta_a} \\
d_x\cos\eta & -d_x\cos\eta & d_x\cos\eta & -d_x\cos\eta & QSc\,C_{m\delta_e} & QSc\,C_{m\delta_e} \\
-K_5 & K_4 & K_5 & -K_4 & -QSb\,C_{n\delta_a} & QSb\,C_{n\delta_a}
\end{bmatrix}}_{\mathbf{B}}
\underbrace{\begin{bmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \\ \delta_{ar} \\ \delta_{al} \end{bmatrix}}_{\boldsymbol{\delta}} \tag{26}
$$
where $K_2 = d_y\cos\eta\cos\kappa + K_1\sin\kappa$, $K_3 = d_y\cos\eta\cos\kappa - K_1\sin\kappa$, $K_4 = d_y\cos\eta\sin\kappa + K_1\cos\kappa$, and $K_5 = d_y\cos\eta\sin\kappa - K_1\cos\kappa$. The vector $\boldsymbol{\delta}$ is the control input of the actuators, $\mathbf{u}_{\mathrm{v}}$ is the virtual control, and $\mathbf{B}$ is the control efficiency matrix.
1062
+ As shown in the Eq.(26), rank (B) = 4, the dimension of 𝜹 is 6, which is higher than that of uv, so Eq.(26) has a
1063
+ minimum norm solution. To take the control priority of the uv components, actuator priority of 𝜹 and actuator saturation
1064
1066
+ under consideration, the control allocation is formulated to be an optimization problem as
1067
$$
\begin{aligned}
\min_{\boldsymbol{\delta}}\quad & \bigl\lVert \mathbf{W}_{\mathrm{u}}\,(\mathbf{B}\boldsymbol{\delta} - \mathbf{u}_{\mathrm{v,d}}) \bigr\rVert^{2}
+ \gamma\,\bigl\lVert \mathbf{W}_{\boldsymbol{\delta}}\,(\boldsymbol{\delta} - \boldsymbol{\delta}_{\mathrm{p}}) \bigr\rVert^{2} \\
\text{s.t.}\quad & \underline{\boldsymbol{\delta}} \leqslant \boldsymbol{\delta} \leqslant \bar{\boldsymbol{\delta}}
\end{aligned} \tag{27}
$$
where $\mathbf{u}_{\mathrm{v,d}} = [\,f_{\mathrm{d}}\ \ {}^{\mathrm{l}}m_{\mathrm{d}x} - QSb\,C_l\ \ {}^{\mathrm{l}}m_{\mathrm{d}y} - QSc\,C_m\ \ {}^{\mathrm{l}}m_{\mathrm{d}z} - QSb\,C_n\,]^{\mathrm{T}}$ is the desired virtual control, $\boldsymbol{\delta}_{\mathrm{p}}$ is the preferred control vector which will be specified later, $\mathbf{W}_{\mathrm{u}} \in \mathbb{R}^{4\times 4}$ is a positive definite weighting matrix that prioritizes the commands in case the desired virtual input $\mathbf{u}_{\mathrm{v,d}}$ cannot be achieved, $\mathbf{W}_{\boldsymbol{\delta}} \in \mathbb{R}^{6\times 6}$ is a positive definite weighting matrix that prioritizes the different actuators, $\underline{\boldsymbol{\delta}} = \max(\boldsymbol{\delta}_{\min}, \boldsymbol{\delta}_{\mathrm{l}} - \Delta\boldsymbol{\delta})$ and $\bar{\boldsymbol{\delta}} = \min(\boldsymbol{\delta}_{\max}, \boldsymbol{\delta}_{\mathrm{l}} + \Delta\boldsymbol{\delta})$ are the lower and upper bounds at each sampling instant of the actuators, $\boldsymbol{\delta}_{\min}, \boldsymbol{\delta}_{\max} \in \mathbb{R}^{6}$ are the actuator position limits, $\Delta\boldsymbol{\delta} \in \mathbb{R}^{6}$ is the rate limit, and $\boldsymbol{\delta}_{\mathrm{l}}$ is the last value of $\boldsymbol{\delta}$.
1086
There are two optimization objectives in Eq. (27), namely $\lVert \mathbf{W}_{\mathrm{u}}(\mathbf{B}\boldsymbol{\delta} - \mathbf{u}_{\mathrm{v,d}})\rVert^{2}$ and $\lVert \mathbf{W}_{\boldsymbol{\delta}}(\boldsymbol{\delta} - \boldsymbol{\delta}_{\mathrm{p}})\rVert^{2}$. The first one is the primary objective of minimizing the allocation error weighted by $\mathbf{W}_{\mathrm{u}}$, so the weighting factor $\gamma$ is often chosen to be a small value. In many cases, the preferred control vector is set to the last value of $\boldsymbol{\delta}$, i.e., $\boldsymbol{\delta}_{\mathrm{p}} = \boldsymbol{\delta}_{\mathrm{l}}$. But, in order to save energy, we prefer to use the aerodynamic force, because a change of rotor force implies a change of the motor's rotational speed, which is power-hungry. According to the consideration above, we can give more weight to $\delta_{ar}$ and $\delta_{al}$, set the first four elements of $\boldsymbol{\delta}_{\mathrm{p}}$ to $\tfrac{T_1+T_2+T_3+T_4}{4}$, and set the last two elements to those of $\boldsymbol{\delta}_{\mathrm{l}}$.
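A minimal sketch of this weighted least-squares allocation is given below (our own code; a generic bounded least-squares routine stands in for the onboard active-set solver discussed next, and all weights shown are illustrative):

```python
import numpy as np
from scipy.optimize import lsq_linear

def allocate(B, u_vd, delta_p, delta_lo, delta_hi, Wu, Wd, gamma=1e-3):
    """Solve Eq. (27): min ||Wu (B d - u_vd)||^2 + gamma ||Wd (d - d_p)||^2, with box bounds."""
    A = np.vstack([Wu @ B, np.sqrt(gamma) * Wd])                  # stack the two objectives
    y = np.concatenate([Wu @ u_vd, np.sqrt(gamma) * (Wd @ delta_p)])
    sol = lsq_linear(A, y, bounds=(delta_lo, delta_hi))
    return sol.x                                                  # [T1, T2, T3, T4, d_ar, d_al]

# Energy-saving preference: pull rotors toward their mean thrust, keep ailerons near last value
def preferred_vector(delta_last):
    d_p = delta_last.copy()
    d_p[:4] = np.mean(delta_last[:4])
    return d_p
```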
1100
+ The hardware resources of the flight control are very limited, so it is very important to ensure the real-time operation
1101
+ of the proposed algorithm. Several different algorithms, like redistributed pseudoinverse, interior-point methods, and
1102
+ active set methods have been proposed to solve the constrained quadratic programming problem. Among them, active
1103
+ set methods[26] perform well in the considered control allocation problems, because they have the advantage that their
1104
+ initialization can take advantage of the solution from the previous sample (known as the warm start), which is often a
1105
+ good guess for the optimal solution at the current sample. This can reduce the number of iterations needed to find the
1106
+ optimal solution in many cases. The study [27] shows that the computational complexity of the active set methods is
1107
+ similar to the redistributed pseudoinverse method and the fixed-point algorithm, but the active set methods produce
1108
+ solutions with better accuracy.
1109
+ V. Simulation Experiments
1110
+ In order to verify the three main advantages of the designed aircraft: control performance, energy saving and reliable
1111
+ transition flight, several experiments are conducted. The verification of these performances is mainly determined by two
1112
+ factors, namely, with and without ailerons, and with and without coordinated turn.
1113
+ Therefore, experiments are primarily composed of two parts. One is the comparison of control performance, energy
1114
+ saving and transition flight with and without aileron. And the other is the comparison of control performance with and
1115
1117
+ without coordinated turn, but both with aileron. In addition, the transition flight of three different VTOL vehicles, as
1118
+ shown in Fig. 2, is analyzed in the HIL simulation environment.
1119
+ A. Simulation Platform
1120
The HIL simulation is carried out on the RflySim platform [28–30], which provides the CopterSim simulator and supports the Pixhawk/PX4 autopilot. When performing the HIL simulation, CopterSim sends sensor data (accelerometer, barometer, magnetometer, etc.) generated by a mathematical model to the Pixhawk system via a USB serial port. The Pixhawk/PX4 autopilot receives the sensor data for state estimation by the EKF2 filter and sends the estimated states to the controller through the internal uORB message bus as feedback. Then the controller sends the control signal of each motor back to CopterSim, thereby establishing a closed loop in the HIL simulation. Compared with a purely numerical simulation, the control algorithm is deployed and run in a real embedded system just as in real flight. After the HIL simulation, the controller can be directly deployed to a real vehicle, and further verification experiments will be performed.
1129
Fig. 7  HIL simulation of RflyLW2: the Pixhawk autopilot hardware system exchanges control signals and sensor data with the HIL real-time simulation system (multicopter simulation + 3D display + fault injection).
1139
+ The lifting-wing quadcopter model is built in MATLAB/Simulink according to Eqs. (1),(2) (12), (13). Besides, the
1140
+ dynamics of the motor and servo are modeled as first-order transfers as follows:
1141
$$
\varpi = \frac{1}{T_{\mathrm{m}}s + 1}\,(C_{\mathrm{m}}\sigma_i + \varpi_{\mathrm{b}}),\qquad
\delta_{\mathrm{a}} = \frac{1}{T_{\mathrm{a}}s + 1}\,(C_{\mathrm{a}}\sigma_j + \delta_{\mathrm{b}}),\qquad i = 1, 2, 3, 4,\ j = 5, 6 \tag{28}
$$
where $\sigma_i \in [0, 1]$ is the $i$th throttle command, $T_{\mathrm{m}}$ and $T_{\mathrm{a}}$ are the time constants of the motor and servo responses, and $C_{\mathrm{m}}$, $C_{\mathrm{a}}$, $\delta_{\mathrm{b}}$ and $\varpi_{\mathrm{b}}$ are constant parameters. The sensors used in the HIL simulation, including the IMU, magnetometer, barometer, GPS and airspeed meter, are modeled with reference to [7].
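In discrete time, the first-order transfer in Eq. (28) can be stepped with a simple Euler update, as in the sketch below (our own code; the time constant, gain and bias values are illustrative only, not the identified parameters of the platform):

```python
def first_order_lag(y_prev, u, tau, dt, gain=1.0, bias=0.0):
    """One Euler step of the first-order actuator model in Eq. (28):
    tau * y_dot + y = gain * u + bias (used for both motor speed and servo deflection)."""
    return y_prev + dt / tau * (gain * u + bias - y_prev)

# Example: motor speed response to a 60% throttle step (values are placeholders)
w = 0.0
for _ in range(200):
    w = first_order_lag(w, u=0.6, tau=0.05, dt=0.002, gain=1000.0, bias=100.0)
print(w)   # approaches the steady-state value gain*u + bias
```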
1150
+ Due to the vehicle’s VTOL capabilities, the aerodynamic parameters must not only be modeled up to the stall AoA,
1151
but also in the post-stall region, in order to cover the entire flight envelope. The full angle aerodynamic parameters
1199
+ are obtained by combining the small angle aerodynamic parameters obtained by CFD simulation and the empirical
1200
aerodynamic model [31]. The aerodynamic characteristics at low and large AoA are approximated as
$$
\begin{cases}
C_{LS}(\alpha) = \dfrac{0.5\,c_2^2}{(c_2 - c_3)\cos^2(\alpha) + c_3}\,\sin(2\alpha) \\[8pt]
C_{DS}(\alpha) = c_0 + \dfrac{c_2 c_3}{(c_2 - c_3)\cos^2(\alpha) + c_3}\,\sin^2(\alpha)
\end{cases},\qquad
\begin{cases}
C_{LL}(\alpha) = c_1 \sin(2\alpha) \\[2pt]
C_{DL}(\alpha) = c_0 + 2 c_1 \sin^2(\alpha)
\end{cases},
$$
and a pseudo-sigmoid function
$$
\sigma(\alpha_0, k, \alpha) = \frac{1 + \tanh\!\bigl(k\alpha_0^2 - k\alpha^2\bigr)}{1 + \tanh\!\bigl(k\alpha_0^2\bigr)},\qquad \alpha \in [-\pi, \pi)
$$
is used to blend the low and large AoA regions together:
$$
\begin{cases}
C_{L}(\alpha) = C_{LS}(\alpha)\,\sigma(\alpha_0, k_L, \alpha) + C_{LL}(\alpha)\,\bigl[1 - \sigma(\alpha_0, k_L, \alpha)\bigr] \\[2pt]
C_{D}(\alpha) = C_{DS}(\alpha)\,\sigma(\alpha_0, k_D, \alpha) + C_{DL}(\alpha)\,\bigl[1 - \sigma(\alpha_0, k_D, \alpha)\bigr]
\end{cases}. \tag{29}
$$
1235
The low and large AoA parameters 𝑐0, 𝑐1, 𝑐2, 𝑐3 and the blending parameters 𝛼0, 𝑘𝐿, 𝑘𝐷 are tuned according to the CFD results, and their values are set to 𝑐0 = 0.055, 𝑐1 = 0.9, 𝑐2 = 13.0, 𝑐3 = 3.3, 𝛼0 = 3 deg, 𝑘𝐿 = 38, 𝑘𝐷 = 48. The corresponding aerodynamic curves are shown in Fig. 8.
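A direct Python transcription of the blended model in Eq. (29) follows (our own code; we assume 𝛼0 is converted to radians before being used in the sigmoid, which is not stated explicitly in the text):

```python
import numpy as np

# Parameter values reported in the text (tuned against CFD)
c0, c1, c2, c3 = 0.055, 0.9, 13.0, 3.3
alpha0, kL, kD = np.deg2rad(3.0), 38.0, 48.0

def sigma(alpha, a0, k):
    """Pseudo-sigmoid blending weight between the low- and high-AoA models."""
    return (1.0 + np.tanh(k * a0**2 - k * alpha**2)) / (1.0 + np.tanh(k * a0**2))

def CL_CD(alpha):
    """Full-envelope lift and drag coefficients per Eq. (29)."""
    den = (c2 - c3) * np.cos(alpha)**2 + c3
    CLS = 0.5 * c2**2 / den * np.sin(2.0 * alpha)     # low-AoA lift model
    CDS = c0 + c2 * c3 / den * np.sin(alpha)**2       # low-AoA drag model
    CLL = c1 * np.sin(2.0 * alpha)                    # post-stall (flat-plate-like) lift
    CDL = c0 + 2.0 * c1 * np.sin(alpha)**2            # post-stall drag
    sL, sD = sigma(alpha, alpha0, kL), sigma(alpha, alpha0, kD)
    return CLS * sL + CLL * (1.0 - sL), CDS * sD + CDL * (1.0 - sD)
```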
1238
Fig. 8  Aerodynamic parameters obtained by CFD: 𝐶𝐿 and 𝐶𝐷 versus angle of attack (deg).
1257
+ B. Simulation Experiments
1258
+ 1. Verifying the Effectiveness of Aileron
1259
+ In order to show the effectiveness of ailerons, two groups of comparative experiments are carried out. The attitude
1260
+ control and position control are the same, but the control allocation of the first group uses the aileron, while the second
1261
group does not. The second group only depends on the quadcopter control allocation. In this experiment, the aircraft tracks the specified trajectory, as shown in Fig. 10(a), which is a straight line plus a circle with a radius of 200 m.
Fig. 9  Throttle command and identification result of the motor (throttle and power versus time; fitted versus measured power).
1295
As shown in Fig. 10(b), after adding the aileron, the control amplitudes of the four motors follow almost the same trend, especially when the aircraft transitions between the straight and circular trajectories, indicating that the attitude control is mainly realized by the aileron and the motors act more like the thruster of a fixed-wing aircraft in this case. However, when the
1298
+ attitude changes sharply, as shown in Fig. 10(b) during 𝑡 = 22 ∼ 23s, the motor and aileron will participate in the
1299
+ control at the same time to improve the control performance. When the state of the UAV is in the slow adjustment, the
1300
+ aileron undertakes the control as much as possible to save energy.
1301
+ In order to quantify the influence of ailerons, the relationship between throttle command and energy consumption of
1302
+ the motor and actuator is identified. A sinusoidal signal with the constant amplitude and linear increasing frequency
1303
+ is designed, and the energy consumption test experiments are carried out on the servo and motor at the same time.
1304
Measurement and identification results of the motor are shown in Fig. 9, in which the amplitude of the throttle response decreases as the frequency increases, because the low-pass filter in Eq. (28) is applied to it. We find that the motor power has a quadratic relationship with the throttle at a fixed throttle command, and that the power differs when the motor accelerates and decelerates. When the throttle command changes faster, the power increases. Therefore, the
1308
+ following formula is established to fit the relationship between throttle command and motor power
1309
$$
P_T = \sum_{i=1}^{4}\Bigl( p_1\sigma_i^2 + p_2\sigma_i + p_3\,\dot{\sigma}_{\mathrm{up},i}^{\,p_4} + p_5\,\dot{\sigma}_{\mathrm{down},i}^{\,p_6} + p_7 \Bigr) \tag{30}
$$
1317
With the given lifting-wing quadcopter platform, 𝑝1 = 563.7, 𝑝2 = −147.4, 𝑝3 = 15, 𝑝4 = 1.05, 𝑝5 = 4, 𝑝6 = 1, 𝑝7 =
1340
+ 0.05538. With respect to the servo, its power is negligible, because the power of servo is only 0.2W, far less than the
1341
+ motor’s, even if the 100g weight is suspended on the two ailerons.
1342
+ The trajectory tracking experiment is performed three times, with the forward flight speed of 5m/s, 10m/s and 20m/s,
1343
+ respectively. The powers in three flight speeds are shown in Fig. 10(c). It can be observed that when the speed is 5m/s,
1344
+ the power is the same, because the aileron is not used at slow flight speed. When the speed increases to 10m/s, the
1345
+ power with the aileron is slightly smaller. Further when the speed increases to 20m/s, the power with the aileron is
1346
+ greatly smaller because the aerodynamic force is stronger at high speed.
1347
Fig. 10  The attitude control performance and mixer output with and without aileron: a) flight path (x, y, −z in m), b) control output 𝑇1–𝑇4 (N) and aileron deflections 𝛿𝑎𝑟, 𝛿𝑎𝑙 (deg) with and without aileron at a forward flight speed of 20 m/s, c) power (W) at forward flight speeds of 5, 10 and 20 m/s.
1370
+ 2. Verifying the Effectiveness of Coordinated Turn
1371
+ As for the lateral control, experiments are carried out in hover and high-speed forward flight, and the results are
1372
+ shown in Fig. 11. In the high-speed forward flight, as shown in Fig. 11 yellow region, when a turn command is given
1373
+ for the first time during 30.8 ∼ 42.2s, the lifting-wing quadcopter with coordinated turn has no sideslip angle and has a
1374
+ better lateral tracking performance, as shown in Fig.11(a.1 vs. b.1, and a.5 vs. b.5). In this state, the control of the
1375
+ lifting-wing quadcopter is similar to the fixed-wing, and the desired yaw rate is generated by Eq.(23) as the feedforward.
1376
+ When the turn command is given for the second time during 60.8 ∼ 72.2s, the lifting-wing quadcopter is in low-speed
1377
+ flight, and the control of the lifting-wing quadcopter is similar to the quadcopter, so a big sideslip angle appears both in
1378
+ experiments with and without coordinated turn.
1379
1451
+ 0
1452
+ J00
1453
+ J20
1454
+ S00
1455
+ 2
1456
+ 10
1457
+ J2Time(s)
1458
+ b) Vehicle states with coordinated turn
1459
+ Time(s)
1460
+ a) Vehicle states without coordinated turn
1461
+ Desired
1462
+ Response
1463
+ >10m/s
1464
+ 6~10m/s
1465
+ 6~10m/s
1466
+ >10m/s
1467
+ (deg)
1468
+
1469
+ (deg)
1470
+
1471
+ 30.8-42.2
1472
+ 60.8-72.2
1473
+ (a.1)
1474
+ (a.2)
1475
+ (a.3)
1476
+ (a.4)
1477
+ (a.5)
1478
+ (b.1)
1479
+ (b.2)
1480
+ (b.3)
1481
+ (b.4)
1482
+ (b.5)
1483
+ Desired Yaw Rate
1484
+ Yaw Response
1485
+ (rad)
1486
+
1487
+ a(m s)
1488
+ V
1489
+ Fig. 11
1490
+ The control response with and without coordinated turn
1491
+ 3. Transition Flight
1492
+ The transition flight is often defined as the aircraft changing from hover to level flight. To quantify the phase, the
1493
+ transition process is defined as the time it takes for the aircraft to start a transitional flight to the airspeed greater than
1494
+ 18m/s. The quality of the transition is mainly reflected in the transition time and control accuracy of altitude. In the
1495
+ simulation, the aircraft will take off to a certain altitude, and then a step signal of −30◦ is given to the pitch channel.
1496
+ This is because the wind CFD test results show that the maximum energy efficiency is obtained when 𝛼 = 4◦. As a
1497
+ comparison, the same experiment is performed on a tail-sitter quadcopter. The model of the tail-sitter quadcopter is built
1498
+ on the lifting-wing quadcopter with the installation angle of the lifting-wing from 34◦ to 90◦, as shown in Fig. 2(a).
1499
+ Fig.12 shows the response curves of the pitch angle, altitude and airspeed of the lifting-wing quadcopter and
1500
+ tail-sitter UAV in the transition phrase in the HIL simulation. It can be observed that during the transition phase of
1501
+ 23
1502
+
1503
+ 0
1504
+ O
1505
+ 40
1506
+ eo
1507
+ 80
1508
+ J00
1509
+ -50
1510
+ 0
1511
+ 5O5O
1512
+ 40
1513
+ eo
1514
+ 80
1515
+ J00
1516
+ -50
1517
+ 0
1518
+ 5O5O
1519
+ 40
1520
+ e
1521
+ 80
1522
+ J00
1523
+ -30F
1524
+ 50
1525
+ -J0
1526
+ 050
1527
+ 40
1528
+ eo
1529
+ 80
1530
+ J00
1531
+ -40
1532
+ 50
1533
+ 00
1534
+ O
1535
+ 40
1536
+ eo
1537
+ 80
1538
+ 0
1539
+ JO
1540
+ SO5O
1541
+ 40
1542
+ e
1543
+ 80
1544
+ J00
1545
+ JO
1546
+ 5O0
1547
+ 5O
1548
+ 40
1549
+ eo
1550
+ 80
1551
+ 30
1552
+ -50
1553
+ -JO
1554
+ 00
1555
+ 5O
1556
+ 40
1557
+ eo
1558
+ 80
1559
+ 0
1560
+ 5O
1561
+ 40
1562
+ eo0
1563
+ SO
1564
+ 40
1565
+ eo
1566
+ 80
1567
+ 2.I-
1568
+ -J
1569
+ 2.0-
1570
+ 050
1571
+ 40
1572
+ eo
1573
+ 80
1574
+ J00
1575
+ -J
1576
+ 0Time(s)
1577
+ Va(m/s)
1578
+ RflyLW2
1579
+ Tail-sitter-60°
1580
+ Tail-sitter-70°
1581
+ Take
1582
+ off
1583
+ Hovering Transition
1584
+ Flight
1585
+ Time(s)
1586
+ Time(s)
1587
+ Take
1588
+ off
1589
+ Hovering Transition
1590
+ Flight
1591
+ Take
1592
+ off
1593
+ Hovering Transition
1594
+ Flight
1595
+ RflyLW2
1596
+ Tail-sitter-60°
1597
+ Tail-sitter-70°
1598
+ RflyLW2
1599
+ Tail-sitter-60°
1600
+ Tail-sitter-70°
1601
+ Pitch(rad)
1602
+ Altitude(m)
1603
+ a) Airspeed response
1604
+ b) Pitch response
1605
+ c) Altitude response
1606
+ Fig. 12
1607
+ Transition flight of lifting-wing quadcopter and tail-sitter UAV.
1608
+ the lifting-wing quadcopter, the time for the pitch angle to reach the desired value is 1.1s, as shown in Fig.12(b), and
1609
+ the time it takes for the airspeed to reach the set 18m/s is 4.7s as shown in Fig.12(a). So the transition time of the
1610
+ lifting-wing quadcopter is 4.7s. Furthermore, the altitude decreases as soon as the transition starts, with a maximum
1611
+ altitude error of 0.09 m at 𝑡 = 21.68 s. As for tail-sitter UAV, the transition time is 7.1s, when the desired pitch angle is
1612
+ −70◦. But after the transition, the altitude drops sharply, as shown in Fig.12(c). When the desired pitch angle is −60◦,
1613
+ the altitude is stable, but the transition time is 20.48s, much longer than that of the RflyLW2. Thus, the transition phase of
1614
+ the lifting-wing quadcopter is significantly better than that of the tail-sitter UAV, and does not require an additional controller.
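The transition-time and altitude-error figures quoted above follow directly from the logged airspeed and altitude traces. A minimal sketch of that computation (Python/NumPy; the array names and threshold handling are illustrative, not code from the paper):

import numpy as np

def transition_metrics(t, airspeed, altitude, t_start, target_speed=18.0):
    """Transition time and maximum altitude error for one logged flight."""
    done = (t >= t_start) & (airspeed >= target_speed)
    if not done.any():
        return None, None                      # transition not completed in this log
    t_end = t[done][0]
    alt_ref = altitude[t <= t_start][-1]       # altitude held at the transition command
    phase = (t >= t_start) & (t <= t_end)
    max_alt_err = np.max(np.abs(altitude[phase] - alt_ref))
    return t_end - t_start, max_alt_err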
1615
+ VI. Conclusion
1616
+ The modeling, controller design, and HIL simulation verification of the lifting-wing quadcopter—a novel type
1617
+ of hybrid aircraft—are presented in this paper. The modeling portion takes into account the forces and moments
1618
+ produced by the lifting wing and rotors. A unified controller for all flight phases is designed based on the established
1619
+ model. Depending on the velocity command, the controller can treat hovering and forward flight in a unified manner and enable
1620
+ a seamless transition between the two modes. The experimental results show that the proposed aircraft outperforms
1621
+ the tail-sitter UAV in terms of tracking performance during the transition phase. In addition, the controller combines
1622
+ the characteristics of the quadcopter and fixed-wing control law, allowing it to retain yaw during the hover phase and
1623
+ decrease sideslip angle during the cruise phase. What is more, a control allocation based on optimization is utilized
1624
+ to realize cooperative control for energy saving, by taking rotor thrust and aerodynamic force into consideration
1625
+ simultaneously. Through identification, we find that compared with the motor, the aileron can reduce the energy
1626
+ consumption when implementing high-frequency control inputs.
1627
+ References
1628
+ [1] Saeed, A. S., Younes, A. B., Cai, C., and Cai, G., “A survey of hybrid unmanned aerial vehicles,” Progress in Aerospace
1629
+ Sciences, Vol. 98, 2018, pp. 91 – 105.
1630
+ [2] Raj, N., Banavar, R., Abhishek, and Kothari, M., “Attitude control of a novel tailsitter: swiveling biplane-quadrotor,” Journal of
1654
+ Guidance, Control, and Dynamics, Vol. 43, No. 3, 2020, pp. 599–607.
1655
+ [3] wsj.com, “Google tail-sitter UAV,” https://www.wsj.com/articles/BL-DGB-40935, 2015. Accessed January 1, 2023.
1656
+ [4] Ritz, R., and D’Andrea, R., “A global strategy for tailsitter hover control,” Robotics Research, Springer, 2018, pp. 21–37.
1657
+ [5] Xiao, K., Meng, Y., Dai, X., Zhang, H., and Quan, Q., “A lifting wing fixed on multirotor UAVs for long flight ranges,” 2021
1658
+ International Conference on Unmanned Aircraft Systems (ICUAS), IEEE, 2021, pp. 1605–1610.
1659
+ [6] Zhang, H., Tan, S., Song, Z., and Quan, Q., “Performance evaluation and design method of lifting-wing multicopters,”
1660
+ IEEE/ASME Transactions on Mechatronics, Vol. 27, No. 3, 2021, pp. 1606–1616.
1661
+ [7] Quan, Q., Introduction to multicopter design and control, Springer, 2017.
1662
+ [8] Emran, B. J., and Najjaran, H., “A review of quadrotor: An underactuated mechanical system,” Annual Reviews in Control,
1663
+ Vol. 46, 2018, pp. 165–180.
1664
+ [9] Theys, B., De Vos, G., and De Schutter, J., “A control approach for transitioning VTOL UAVs with continuously varying
1665
+ transition angle and controlled by differential thrust,” 2016 international conference on unmanned aircraft systems (ICUAS),
1666
+ IEEE, 2016, pp. 118–125.
1667
+ [10] tx tech.cn, “Volitation vespertilio,” http://www.tx-tech.cn/en/col.jsp?id=133, 2019. Accessed January 1, 2023.
1668
+ [11] theverge.com, “Amazon Prime Air,” https://www.theverge.com/2019/6/5/18654044/amazon-prime-air-
+ delivery-drone-new-design-safety-transforming-flight-video, 2019. Accessed January 1, 2023.
1674
+ [12] Hassanalian, M., and Abdelkefi, A., “Classifications, applications, and design challenges of drones: A review,” Progress in
1675
+ Aerospace Sciences, Vol. 91, 2017, pp. 99–131.
1676
+ [13] Argyle, M. E., Beard, R. W., and Morris, S., “The vertical bat tail-sitter: dynamic model and control architecture,” 2013
1677
+ American Control Conference, IEEE, 2013, pp. 806–811.
1678
+ [14] Escareno, J., Stone, R., Sanchez, A., and Lozano, R., “Modeling and control strategy for the transition of a convertible tail-sitter
1679
+ UAV,” 2007 European Control Conference (ECC), IEEE, 2007, pp. 3385–3390.
1680
+ [15] Oosedo, A., Abiko, S., Konno, A., and Uchiyama, M., “Optimal transition from hovering to level-flight of a quadrotor tail-sitter
1681
+ UAV,” Autonomous Robots, Vol. 41, No. 5, 2017, pp. 1143–1159.
1682
+ [16] Li, B., Sun, J., Zhou, W., Wen, C.-Y., Low, K. H., and Chen, C.-K., “Transition optimization for a VTOL tail-sitter UAV,”
1683
+ IEEE/ASME Transactions on Mechatronics, 2020.
1684
+ [17] Li, B., Sun, J., Zhou, W., Wen, C.-Y., Low, K. H., and Chen, C.-K., “Nonlinear robust flight mode transition control for tail-sitter
1685
+ aircraft,” IEEE/ASME transactions on mechatronics, Vol. 25, No. 5, 2020, pp. 2534–2545.
1686
1688
+ [18] Smeur, E. J., Bronz, M., and de Croon, G. C., “Incremental control and guidance of hybrid aircraft applied to a tailsitter
1689
+ unmanned air vehicle,” Journal of Guidance, Control, and Dynamics, Vol. 43, No. 2, 2020, pp. 274–287.
1690
+ [19] Lyu, X., Gu, H., Zhou, J., Li, Z., Shen, S., and Zhang, F., “Simulation and flight experiments of a quadrotor tail-sitter vertical
1691
+ take-off and landing unmanned aerial vehicle with wide flight envelope,” International Journal of Micro Air Vehicles, Vol. 10,
1692
+ No. 4, 2018, pp. 303–317.
1693
+ [20] Liu, H., Peng, F., Lewis, F. L., and Wan, Y., “Robust tracking control for tail-sitters in flight mode transitions,” IEEE Transactions
1694
+ on Aerospace and Electronic Systems, Vol. 55, No. 4, 2018, pp. 2023–2035.
1695
+ [21] Zhou, J., Lyu, X., Li, Z., Shen, S., and Zhang, F., “A unified control method for quadrotor tail-sitter uavs in all flight modes:
1696
+ Hover, transition, and level flight,” 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE,
1697
+ 2017, pp. 4835–4841.
1698
+ [22] Flores, A., de Oca, A. M., and Flores, G., “A simple controller for the transition maneuver of a tail-sitter drone,” 2018 IEEE
1699
+ Conference on Decision and Control (CDC), IEEE, 2018, pp. 4277–4281.
1700
+ [23] Nascimento, T. P., and Saska, M., “Position and attitude control of multi-rotor aerial vehicles: A survey,” Annual Reviews in
1701
+ Control, Vol. 48, 2019, pp. 129–146.
1702
+ [24] Beard, R. W., and McLain, T. W., Small unmanned aircraft: Theory and practice, Princeton university press, 2012.
1703
+ [25] Lyu, X., Gu, H., Zhou, J., Li, Z., Shen, S., and Zhang, F., “A hierarchical control approach for a quadrotor tail-sitter VTOL UAV
1704
+ and experimental verification,” 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE,
1705
+ Vancouver, BC, 2017, pp. 5135–5141.
1706
+ [26] Johansen, T. A., and Fossen, T. I., “Control allocation—A survey,” Automatica, Vol. 49, No. 5, 2013, pp. 1087–1103.
1707
+ [27] Harkegard, O., “Efficient active set algorithms for solving constrained least squares problems in aircraft control allocation,”
1708
+ Proceedings of the 41st IEEE Conference on Decision and Control, 2002., Vol. 2, IEEE, 2002, pp. 1295–1300.
1709
+ [28] Wang, S., Dai, X., Ke, C., and Quan, Q., “RflySim: A rapid multicopter development platform for education and research
1710
+ based on Pixhawk and MATLAB,” 2021 International Conference on Unmanned Aircraft Systems (ICUAS), IEEE, 2021, pp.
1711
+ 1587–1594.
1712
+ [29] Dai, X., Ke, C., Quan, Q., and Cai, K.-Y., “RFlySim: Automatic test platform for UAV autopilot systems with FPGA-based
1713
+ hardware-in-the-loop simulations,” Aerospace Science and Technology, Vol. 114, 2021, p. 106727.
1714
+ [30] rflysim.com, “rflysim,” https://rflysim.com/en/index.html, 2020. Accessed January 1, 2023.
1715
+ [31] Pucci, D., Hamel, T., Morin, P., and Samson, C., “Nonlinear control of PVTOL vehicles subjected to drag and lift,” 2011 50th
1716
+ IEEE Conference on Decision and Control and European Control Conference, IEEE, 2011, pp. 6177–6183.
1717
+
0dAyT4oBgHgl3EQf1PkS/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
0tFQT4oBgHgl3EQf0TbP/content/tmp_files/2301.13416v1.pdf.txt ADDED
@@ -0,0 +1,1018 @@
1
+ Structure Flow-Guided Network for Real Depth Super-Resolution
2
+ Jiayi Yuan*, Haobo Jiang*, Xiang Li, Jianjun Qian, Jun Li†, Jian Yang†
3
+ PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education
4
+ Jiangsu Key Lab of Image and Video Understanding for Social Security
5
+ School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
6
+ {jiayiyuan, jiang.hao.bo, xiang.li.implus, csjqian, junli, csjyang}@njust.edu.cn
7
25
+ Figure 1: In this paper, we propose a novel structure flow-guided method for real-world DSR. Our method obtains better depth
26
+ edge recovery (g-h), compared to (e) and (f) using the SOTA method, FDSR (He et al. 2021). (a-b) Synthetic LR depth maps;
27
+ (c) Real LR depth map with the structural distortion; (d) Real LR depth map with the edge noise (e.g., holes); (i-j) Ground-truth
28
+ HR depth maps; (k-l) RGB image guidance.
29
+ Abstract
30
+ Real depth super-resolution (DSR), unlike synthetic settings,
31
+ is a challenging task due to the structural distortion and the
32
+ edge noise caused by the natural degradation in real-world
33
+ low-resolution (LR) depth maps. These defects result in sig-
34
+ nificant structure inconsistency between the depth map and
35
+ the RGB guidance, which potentially confuses the RGB-
36
+ structure guidance and thereby degrades the DSR quality. In
37
+ this paper, we propose a novel structure flow-guided DSR
38
+ framework, where a cross-modality flow map is learned to
39
+ guide the RGB-structure information transferring for pre-
40
+ cise depth upsampling. Specifically, our framework consists
41
+ of a cross-modality flow-guided upsampling network (CFU-
42
+ Net) and a flow-enhanced pyramid edge attention network
43
+ (PEANet). CFUNet contains a trilateral self-attention module
44
+ combining both the geometric and semantic correlations for
45
+ reliable cross-modality flow learning. Then, the learned flow
46
+ maps are combined with the grid-sampling mechanism for
47
+ coarse high-resolution (HR) depth prediction. PEANet targets
48
+ *These authors contributed equally.
49
+ †corresponding authors
50
+ Copyright © 2023, Association for the Advancement of Artificial
51
+ Intelligence (www.aaai.org). All rights reserved.
52
+ at integrating the learned flow map as the edge attention into
53
+ a pyramid network to hierarchically learn the edge-focused
54
+ guidance feature for depth edge refinement. Extensive exper-
55
+ iments on real and synthetic DSR datasets verify that our ap-
56
+ proach achieves excellent performance compared to state-of-
57
+ the-art methods.
58
+ Introduction
59
+ With the fast development of cheap RGB-D sensors, depth
60
+ maps have played a much more important role in a variety
61
+ of computer vision applications, such as object recognition
62
+ (Blum et al. 2012; Eitel et al. 2015), 3D reconstruction (Hou,
63
+ Dai, and Nießner 2019; Newcombe et al. 2011), and virtual
64
+ reality (Meuleman et al. 2020). However, the defects (e.g.,
65
+ low resolution and structural distortion) lying in the cheap
66
+ RGB-D sensors (e.g., Microsoft Kinect and HuaweiP30Pro),
67
+ still hinder their more extensive applications in the real world.
68
+ Also, although the popular DSR methods (Song et al. 2020;
69
+ Kim, Ponce, and Ham 2021; Sun et al. 2021) have achieved
70
+ excellent DSR accuracy on synthetic LR depth maps, the
71
+ significant domain gap between the real and the synthetic
72
+ data largely degrades their DSR precision on the real data.
73
+ arXiv:2301.13416v1 [cs.CV] 31 Jan 2023
74
+
75
+ This domain gap is mainly caused by different genera-
76
+ tion mechanisms of the LR depth map. The synthetic LR
77
+ depth map is usually generated via artificial degradation
78
+ (e.g., down-sampling operation), while the real one is from
79
+ natural degradation (e.g., noise, blur, and distortion). Differ-
80
+ ent from the synthetic data, there are two challenges of the
81
+ real-data DSR as below. The first one is the severe structural
82
+ distortion (see Fig. 1 (c)), especially for the low-reflection
83
+ glass surface or the infrared-absorbing surface. The second
84
+ one is the edge noise even the holes (see Fig. 1 (d)), caused
85
+ by the physical limitations or the low processing power of
86
+ the depth sensors. Both of the challenges above present
87
+ a significant difference between the real and the synthetic
88
+ data, which inherently degrades the generalization precision
89
+ of the synthetic DSR methods to the real data.
90
+ In this paper, we develop a novel structure flow-guided
91
+ DSR framework to handle the above challenges. For the
92
+ structural distortion, we propose a cross-modality flow-
93
+ guided upsampling network (CFUNet) that learns a struc-
94
+ tured flow between the depth map and the RGB image to
95
+ guide their structure alignment for the recovery of the dis-
96
+ torted depth structure. It includes two key components: a
97
+ trilateral self-attention module and a cross-modality cross-
98
+ attention module. In detail, the former leverages the geomet-
99
+ ric and semantic correlations (i.e., coordinate distance, pixel
100
+ difference, feature difference) to guide the relevant depth-
101
+ feature aggregation into each depth feature to supplement
102
+ the missing depth-structure information. The latter utilizes
103
+ the enhanced depth feature and the RGB feature as the in-
104
+ put for their sufficient message passing and flow-map gen-
105
+ eration. Finally, we combine the flow map with the grid-
106
+ sampling mechanism for the coarse HR depth prediction.
107
+ For the edge noise, we present a flow-enhanced pyramid
108
+ edge attention network (PEANet) that integrates the learned
109
+ structure flow map as the edge attention into a pyramid net-
110
+ work to learn the edge-focused guidance feature for the edge
111
+ refinement of the coarse HR depth map predicted above.
112
+ Considering the structure clue (i.e., edge region tends to
113
+ own significant flow-value fluctuations) lying in the learned
114
+ flow map, we combine the flow map with the RGB fea-
115
+ ture to form the flow-enhanced RGB feature for highlighting
116
+ the RGB-structure region. Then, we feed the flow-enhanced
117
+ RGB feature into an iterative pyramid network for its edge-
118
+ focused guidance feature learning. The low-level guidance
119
+ features effectively filter the RGB-texture noise (guided by
120
+ the flow map), while the high-level guidance features exploit
121
+ the rich context information for more precise edge-feature
122
+ capture. Finally, we pass the learned guidance feature and
123
+ the depth feature into a decoder network to predict the edge-
124
+ refined HR depth map. Extensive experiments on challeng-
125
+ ing real-world datasets verify the effectiveness of our pro-
126
+ posed method (see examples in Fig. 1(g-h)). In summary,
127
+ our contributions are as follows:
128
+ • We propose an effective cross-modality flow-guided up-
129
+ sampling network (CFUNet), where a structure flow map
130
+ is learned to guide the structure alignment between the
131
+ depth map and the RGB image for the recovery of the
132
+ distorted depth edge.
133
+ • We present a flow-enhanced pyramid edge attention net-
134
+ work (PEANet) that integrates the flow map as edge at-
135
+ tention into a pyramid network to hierarchically learn the
136
+ edge-focused guidance feature for edge refinement.
137
+ • Extensive experiments on the real and synthetic datasets
138
+ verify the effectiveness of the proposed framework, and
139
+ we achieve state-of-the-art restoration performance on
140
+ multiple DSR dataset benchmarks.
141
+ Related Work
142
+ Synthetic Depth Super-Resolution
143
+ The synthetic depth super-resolution (DSR) architectures
144
+ can be divided into the pre-upsampling methods and the
145
+ progressive upsampling methods (Wang et al. 2020). The
146
+ pre-upsampling DSR methods first upsample the input depth
147
+ with interpolation algorithms (e.g., bicubic) from LR to HR,
148
+ and then feed it into depth recovery network layers. (Li
149
+ et al. 2016) introduce the first pre-upsampling network ar-
150
+ chitecture. As this method handles arbitrary scaling factor
151
+ depth, more and more similar approaches have been pre-
152
+ sented to further facilitate DSR task (Li et al. 2019; Lutio
153
+ et al. 2019; Zhu et al. 2018; Chen and Jung 2018; Hao et al.
154
+ 2019; Su et al. 2019). However, upsampling in one step is
155
+ not suitable for large scaling factors simply because it usu-
156
+ ally leads to losing much detailed information. To tackle
157
+ these issues, a progressive upsampling structure is designed
158
+ in MSG-net(Tak-Wai, Loy, and Tang 2016), which gradually
159
+ upsamples the LR depth map by transposed convolution lay-
160
+ ers at different scale levels. Since then, various progressive
161
+ upsample-based methods have been proposed that greatly
162
+ promote the development of this domain(Tak-Wai, Loy, and
163
+ Tang 2016; Guo et al. 2019; He et al. 2021; Zuo et al. 2019).
164
+ Recently, the joint-task learning framework achieves im-
165
+ pressive performance, such as DSR & completion (Yan et al.
166
+ 2022), depth estimation & enhancement (Wang et al. 2021)
167
+ and DSR & depth estimation (Tang et al. 2021; Sun et al.
168
+ 2021). Inspired by these joint-task methods, we combine the
169
+ alignment task with the super-resolution task to distill the
170
+ cross-modality knowledge for robust depth upsampling.
171
+ Real-world Depth Super-Resolution
172
+ In recent years, the super-resolution for real-world images
173
+ has been under the spotlight, which involves upsampling,
174
+ denoising, and hole-filling. Early traditional depth enhance-
175
+ ment methods (Yang et al. 2014; Liu et al. 2016, 2018) are
176
+ based on complex and time-consuming optimization. For
177
+ fast CNN-based DSR, AIR (Song et al. 2020) simulates
178
+ the real LR depth map by combining the interval degrada-
179
+ tion and the bicubic degradation, and proposes a channel at-
180
+ tention based network for real DSR. PAC (Su et al. 2019)
181
+ and DKN (Kim, Ponce, and Ham 2021) utilize the adap-
182
+ tive kernels calculated by the neighborhood pixels in RGB
183
+ image for robust DSR. FDSR(He et al. 2021) proposes the
184
+ octave convolution for frequency domain separation, which
185
+ achieves outstanding performance in real datasets. Although
186
+ these methods handle the large modality gap between the
187
+ guidance image and depth map, the structure misalignment
188
+ between the depth map and the RGB image still leads them
189
+
190
+ [Figure 2 diagram: encoder, trilateral self-attention, cross-attention, flow decoder and grid sample in the cross-modality flow-guided upsampling block; flow-enhanced pyramid attention, multi-scale features, edge decoder and upsample decoder in the flow-enhanced pyramid edge attention block; flow maps connect the two.]
217
+ Figure 2: The pipeline of our structure flow-guided DSR framework. Given the LR depth map and the RGB image, the left
218
+ block (blue) first generates the flow maps through a trilateral self-attention module and a cross-attention module, and predicts
219
+ the coarse depth map Dcoarse with the flow-based grid-sampling. Then, the right block (yellow) integrates the RGB/depth
220
+ features and the flow map (as edge attention) to learn the edge-focused guidance feature for edge refinement (Drefine).
221
+ to suffer from serious errors around the edge regions. Dif-
222
+ ferent from the general paradigms, we introduce a novel
223
+ structure flow-guided framework, which exploits the cross-
224
+ modality flow map to guide the RGB-structure information
225
+ transferring for real DSR.
226
+ Approach
227
+ In the following, we introduce our structure flow-guided
228
+ DSR framework for robust real-world DSR. As shown in
229
+ Fig. 2, our framework consists of two modules: a cross-
230
+ modality flow-guided upsampling network (CFUNet) and a
231
+ flow-enhanced pyramid edge attention network (PEANet).
232
+ Given an LR depth map DLR ∈ RH0×W0 and its cor-
233
+ responding HR RGB image I ∈ RH×W ×3 (H/H0 =
234
+ W/W0 = s and s is the scale factor), CFUNet first learns
235
+ the cross-modality flow to guide the structure alignment be-
236
+ tween depth the RGB for coarse HR depth prediction. Then,
237
+ PEANet exploits the structure flow as edge attention to learn
238
+ the edge-focused guidance feature for edge refinement.
239
+ Cross-modality Flow-guided Upsampling Network
240
+ As demonstrated in Fig. 1 (c), the structural distortion of the
241
+ real LR depth map leads to the significant structure misalign-
242
+ ment between the RGB image and the depth map, which po-
243
+ tentially damages the structure guidance of RGB images for
244
+ depth edge recovery. To handle it, our solution is to learn an
245
+ effective cross-modality flow map between the depth and the
246
+ RGB to identify their structure relationship. Then, guided by
247
+ the learned flow map, we align the structure of the depth map
248
+ to the RGB image for the recovery of the distorted depth
249
+ edge. Next, we will describe our network in terms of the
250
+ feature extraction, the trilateral attention-based flow genera-
251
+ tion, and the flow-guided depth upsampling.
252
+ Feature extraction. To achieve the consistent input size,
253
+ we first upsample the LR depth map DLR to a full-resolution
254
+ map DBic ∈ RH×W with the bicubic interpolation.
255
+ Then, we feed the upsampled depth map and the RGB
256
+ image into an encoder for their feature extraction: {Fl ∈
257
+ RH×W ×D}L
258
+ l=1 and {Gl ∈ RH×W ×D}L
259
+ l=1 where the sub-
260
+ script l denotes the feature output in l-th layer of the encoder.
261
+ Trilateral attention-based flow generation. The key to
262
+ generating a reliable cross-modality flow map is to model
263
+ a robust relationship between the RGB and the depth map.
264
+ Nevertheless, the serious structural distortion caused by the
265
+ natural degradation potentially increases the modality gap
266
+ between the depth and the RGB. Thereby, it’s difficult to
267
+ directly exploit a general attention mechanism to model
268
+ such a relationship. To mitigate it, we target at enhanc-
269
+ ing the depth feature through a proposed trilateral self-
270
+ attention block so that the distorted depth-structure informa-
271
+ tion can be largely complemented for relationship modeling.
272
+ As shown in Fig. 3, our trilateral self-attention block fuses
273
+ the geometric-level correlation and the semantic-level cor-
274
+ relation to jointly guide the depth feature enhancement. It’s
275
+ noted that we just enhance the depth feature FL in the last
276
+ layer (L-th layer):
277
+ \bar{F}^{(i)}_L = \sum_{j} \alpha_{i,j} \cdot (\beta^{\mathrm{low}}_{i,j} + \beta^{\mathrm{high}}_{i,j}) \cdot \gamma_{i,j} \cdot F^{(j)}_L + F^{(j)}_L,    (1)
+ where F^{(j)}_L (1 ≤ j ≤ H × W) denotes the j-th depth-pixel feature and \bar{F}^{(i)}_L denotes the i-th enhanced depth feature
293
+ (1 ≤ i ≤ H × W). The geometric-level correlation con-
294
+ tains a spatial kernel α ∈ R(H×W )×(H×W ) and a low-level
295
+ color kernel βlow ∈ R(H×W )×(H×W ), while the semantic-
296
+ level correlation contains a high-level color semantic ker-
297
+ nel βhigh ∈ R(H×W )×(H×W ) and a depth semantic kernel
298
+ γ ∈ R(H×W )×(H×W ). In detail, we formulate the spatial
299
+ kernel as a coordinate distance-aware Gaussian kernel:
300
+ α_{i,j} = Gaussian(∥Coor(i) − Coor(j)∥_2, σ_s),    (2)
+ where Gaussian(x, σ) = \frac{1}{σ\sqrt{2π}} \exp(−\frac{x^2}{2σ^2}) is the Gaussian
308
+ function. Coor(i) ∈ R2 denotes the row-column coordi-
309
+ nates of pixel i at the depth map and σs is the kernel vari-
310
+ ance. The low-level and high-level color kernels are defined
311
+
312
+ Figure 3: The architecture of the trilateral self-attention module and the cross-attention module. [Diagram: geometric-level and semantic-level kernels drive the trilateral self-attention over the depth features; depth and color transformations feed Q/K/V self-attention and cross-attention blocks with Add & Norm, concatenation, position embedding and element-wise production.]
349
+ by the Gaussian kernels with the low-level and the semantic-
350
+ level RGB feature similarity, whose kernel sum is:
351
+ β^{\mathrm{low}}_{i,j} + β^{\mathrm{high}}_{i,j} = \sum_{l=0}^{L} Gaussian(∥G^{(i)}_l − G^{(j)}_l∥_2, σ_c).    (3)
363
+ The depth semantic kernel is designed based on the depth
364
+ feature similarity in the L-th layer:
365
+ γ_{i,j} = Gaussian(∥F^{(i)}_L − F^{(j)}_L∥_2, σ_d).    (4)
369
+ Guided by the geometric and semantic kernels above, the
370
+ correlated depth information can be effectively aggregated
371
+ into each depth feature through Eq.1 for depth feature com-
372
+ pletion and enhancement.
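A compact PyTorch-style sketch of the trilateral aggregation of Eqs. (1)-(4), written in a dense O(N^2) form for clarity (tensor shapes, kernel variances and the residual term are assumptions of this sketch, not the authors' released implementation):

import torch

def gaussian(x, sigma):
    # Gaussian(x, sigma) as defined after Eq. (2)
    return torch.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * (2 * torch.pi) ** 0.5)

def trilateral_aggregate(F_L, G_layers, coords, sigma_s=8.0, sigma_c=1.0, sigma_d=1.0):
    # F_L: (N, D) last-layer depth features, N = H*W
    # G_layers: list of (N, D_l) RGB features from low to high level
    # coords: (N, 2) float row-column coordinates of the pixels
    alpha = gaussian(torch.cdist(coords, coords), sigma_s)              # spatial kernel, Eq. (2)
    beta = sum(gaussian(torch.cdist(G, G), sigma_c) for G in G_layers)  # color kernels, Eq. (3)
    gamma = gaussian(torch.cdist(F_L, F_L), sigma_d)                    # depth semantic kernel, Eq. (4)
    weights = alpha * beta * gamma                                      # (N, N) trilateral weights
    return weights @ F_L + F_L                                          # aggregation with residual, Eq. (1)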
373
+ Then, we feed the enhanced depth feature ¯FL and the
374
+ RGB feature GL into the cross-attention block for their ef-
375
+ ficient cross-modality feature intersection:
376
+ \tilde{F}^{(i)}_L = \bar{F}^{(i)}_L + \mathrm{MLP}\big(\sum_j \mathrm{softmax}_j(φ_q(\bar{F}^{(i)}_L)^⊤ φ_k(G^{(j)}_L)) φ_v(G^{(j)}_L)\big),
+ \tilde{G}^{(i)}_L = G^{(i)}_L + \mathrm{MLP}\big(\sum_j \mathrm{softmax}_j(φ_q(G^{(i)}_L)^⊤ φ_k(\bar{F}^{(j)}_L)) φ_v(\bar{F}^{(j)}_L)\big),    (5)
395
+ where φq, φk and φv are the projection functions of the
396
+ query, the key and the value in our nonlocal-style cross-
397
+ attention module. With the query-key similarity, the value
398
+ can be retrieved for feature enhancement. Then, we concate-
399
+ nate the enhanced depth feature ˜FL and RGB feature ˜GL
400
+ and pass them into a multi-layer convolutional network to
401
+ obtain their correlated feature at each layer {G_l}_{l=L+1}^{L′}. Finally, following (Dosovitskiy et al. 2015), based on the
+ previously extracted features {G_l}_{l=1}^{L} and the correlated features {G_l}_{l=L+1}^{L′}, we exploit a decoder network to
+ generate the multi-layer flow maps {∆_l}_{l=1}^{L′}, where the flow generation in layer l can be formulated as:
411
+ G^{flow}_{l+1}, ∆_{l+1} = deconv(Cat[G^{flow}_l, ∆_l, G_{L′−l−1}]),    (6)
+ where G^{flow}_l denotes the intermediate flow feature and deconv consists of a deconvolution operation and a
+ convolutional block (G^{flow}_1, ∆_1 = deconv(G_{L′})).
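A minimal PyTorch-style sketch of the nonlocal-style cross-attention of Eq. (5), shown for the depth-to-RGB direction only (the single-head form and layer sizes are assumptions of this sketch):

import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """One direction of Eq. (5): enhance the depth feature by attending to the RGB feature."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, f_bar, g):
        # f_bar, g: (N, dim) flattened depth / RGB features of the last encoder layer
        attn = torch.softmax(self.q(f_bar) @ self.k(g).t(), dim=-1)  # (N, N) query-key similarity
        return f_bar + self.mlp(attn @ self.v(g))                    # value retrieval + residual

The symmetric RGB-to-depth direction of Eq. (5) uses the same structure with the two inputs swapped.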
422
+ Flow-guided depth upsampling module. With the
423
+ learned flow map ∆L′ in the last layer, we combine it with
424
+ the grid-sampling strategy for the HR depth map predic-
425
+ tion. In detail, the value of the HR depth map is the bilinear
426
+ interpolation of the neighborhood pixels in LR depth map
427
+ DLR, where the neighborhoods are defined according to the
428
+ learned flow field, which can be formulated as:
429
+ Dcoarse = Grid-Sample(DLR, ∆L′),    (7)
431
+ where Grid-Sample denotes the upsampling operation com-
432
+ puting the output using pixel values from neighborhood pix-
433
+ els and pixel locations from the grid (Li et al. 2020).
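The grid-sampling of Eq. (7) maps directly onto torch.nn.functional.grid_sample; the sketch below normalizes the flow offsets into the [-1, 1] grid convention, which is an assumption of this sketch rather than a statement of the paper's exact formulation:

import torch
import torch.nn.functional as F

def flow_guided_upsample(d_lr, flow):
    """Eq. (7): bilinearly sample the LR depth at HR grid locations offset by the flow.
    d_lr: (B, 1, h, w) LR depth; flow: (B, 2, H, W) flow map at the target resolution."""
    B, _, H, W = flow.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1).to(flow)
    offset = torch.stack((flow[:, 0] / ((W - 1) / 2.0),
                          flow[:, 1] / ((H - 1) / 2.0)), dim=-1)
    return F.grid_sample(d_lr, base + offset, mode="bilinear", align_corners=True)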
434
+ Flow-enhanced Pyramid Edge Attention Network
435
+ In order to further improve our DSR precision in the case of
436
+ the edge noise problem, we propose a flow-enhanced pyra-
437
+ mid network, where the learned structure flow is served as
438
+ the edge attention to hierarchically mine edge-focused guid-
439
+ ance feature from the RGB image for the edge refinement
440
+ of Dcoarse. Specifically, we first feed the previously pre-
441
+ dicted HR depth map Dcoarse and the RGB image into an
442
+ encoder network to extract their features: {Fcoarse
443
+ t
444
+ }T +1
445
+ t=1 and
446
+ {Gt}T
447
+ t=1, where subscript t indicates the extracted feature at
448
+ the t-th layer. Then, we propose the flow-enhanced pyramid
449
+ attention module and the edge decoder module as follows
450
+ for refined HR depth prediction.
451
+ Flow-enhanced pyramid attention module. In this mod-
452
+ ule, we target at combining the RGB feature and the flow
453
+ map to learn the edge-focused guidance feature {Gguide
454
+ t
455
+ }
456
+ at each layer. In detail, for the t-th layer, with the RGB fea-
457
+ ture Gt and its corresponding flow map ∆L′−t, we first fuse
458
+ the flow information into the RGB feature to form the flow-
459
+ enhanced RGB feature,
460
+ G^{flow}_t = ∆_{L′−t} · G_t + G_t,    (8)
464
+
465
488
+ [Figure 4 diagram: the flow map ∆_{L′−t} modulates the RGB feature, which is processed by stacked CONV branches at multiple scales, followed by scale unify & concat; the whole flow-enhanced pyramid attention module is iterated ×K.]
503
+ Figure 4: The architecture of the pyramid attention module.
504
+ The subscript t denotes the feature output in the t-th layer of
505
+ the encoder (1 ≤ t ≤ T). ‘×K’ indicates the iteration times
506
+ of the guidance feature updating.
507
+ where ∆L′−t ·Gt is expected to exploit the significant flow-
508
+ value fluctuations at the edge region in ∆L′−t to better
509
+ highlight the structure region of the RGB feature. To fur-
510
+ ther smooth the texture feature in Gflow
511
+ t
512
+ , we concatenate
513
+ it with the texture-less depth feature Fcoarse
514
+ t
515
+ to obtain the
516
+ texture-degraded RGB feature ˜Gflow
517
+ t
518
+ . Then, we feed ˜Gflow
519
+ t
520
+ into a pyramid network to extract its edge-focused guid-
521
+ ance features { ˜Gflow
522
+ t,k
523
+ }K
524
+ k=1 at different scales. The low-level
525
+ guidance feature is to filter the texture noise (guided by the
526
+ flow map) while the high-level is to exploit the rich context
527
+ information for edge-feature capture. After that, we unify
528
+ the scales of the hierarchical feature { ˜Gflow
529
+ t,k
530
+ }K
531
+ k=1 using the
532
+ bicubic interpolation and pass the concatenated feature into
533
+ a convolutional block to generate the flow-enhanced RGB
534
+ guidance feature Gguide
535
+ t
536
+ at the t-th layer. Notably, we de-
537
+ sign an iterative architecture to progressively refine the RGB
538
+ guidance feature as illustrated in Fig. 4.
539
+ Edge decoder. Guided by the flow-based guidance fea-
540
+ tures {Gguide
541
+ t
542
+ }T
543
+ t=1 learned at each layer, we progressively
544
+ decode the depth feature in an iterative manner:
545
+ F^{edge}_{t+1} = FU(Cat(F^{edge}_t, G^{guide}_{T−t+1}, F^{coarse}_{T−t+1})),    (9)
552
+ where FU function indicates the fusion and upsampling op-
553
+ eration following (Guo et al. 2020) and the initial feature
554
+ Fedge
555
+ 1
556
+ is obtained by the convolutional operation on Fcoarse
557
+ T +1 .
558
+ Finally, we pass Fedge
559
+ T +1 into a convolutional block to obtain
560
+ the edge-refined HR depth map Drefine.
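A short PyTorch-style sketch of the flow-enhanced guidance of Eq. (8) and the multi-scale pyramid step described above (the pooling scales, the single-channel flow attention and the omission of the per-scale conv blocks are simplifications of this sketch):

import torch
import torch.nn.functional as F

def flow_enhanced_feature(g_t, flow_attn):
    """Eq. (8): emphasize structure regions of the RGB feature with the flow map.
    g_t: (B, C, H, W) RGB feature; flow_attn: (B, 1, H, W) flow-derived attention."""
    return flow_attn * g_t + g_t

def pyramid_guidance(g_flow, scales=(1, 2, 4)):
    """Extract guidance features at several scales and unify them back to the input size."""
    H, W = g_flow.shape[-2:]
    feats = []
    for s in scales:
        x = F.avg_pool2d(g_flow, s) if s > 1 else g_flow
        feats.append(F.interpolate(x, size=(H, W), mode="bilinear", align_corners=False))
    return torch.cat(feats, dim=1)   # concatenated multi-scale guidance feature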
561
+ Loss Function
562
+ We train our model by minimizing the smooth-L1 loss be-
563
+ tween the ground-truth depth map Dgt and the network out-
564
+ put of each sub-network, including the coarse depth predic-
565
+ tion Dcoarse and the refined one Drefine:
566
+ L_{dsr} = \sum_{i=1}^{H×W} \Big[ ℓ\big(D^{coarse}_i − D^{gt}_i\big) + ℓ\big(D^{refine}_i − D^{gt}_i\big) \Big],    (10)
585
+ where the subscript i denotes the pixel index and the smooth-
586
+ L1 loss function is defined as:
587
+ ℓ(u) = \begin{cases} 0.5u^2, & \text{if } |u| ≤ 1 \\ |u| − 0.5, & \text{otherwise} \end{cases}    (11)
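In PyTorch, the objective of Eqs. (10)-(11) corresponds to the built-in smooth-L1 (Huber, beta = 1) loss applied to both outputs; a minimal sketch, assuming depth maps stored as (B, 1, H, W) tensors:

import torch.nn.functional as F

def dsr_loss(d_coarse, d_refine, d_gt):
    """Smooth-L1 penalty on the coarse and the refined predictions, summed over pixels."""
    return (F.smooth_l1_loss(d_coarse, d_gt, reduction="sum")
            + F.smooth_l1_loss(d_refine, d_gt, reduction="sum"))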
593
+ Experiments
594
+ Experimental Setting
595
+ To evaluate the performance of our method, we perform ex-
596
+ tensive experiments on real-world RGB-D-D dataset (He
597
+ et al. 2021), ToFMark dataset (Ferstl et al. 2013) and syn-
598
+ thetic NYU-v2 dataset (Silberman et al. 2012). We imple-
599
+ ment our model with PyTorch and conduct all experiments
600
+ on a server containing an Intel i5 2.2 GHz CPU and a TITAN
601
+ RTX GPU with almost 24 GB of memory. During training, we randomly
602
+ crop patches of resolution 256 × 256 as groundtruth and the
603
+ training and testing data are normalized to the range [0, 1].
604
+ In order to balance the training time and network perfor-
605
+ mance, the parameters L, L′, K, T are set to 3, 6, 3, 2 in this
606
+ paper. We quantitatively and visually compare our method
607
+ with 13 state-of-the-art (SOTA) methods: TGV (Ferstl et al.
608
+ 2013), FBS (Barron and Poole 2016), MSG (Tak-Wai, Loy,
609
+ and Tang 2016), DJF (Li et al. 2016), DJFR (Li et al. 2019),
610
+ GbFT (AlBahar and Huang 2019), PAC (Su et al. 2019),
611
+ CUNet (Deng and Dragotti 2020), FDKN (Kim, Ponce, and
612
+ Ham 2021), DKN (Kim, Ponce, and Ham 2021), FDSR (He
613
+ et al. 2021), CTKT (Sun et al. 2021) and DCTNet (Zhao
614
+ et al. 2022). For simplicity, we name our Structure Flow-
615
+ Guided method as SFG.
616
+ Experiments on Real Datasets
617
+ Depth maps captured by cheap depth sensors usually suf-
618
+ fer from structural distortion and edge noise. To verify the
619
+ efficiency and robustness of our proposed method, we em-
620
+ ploy our method on two challenging benchmarks: RGB-D-D
621
+ dataset and ToFMark dataset.
622
+ Evaluation on hand-filled RGB-D-D. To evaluate the per-
623
+ formance of our method on real LR depth maps, we conduct
624
+ experiments on RGB-D-D datasets captured by two RGB-
625
+ D sensors: Huawei P30 Pro (captures RGB images and LR
626
+ depth maps) and Helios TOF camera (captures HR depth
627
+ maps). The LR inputs are shown in Fig. 5, which suffer from
628
+ the low resolution (LR size is 192 × 144 and target size is
629
+ 512 × 384) and random structural missing in the edge re-
630
+ gion. Following FDSR (He et al. 2021), we first use 2215
631
+ hand-filled RGB/D pairs for training and 405 RGB/D pairs
632
+ for testing. As listed in the first row of Table 1, the proposed
633
+ model outperforms SOTA methods by a significant margin.
634
+ The first two rows in Fig. 5 show the visual DSR com-
635
+ parisons on hand-filled RGB-D-D dataset. We can see that
636
+ edges in the results of DKN (Kim, Ponce, and Ham 2021)
637
+ and DCTNet (Zhao et al. 2022) are over-smoothed and the
638
+ artifacts are visible in the FDSR results. In contrast, our re-
639
+ sults show more accurate structures without texture copying.
640
+ Evaluation on incomplete RGB-D-D. To further verify the
641
+ DSR performance of our method in the case of edge noise
642
+ (e.g., edge holes), instead of the hole completion above, we
643
+ directly test SFG on the unfilled RGB-D-D dataset and achieve the
644
+
645
+ Figure 5: Visual comparison on RGB-D-D dataset; panels: (a) LR depth, (b) DKN, (c) FDSR, (d) DCTNet, (e) SFG (ours), (f) Groundtruth. The first (last) two rows show DSR results of hand-filled (incomplete) LR.
654
+ RMSE        | Bicubic | MSG   | DJF  | DJFR | CUNet | DKN  | FDKN | FDSR | DCTNet | SFG (ours)
+ Hand-filled | 7.17    | 5.50  | 5.54 | 5.52 | 5.84  | 5.08 | 5.37 | 5.34 | 5.28   | 3.88
+ Incomplete  | -       | 7.90  | 5.70 | 5.52 | 6.54  | 5.43 | 5.87 | 5.59 | 5.49   | 4.79
+ Noisy       | 11.57   | 10.36 | 5.62 | 5.71 | 6.13  | 5.16 | 5.54 | 5.63 | 5.16   | 4.45
698
+ Table 1: Quantitative comparison on RGB-D-D dataset. Best and second best results are in bold and underline, respectively.
699
+ Figure 6: Visual comparison on ToFMark dataset; panels: (a) Groundtruth, (b) Bicubic, (c) DKN, (d) DCTNet, (e) SFG (ours).
705
+ Method | DJFR | DKN  | FDKN | FDSR | DCTNet | SFG (ours)
+ RMSE   | 0.27 | 0.26 | 0.28 | 0.28 | 0.27   | 0.25
718
+ Table 2: Quantitative comparison on ToFMark dataset.
719
+ lowest RMSE as shown in the second row of Table 1. More-
720
+ over, as shown in the last two rows in Fig. 5, the edges recov-
721
+ ered by our method are sharper with fewer artifacts and vi-
722
+ sually closest to the ground-truth map. It’s mainly attributed
723
+ to the edge-focused guidance feature learning with our flow-
724
+ enhanced pyramid edge attention network.
725
+ Evaluation on noisy RGB-D-D and ToFMark. We evalu-
726
+ ate the denoising and generalization ability of our method on
727
+ ToFMark dataset consisting of three RGB-D pairs. The LR
728
+ inputs have irregular noise and limited resolution (LR depth
729
+ is 120 × 160 and target size is 610 × 810). To simulate the
730
+ similar degradation for training, we add the Gaussian noise
731
+ (mean 0 and standard deviation 0.07) and the Gaussian blur
732
+ (kernel size 5) on the 2215 RGB-D pairs from RGB-D-D
733
+ dataset to generate the noisy training dataset. Testing dataset
734
+ consists of 405 RGB-D pairs from noisy RGB-D-D dataset
735
+ and 3 RGB-D pairs from ToFMark dataset. As shown in the
736
+ last row of Table 1 and Table 2, our method achieves the low-
737
+ est RMSE in noisy RGB-D-D dataset and the lowest RMSE
738
+ in ToFMark dataset, which proves its ability for noise re-
739
+ moving. As shown in Fig. 6, it is observed that DKN (Kim,
740
+ Ponce, and Ham 2021) and DCTNet (Zhao et al. 2022) intro-
741
+ duce some texture artifacts and noise in the low-frequency
742
+ region, while SFG recovers clean surface owing to PEA with
743
+ effective texture removing.
744
+ Experiments on Synthetic Datasets
745
+ Since most popular methods are designed for synthetic
746
+ datasets, we further evaluate our method on NYU-v2
747
+ datasets for a more comprehensive comparison. Following
748
+ the widely used data splitting criterion, we sample 1000
749
+
750
+ Figure 7: Visual comparison of ×8 and ×16 DSR results on NYU-v2 dataset; panels: (a) LR depth, (b) DJFR, (c) DKN, (d) FDSR, (e) CFUNet, (f) SFG (ours), (g) Groundtruth.
760
+ RMSE | TGV   | FBS   | DJFR | GbFT | PAC  | CUNet | FDKN | DKN  | FDSR | DCTNet | CTKT | SFG (ours)
+ ×4   | 4.98  | 4.29  | 2.38 | 3.35 | 2.39 | 1.89  | 1.86 | 1.62 | 1.61 | 1.59   | 1.49 | 1.45
+ ×8   | 11.23 | 8.94  | 4.94 | 5.73 | 4.59 | 3.58  | 3.33 | 3.26 | 3.18 | 3.16   | 2.73 | 2.84
+ ×16  | 28.13 | 14.59 | 9.18 | 9.01 | 8.09 | 6.96  | 6.78 | 6.51 | 5.86 | 5.84   | 5.11 | 5.56
812
+ Table 3: Quantitative comparison on NYU-v2 dataset in terms of average RMSE (cm).
813
+ Model                      | RMSE
+ CFUNet                     | 4.22
+ CFUNet w/o TriSA           | 4.34
+ CFUNet w/o cross-attention | 4.57
821
+ Table 4: Ablation study of CFUNet on RGB-D-D dataset.
822
+ Datasets     | SFG  | SFG w/o PEANet
+ RGB-D-D      | 3.88 | 4.22
+ NYU-v2 (×4)  | 1.45 | 1.82
+ NYU-v2 (×8)  | 2.84 | 3.76
+ NYU-v2 (×16) | 5.55 | 5.90
837
+ Table 5: Ablation study (in RMSE) of PEANet.
838
+ RGB-D pairs for training and the rest 449 RGB-D pairs
839
+ for testing. As shown in the Table 3, the proposed method
840
+ still achieves comparable results with the SOTA methods
841
+ on all upsampling cases (×4, ×8, ×16). In addition, Fig. 7
842
+ presents that our ×8 and ×16 upsampled depth maps own
843
+ higher accuracy and more convincing results. It verifies that
844
+ our method not only performs DSR well in low-quality maps
845
+ with noise and missing structure, but also achieves high-
846
+ quality precision in the case of large-scale upsampling.
847
+ Ablation Analysis
848
+ Ablation study on CFUNet. As shown in the first row of the
849
+ Table 4, we still achieve the lowest RMSE criterion just with
850
+ the single CFUNet (SFG w/o PEANet) on RGB-D-D dataset
851
+ when compared with SOTA methods. It proves the effective-
852
+ ness of the learned structure flow map for real DSR.
853
+ Table 4 also shows that removing the trilateral self-attention
854
+ (TriSA) and cross-attention module in CFUNet causes per-
855
+ formance degradation on RGB-D-D datasets, which verifies
856
+ the necessity of the depth feature enhancement for reliable
857
+ flow map generation.
858
+ [Figure 8 panels, left to right: K=0 (w/o FPA), K=1, K=2, K=3.]
862
+ Figure 8: Visual comparison of guidance features using FPA
863
+ with different iteration times K, i.e., from 0 (w/o FPA) to 3.
864
+ Ablation study on PEANet. To analyze the effectiveness of
865
+ PEANet, we train the network with and without PEANet on
866
+ the synthetic dataset (NYU-v2) and the real-world dataset
867
+ (RGB-D-D). As shown in the Table 5, PEANet consistently
868
+ brings the RMSE gain under both real and synthetic dataset
869
+ settings. It’s mainly due to our edge-focused guidance fea-
870
+ ture learning for robust edge refinement. In addition, Fig. 8
871
+ shows the guidance features under varying iteration times
872
+ in FPA (Flow-enhanced Pyramid Attention) module from
873
+ 0 (w/o FPA) to 3. Visually, as the number of iterations in-
874
+ creases, the edge regions tend to receive more attention.
875
+ Conclusion
876
+ In this paper, we proposed a novel structure flow-guided
877
+ DSR framework for real-world depth super-resolution,
878
+ which deals with issues of structural distortion and edge
879
+ noise. For the structural distortion, a cross-modality flow-
880
+ guided upsampling network was presented to learn a reli-
881
+ able cross-modality flow between depth and the correspond-
882
+ ing RGB guidance for the reconstruction of the distorted
883
+ depth edge, where a trilateral self-attention combines the ge-
884
+ ometric and semantic correlations for structure flow learn-
885
+ ing. For the edge noise, a flow-enhanced pyramid edge at-
886
+ tention network was introduced to produce edge attention
887
+ based on the learned flow map and learn the edge-focused
888
+ guidance feature for depth edge refinement with a pyramid
889
+ network. Extensive experiments on both real-world and syn-
890
+ thetic datasets demonstrated the superiority of our method.
891
+
892
+ Acknowledgement
893
+ This work was supported by the National Science Fund of
894
+ China under Grant Nos. U1713208 and 62072242.
895
+ References
896
+ AlBahar, B.; and Huang, J.-B. 2019. Guided image-to-image
897
+ translation with bi-directional feature transformation.
898
+ In
899
+ ICCV, 9016–9025.
900
+ Barron, J. T.; and Poole, B. 2016. The fast bilateral solver.
901
+ In ECCV, 617–632.
902
+ Blum, M.; Springenberg, J. T.; W¨ulfing, J.; and Riedmiller,
903
+ M. 2012. A learned feature descriptor for object recognition
904
+ in RGB-D data. In ICRA, 1298–1303.
905
+ Chen, B.; and Jung, C. 2018.
906
+ Single depth image super-
907
+ resolution using convolutional neural networks. In ICASSP,
908
+ 1473–1477.
909
+ Deng, X.; and Dragotti, P. L. 2020. Deep convolutional neu-
910
+ ral network for multi-modal image restoration and fusion.
911
+ IEEE transactions on pattern analysis and machine intelli-
912
+ gence, PP(99): 1–1.
913
+ Dosovitskiy, A.; Fischer, P.; Ilg, E.; Hausser, P.; Hazirbas,
914
+ C.; Golkov, V.; van der Smagt, P.; Cremers, D.; and Brox, T.
915
+ 2015. FlowNet: Learning Optical Flow With Convolutional
916
+ Networks. In ICCV, 2758–2766.
917
+ Eitel, A.; Springenberg, J. T.; Spinello, L.; Riedmiller, M.;
918
+ and Burgard, W. 2015. Multimodal deep learning for robust
919
+ RGB-D object recognition. In IROS, 681–687.
920
+ Ferstl, D.; Reinbacher, C.; Ranftl, R.; R¨uther, M.; and
921
+ Bischof, H. 2013. Image guided depth upsampling using
922
+ anisotropic total generalized variation. In ICCV, 993–1000.
923
+ Guo, C.; Li, C.; Guo, J.; Cong, R.; Fu, H.; and Han, P. 2019.
924
+ Hierarchical Features Driven Residual Learning for Depth
925
+ Map Super-Resolution. IEEE Transactions on Image Pro-
926
+ cessing, 2545–2557.
927
+ Guo, Y.; Chen, J.; Wang, J.; Chen, Q.; Cao, J.; Deng, Z.; Xu,
928
+ Y.; and Tan, M. 2020. Closed-loop matters: Dual regression
929
+ networks for single image super-resolution. In CVPR, 5407–
930
+ 5416.
931
+ Hao, X.; Lu, T.; Zhang, Y.; Wang, Z.; and Chen, H. 2019.
932
+ Multi-Source Deep Residual Fusion Network for Depth Im-
933
+ age Super-resolution. In RSVT, 62–67.
934
+ He, L.; Zhu, H.; Li, F.; Bai, H.; Cong, R.; Zhang, C.; Lin,
935
+ C.; Liu, M.; and Zhao, Y. 2021. Towards Fast and Accurate
936
+ Real-World Depth Super-Resolution: Benchmark Dataset
937
+ Baseline and. In CVPR, 9229–9238.
938
+ Hou, J.; Dai, A.; and Nießner, M. 2019. 3d-sis: 3d semantic
939
+ instance segmentation of rgb-d scans. In CVPR, 4421–4430.
940
+ Kim, B.; Ponce, J.; and Ham, B. 2021. Deformable kernel
941
+ networks for joint image filtering. International Journal of
942
+ Computer Vision, 129(2): 579–600.
943
+ Li, X.; You, A.; Zhu, Z.; Zhao, H.; Yang, M.; Yang, K.; Tan,
944
+ S.; and Tong, Y. 2020. Semantic flow for fast and accurate
945
+ scene parsing. In ECCV, 775–793. Springer.
946
+ Li, Y.; Huang, J.-B.; Ahuja, N.; and Yang, M.-H. 2016. Deep
947
+ joint image filtering. In ECCV, 154–169.
948
+ Li, Y.; Huang, J.-B.; Ahuja, N.; and Yang, M.-H. 2019. Joint
949
+ image filtering with deep convolutional networks.
950
+ IEEE
951
+ transactions on pattern analysis and machine intelligence,
952
+ 41(8): 1909–1923.
953
+ Liu, W.; Chen, X.; Yang, J.; and Wu, Q. 2016. Robust color
954
+ guided depth map restoration. IEEE Transactions on Image
955
+ Processing, 26(1): 315–327.
956
+ Liu, X.; Zhai, D.; Chen, R.; Ji, X.; Zhao, D.; and Gao, W.
957
+ 2018. Depth restoration from RGB-D data via joint adaptive
958
+ regularization and thresholding on manifolds. IEEE Trans-
959
+ actions on Image Processing, 28(3): 1068–1079.
960
+ Lutio, R. d.; D’aronco, S.; Wegner, J. D.; and Schindler, K.
961
+ 2019. Guided super-resolution as pixel-to-pixel transforma-
962
+ tion. In ICCV, 8829–8837.
963
+ Meuleman, A.; Baek, S.-H.; Heide, F.; and Kim, M. H. 2020.
964
+ Single-shot monocular rgb-d imaging using uneven double
965
+ refraction. In CVPR, 2465–2474.
966
+ Newcombe, R. A.; Izadi, S.; Hilliges, O.; Molyneaux, D.;
967
+ Kim, D.; Davison, A. J.; Kohi, P.; Shotton, J.; Hodges, S.;
968
+ and Fitzgibbon, A. 2011. Kinectfusion: Real-time dense sur-
969
+ face mapping and tracking. In ISMAR, 127–136. Ieee.
970
+ Silberman, N.; Hoiem, D.; Kohli, P.; and Fergus, R. 2012.
971
+ Indoor segmentation and support inference from rgbd im-
972
+ ages. In ECCV, 746–760.
973
+ Song, X.; Dai, Y.; Zhou, D.; Liu, L.; Li, W.; Li, H.; and Yang,
974
+ R. 2020. Channel attention based iterative residual learning
975
+ for depth map super-resolution. In CVPR, 5631–5640.
976
+ Su, H.; Jampani, V.; Sun, D.; Gallo, O.; Learned-Miller, E.;
977
+ and Kautz, J. 2019. Pixel-adaptive convolutional neural net-
978
+ works. In CVPR, 11166–11175.
979
+ Sun, B.; Ye, X.; Li, B.; Li, H.; Wang, Z.; and Xu, R. 2021.
980
+ Learning Scene Structure Guidance via Cross-Task Knowl-
981
+ edge Transfer for Single Depth Super-Resolution. In CVPR,
982
+ 7792–7801.
983
+ Tak-Wai; Loy, C. C.; and Tang, X. 2016. Depth Map Super-
984
+ Resolution by Deep Multi-Scale Guidance. In ECCV, 353–
985
+ 369.
986
+ Tang, Q.; Cong, R.; Sheng, R.; He, L.; Zhang, D.; Zhao, Y.;
987
+ and Kwong, S. 2021. BridgeNet: A Joint Learning Network
988
+ of Depth Map Super-Resolution and Monocular Depth Esti-
989
+ mation. In ACMMM, 2148–2157.
990
+ Wang, K.; Zhang, Z.; Yan, Z.; Li, X.; Xu, B.; Li, J.; and
991
+ Yang, J. 2021. Regularizing Nighttime Weirdness: Efficient
992
+ Self-Supervised Monocular Depth Estimation in the Dark.
993
+ In ICCV, 16055–16064.
994
+ Wang, Z.; Ye, X.; Sun, B.; Yang, J.; Xu, R.; and Li, H. 2020.
995
+ Depth upsampling based on deep edge-aware learning. Pat-
996
+ tern Recognition, 103: 107274.
997
+ Yan, Z.; Wang, K.; Li, X.; Zhang, Z.; Li, G.; Li, J.; and Yang,
998
+ J. 2022. Learning Complementary Correlations for Depth
999
+ Super-Resolution With Incomplete Data in Real World.
1000
+ IEEE Transactions on Neural Networks and Learning Sys-
1001
+ tems.
1002
+ Yang, J.; Ye, X.; Li, K.; Hou, C.; and Wang, Y. 2014. Color-
1003
+ guided depth recovery from RGB-D data using an adaptive
1004
+
1005
+ autoregressive model. IEEE transactions on image process-
1006
+ ing, 23(8): 3443–3458.
1007
+ Zhao, Z.; Zhang, J.; Xu, S.; Lin, Z.; and Pfister, H. 2022.
1008
+ Discrete cosine transform network for guided depth map
1009
+ super-resolution. In CVPR, 5697–5707.
1010
+ Zhu, J.; Zhai, W.; Cao, Y.; and Zha, Z.-J. 2018. Co-occurrent
1011
+ structural edge detection for color-guided depth map super-
1012
+ resolution. In MMM, 93–105.
1013
+ Zuo, Y.; Wu, Q.; Fang, Y.; An, P.; Huang, L.; and Chen,
1014
+ Z. 2019. Multi-scale frequency reconstruction for guided
1015
+ depth map super-resolution via deep residual network. IEEE
1016
+ Transactions on Circuits and Systems for Video Technology,
1017
+ 30(2): 297–306.
1018
+
0tFQT4oBgHgl3EQf0TbP/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
29AyT4oBgHgl3EQf1vl1/content/tmp_files/2301.00740v1.pdf.txt ADDED
@@ -0,0 +1,1514 @@
1
+ P3DC-Shot: Prior-Driven Discrete Data Calibration for
2
+ Nearest-Neighbor Few-Shot Classification
3
+ Shuangmei Wanga,∗, Rui Maa,b,∗, Tieru Wua,b,∗∗, Yang Caoa,∗∗
4
+ aJilin University, No. 2699 Qianjin Street, Changchun, 130012, China
5
+ bEngineering Research Center of Knowledge-Driven Human-Machine Intelligence, MOE, No. 2699 Qianjin Street, Changchun, 130012, China
6
+ Abstract
7
+ Nearest-Neighbor (NN) classification has been proven as a simple and effective approach for few-shot learning. The
8
+ query data can be classified efficiently by finding the nearest support class based on features extracted by pretrained
9
+ deep models. However, NN-based methods are sensitive to the data distribution and may produce false prediction if
10
+ the samples in the support set happen to lie around the distribution boundary of different classes. To solve this issue,
11
+ we present P3DC-Shot, an improved nearest-neighbor based few-shot classification method empowered by prior-
12
+ driven data calibration. Inspired by the distribution calibration technique which utilizes the distribution or statistics of
13
+ the base classes to calibrate the data for few-shot tasks, we propose a novel discrete data calibration operation which is
14
+ more suitable for NN-based few-shot classification. Specifically, we treat the prototypes representing each base class
15
+ as priors and calibrate each support data based on its similarity to different base prototypes. Then, we perform NN
16
+ classification using these discretely calibrated support data. Results from extensive experiments on various datasets
17
+ show our efficient non-learning based method can outperform or is at least comparable to SOTA methods which need
18
+ additional learning steps.
19
+ Keywords:
20
+ Few-Shot Learning, Image Classification, Prototype, Calibration
21
+ 1. Introduction
22
+ Deep learning has triggered significant breakthroughs
23
+ in many computer vision tasks, such as image classifi-
24
+ cation [1, 2, 3], object detection [4, 5, 6], and seman-
25
+ tic segmentation [7, 8, 9] etc. One key factor for the
26
+ success of deep learning is the emergence of large-scale
27
+ datasets, e.g., ImageNet [2], MSCOCO [10], Cityscapes
28
+ [11], just to name a few. However, it is difficult and
29
+ expensive to collect and annotate sufficient data sam-
30
+ ples to train a deep model with numerous weights. The
31
+ data limitation has become a main bottleneck for more
32
+ broader application of deep leaning, especially for the
33
+ tasks involving rarely seen samples. On the other hand,
34
+ humans can learn to recognize novel visual concepts
35
+ from only a few samples. There is still a notable gap
36
+ • This work is supported in part by the National Key Research
37
+ and Development Program of China (Grant No. 2020YFA0714103)
38
+ and the National Natural Science Foundation of China (Grant No.
39
+ 61872162 and 62202199).
40
+ ∗Co-first authors.
41
+ ∗∗Corresponding authors.
42
+ between human intelligence and the deep learning based
43
+ artificial intelligence. Few-shot learning (FSL) aims to
44
+ learn neural models for novel classes with only a few
45
+ samples. Due to its ability for generalization, FSL has
46
+ attracted extensive interests in recent years [12, 13, 14].
47
+ Few-shot classification is the most widely studied
48
+ FSL task which attempts to recognize new classes or
49
+ classify data in an unseen query set. Usually, few-shot
50
+ classification is formulated in a meta-learning frame-
51
+ work [15, 16, 17, 18, 19, 20, 21, 22, 23].
52
+ In the
53
+ meta-training stage, the N-way K-shot episodic training
54
+ paradigm is often employed to learn generalizable clas-
55
+ sifiers or feature extractors for data of the base classes.
56
+ Then, in the meta-testing stage, the meta-learned clas-
57
+ sifiers can quickly adapt to a few annotated but unseen
58
+ data in a support set and attain the ability to classify the
59
+ novel query data. Although meta-learning has shown
60
+ the effectiveness for few-shot classification, it is unclear
61
+ how to set the optimal class number (N) and per-class
62
+ sample number (K) when learning the classifiers. Also,
63
+ the learned classifier may not perform well when the
64
+ sample number K used in meta-testing does not match
65
+ Preprint submitted to Elsevier
66
+ January 3, 2023
67
+ arXiv:2301.00740v1 [cs.CV] 2 Jan 2023
68
+
69
+ the one used in the meta-training [24].
70
+ On the other hand, nearest-neighbor (NN) based clas-
71
+ sification has been proven as a simple and effective ap-
72
+ proach for FSL. Based on features obtained from the
73
+ meta-learned feature extractor [15, 16] or the pretrained
74
+ deep image models [25], the query data can be effi-
75
+ ciently classified by finding the nearest support class.
76
+ Specifically, the prediction is determined by measuring
77
+ the similarity or distance between the query feature and
78
+ the prototypes (i.e., average or centroid) of the support
79
+ features. From the geometric view, NN-based classi-
80
+ fication can be solved using a Voronoi Diagram (VD)
81
+ which is a partition of the space formed by the support
82
+ features [26, 27]. Given a query feature, its class can
83
+ be predicted by computing the closest Voronoi cell that
84
+ corresponds to a certain support class. With proper VD
85
+ construction and feature distance metrics, the state-of-
86
+ the-art performance can be achieved for few-shot clas-
87
+ sification [28].
88
+ However, due to the limited number
89
+ of support samples, NN-based few-shot classification is
90
+ sensitive to the distribution of the sampled data and may
91
+ produce false prediction if the samples in the support set
92
+ happen to lie around the distribution boundary of differ-
93
+ ent classes (see Figure 1 left).
94
+ To solve above issues, various efforts have been paid
95
+ to more effectively utilize the knowledge or priors from
96
+ the base classes for few-shot classification. One natural
97
+ way is to learn pretrained classifiers or image encoders
98
+ with the abundant labeled samples of base classes and
99
+ then adapt them the novel classes via transfer learning
100
+ [29, 30, 31, 23]. Meanwhile, it has been shown that
101
+ variations in selecting the base classes can lead to dif-
102
+ ferent performance on the novel classes [32, 33, 34] and
103
+ how to select the base classes for better feature repre-
104
+ sentation learning still needs more investigation.
105
+ On
106
+ the other hand, a series of works [35, 36, 37, 38] per-
107
+ form data calibration to the novel classes so that the re-
108
+ sults are less affected by the limited number of support
109
+ samples. One representative is Distribution Calibration
110
+ (DC) [38] which assumes the features of the data fol-
111
+ low the Gaussian distribution and transfers the statis-
112
+ tics from the similar base classes to the novel classes.
113
+ Then, DC trains a simple logistic regression classifier
114
+ to classify the query features using features sampled
115
+ from the calibrated distributions of the novel classes.
116
+ Although DC has achieved superior performance than
117
+ previous meta-learning [19, 21, 22] or transfer-learning
118
+ [29, 30, 31, 23] based methods, it relies on the strong as-
119
+ sumption for Gaussian-like data distribution and it can-
120
+ not be directly used for NN-based few-shot classifica-
121
+ tion.
122
+ In this paper, we propose P3DC-Shot, an improved
123
+ Support sample Query sample Calibrated support sample
124
+ Figure 1: When samples in the support set lie around the distribution
125
+ boundary of different classes, the NN classifier may produce false pre-
126
+ diction. By performing discrete calibration for each support sample
127
+ using priors from the base classes, the calibrated support data is trans-
128
+ formed closer to the actual class centroid and can lead to less-biased
129
+ NN classification. The colored regions represent the underlying data
130
+ distribution of different classes. The gray lines are the predicted deci-
131
+ sion boundaries by the NN classifier.
132
+ NN-based few-shot classification method that employs
133
+ prior information from base classes to discretely cali-
134
+ brate or adjust the support samples so that the calibrated
135
+ data is more representative for the underlying data dis-
136
+ tribution (Figure 1 right). Our main insight is even the
137
+ novel classes have not been seen before, they still share
138
+ similar features to some base classes, and the prior in-
139
+ formation from the base classes can serve as the context
140
+ data for the novel classes. When only a few support
141
+ samples are available for the novel classes, performing
142
+ prior-driven calibration can alleviate the possible bias
143
+ introduced by the few-shot support samples. With the
144
+ calibrated support samples, the query data can be more
145
+ accurately classified by a NN-based classifier.
146
+ Specifically, for the prior information, we compute
147
+ the prototype, i.e., the average of features, for each base
148
+ class. Then, we propose three different schemes for se-
149
+ lecting the similar prototypes to calibrate the support
150
+ data. Firstly, we propose the sample-level calibration
151
+ which selects the top M most similar base prototypes for
152
+ each support sample and then apply weighted averaging
153
+ between each support sample and selected prototypes to
154
+ obtain the calibrated support sample. Secondly, to uti-
155
+ lize more context from the base classes, we propose the
156
+ task-level calibration which combines the most similar
157
+ base prototypes for each support sample into a union
158
+ and performs the calibration for the support samples us-
159
+ ing each prototype in the union. In addition, we pro-
160
+ pose a unified calibration scheme that combines the two
161
+ above schemes so that the calibration can exploit dif-
162
+ ferent levels of prior information from the base classes.
163
+ To utilize the calibrated support samples for the NN-
164
+ based classification, we further obtain the prototypes of
165
+ 2
166
+
167
+ the support class using an attention-weighted averaging,
168
+ while the attention weights are computed between the
169
+ query sample and each calibrated support sample. Fi-
170
+ nally, the classification of a query sample is simply de-
171
+ termined by finding its nearest support prototype mea-
172
+ sured by the cosine similarity.
173
+ Comparing to DC, our P3DC-Shot adopts the simi-
174
+ lar idea of transferring the information or statistics from
175
+ the base classes to the novel classes. The key differ-
176
+ ence is our data calibration is performed on each indi-
177
+ vidual support sample rather than the distribution pa-
178
+ rameters and we employ the NN-based classification in-
179
+ stead of the learned classifier as in DC. Comparing to
180
+ other NN-based few-shot classification methods such as
181
+ SimpleShot [25], since our support data is calibrated,
182
+ the NN classification is less affected by the sampling
183
+ bias for the support data, e.g, the calibrated data is more
184
+ likely to be close to the center of the corresponding
185
+ novel class. We conduct extensive comparisons with re-
186
+ cent state-of-the-art few-shot classificaiton methods on
187
+ miniImageNet [2], tiredImageNet [39] and CUB [40]
188
+ and the results demonstrate the superiority and general-
189
+ izability of our P3DC-Shot. Ablation studies on differ-
190
+ ent calibration schemes, i.e., different weights between
191
+ the sample-level and task-level calibration also show the
192
+ necessity of combining two schemes for better results.
193
+ In summary, our contributions are as follows:
194
+ 1. We
195
+ propose
196
+ P3DC-Shot,
197
+ a
198
+ prior-driven
199
+ dis-
200
+ crete data calibration strategy for nearest-neighbor
201
+ based few-shot classification to enhance the
202
+ model’s robustness to the distribution of the sup-
203
+ port samples.
204
+ 2. Without additional training and expensive compu-
205
+ tation, the proposed method can efficiently cali-
206
+ brate each support sample using information from
207
+ the prototypes of the similar base classes.
208
+ 3. We conduct extensive evaluations on three discrete
209
+ calibration schemes on various datasets and the re-
210
+ sults show our efficient non-learning based method
211
+ can outperform or at least comparable to SOTA
212
+ few-shot classification methods.
213
+ 2. Related Work
214
+ In this section, we first review the representative
215
+ meta-learning and transfer learning based few-shot clas-
216
+ sification techniques. Then, we summarize the nearest-
217
+ neighbor and data calibration based approaches which
218
+ are most relevant to our P3DC-Shot.
219
+ Meta-learning based few-shot classification. Meta-
220
+ learning [41] has been widely adopted for few-shot clas-
221
+ sification.
222
+ The core idea is to leverage the episodic
223
+ training paradigm to learn generalizable classifiers or
224
+ feature extractors using the data from the base classes
225
+ in an optimization-based framework [18, 19, 20, 21,
226
+ 22], as well as learn a distance function to measure
227
+ the similarity between the support and query samples
228
+ through metric-learning [42, 15, 17, 43, 44, 37]. For
229
+ example, MAML [19] is one of the most representa-
230
+ tive optimization-based meta-learning method for few-
231
+ shot classification and its goal is to learn good net-
232
+ work initialization parameters so that the model can
233
+ quickly adapt to new tasks with only a small amount
234
+ of new training data from the novel classes. For metric-
235
+ learning based methods such as the Matching Networks
236
+ [15], Prototypical Networks [16] and Relation Net-
237
+ works [17], the network is trained to either learn an
238
+ embedding function with a given distance function or
239
+ learn both the embedding and the distance function in
240
+ a meta-learning architecture. Unlike the optimization
241
+ and metric-learning based methods which require so-
242
+ phisticated meta-learning steps, our method can directly
243
+ utilize the features extracted by the pretrained models
244
+ and perform the prior-driven calibration to obtain less-
245
+ biased support features for classification.
246
+ Transfer learning based few-shot classification.
247
+ Transfer learning [45, 46, 47] is a classic machine learn-
248
+ ing or deep learning technique that aims to improve
249
+ the the learning of a new task through the transfer of
250
+ knowledge from one or more related tasks that have al-
251
+ ready been learned. Pretraining a deep network on the
252
+ base dataset and transferring knowledge to the novel
253
+ classes via fine-tuning [31, 48, 30] has been shown as
254
+ the strong baseline for the few-shot classification. To
255
+ learn better feature representations which can lead to
256
+ improved few-shot fine-tuning performance, Mangla et
257
+ al. [29] propose S2M2, the Self-Supervised Manifold
258
+ Mixup, to apply regularization over the feature mani-
259
+ fold enriched via the self-supervised tasks. In addition
260
+ to training new linear classifiers based on the pretrained
261
+ weights learned from the base classes, Meta-Baseline
262
+ [23] performs meta-learning to further optimize the pre-
263
+ trained weights for few-shot classification. On the other
264
+ hand, it has been shown the results of the transfer learn-
265
+ ing based methods depend on different selections of the
266
+ base classes for pretraining [32, 33], while how to se-
267
+ lect the base classes to achieve better performance is
268
+ still challenging [34]. In comparison, our P3DC-shot
269
+ does not need the additional cost for feature represen-
270
+ tation learning and can more effectively utilize the base
271
+ classes in a NN-based classification framework.
272
+ Nearest neighbor based few-shot classification.
273
+ NN-based classification has also been investigated for
274
+ few-shot classification. The main idea is to compute the
275
+ 3
276
+
277
+ prototypes of the support samples, i.e., the mean or cen-
278
+ troid of the support features, and classify the query sam-
279
+ ple using metrics such as L2 distance, cosine similarity
280
+ or a learned distance function. In SimpleShot [25], it
281
+ shows nearest neighbor classification with features sim-
282
+ ply normalized by L2 norm and measured by Euclidean
283
+ distance can achieve competitive few-shot classification
284
+ results. Instead of performing nearest neighbor classifi-
285
+ cation on the image-level features, Li et al. [49] intro-
286
+ duces a Deep Nearest Neighbor Neural Network which
287
+ performs nearest neighbor search over the deep local
288
+ descriptors and defines an image-to-class measure for
289
+ few-shot classification. From a geometric view, Ma et
290
+ al. [50] utilize the Cluster-induced Voronoi Diagram
291
+ (CIVD) to incorporate cluster-to-point and cluster-to-
292
+ cluster relationships to the nearest neighbor based clas-
293
+ sification.
294
+ Similar to above methods, our method is
295
+ based on the nearest prototype classification, while
296
+ we perform the prior-driven data calibration to obtain
297
+ less-biased support data for the prototype computation.
298
+ Meanwhile, computing the attentive or reweighted pro-
299
+ totypes [51, 52, 53] that are guided by the base classes
300
+ or query samples has also been investigated recently.
301
+ We follow the similar idea and compute the attention-
302
+ weighted prototypes for NN-based classification.
303
+ Data calibration for few-shot classification. Due to
304
+ the limited number of samples, the prototypes or cen-
305
+ troids computed from the few-shot support data may be
306
+ biased and cannot represent the underlying data distri-
307
+ bution. Simply performing NN-based classification on
308
+ these biased prototypes will lead to inaccurate classi-
309
+ fication. Several methods have been proposed to cali-
310
+ brate or rectify the data to obtain better samples or pro-
311
+ totypes of the support class [35, 36, 37, 54, 38]. Using
312
+ the images in the base classes, RestoreNet [35] learns
313
+ a class agnostic transformation on the feature of each
314
+ image to move it closer to the class center in the fea-
315
+ ture space. To reduce the bias caused by the scarcity
316
+ of the support data, Liu et al., [36] employ the pseudo-
317
+ labeling to add unlabelled samples with high prediction
318
+ confidence into the support set for prototype rectifica-
319
+ tion. In [37], Guo et al. propose a Pair-wise Similar-
320
+ ity Module to generate calibrated class centers that are
321
+ adapted to the query sample. Instead of calibrating in-
322
+ dividual support samples, Distribution Calibration (DC)
323
+ [38] aims to calibrate the underlying distribution of the
324
+ support classes by transferring the Gaussian statistics
325
+ from the base classes. With sufficient new support data
326
+ sampled from the calibrated distribution, an additional
327
+ classifier is trained in [38] to classify the query sam-
328
+ ple. In contrast to these methods, we do not require
329
+ additional training or assumption of the underlying dis-
330
+ tribution. Instead, we directly use the prototypes of the
331
+ base classes to calibrate each support sample individ-
332
+ ually and we adopt the NN-based classification which
333
+ makes the whole pipeline discrete and efficient. One
334
+ recent work that is similar to ours is Xu et al.
335
+ [54]
336
+ which proposes the Task Centroid Projection Removing
337
+ (TCPR) module and transforms all support and query
338
+ features in a given task to alleviate the sample selection
339
+ bias problem. Comparing to [54], we only calibrate the
340
+ support samples using the priors from the base classes
341
+ and keep the query samples unchanged.
342
+ 3. Method
343
+ To effectively utilize the prior knowledge from the
344
+ base classes, we first propose two independent calibra-
345
+ tion strategies, i.e., sample-level calibration and task-
346
+ level calibration, which exploit different levels of infor-
347
+ mation from the base classes. Then, we combine the
348
+ sample-level and task-level calibration together to ob-
349
+ tain the final calibrated support samples which will be
350
+ used for the nearest neighbor classification.
351
+ Figure 2 shows an illustration of the P3DC-Shot
352
+ pipeline. Given a pretrained feature extractor F and a
353
+ set of prototypes of base classes, we perform the prior-
354
+ driven discrete calibration to the normalized features of
355
+ the support data. Initially, the query sample in green
356
+ is closer to the support sample in yellow.
357
+ After the
358
+ proposed calibration using the related base class proto-
359
+ types, the query sample becomes closer to the calibrated
360
+ support sample in blue. In the following, we provide
361
+ technical details of the P3DC-Shot for few-shot classi-
362
+ fication.
363
+ 3.1. Problem Statement
364
+ In this paper, we focus on the few-shot image clas-
365
+ sification which aims to classify the new image sam-
366
+ ples from the novel classes with just a few labeled im-
367
+ age samples. Normally, the new data sample is called
368
+ a query sample and the labelled samples are called sup-
369
+ port samples. With the aid of a set of base classes rep-
370
+ resented by their prototypes P_b = {p^b_i}_{i=1}^{n_b}, our goal is to
+ calibrate the support samples from the novel classes so that
374
+ they can be better matched with the query samples by
375
+ a nearest neighbor classifier. Here, all data samples are
376
+ represented by the features computed from a pretrained
377
+ feature extractor F(·) : X → Rd, while X is the domain
378
+ of the image space and d is the dimension of the feature
379
+ space; p^b_i is the prototype of a base class, which is com-
381
+ puted as the average feature of the samples within the
382
+ 4
383
+
384
+ [Figure 2 shows the pipeline as a diagram: features are extracted, L2-normalized and then calibrated; its legend marks the support data, the query data, the final calibrated support features, the endpoints of the sample-level and task-level calibration, all base class prototypes, and the prototypes selected for a sample and for a task.]
413
+ Figure 2: An illustration of the P3DC-Shot pipeline for the 2-way 1-shot scenario. Note that the direct interpolation of the three triangle vertices returns a feature on the triangle plane. After normalization, the final calibrated features ¯x^u_1 and ¯x^u_2 are on the hypersphere of the normalized space.
417
+ class; nb is the number of all base classes. For simplic-
418
+ ity, we directly use xi to represent the feature F(xi) of
419
+ an image xi.
420
+ We follow the conventional few-shot learning setting,
421
+ i.e., build a series of N-way K-shot tasks where N is the
422
+ number of novel classes and K is the number of sup-
423
+ port samples in each task.
424
+ Formally, each task con-
425
+ sists of a support set S = {(x_i, y_i)}_{i=1}^{N×K} and a query set Q = {q_i}_{i=N×K+1}^{N×K+N×Q}. Here, y_i is the label of the corre-
430
+ sponding sample, which is known for the support set
431
+ and unknown for the query set; Q is the number of query
432
+ sample for each novel class in the current task. Given a
433
+ support feature xi, we perform our prior-driven calibra-
434
+ tion to obtain the calibrated support feature x^c_i = C(x_i),
436
+ where C(·) : Rd → Rd conducts feature transformation
437
+ based on the information from the base classes. Then,
438
+ we predict the label of a query feature by performing
439
+ nearest neighbor classification w.r.t the novel class pro-
440
+ totypes computed from the calibrated support feature(s).
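+ As a concrete illustration of the episodic setup above, the sketch below assembles one N-way K-shot task from pre-extracted features. It is an editorial example, not part of the paper or its released code: the features_by_class dictionary (class id → array of extracted features) and all names are assumptions.

import numpy as np

def sample_episode(features_by_class, n_way=5, k_shot=1, n_query=15, rng=None):
    # features_by_class: dict mapping a class id to a (num_images, d) array of extracted features
    rng = rng if rng is not None else np.random.default_rng()
    classes = rng.choice(list(features_by_class.keys()), size=n_way, replace=False)
    support_x, support_y, query_x, query_y = [], [], [], []
    for label, cls in enumerate(classes):
        feats = features_by_class[cls]
        idx = rng.choice(len(feats), size=k_shot + n_query, replace=False)
        support_x.append(feats[idx[:k_shot]])
        support_y += [label] * k_shot
        query_x.append(feats[idx[k_shot:]])
        query_y += [label] * n_query
    return (np.concatenate(support_x), np.array(support_y),
            np.concatenate(query_x), np.array(query_y))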
441
+ 3.2. Prior-Driven Discrete Data Calibration
442
+ Before we perform calibration to the support data, we
443
+ first apply L2 normalization to the support and query
444
+ features.
445
+ It is shown in SimpleShot [25] that using
446
+ L2-normalized feature with a NN-based classifier can
447
+ lead to competitive results for few-shot classification.
448
+ Hence, we obtain ¯x_i for a support feature x_i by:
+ ¯x_i = normalize(x_i) = x_i / ∥x_i∥_2.   (1)
454
+ Similarly, the normalization of the query features are
455
+ also computed: ¯qi = normalize(qi). By working with
456
+ the normalized features, we can obviate the absolute
457
+ scales of the features and focus on the similarities and
458
+ differences on their directions. Note that, the normal-
459
+ ized features are used in the feature combination step
460
+ (Eq. 7, 10 and 11) for obtaining the interpolation be-
461
+ tween the normalized features and in the NN-based clas-
462
+ sification step (Eq. 12) for performance improvement.
463
+ Next, we propose the sample-level and task-level cal-
464
+ ibration, and their combination to utilize the priors from
465
+ the base classes for obtaining the less-biased support
466
+ features.
467
+ 3.2.1. Sample-Level Calibration
468
+ According to previous works [55, 38] which also use
469
+ the information from base classes for classifying the
470
+ new classes, the base classes with higher similarities
471
+ to the query classes are more important than other base
472
+ classes. Hence, we first propose to perform calibration
473
+ based on the top similar base classes for each support
474
+ sample. Moreover, following DC [38], we apply the
475
+ Tukey’s Ladder of Powers transformation [56] to the
476
+ features of the support samples before the calibration:
+ ˜x_i = x_i^λ  if λ ≠ 0,   and   ˜x_i = log(x_i)  if λ = 0.   (2)
484
+ Here, λ is a hyperparameter which controls the distri-
485
+ bution of the transformed feature, where a smaller λ leads
+ to a less skewed feature distribution. We set λ = 0.5
487
+ and obtain the transformed support feature ˜xi from the
488
+ original feature xi.
489
+ Then, we select the top M base classes with higher
490
+ similarities to a transformed support feature ˜x_i:
+ Λ^M_i = { p^b_j | j ∈ topM(S_i) },   (3)
+ where S_i = { <˜x_i, p^b_j> | j ∈ {1, . . . , n_b} }.   (4)
+ Here, Λ^M_i stores the M nearest base prototypes with respect to a transformed support feature vector ˜x_i; topM(·) is an operator that returns the indices of the top M elements of S_i, the similarity set of ˜x_i, while the similarity between ˜x_i and a base prototype p^b_j is computed by the
507
+ inner product < ·, · >. In DC [38], the distributions
508
+ of the base and novel classes are assumed as Gaussian
509
+ distribution and the statistics (mean and co-variance) of
510
+ the base classes are used to calibrate the distribution of
511
+ the novel classes. In contrast, we directly use the sim-
512
+ ilar base prototypes to calibrate each support feature.
513
+ Specifically, the calibration for ˜x_i driven by the base prototypes p^b_j ∈ Λ^M_i is computed as:
+ s_i = ˜x_i + Σ_{j∈Λ^M_i} w_ij p^b_j,   (5)
524
+ where the weights of the M nearest base class prototypes in Λ^M_i are obtained by applying Softmax to the similarities between ˜x_i and these prototypes:
+ w_ij = exp(<˜x_i, p^b_j>) / Σ_{k∈Λ^M_i} exp(<˜x_i, p^b_k>),   j ∈ Λ^M_i.   (6)
538
+ It should be noted that, in Eq. 5, the support feature ˜x_i is a transformed feature, while the base prototypes are in the original feature space. This setting is the same as the one DC uses for calibrating the distribution of the novel classes, and it can be understood as follows: 1) the transformation initially reduces the skewness of the few-shot-sampled support features; 2) the term w_ij p^b_j can be regarded as the projection of ˜x_i w.r.t. prototype p^b_j; 3) ˜x_i is calibrated based on its projections onto all of its similar base prototypes in Λ^M_i.
551
+ Finally, the sample-level calibration for a normalized
552
+ support sample ¯x_i is defined as:
+ ¯x^s_i = normalize((1 − α) ¯x_i + α ¯s_i),   (7)
+ where α ∈ [0, 1] is a parameter to linearly combine the normalized support feature ¯x_i and the normalized base-prototype-driven calibration ¯s_i = normalize(s_i). As shown in Figure 2, ¯x_i and ¯s_i form a line in the normalized feature space, and ¯x^s_i is the normalization of an in-between point on this line. In general, the sample-level calibration can rectify each support sample based on its own top M most similar base classes.
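+ The sample-level calibration of Eq. 2–7 can be summarized in a short numpy sketch. This is a minimal editorial illustration rather than the authors' released implementation; the helper names, the default α and the assumption of non-negative (post-ReLU) features are ours.

import numpy as np

def l2n(v):
    # L2-normalize along the last axis (Eq. 1)
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def tukey(x, lam=0.5):
    # Tukey's Ladder of Powers transform (Eq. 2); assumes non-negative features
    return np.power(x, lam) if lam != 0 else np.log(x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_level_calibration(x, base_protos, M=5, alpha=0.3, lam=0.5):
    # x: one raw support feature (d,); base_protos: (n_base, d) base-class prototypes
    x_t = tukey(x, lam)
    sims = base_protos @ x_t                                  # inner-product similarities (Eq. 4)
    top = np.argsort(sims)[-M:]                               # Lambda_i^M: M most similar base classes (Eq. 3)
    w = softmax(sims[top])                                    # Softmax weights (Eq. 6)
    s_i = x_t + (w[:, None] * base_protos[top]).sum(axis=0)   # calibration endpoint (Eq. 5)
    return l2n((1.0 - alpha) * l2n(x) + alpha * l2n(s_i))     # sample-level calibrated feature (Eq. 7)

+ Because only the M most similar base prototypes enter the weighted sum, the operation stays discrete and training-free for each support sample.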
565
+ 3.2.2. Task-Level Calibration
566
+ By performing the sample-level calibration, the bias
567
+ induced by the few-shot support samples can be reduced
568
+ to a certain degree. However, when the sampling bias
569
+ is too large, e.g., the support sample is lying near the
570
+ boundary of a class, the set of similar base classes ΛM
571
+ i
572
+ obtained by Eq. 3 may also be biased. To alleviate such
573
+ bias, we propose the task-level calibration which utilizes
574
+ the base prototypes related to all support samples when
575
+ calibrating each individual support feature. Concretely,
576
+ for a support set S = {(x_i, y_i)}_{i=1}^{N×K} w.r.t. a task T, we collect the top M similar base prototypes for each support sample and form a union of related base prototypes for T:
+ Λ_T = ∪_{i=1}^{N×K} Λ^M_i.   (8)
588
+ Then, for a transformed support sample ˜xi obtained
589
+ by Eq. 2, the calibration using all of the task-related
590
+ base prototypes is computed by:
+ t_i = ˜x_i + Σ_{j∈Λ_T} w_ij p^b_j,   (9)
597
+ where w_ij is calculated in the same way as in Eq. 6, but the similarities are computed using the prototypes from Λ_T instead of Λ^M_i. By involving more prototypes to cal-
601
+ ibrate the support samples, the bias caused by only using
602
+ nearby prototypes for a near-boundary support sample
603
+ can be reduced.
604
+ Then, we define the task-level calibration for a nor-
605
+ malized support sample ¯x_i as:
+ ¯x^t_i = normalize((1 − β) ¯x_i + β ¯t_i),   (10)
609
+ where ¯t_i is the normalization of t_i. Similar to the sample-level calibration, ¯x_i and ¯t_i also form a line in the normalized feature space, while the calibration for each support sample is based on the union of all related base prototypes Λ_T.
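+ A corresponding sketch of the task-level calibration (Eq. 8–10) is given below; it is again an illustrative reimplementation under the same assumptions (non-negative features, hypothetical function names, a placeholder β).

import numpy as np

l2n = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
tukey = lambda x, lam=0.5: np.power(x, lam) if lam != 0 else np.log(x)
softmax = lambda z: np.exp(z - z.max()) / np.exp(z - z.max()).sum()

def task_level_calibration(support, base_protos, M=5, beta=0.4, lam=0.5):
    # support: (N*K, d) raw support features; base_protos: (n_base, d) base-class prototypes
    s_t = tukey(support, lam)
    # Lambda_T: union of every support sample's M most similar base prototypes (Eq. 8)
    union = np.unique(np.concatenate([np.argsort(base_protos @ x)[-M:] for x in s_t]))
    protos = base_protos[union]
    calibrated = []
    for x_raw, x_t in zip(support, s_t):
        w = softmax(protos @ x_t)                                            # weights over Lambda_T
        t_i = x_t + (w[:, None] * protos).sum(axis=0)                        # Eq. 9
        calibrated.append(l2n((1.0 - beta) * l2n(x_raw) + beta * l2n(t_i)))  # Eq. 10
    return np.stack(calibrated)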
614
+ 3.2.3. Unified Model
615
+ The sample-level and task-level calibration utilize
616
+ different levels of information from the base classes to
617
+ rectify the support samples in a discrete manner. To fur-
618
+ ther attain the merits of both calibration schemes, we
619
+ propose a unified model which linearly combines the
620
+ sample-level and task-level calibration:
+ x^c_i = ¯x^u_i = normalize((1 − α − β) ¯x_i + α ¯s_i + β ¯t_i).   (11)
+ Here, ¯x^u_i, which is also denoted as x^c_i, is the final calibration for a normalized support sample ¯x_i. Geometrically, x^c_i can be understood as the normalization of an interpolated feature point x^u_i located in the triangle formed by the three vertices ¯x_i, ¯s_i and ¯t_i, while 1 − α − β, α and β are the barycentric coordinates of x^u_i. Different α and
636
+ β values can lead to different calibration effects. When
637
+ β = 0, the unified model degenerates to the sample-
638
+ level calibration, while when α = 0, the model becomes
639
+ to the task-level calibration. We quantitatively evaluate
640
+ the effects of different α and β values in Section 4.4.
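+ The unified step itself is a single barycentric combination. The sketch below isolates Eq. 11, with x_bar, s_bar and t_bar standing for the normalized support feature and the normalized endpoints of the sample-level and task-level calibration, and with placeholder α, β values (the paper selects them on the validation set).

import numpy as np

def unified_calibration(x_bar, s_bar, t_bar, alpha=0.2, beta=0.4):
    # Eq. 11: interpolate inside the triangle spanned by the three normalized endpoints
    u = (1.0 - alpha - beta) * x_bar + alpha * s_bar + beta * t_bar
    return u / np.linalg.norm(u)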
641
+ 6
642
+
643
+ 3.3. Nearest Prototype Classifier
644
+ With the calibrated support set S^c = {(x^c_i, y_i)}_{i=1}^{N×K}, we compute the prototypes {p_n}_{n=1}^{N} for the novel classes and perform cosine-similarity-based nearest classification for a query feature q. To simplify the notation, we further represent S^c = {S^c_n}_{n=1}^{N}, while S^c_n = {(x^c_k, y_k = n)}_{k=1}^{K} is the support set for a novel class CLS_n.
658
+ For the 1-shot case, each calibrated support sample
659
+ becomes one prototype and the class of the query fea-
660
+ ture is predicted by the nearest prototype classifier:
+ y* = arg max_{p_n} cos(¯q, p_n),   (12)
+ where p_n = x^c_n is the calibrated prototype for novel class CLS_n and ¯q is the normalization of the query q.
667
+ For the multi-shot case, one way to obtain the pro-
668
+ totype for a novel class is simply to compute the av-
669
+ erage of all support features for the given class as in
670
+ Prototypical Networks [16]. However, merely using the
671
+ unweighted average of the support features as prototype
672
+ does not consider the importance of the support samples
673
+ w.r.t the query. Therefore, we adopt the idea of attentive
674
+ prototype which is proposed in recent works [51, 53] for
675
+ query-guided prototype computation. In our implemen-
676
+ tation, we define the attention-weighted prototype as:
+ p^q_n = Σ_{x^c_k ∈ S^c_n} a_k x^c_k,   (13)
+ where a_k = exp(<q, x^c_k>) / Σ_{x^c_m ∈ S^c_n} exp(<q, x^c_m>).   (14)
+ Here, x^c_k and x^c_m are the calibrated support samples belonging to class CLS_n's support set S^c_n, and a_k is the attention weight computed by applying Softmax to the similarities between the query q and these calibrated support samples; p^q_n is CLS_n's prototype guided by the query q. Similar to Eq. 12, the prediction for a query q is obtained by finding the novel class with the nearest prototype p^q_n.
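+ The classification rule of Eq. 12–14 amounts to a query-guided weighted average per class followed by a cosine nearest-prototype decision, as in the following editorial sketch (function and variable names are assumptions).

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_query(query, calibrated_support, support_labels, n_way=5):
    # calibrated_support: (N*K, d) calibrated support features; support_labels: (N*K,) labels in {0, ..., n_way-1}
    q_bar = query / np.linalg.norm(query)
    scores = []
    for n in range(n_way):
        cls_feats = calibrated_support[support_labels == n]            # S^c_n
        a = softmax(cls_feats @ query)                                 # attention weights (Eq. 14)
        proto = (a[:, None] * cls_feats).sum(axis=0)                   # query-guided prototype (Eq. 13)
        scores.append(float(q_bar @ (proto / np.linalg.norm(proto))))  # cos(q_bar, p_n)
    return int(np.argmax(scores))                                      # nearest prototype (Eq. 12)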
704
+ 4. Experiments
705
+ In this section, we perform quantitative compar-
706
+ isons between our P3DC-Shot and state-of-the-art
707
+ few-shot classification methods on three represen-
708
+ tative datasets.
709
+ We also conduct ablation studies
710
+ on evaluating different hyperparameters and design
711
+ choices for our methods.
712
+ Our code is available at:
713
+ https://github.com/breakaway7/P3DC-Shot.
714
+ 4.1. Datasets
715
+ We evaluate our prior-driven data calibration strate-
716
+ gies on three popular datasets for benchmarking few
717
+ shot classification: miniImageNet [2], tieredImageNet
718
+ [39] and CUB [40]. miniImageNet and tieredImageNet
719
+ contain a broad range of classes including various an-
720
+ imals and objects, while CUB is a more fine-grained
721
+ dataset that focuses on various species of birds.
722
+ Specifically, the miniImageNet [2] is derived from
723
+ the ILSVRC-2012 [58] and it contains a subset of 100
724
+ classes, each of which consists of 600 images. We
725
+ follow the split used in [18] and obtain 64 base, 16 val-
726
+ idation and 20 novel classes for miniImageNet. Compared
+ to miniImageNet, the tieredImageNet [39] is a larger
728
+ subset of [58] which contains 608 classes and therefore
729
+ more challenging. We follow [39] and split the tiered-
730
+ ImageNet into 351, 97, and 160 classes for base, vali-
731
+ dation, and novel classes, respectively. For CUB [40], it
732
+ is the short name for Caltech-UCSD Birds 200 dataset,
733
+ which contains a total of 11,788 images covering 200
734
+ categories of different bird species. We split the CUB
735
+ dataset into 100 base, 50 validation and 50 novel classes
736
+ following [31]. Note that the set formed by the base
737
+ classes can also be regarded as the train set and the novel
738
+ classes correspond to the test set.
739
+ 4.2. Implementation Details
740
+ For each image in the dataset, we represent it as a
741
+ 640-dimensional feature vector which is extracted us-
742
+ ing the WideResNet [59] pretrained by the S2M2 [29]
743
+ work. Our calibration pipeline can efficiently proceed
744
+ in four steps: 1) find the M = 5 nearby base prototypes
745
+ for each support sample xi; 2) compute the endpoint
746
+ of the sample-level calibration for xi, i.e., si; 3) col-
747
+ lect all nearby base prototypes for all support samples
748
+ in the task and compute the endpoint of the task-level
749
+ calibration for xi, i.e., ti; 4) combine the sample-level
750
+ and task-level calibration and obtain the final calibrated
751
+ support sample x^c_i. The parameters α and β for weighting
753
+ the sample-level and task-level calibration are selected
754
+ based on the best results obtained on the validation set
755
+ for each dataset. All experiments are conducted on a
756
+ PC with a 2.70GHz CPU and 16G memory. No GPU
757
+ is needed during the calibration. On average, for a 5-
758
+ way 5-shot task, it takes 0.027 seconds to calibrate the
759
+ support samples and 0.002 seconds for performing the
760
+ nearest prototype classification.
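+ The four steps can also be read off a single self-contained sketch of one evaluation episode. It mirrors the pipeline described above but is an illustration with assumed defaults (M = 5, λ = 0.5, placeholder α and β) rather than the released code, and it assumes non-negative backbone features.

import numpy as np

def l2n(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def p3dc_episode(support, labels, queries, base_protos, M=5, alpha=0.2, beta=0.4, lam=0.5):
    # support: (N*K, d), labels: (N*K,), queries: (Q, d), base_protos: (n_base, d)
    s_t = np.power(support, lam)                        # Tukey transform with lambda = 0.5
    sims = s_t @ base_protos.T                          # (N*K, n_base) inner-product similarities
    top = np.argsort(sims, axis=1)[:, -M:]              # step 1: per-sample top-M base prototypes
    union = np.unique(top)                              # task-level union Lambda_T
    calibrated = []
    for x, x_t, sim, idx in zip(support, s_t, sims, top):
        w_s = softmax(sim[idx])
        s_i = x_t + (w_s[:, None] * base_protos[idx]).sum(0)      # step 2: sample-level endpoint
        w_t = softmax(sim[union])
        t_i = x_t + (w_t[:, None] * base_protos[union]).sum(0)    # step 3: task-level endpoint
        u = (1.0 - alpha - beta) * l2n(x) + alpha * l2n(s_i) + beta * l2n(t_i)
        calibrated.append(l2n(u))                                  # step 4: unified calibrated feature
    calibrated = np.stack(calibrated)
    class_ids = np.unique(labels)
    preds = []
    for q in queries:
        scores = []
        for n in class_ids:
            cls = calibrated[labels == n]
            a = softmax(cls @ q)                         # query-guided attention (Eq. 14)
            p = (a[:, None] * cls).sum(0)                # attentive prototype (Eq. 13)
            scores.append((q @ p) / (np.linalg.norm(q) * np.linalg.norm(p)))
        preds.append(class_ids[int(np.argmax(scores))])  # cosine nearest prototype (Eq. 12)
    return np.array(preds)

+ Since everything operates on frozen backbone features with a handful of matrix products, the whole episode runs on a CPU, consistent with the timings reported above.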
761
+ 4.3. Comparison and Evaluation
762
+ To evaluate the performance of our P3DC-Shot, we
763
+ first conduct quantitative comparisons with some rep-
764
+ resentative and state-of-the-art few-short classification
765
+ 7
766
+
767
+ Table 1:
768
+ Quantitative comparison on the test set of miniImageNet, tieredImageNet and CUB. The 5-way 1-shot and 5-way 5-shot classification
769
+ accuracy (%) with 95% confidence intervals are measured. Best results are highlighted in bold and second best are in italic. The last line shows the
770
+ α and β selected based on the valiation set for each dataset. * 8 and 20 are the number of ensembles in DeepVoro and DeepVoro++. † The results
771
+ of [54] on tieredImageNet are obtained using its released code.
772
+ Methods
773
+ miniImageNet
774
+ tieredImageNet
775
+ CUB
776
+ 5-way 1-shot
777
+ 5-way 5-shot
778
+ 5-way 1-shot
779
+ 5-way 5-shot
780
+ 5-way 1-shot
781
+ 5-way 5-shot
782
+ Meta-learning (metric-learning)
783
+ MatchingNet [15] (2016)
784
+ 64.03 ± 0.20
785
+ 76.32 ± 0.16
786
+ 68.50 ± 0.92
787
+ 80.60 ± 0.71
788
+ 73.49 ± 0.89
789
+ 84.45 ± 0.58
790
+ ProtoNet [16] (2017)
791
+ 54.16 ± 0.82
792
+ 73.68 ± 0.65
793
+ 65.65 ± 0.92
794
+ 83.40 ± 0.65
795
+ 72.99 ± 0.88
796
+ 86.64 ± 0.51
797
+ RelationNet [17] (2018)
798
+ 52.19 ± 0.83
799
+ 70.20 ± 0.66
800
+ 54.48 ± 0.93
801
+ 71.32 ± 0.78
802
+ 68.65 ± 0.91
803
+ 81.12 ± 0.63
804
+ Meta-learning (optimization)
805
+ MAML [19] (2017)
806
+ 48.70 ± 1.84
807
+ 63.10 ± 0.92
808
+ 51.67 ± 1.81
809
+ 70.30 ± 0.08
810
+ 50.45 ± 0.97
811
+ 59.60 ± 0.84
812
+ LEO [21] (2019)
813
+ 61.76 ± 0.08
814
+ 77.59 ± 0.12
815
+ 66.33 ± 0.15
816
+ 81.44 ± 0.09
817
+ 68.22 ± 0.22
818
+ 78.27 ± 0.16
819
+ DCO [22] (2019)
820
+ 62.64 ± 0.61
821
+ 78.63 ± 0.46
822
+ 65.99 ± 0.72
823
+ 81.56 ± 0.53
824
+ -
825
+ -
826
+ Transfer learning
827
+ Baseline++ [31] (2019)
828
+ 57.53 ± 0.10
829
+ 72.99 ± 0.43
830
+ 60.98 ± 0.21
831
+ 75.93 ± 0.17
832
+ 70.40 ± 0.81
833
+ 82.92 ± 0.78
834
+ Negative-Cosine [57] (2020)
835
+ 62.33 ± 0.82
836
+ 80.94 ± 0.59
837
+ -
838
+ -
839
+ 72.66 ± 0.85
840
+ 89.40 ± 0.43
841
+ S2M2R [29] (2020)
842
+ 64.65 ± 0.45
843
+ 83.20 ± 0.30
844
+ 68.12 ± 0.52
845
+ 86.71 ± 0.34
846
+ 80.14 ± 0.45
847
+ 90.99 ± 0.23
848
+ Nearest neighbor
849
+ SimpleShot [25] (2019)
850
+ 64.29 ± 0.20
851
+ 81.50 ± 0.14
852
+ 71.32 ± 0.22
853
+ 86.66 ± 0.15
854
+ -
855
+ -
856
+ DeepVoro(8)∗ [50] (2022)
857
+ 66.45 ± 0.44
858
+ 84.55 ± 0.29
859
+ 74.02 ± 0.49
860
+ 88.90 ± 0.29
861
+ 80.98 ± 0.44
862
+ 91.47 ± 0.22
863
+ DeepVoro++(20)∗ [50] (2022)
864
+ 68.38 ± 0.46
865
+ 83.27 ± 0.31
866
+ 74.48 ± 0.50
867
+ -
868
+ 80.70 ± 0.45
869
+ -
870
+ Data calibration
871
+ RestoreNet [35] (2020)
872
+ 59.28 ± 0.20
873
+ -
874
+ -
875
+ -
876
+ 74.32 ± 0.91
877
+ -
878
+ DC [38] (2021)
879
+ 67.79 ± 0.45
880
+ 83.69 ± 0.31
881
+ 74.24 ± 0.50
882
+ 88.38 ± 0.31
883
+ 79.93 ± 0.46
884
+ 90.77 ± 0.24
885
+ MCL-Katz+PSM [37] (2022)
886
+ 67.03
887
+ 84.03
888
+ 69.90
889
+ 85.08
890
+ 85.89
891
+ 93.08
892
+ S2M2+TCPR† [54] (2022)
893
+ 68.05 ± 0.41
894
+ 84.51 ± 0.27
895
+ 72.67 ± 0.48
896
+ 87.96 ± 0.31
897
+ -
898
+ -
899
+ P3DC-Shot (α = 0, β = 0)
900
+ 65.93 ± 0.45
901
+ 84.06 ± 0.30
902
+ 73.56 ± 0.49
903
+ 88.50 ± 0.32
904
+ 81.61 ± 0.43
905
+ 91.36 ± 0.22
906
+ P3DC-Shot (α = 1, β = 0)
907
+ 68.41 ± 0.44
908
+ 83.06 ± 0.32
909
+ 74.84 ± 0.49
910
+ 88.01 ± 0.33
911
+ 81.51 ± 0.44
912
+ 90.83 ± 0.24
913
+ P3DC-Shot (α = 0, β = 1)
914
+ 68.67 ± 0.44
915
+ 83.64 ± 0.31
916
+ 75.20 ± 0.48
917
+ 88.29 ± 0.33
918
+ 81.58 ± 0.44
919
+ 91.02 ± 0.23
920
+ P3DC-Shot (α = 1
921
+ 3, β = 1
922
+ 3)
923
+ 68.33 ± 0.44
924
+ 84.19 ± 0.30
925
+ 74.91 ± 0.49
926
+ 88.54 ± 0.32
927
+ 81.75 ± 0.43
928
+ 91.21 ± 0.23
929
+ P3DC-Shot (selected α, β)
930
+ 68.68 ± 0.44
931
+ 84.37 ± 0.30
932
+ 75.20 ± 0.48
933
+ 88.67 ± 0.32
934
+ 81.86 ± 0.43
935
+ 91.36 ± 0.23
936
+ (0.0, 0.9)
937
+ (0.0, 0.4)
938
+ (0.0, 1.0)
939
+ (0.0, 0.3)
940
+ (0.2, 0.4)
941
+ (0.0, 0.4)
942
+ methods. Then, we compare with different data trans-
943
+ formation or calibration schemes and provide qualita-
944
+ tive visualization for showing the difference of our cali-
945
+ bration results w.r.t existing works. In addition, we eval-
946
+ uate the generalizability of our method by performing
947
+ classification tasks with different difficulties.
948
+ Quantitative comparisons. As there are numerous
949
+ efforts have been paid to the few-shot classification,
950
+ we mainly compare our P3DC-Shot with representative
951
+ and SOTA works which cover different types of few-
952
+ shot learning schemes. The compared methods include
953
+ the metric-learning based meta-learning [15, 16, 17],
954
+ optimization-based meta-learning [19, 21, 22], transfer
955
+ learning [31, 57, 29], nearest neighbor [25, 50] and cal-
956
+ ibration [35, 38, 37, 54] based methods. For certain
957
+ methods such as [29, 28], we only compare with their
958
+ basic versions and do not consider their model trained
959
+ with data augmentation. Note that as not every method
960
+ has conducted experiments on all three datasets, we
961
+ mainly compare with their reported results. One excep-
962
+ tion is for [54], we compare with its results generated
963
+ using its released code.
964
+ For our method, we report the results of our model
965
+ with different hyperparameters α and β. In particular,
966
+ we consider the case when α and β are both zero, which
967
+ makes our method a simple NN-based method with no
968
+ data calibration and only shows the effect for using the
969
+ query-guided prototype computation (Eq. 13). We also
970
+ compare with the results of α or β is 1, or both of them
971
+ are equal to 1
972
+ 3, which correspond to the cases that the
973
+ endpoint of the sample-level or task-level calibration or
974
+ the barycenter of the calibration triangle (Figure 2). In
975
+ the end, we provide our best results with the α or β se-
976
+ lected based on the validation set.
977
+ For each dataset, we evaluate on the 5-way 1-shot
978
+ and 5-way 5-shot classification setting. For each set-
979
+ ting, 2,000 testing tasks, each of which contains 5 × K
980
+ (K = 1 or 5) samples for the support set and 5 × 15
981
+ 8
982
+
983
1067
+ (a)
1068
+ (b)
1069
+ (c)
1070
+ Figure 3: T-SNE visualization of the calibration on example support samples from the test set of miniImageNet (a), tieredImageNet (b), and CUB
1071
+ (c). The colored dots are data from the same underlying classes as the selected sample and the star is the center of each class. Given a support
1072
+ sample (represented in square), the upside down triangle is our calibration result and the lozenge is the calibration result of DC [38].
1073
+ samples for the query set, are randomly generated from
1074
+ the test split of the corresponding dataset. Table 1 shows
1075
+ the quantitative comparison results on three datasets. It
1076
+ can be seen that our best results outperform most meth-
1077
+ ods in the 5-way 1-shot setting and are comparable to
1078
+ the SOTA methods [28, 38] for the 5-way 5-shot set-
1079
+ ting. Note that although [37] achieves best results on the
1080
+ CUB dataset, it is inferior on miniImageNet and tiered-
1081
+ ImageNet. Moreover, since [37] follows a metric-based
1082
+ few-shot learning pipeline, it still requires to train the
1083
+ feature extractor and the metric module for each dataset.
1084
+ For [28], it performs generally well on all three datasets,
1085
+ but as an ensemble-based method, its computation time
1086
+ is much longer than our method, especially when the
1087
+ ensemble number is large. In contrast, our method does
1088
+ not require any training and only needs to perform an
1089
+ efficient calibration step for each testing task.
1090
+ Also, from results of our method with different α and
1091
+ β values in Table 1, it can be found when α and β is
1092
+ zero, the query-guided prototype computation can lead
1093
+ to better performance than the simple NN-based Sim-
1094
+ pleShot [25]. When either the sample-level or task-level
1095
+ calibration is applied, i.e., α or β is not zero, the results
1096
+ are better than the non-calibrated version, showing the
1097
+ calibration can indeed reduce the bias for the support
1098
+ samples. Meanwhile, which calibration type is more
1099
+ suitable is depending on the underlying data distribu-
1100
+ tion of the dataset. By selecting the α and β based on
1101
+ the validation set of each dataset, the results are further
1102
+ improved. In the ablation study, we perform more ex-
1103
+ periments and analysis of different α and β values.
1104
+ Comparison with different data transformation or
1105
+ calibration schemes. To further verify the effectiveness
1106
+ Table 2: Comparison with different data transformation or calibration
1107
+ schemes. Accuracy (%) for 5-way 1-shot task on the test set of mini-
1108
+ ImageNet are measured.
1109
+ Model
1110
+ miniImageNet
1111
+ CUB
1112
+ 5-way 1-shot
1113
+ 5-way 1-shot
1114
+ NN
1115
+ 47.50
1116
+ 76.40
1117
+ L2N+NN
1118
+ 65.93
1119
+ 81.61
1120
+ CL2N+NN
1121
+ 65.96
1122
+ 81.54
1123
+ DC+L2N+NN
1124
+ 66.23
1125
+ 79.49
1126
+ P3DC-Shot
1127
+ 68.68
1128
+ 81.86
1129
+ (selected α, β)
1130
+ (0.0,0.9)
1131
+ (0.2,0.4)
1132
+ of our prior-driven data calibration, we compare with
1133
+ several NN-based baseline methods which perform dif-
1134
+ ferent data transformation or calibration schemes and
1135
+ the results are shown in Table 2. In this experiment, all
1136
+ methods are based on the pretrained WideResNet fea-
1137
+ tures.
1138
+ Also, only the 5-way 1-shot classification ac-
1139
+ curacy is measured so that the comparison is focused
1140
+ on feature transformation instead of the prototype com-
1141
+ putation schemes. The first baseline is NN, which is
1142
+ a naive inner product based nearest neighbor classifier.
1143
+ Then, L2N and CL2N represent L2 normalization and
1144
+ centered L2 normalization which have been shown as
1145
+ effective in SimpleShot [25]. In addition, another base-
1146
+ line that follows the data calibration scheme in DC [38]
1147
+ is compared. Comparing to the original DC, this base-
1148
+ line directly takes the calibrated and then normalized
1149
+ features and employs NN for classification instead of
1150
+ training new classifiers using the sampled data. From
1151
+ Table 2, it can be observed the data normalization or cal-
1152
+ ibration can significantly improve the NN-based classi-
1153
+ 9
1154
+
1155
+ Table 3: Generalizability test on different N in N-way 1-shot tasks. Accuracy (%) on the test set of miniImageNet are measured. For our P3DC-Shot,
1156
+ the same α = 0 and β = 0.9 selected based on the validation set for the 5-way 1-shot case are used for all experiments.
1157
+ Models
1158
+ 5-way
1159
+ 7-way
1160
+ 9-way
1161
+ 11-way
1162
+ 13-way
1163
+ 15-way
1164
+ 20-way
1165
+ RestroreNet [35]
1166
+ 59.56
1167
+ 50.55
1168
+ 44.54
1169
+ 39.98
1170
+ 36.34
1171
+ 33.52
1172
+ 28.48
1173
+ L2N+NN
1174
+ 65.93
1175
+ 57.86
1176
+ 52.45
1177
+ 48.25
1178
+ 44.80
1179
+ 42.12
1180
+ 37.06
1181
+ CL2N+NN
1182
+ 65.96
1183
+ 57.69
1184
+ 52.23
1185
+ 47.93
1186
+ 44.36
1187
+ 41.85
1188
+ 36.65
1189
+ P3DC-Shot
1190
+ 68.68
1191
+ 60.58
1192
+ 55.03
1193
+ 50.75
1194
+ 47.21
1195
+ 44.43
1196
+ 39.33
1197
+ fication. In addition, our data calibration achieves the
1198
+ best results comparing to other baselines. The main rea-
1199
+ son is the L2N and CL2N only perform transformation
1200
+ rather than calibration using the base priors, while the
1201
+ modified DC does not consider the attentive similarity
1202
+ between the support samples and the base classes when
1203
+ performing the calibration.
1204
+ Visualization of the calibration.
1205
+ To qualitatively
1206
+ verify the effectiveness of our calibration, we show the
1207
+ T-SNE [60] visualization of the calibration results for
1208
+ some example support samples in Figure 3. The results
1209
+ of calibrating the same sample using DC [38] are also
1210
+ compared. It can be seen from Figure 3 that our calibra-
1211
+ tion can more effectively transform the support samples
1212
+ closer to the center of the underlying classes. For DC,
1213
+ the calibration may be minor or even be far away from
1214
+ the center. The reason is still due to it treats the nearby
1215
+ base classes with the same weights. In contrast, our cal-
1216
+ ibration pays more attention to the similar base classes
1217
+ when determining the weights for combining the base
1218
+ prototypes (Eq. 5 and 9).
1219
+ Generalizability test on different N in N-way clas-
1220
+ sification. Following [35], we conduct a series of N-
1221
+ way 1-shot experiments on miniImageNet to test the
1222
+ generalizability of the proposed calibration for differ-
1223
+ ent classification tasks. Table 3 shows the results of the
1224
+ baseline methods [35], L2N and CL2N and ours. Note
1225
+ that with the N increases, there are more data samples in
1226
+ a test task and the classification becomes more difficult.
1227
+ It can be observed that our P3DC-Shot achieves con-
1228
+ sistent best results comparing to the baseline methods,
1229
+ verifying our method is generalizable to classification
1230
+ tasks with different difficulties.
1231
+ 4.4. Ablation Study
1232
+ In this section, we perform ablation studies to ver-
1233
+ ify the effectiveness of different modules and design
1234
+ choices of our method. First, we conduct experiments
1235
+ on different hyperparameter α and β to see how the
1236
+ sample-level and task-level calibration can affect the fi-
1237
+ nal results. Then, we perform the study on the effec-
1238
+ tiveness of using the query-guided attentive prototypes
1239
+ in the NN classification step.
1240
+ Effect on different hyperparameter α, β. Differ-
1241
+ ent α and β values correspond to different degrees of
1242
+ sample-level and task-level calibration applied to the in-
1243
+ put data. Geometrically, α, β and 1 − α − β can also be
1244
+ understood as the coordinates of the calibration result
1245
+ w.r.t to the triangle formed by the three points ¯xi, si, ti.
1246
+ To quantitatively reveal how these two hyperparameters
1247
+ can affect the results, we enumerate different α and β
1248
+ values on both the validation and test sets of different
1249
+ datasets. From the results in Figure 4, it can be found
1250
+ the accuracy near the origin of the figures are smaller,
1251
+ which means performing calibration can improve upon
1252
+ using the original features for classification, i.e., α and
1253
+ β is zero. Also, different datasets prefer different α and
1254
+ β combinations for achieving higher performance. For
1255
+ example, miniImageNet shows better results when α+β
1256
+ is around 0.9 and CUB prefers a relatively smaller cal-
1257
+ ibration, i.e., α + β is around 0.6. For tieredImageNet,
1258
+ better results are obtained around the topper left of the
1259
+ figure, showing the task-level calibration is more help-
1260
+ ful than the sample-level. Overall, the trend on the test
1261
+ set is consistent with the validation set. From above ex-
1262
+ periments, it shows the sample-level and task-level cali-
1263
+ bration are consistently effective, while how to selecting
1264
+ the good α and β values are dataset dependent. There-
1265
+ fore, for our best results, we use the α and β selected
1266
+ based on the validation set and report their performance
1267
+ on the test set.
1268
+ Effect on using attentive prototypes in NN classifi-
1269
+ cation. To improve the conventional prototype based
1270
+ NN classificaiton, we propose to compute the query-
1271
+ guided attentive prototypes to represent the support
1272
+ class. To verify the effectiveness of this scheme, we per-
1273
+ form ablation study for 5-way 5-shot tasks on different
1274
+ tasks using different prototype computation schemes.
1275
+ Specifically, we take the calibrated support features
1276
+ and compute the prototypes for the support classes by
1277
+ performing the conventional average operation or our
1278
+ query-guided attentive averaging (Eq. 13). The results
1279
+ 10
1280
+
1281
+ Figure 4: The effect of different α and β on the validation (top) and test (bottom) set of different datasets. Accuracy (%) for 5-way 1-shot task on
1282
+ miniImageNet, tieredImageNet and CUB are measured. The warmer color corresponds to higher accuracy.
1283
+ Table 4: Ablation study on using the query-guided attentive proto-
1284
+ types in NN classification. Accuray (%) on the test set of miniIma-
1285
+ geNet, tieredImageNet and CUB are measured.
1286
+ Model
1287
+ miniImageNet tieredImageNet
1288
+ CUB
1289
+ 5-way 5-shot
1290
+ 5-way 5-shot
1291
+ 5-way 5-shot
1292
+ Average
1293
+ 84.11
1294
+ 88.54
1295
+ 91.27
1296
+ Attentive
1297
+ 84.37
1298
+ 88.67
1299
+ 91.36
1300
+ in Table 4 show that the attentive prototypes can lead to
1301
+ better performance. Hence, we adopt the attentive pro-
1302
+ totypes in our NN-based classification.
1303
+ 5. Conclusion
1304
+ In this paper, we propose a simple yet effective frame-
1305
+ work, named P3DC-Shot, for few-shot classification.
1306
+ Without any retraining and expensive computation, our
1307
+ prior-driven discrete data calibration method can effi-
1308
+ ciently calibrate the support samples based on prior-
1309
+ information from the base classes to obtain the less-
1310
+ biased support data for NN-based classification. Exten-
1311
+ sive experiments show that our method can outperform
1312
+ or at least comparable to SOTA methods which need ad-
1313
+ ditional learning steps or more computation. One lim-
1314
+ itation of our method is we rely on the whole valida-
1315
+ tion set to select the good hyperparameters α and β to
1316
+ determine which degree of the sample-level and task-
1317
+ level calibration is more suitable for the given dataset.
1318
+ Investigating a more general scheme to combine the
1319
+ sample-level and task-level calibration is an interesting
1320
+ future work. Moreover, when exploring the combina-
1321
+ tion schemes, we only focus on exploring the inner area
1322
+ of the calibration triangle. It is worthy to extend the
1323
+ parameter search to a larger area, i.e., by extrapolation
1324
+ of the calibration triangle, to find whether better results
1325
+ can be obtained.
1326
+ References
1327
+ [1] K. Simonyan, A. Zisserman,
1328
+ Very deep convolutional net-
1329
+ works for large-scale image recognition,
1330
+ arXiv preprint
1331
+ arXiv:1409.1556 (2014).
1332
+ [2] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma,
1333
+ Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al., Ima-
1334
+ genet large scale visual recognition challenge, Int. J. Comput.
1335
+ Vis. 115 (2015) 211–252.
1336
+ 11
1337
+
1338
+ [3] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for
1339
+ image recognition, in: IEEE Conf. Comput. Vis. Pattern Recog.,
1340
+ 2016, pp. 770–778.
1341
+ [4] R. Girshick, Fast r-cnn, in: Int. Conf. Comput. Vis., 2015, pp.
1342
+ 1440–1448.
1343
+ [5] S. Ren, K. He, R. Girshick, J. Sun, Faster r-cnn: Towards real-
1344
+ time object detection with region proposal networks, Adv. Neu-
1345
+ ral Inform. Process. Syst. 28 (2015).
1346
+ [6] J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look
1347
+ once: Unified, real-time object detection, in: IEEE Conf. Com-
1348
+ put. Vis. Pattern Recog., 2016, pp. 779–788.
1349
+ [7] J. Long, E. Shelhamer, T. Darrell, Fully convolutional networks
1350
+ for semantic segmentation, in: IEEE Conf. Comput. Vis. Pattern
1351
+ Recog., 2015, pp. 3431–3440.
1352
+ [8] K. He, G. Gkioxari, P. Doll´ar, R. Girshick, Mask r-cnn, in: Int.
1353
+ Conf. Comput. Vis., 2017, pp. 2961–2969.
1354
+ [9] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A. L.
1355
+ Yuille, Deeplab: Semantic image segmentation with deep con-
1356
+ volutional nets, atrous convolution, and fully connected crfs,
1357
+ IEEE Trans. Pattern Anal. Mach. Intell. 40 (2017) 834–848.
1358
+ [10] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ra-
1359
+ manan, P. Doll´ar, C. L. Zitnick, Microsoft coco: Common ob-
1360
+ jects in context, in: Eur. Conf. Comput. Vis., 2014, pp. 740–755.
1361
+ [11] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler,
1362
+ R. Benenson, U. Franke, S. Roth, B. Schiele, The cityscapes
1363
+ dataset for semantic urban scene understanding, in: IEEE Conf.
1364
+ Comput. Vis. Pattern Recog., 2016, pp. 3213–3223.
1365
+ [12] Y. Wang, Q. Yao, J. T. Kwok, L. M. Ni, Generalizing from a few
1366
+ examples: A survey on few-shot learning, ACM Comput Surv
1367
+ 53 (2020) 1–34.
1368
+ [13] J. Lu, P. Gong, J. Ye, C. Zhang, Learning from very few sam-
1369
+ ples: A survey, arXiv preprint arXiv:2009.02653 (2020).
1370
+ [14] G. Huang, I. Laradji, D. V´azquez, S. Lacoste-Julien, P. Ro-
1371
+ driguez, A survey of self-supervised and few-shot object de-
1372
+ tection, IEEE Trans. Pattern Anal. Mach. Intell. (2022).
1373
+ [15] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al., Match-
1374
+ ing networks for one shot learning, Adv. Neural Inform. Pro-
1375
+ cess. Syst. 29 (2016).
1376
+ [16] J. Snell, K. Swersky, R. Zemel, Prototypical networks for few-
1377
+ shot learning, Adv. Neural Inform. Process. Syst. 30 (2017).
1378
+ [17] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, T. M.
1379
+ Hospedales, Learning to compare: Relation network for few-
1380
+ shot learning (2018) 1199–1208.
1381
+ [18] S. Ravi, H. Larochelle, Optimization as a model for few-shot
1382
+ learning (2016).
1383
+ [19] C. Finn, P. Abbeel, S. Levine, Model-agnostic meta-learning for
1384
+ fast adaptation of deep networks, in: Int. Conf. Mach. Learn.,
1385
+ PMLR, 2017, pp. 1126–1135.
1386
+ [20] M. A. Jamal, G.-J. Qi, Task agnostic meta-learning for few-shot
1387
+ learning, in: IEEE Conf. Comput. Vis. Pattern Recog., 2019.
1388
+ [21] A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu,
1389
+ S. Osindero, R. Hadsell, Meta-learning with latent embedding
1390
+ optimization, in: Int. Conf. Learn. Represent., 2019.
1391
+ [22] K. Lee, S. Maji, A. Ravichandran, S. Soatto, Meta-learning with
1392
+ differentiable convex optimization, in: IEEE Conf. Comput. Vis.
1393
+ Pattern Recog., 2019, pp. 10657–10665.
1394
+ [23] Y. Chen, X. Wang, Z. Liu, H. Xu, T. Darrell,
1395
+ A new meta-
1396
+ baseline for few-shot learning, arXiv preprint arXiv:2003.11539
1397
+ (2020).
1398
+ [24] T. Cao, M. T. Law, S. Fidler, A theoretical analysis of the num-
1399
+ ber of shots in few-shot learning, in: Int. Conf. Learn. Repre-
1400
+ sent., 2019.
1401
+ [25] Y. Wang, W.-L. Chao, K. Q. Weinberger, L. van der Maaten,
1402
+ Simpleshot: Revisiting nearest-neighbor classification for few-
1403
+ shot learning, arXiv preprint arXiv:1911.04623 (2019).
1404
+ [26] F. Aurenhammer, Voronoi diagrams—a survey of a fundamental
1405
+ geometric data structure, ACM Comput Surv 23 (1991) 345–
1406
+ 405.
1407
+ [27] D. Z. Chen, Z. Huang, Y. Liu, J. Xu, On clustering induced
1408
+ voronoi diagrams,
1409
+ SIAM Journal on Computing 46 (2017)
1410
+ 1679–1711.
1411
+ [28] C. Ma, Z. Huang, M. Gao, J. Xu, Few-shot learning as cluster-
1412
+ induced voronoi diagrams: A geometric approach, in: Int. Conf.
1413
+ Learn. Represent., 2022.
1414
+ [29] P. Mangla, N. Kumari, A. Sinha, M. Singh, B. Krishnamurthy,
1415
+ V. N. Balasubramanian, Charting the right manifold: Manifold
1416
+ mixup for few-shot learning, in: WACV, 2020, pp. 2218–2227.
1417
+ [30] Y. Tian, Y. Wang, D. Krishnan, J. B. Tenenbaum, P. Isola, Re-
1418
+ thinking few-shot image classification: a good embedding is all
1419
+ you need?, in: Eur. Conf. Comput. Vis., Springer, 2020, pp.
1420
+ 266–282.
1421
+ [31] W.-Y. Chen, Y.-C. Liu, Z. Kira, Y.-C. F. Wang, J.-B. Huang,
1422
+ A closer look at few-shot classification, in: Int. Conf. Learn.
1423
+ Represent., 2019.
1424
+ [32] W. Ge, Y. Yu,
1425
+ Borrowing treasures from the wealthy: Deep
1426
+ transfer learning through selective joint fine-tuning, in: IEEE
1427
+ Conf. Comput. Vis. Pattern Recog., 2017.
1428
+ [33] O. Sbai, C. Couprie, M. Aubry, Impact of base dataset design on
1429
+ few-shot image classification, Eur. Conf. Comput. Vis. (2020).
1430
+ [34] L. Zhou, P. Cui, X. Jia, S. Yang, Q. Tian, Learning to select base
1431
+ classes for few-shot classification, in: IEEE Conf. Comput. Vis.
1432
+ Pattern Recog., 2020, pp. 4624–4633.
1433
+ [35] W. Xue, W. Wang, One-shot image classification by learning to
1434
+ restore prototypes, in: AAAI, volume 34, 2020, pp. 6558–6565.
1435
+ [36] J. Liu, L. Song, Y. Qin,
1436
+ Prototype rectification for few-shot
1437
+ learning, in: Eur. Conf. Comput. Vis., Springer, 2020, pp. 741–
1438
+ 756.
1439
+ [37] Y. Guo, R. Du, X. Li, J. Xie, Z. Ma, Y. Dong, Learning cali-
1440
+ brated class centers for few-shot classification by pair-wise sim-
1441
+ ilarity, IEEE Trans. Image Process. 31 (2022) 4543–4555.
1442
+ [38] S. Yang, L. Liu, M. Xu, Free lunch for few-shot learning: Dis-
1443
+ tribution calibration, in: Int. Conf. Learn. Represent., 2021.
1444
+ [39] M. Ren, E. Triantafillou, S. Ravi, J. Snell, K. Swersky, J. B.
1445
+ Tenenbaum, H. Larochelle, R. S. Zemel,
1446
+ Meta-learning for
1447
+ semi-supervised few-shot classification, in: Int. Conf. Learn.
1448
+ Represent., 2018.
1449
+ [40] C. Wah, S. Branson, P. Welinder, P. Perona, S. Belongie, The
1450
+ caltech-ucsd birds-200-2011 dataset (2011).
1451
+ [41] T. Hospedales, A. Antoniou, P. Micaelli, A. Storkey,
1452
+ Meta-
1453
+ learning in neural networks: A survey 44 (2021) 5149–5169.
1454
+ [42] G. Koch, R. Zemel, R. Salakhutdinov, et al., Siamese neural net-
1455
+ works for one-shot image recognition, in: ICML deep learning
1456
+ workshop, 2015.
1457
+ [43] W. Xu, Y. Xu, H. Wang, Z. Tu, Attentional constellation nets
1458
+ for few-shot learning, in: Int. Conf. Learn. Represent., 2021.
1459
+ [44] Y. Liu, T. Zheng, J. Song, D. Cai, X. He, Dmn4: Few-shot learn-
1460
+ ing via discriminative mutual nearest neighbor neural network,
1461
+ in: AAAI, volume 36, 2022, pp. 1828–1836.
1462
+ [45] L. Torrey, J. Shavlik, Transfer learning, in: Handbook of re-
1463
+ search on machine learning applications and trends: algorithms,
1464
+ methods, and techniques, IGI global, 2010, pp. 242–264.
1465
+ [46] C. Tan, F. Sun, T. Kong, W. Zhang, C. Yang, C. Liu, A survey on
1466
+ deep transfer learning, in: International conference on artificial
1467
+ neural networks, Springer, 2018, pp. 270–279.
1468
+ [47] F. Zhuang, Z. Qi, K. Duan, D. Xi, Y. Zhu, H. Zhu, H. Xiong,
1469
+ Q. He, A comprehensive survey on transfer learning, Proceed-
1470
+ ings of the IEEE 109 (2020) 43–76.
1471
+ [48] G. S. Dhillon, P. Chaudhari, A. Ravichandran, S. Soatto,
1472
+ A
1473
+ baseline for few-shot image classification, in: Int. Conf. Learn.
1474
+ Represent., 2020.
1475
1477
+ [49] W. Li, L. Wang, J. Xu, J. Huo, Y. Gao, J. Luo, Revisiting local
1478
+ descriptor based image-to-class measure for few-shot learning,
1479
+ in: IEEE Conf. Comput. Vis. Pattern Recog., 2019, pp. 7260–
1480
+ 7268.
1481
+ [50] C. Ma, Z. Huang, M. Gao, J. Xu, Few-shot learning as cluster-
1482
+ induced voronoi diagrams: A geometric approach (2022).
1483
+ [51] F. Wu, J. S. Smith, W. Lu, C. Pang, B. Zhang, Attentive proto-
1484
+ type few-shot learning with capsule network-based embedding,
1485
+ in: Eur. Conf. Comput. Vis., 2020, pp. 237–253.
1486
+ [52] Z. Ji, X. Chai, Y. Yu, Z. Zhang, Reweighting and information-
1487
+ guidance networks for few-shot learning, Neurocomputing 423
1488
+ (2021) 13–23.
1489
+ [53] X. Wang, J. Meng, B. Wen, F. Xue,
1490
+ Racp: A network with
1491
+ attention corrected prototype for few-shot speaker recognition
1492
+ using indefinite distance metric, Neurocomputing 490 (2022)
1493
+ 283–294.
1494
+ [54] J. Xu, X. Luo, X. Pan, W. Pei, Y. Li, Z. Xu, Alleviating the sam-
1495
+ ple selection bias in few-shot learning by removing projection
1496
+ to the centroid, in: Adv. Neural Inform. Process. Syst., 2022.
1497
+ [55] L. Zhou, P. Cui, S. Yang, W. Zhu, Q. Tian, Learning to learn
1498
+ image classifiers with visual analogy, in: IEEE Conf. Comput.
1499
+ Vis. Pattern Recog., 2019, pp. 11497–11506.
1500
+ [56] J. W. Tukey, et al., Exploratory data analysis, volume 2, Read-
1501
+ ing, MA, 1977.
1502
+ [57] B. Liu, Y. Cao, Y. Lin, Q. Li, Z. Zhang, M. Long, H. Hu, Nega-
1503
+ tive margin matters: Understanding margin in few-shot classifi-
1504
+ cation, in: Eur. Conf. Comput. Vis., Springer, 2020.
1505
+ [58] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, Im-
1506
+ agenet: A large-scale hierarchical image database,
1507
+ in: IEEE
1508
+ Conf. Comput. Vis. Pattern Recog., Ieee, 2009, pp. 248–255.
1509
+ [59] S. Zagoruyko, N. Komodakis, Wide residual networks, arXiv
1510
+ preprint arXiv:1605.07146 (2016).
1511
+ [60] L. van der Maaten, G. E. Hinton, Visualizing data using t-sne,
1512
+ Journal of Machine Learning Research 9 (2008) 2579–2605.
1513
29AyT4oBgHgl3EQf1vl1/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
2NFAT4oBgHgl3EQfkh06/content/2301.08611v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b8b5a622875b3deba32c4aca842221d1205805e309c8754d2ac893d59f46563b
3
+ size 136699
2NFAT4oBgHgl3EQfkh06/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0b59f84957e08c6d484cef870366caea077c76f662afaa7fbabfd10f5c84b413
3
+ size 458797
2NFAT4oBgHgl3EQfkh06/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:afd051d6f15a8b92602c27ab5859f80fb2b7857b6ad0ee6b2c509b4627aaea21
3
+ size 23164
2dE0T4oBgHgl3EQfugHS/content/tmp_files/2301.02607v1.pdf.txt ADDED
@@ -0,0 +1,736 @@
1
+ arXiv:2301.02607v1 [eess.SP] 6 Jan 2023
2
+ EXTENDED VERSION OF A POSTER PRESENTED AT THE 46TH ISCE CONFERENCE, APR 6–10, 2022, LAS VEGAS, US
3
+ 1
4
+ A Data-Driven Gaussian Process Filter for
5
+ Electrocardiogram Denoising
6
+ Mircea Dumitru, Qiao Li, Erick Andres Perez Alday, Ali Bahrami Rad, Gari D. Clifford, Reza Sameni*
7
+ Abstract—Objective: Gaussian Processes (GP)-based filters,
8
+ which have been effectively used for various applications includ-
9
+ ing electrocardiogram (ECG) filtering, can be computationally
10
+ demanding and the choice of their hyperparameters is typically
11
+ ad hoc. Methods: We develop a data-driven GP filter to address
12
+ both issues, using the notion of the ECG phase domain — a time-
13
+ warped representation of the ECG beats onto a fixed number
14
+ of samples and aligned R-peaks, which is assumed to follow a
15
+ Gaussian distribution. Under this assumption, the computation of
16
+ the sample mean and covariance matrix is simplified, enabling an
17
+ efficient implementation of the GP filter in a data-driven manner,
18
+ with no ad hoc hyperparameters. The proposed filter is evaluated
19
+ and compared with a state-of-the-art wavelet-based filter, on
20
+ the PhysioNet QT Database. The performance is evaluated by
21
+ measuring the signal-to-noise ratio (SNR) improvement of the
22
+ filter at SNR levels ranging from –5 to 30 dB, in 5 dB steps,
23
+ using additive noise. For a clinical evaluation, the error between
24
+ the estimated QT-intervals of the original and filtered signals is
25
+ measured and compared with the benchmark filter. Results: It is
26
+ shown that the proposed GP filter outperforms the benchmark
27
+ filter for all the tested noise levels. It also outperforms the state-
28
+ of-the-art filter in terms of QT-interval estimation error bias
29
+ and variance. Conclusion: The proposed GP filter is a versatile
30
+ technique for preprocessing the ECG in clinical and research
31
+ applications, is applicable to ECG of arbitrary lengths and
32
+ sampling frequencies, and provides confidence intervals for its
33
+ performance.
34
+ Index Terms—ECG Bayesian filter, Gaussian processes, ECG
35
+ denoising, ECG wavelet denoising, QT-interval estimation
36
+ I. INTRODUCTION
37
+ Electrocardiogram (ECG) denoising is a recurrent problem
38
+ in traditional and wearable cardiac monitors. The problem
39
+ has been addressed by various approaches, including model-
40
+ based and non-model-based filters. A powerful non-parametric
41
+ framework for ECG filtering is via Gaussian process (GP)
42
+ models [1], [2], which considers the ECG beats as GPs with
43
+ common parameters. The choice of the beats GP hyperparam-
44
+ eters, namely the mean and kernel functions, is non-evident and
45
+ ad hoc. For GP models with no beat assumptions, besides the
46
+ ambiguity in parameter selection, the GP filter implementation
47
+ involves the inversion of large covariance matrices, which
48
+ precludes the use of this framework for long ECG records.
49
+ In this paper, ECG filtering is addressed via a data-driven
50
+ non-parametric GP model. The novelty of the proposed filter
51
+ is that it requires no ad hoc GP model hyperparameters and it
52
+ is computationally efficient, making it suitable for any length
53
+ ECG records; it is based on the assumption that each phase
54
+ The authors are with the Department of Biomedical Informatics, School
55
+ of Medicine, Emory University. G. D. Clifford is also with the Biomedical
56
+ Engineering Department, Georgia Institute of Technology. Corresponding
57
+ author: R. Sameni (email: rsameni@dbmi.emory.edu).
58
+ domain beat — a time-warped (stretched or squeezed) repre-
59
+ sentation of the ECG beats onto a fixed number of samples
60
+ and aligned R-peaks — is an ensemble of an underlying
61
+ GP. The mean and the kernel function are set via the phase
62
+ domain sample mean and covariance matrix, computed via
63
+ the available ensembles, which are transformed back to the
64
+ time-domain and used to derive the posterior mean using the
65
+ Bayesian formalism.
66
+ This proposed filter is data-driven, does not presume any
67
+ parametric model for the underlying GP, and is computation-
68
+ ally efficient.
69
+ The filter is evaluated in terms of signal-to-noise ratio (SNR)
70
+ improvement, using as benchmark a wavelet-based ECG de-
71
+ noiser that was demonstrated in [3] to outperform adaptive
72
+ filters [4], Tikhonov regularization and Extended Kalman
73
+ filters [5], in terms of SNR improvement. The proposed filter’s
74
+ clinical performance is evaluated by measuring the QT-interval
75
+ error between the clean ECG and its corresponding filtered
76
+ version.
77
+ II. GAUSSIAN PROCESS-BASED ECG FILTERING
78
+ A. The mathematical model
79
+ The ECG measurement x(t) is assumed to be an additive
80
+ mixture of a clean ECG s(t), assumed to be a GP contami-
81
+ nated by additive white noise:
82
+ x(t) = s(t) + n(t),   t ∈ {t1 . . . tN} ∆= TN,    (1)
86
+ where n(t) ∼ N(0, vn), vn denotes the noise variance and
87
+ N denotes the number of measurements. The signal x(t) is
88
+ assumed to be baseline-wander (BW) and powerline noise
89
+ removed, which are relatively straightforward, with classical
90
+ filtering pipelines (cf. Section III-A). Therefore, the filter
91
+ design objective is focused on in-band ECG noise removal.
92
+ For the beat i, Ti = {ti1 . . . tiRi . . . tiNi} denotes the set of
97
+ time samples, ti1 representing the first sample, tiRi the sample
98
+ corresponding to the R-peak and tiNi the last sample. We
99
+ further define xi = [x(t)]i∈Ti, si = [s(t)]i∈Ti, ni = [n(t)]i∈Ti
100
+ as vectorial representations of the measurement, clean ECG
101
+ and noise, respectively. Therefore, xi = si + ni.
102
+ Next, we define matrices Θi ∈ RT ×Ni to map the time
103
+ domain beats xi, si and ni to the phase domain beats
104
+ ξi = Θixi, ςi = Θisi, ηi = Θini,
105
+ (2)
106
+ with aligned R-peaks and the same number of samples T
107
+ (Fig. 1). The Θi matrices are defined by considering T knots
108
+
109
+ [Figure 1 about here: the first six noisy time-domain beats x[1]-x[6] (top) and the corresponding length-normalized phase-domain beats ξ[1]-ξ[6] (bottom); axes: time [samples] vs. amplitude [mV].]
156
+ Fig. 1. Time-domain measurement beats (top) and the corresponding phase
157
+ domain ECG beats (bottom), with the same number T of samples for the
158
+ first 6 beats of sel100 record from QTDB [6] with 0 dB Gaussian additive
159
+ noise. Transformation matrices Θi are defined via (3).
160
+ [Figure 2 about here.]
209
+ Fig. 2.
210
+ Corner detail example of transformation matrix Θi (left), Θi^T (middle) and the corresponding (diagonal) Gramian Gi = Θi^T Θi (right).
214
+ equidistantly distributed in the interval [1, Ni] and assigning
215
+ Θi(j, k) = 1 if j − 1 ≤ (k − 1)(Ni − 1)/(T − 1) < j, and 0 otherwise,    (3)
222
+ with j = 1, . . . , Ni − 1, k = 1, . . . , T and T ≥ maxi {Ni}.
223
+ With this choice, the corresponding Gramian matrices Gi, are
224
+ diagonal matrices (Fig. 2),
225
+ Gi = Θi^T Θi = diag[gi] and diag(Θi Θi^T) = 1T,    (4)
233
+ with gi ∈ RNi and 1T ∈ RT . Therefore, Gi is invertible and
234
+ the back transformation from the phase to the time domain is
235
+ given by Ψi = Gi^{-1} Θi^T.
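+ For illustration, a minimal NumPy sketch of this construction is given below; it builds Θi following (3) (assigning the boundary sample k = T to j = Ni, which is our reading of the index ranges), verifies the diagonal Gramian property (4), and forms Ψi. It is a sketch, not the reference implementation from the authors' repository [7].
+ import numpy as np
+ def phase_transform_matrix(N_i, T):
+     # Theta_i in R^{T x N_i}: phase sample k maps to the time sample j with
+     # j - 1 <= (k - 1)(N_i - 1)/(T - 1) < j
+     Theta = np.zeros((T, N_i))
+     for k in range(1, T + 1):
+         j = min((k - 1) * (N_i - 1) // (T - 1) + 1, N_i)
+         Theta[k - 1, j - 1] = 1.0
+     return Theta
+ N_i, T = 180, 300                         # illustrative beat length and phase length (T >= N_i)
+ Theta = phase_transform_matrix(N_i, T)
+ g = np.diag(Theta.T @ Theta)              # g_i
+ assert np.allclose(Theta.T @ Theta, np.diag(g))        # G_i is diagonal
+ assert np.allclose(np.diag(Theta @ Theta.T), 1.0)      # each phase sample maps to one time sample
+ Psi = Theta.T / g[:, None]                # Psi_i = G_i^{-1} Theta_i^T (back transformation)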
238
+ From (1) and (2), the ECG beats satisfy ξi = ςi + ηi.
239
+ As shown in Fig. 1, in the phase domain the beats have
240
+ been normalized in lengths and the R-peaks are aligned.
241
+ Therefore, the phase-domain sample variations are only due to
242
+ the stochastic inter-beat variations of the ECG beats and noise.
243
+ As our working model, we assume that the phase domain beats
244
+ ξi to be ensembles of an underlying GP
245
+ ξi ∼ N(µξ, Kξ).    (5)
+ Moreover, from the time domain noise assumption and (2), the phase domain noise beats also have a zero-mean normal distribution ηi ∼ N(0, vn Θi Θi^T). Therefore, the phase domain ECG beats follow ςi ∼ N(µξ, Kξ − vn Θi Θi^T), where the model parameters µξ and Kξ can be estimated by the sample mean µ̄ξ := B^{-1} Σ_{i=1..B} ξi and the sample covariance K̄ξ := B^{-1} Σ_{i=1..B} (ξi − µ̄ξ)(ξi − µ̄ξ)^T, where B is the number of beats. Therefore, the time domain (clean) ECG beats follow a Normal distribution si ∼ N(µsi, Ksi) with parameters
+ µsi = Ψi µ̄ξ,   Ksi = Ψi (K̄ξ − v̂n Θi Θi^T) Ψi^T,    (6)
+ where v̂n represents the noise variance estimate, and the covariance matrix corresponding to the time domain beats xi is given by
+ Kxi = Ψi K̄ξ Ψi^T.    (7)
+ Finally, the filtered beats are defined as the time domain posterior mean, using (6) and (7):
+ ŝi = µsi + Ksi Kxi^{-1} (xi − µsi).    (8)
288
+ In the sequel, we refer to µsi and ˆsi as prior-based and
289
+ posterior-based GP filter results.
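+ A direct NumPy rendering of the beat-wise estimator (6)-(8) is sketched below; the transpose on Ψi in (7) follows from dimensional consistency with (6) and is our reading of the extracted text, and a linear solve replaces the explicit inverse. This is an illustration, not the authors' code [7].
+ import numpy as np
+ def gp_posterior_beat(x_i, Theta_i, mu_phase, K_phase, vn_hat):
+     # mu_phase, K_phase: phase-domain sample mean and covariance; vn_hat: noise variance estimate
+     g = np.diag(Theta_i.T @ Theta_i)                              # g_i, Eq. (4)
+     Psi = Theta_i.T / g[:, None]                                  # Psi_i = G_i^{-1} Theta_i^T
+     mu_s = Psi @ mu_phase                                         # prior mean, Eq. (6)
+     K_s = Psi @ (K_phase - vn_hat * Theta_i @ Theta_i.T) @ Psi.T  # Eq. (6)
+     K_x = Psi @ K_phase @ Psi.T                                   # Eq. (7)
+     return mu_s + K_s @ np.linalg.solve(K_x, x_i - mu_s)          # posterior mean, Eq. (8)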
290
+ B. The GP filter with diagonal covariance matrix
291
+ The direct implementation of the filter in (8) requires the
292
+ inversion of covariance matrices that typically have huge
293
+ condition numbers. The matrix inversion can be avoided if
294
+ we consider the diagonal case of K̄ξ:
+ k̄ξ = diag(K̄ξ),   kηi = v̂n 1T (using (4)),    (9)
+ In this case, the corresponding time domain matrices are also diagonal and can be computed via
+ kxi = (Θi^T k̄ξ) ⊘ gi^2,   ksi = (Θi^T (k̄ξ − kηi)) ⊘ gi^2,    (10)
+ with ◦ and ⊘ denoting the Hadamard product and division, respectively (element-wise product and division), gi^2 := gi ◦ gi, the time domain (prior) mean computed via
+ µsi = (Θi^T µ̄ξ) ⊘ gi,    (11)
331
+ and the corresponding filter given by
332
+ ˆsi = µsi + ksi ⊘ kxi ◦ (xi − µsi).    (12)
334
+ The overall algorithm for GP ECG filtering is summarized in
335
+ Algorithm 1 and is available online in our Git repository [7].
336
+ Algorithm 1 GP ECG filtering
337
+ 1: {tiRi}i = RPeakDetector(x) [Section III-B]
339
+ 2: ˆvn = NoiseVarianceEstimator(x) [Section II-D]
340
+ Input: x, {tiRi}i
341
+ Output: {�si}i
342
+ 3: function GPDIAG(x, {tiRi}i, ˆvn)
343
+ ⊲ GP diagonal filter
344
+ 4:
345
+ for all beats do
346
+ ⊲ phase domain computations
347
+ 5:
348
+ compute transformation matrices Θi via (3)
349
+ 6:
350
+ compute the vectors gi = diag(Θi^T Θi)
355
+ 7:
356
+ compute kηi via (9)
357
+ 8:
358
+ compute the phase beats ξi via (2)
359
+ 9:
360
+ end for
361
+ 10:
362
+ compute phase domain sample mean µ̄ξ
364
+ 11:
365
+ compute phase domain sample variance vector ¯kξ
366
+ 12:
367
+ for all beats do
368
+ ⊲ time domain computations
369
+ 13:
370
+ compute ECG prior mean µsi via (11)
371
+ 14:
372
+ compute ECG variance ksi via (10)
373
+ 15:
374
+ compute measurements variance kxi via (10)
375
+ 16:
376
+ compute the filtered ECG ˆsi via (12)
377
+ 17:
378
+ end for
379
+ 18: end function
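+ A compact NumPy sketch of Algorithm 1 follows. It segments beats at the midpoints between successive R-peaks and, for brevity, stretches each beat uniformly onto T samples instead of explicitly pinning the R-peak to a common phase index as the paper's Θi does; it illustrates (9)-(12) and is not the released implementation [7].
+ import numpy as np
+ def gp_diag_filter(x, r_peaks, T, vn_hat):
+     # T must be at least the longest beat; vn_hat is the noise variance estimate
+     mids = [(a + b) // 2 for a, b in zip(r_peaks[:-1], r_peaks[1:])]
+     bounds = [0] + mids + [len(x)]
+     beats = [x[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]
+     def theta(N_i):                                    # Eq. (3), uniform stretch
+         Th = np.zeros((T, N_i))
+         for k in range(T):
+             Th[k, (k * (N_i - 1)) // (T - 1)] = 1.0
+         return Th
+     Thetas = [theta(len(b)) for b in beats]
+     phase = np.stack([Th @ b for Th, b in zip(Thetas, beats)])   # phase-domain beats, Eq. (2)
+     mu_phase = phase.mean(axis=0)                      # phase-domain sample mean
+     k_phase = phase.var(axis=0)                        # phase-domain sample variance
+     out = []
+     for Th, b in zip(Thetas, beats):
+         g = Th.sum(axis=0)                             # g_i = diag(Theta_i^T Theta_i)
+         mu_s = (Th.T @ mu_phase) / g                   # Eq. (11)
+         k_x = (Th.T @ k_phase) / g ** 2 + 1e-12        # Eq. (10)
+         k_s = (Th.T @ np.clip(k_phase - vn_hat, 0.0, None)) / g ** 2   # Eq. (10)
+         out.append(mu_s + (k_s / k_x) * (b - mu_s))    # Eq. (12)
+     return np.concatenate(out)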
380
+
381
383
+ C. Computational cost and model selection
384
+ The direct implementation of a GP filter (without the hereby
385
+ proposed phase-domain model) would be as follows [1], [2]:
386
+ ŝ = µs + Ks Kx^{-1} (x − µs),    (13)
389
+ with the computational complexity O(N 3), dominated by the
390
+ inversion of the measurement covariance matrix Kx. In this
391
+ approach the model’s hyperparameters are the mean µs, the
392
+ covariance matrix Ks, and the noise variance vn (or more
393
+ generally the noise covariance matrix) and optimizing them
394
+ via classical methods (e.g. maximum evidence, leave-one-
395
+ out cross validation, [8, Ch. 5]) adds to the computational
396
+ complexity. For long ECGs, the application of this model
397
+ is not possible. Previous research considered the GP beat-
398
+ wise formulation and adopted a model-based approach to
399
+ confine the structure of the covariance matrices [1], [2], but the
400
+ choice of the particular model-based mean and kernel function
401
+ families remains ad-hoc and difficult to justify.
402
+ The proposed model infers the GP mean and covariance
403
+ matrix in a data-driven way, based on the sample mean and
404
+ covariance matrix from the phase domain (6) and (7), and
405
+ in the diagonal case, Algorithm 1, does not require any
406
+ inversion. The fundamental assumption allowing the data-
407
+ driven computation is the assumption that the phase domain
408
+ beats ξi are ensembles from the same underlying GP, (5).
409
+ D. Hyperparameter selection
410
+ The number of phase domain beat samples T is chosen
411
+ greater than the longest beat in the time domain; this allows the
412
+ choice of the transformation and back transformation matrices
413
+ such that the time-phase-time transition can be done without
414
+ (transformation) errors. The noise variance ˆvn can be com-
415
+ puted via maximum evidence or practically from the baseline
416
+ segment of the ECG beats, where the heart is electrically silent
417
+ and only the noise is exhibited in the ECG.
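+ A rough sketch of the second option is given below; the window placed before each R-peak is assumed to be isoelectric, and its exact position and length are assumptions, not values from the paper.
+ import numpy as np
+ def estimate_noise_variance(x, r_peaks, fs, win_ms=(300, 200)):
+     # pool the variance (about the mean) of segments taken win_ms before each R-peak
+     lo, hi = int(win_ms[0] * fs / 1000), int(win_ms[1] * fs / 1000)
+     segs = [x[r - lo:r - hi] for r in r_peaks if r - lo >= 0]
+     return float(np.mean([np.var(s) for s in segs if len(s) > 1]))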
418
+ III. RESULTS
419
+ A. Baseline wander removal
420
+ The BW is removed via two successive zero-phase first-
421
+ order forward-backward lowpass filters (filtfilt in MAT-
422
+ LAB/Python SciPy) with cut-off frequencies set at fc =
423
+ 5.0 Hz and fc = 80.0 Hz, respectively. While the resulting
424
+ passband frequency range is rather narrow and eliminates some
425
+ ECG-related components, it enables us to assess the filtering
426
+ performance for the dominant ECG frequency band.
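+ One plausible reading of this preprocessing, sketched below with SciPy, is to estimate the baseline with the 5 Hz zero-phase lowpass and subtract it, then apply the 80 Hz zero-phase lowpass; the Butterworth design and this interpretation are assumptions, since the text only specifies first-order forward-backward (filtfilt) filters at 5 Hz and 80 Hz.
+ from scipy.signal import butter, filtfilt
+ def preprocess(x, fs):
+     b_bw, a_bw = butter(1, 5.0 / (fs / 2), btype="low")    # baseline estimator
+     baseline = filtfilt(b_bw, a_bw, x)
+     b_lp, a_lp = butter(1, 80.0 / (fs / 2), btype="low")   # high-frequency cutoff
+     return filtfilt(b_lp, a_lp, x - baseline)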
427
+ B. R-peak detection and heartbeat segmentation
428
+ The proposed filter requires the ECG R-peaks. The beats
429
+ are defined relative to the R-peaks, segmenting the mea-
430
+ surements at the midpoints between successive R-peaks. The
431
+ R-peak estimation is done using a modified version of the
432
+ Pan–Tompkins algorithm [9]. Specifically, the version used in
433
+ this paper estimates the R-peaks by successively applying a
434
+ band pass filter, an outlier saturation filter via the hyperbolic
435
+ tangent function, a square root moving average filter and a
436
+ thresholding step.
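+ A loose sketch of these stages is given below; all parameter values (band edges, window length, threshold, refractory distance) are assumptions for illustration and are not taken from [9] or the modified detector used here.
+ import numpy as np
+ from scipy.signal import butter, filtfilt, find_peaks
+ def detect_r_peaks(x, fs):
+     b, a = butter(3, [8.0 / (fs / 2), 40.0 / (fs / 2)], btype="band")
+     y = filtfilt(b, a, x)                                   # band pass
+     y = np.tanh(y / (3.0 * np.std(y) + 1e-12))              # saturate outliers
+     w = int(0.10 * fs)                                      # 100 ms moving window
+     env = np.sqrt(np.convolve(y ** 2, np.ones(w) / w, mode="same"))
+     peaks, _ = find_peaks(env, height=0.4 * np.max(env), distance=int(0.3 * fs))
+     return peaks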
437
+ C. Evaluation
438
+ The PhysioNet QT Database (QTDB) [6] is used to evaluate
439
+ the developed filter. QTDB consists of 15-minute 2-lead
440
+ ECGs sampled at fs = 250 Hz. The baseline wander was
441
+ removed as detailed in Section III-B. The required software
442
+ for preprocessing and R-peak detection was adopted from the
443
+ Open-Source Electrophysiological Toolbox (OSET) [10].
444
+ The benchmark filter is a wavelet denoiser with a Symlet–5
445
+ mother wavelet, soft thresholding, Stein’s unbiased risk es-
446
+ timate (SURE) shrinkage rule, rescaling using a single-level
447
+ noise level estimation and four levels of decomposition. In a
448
+ previous study, this combination was proved to outperform
449
+ other ECG filtering schemes [3]. The filter evaluation is
450
+ measured in terms of SNR improvement and QT-interval
451
+ estimation error.
452
+ D. SNR improvement performance
453
+ The ECG records were contaminated by additive white
454
+ Gaussian noise at SNR levels ranging from –5 to 30 dB, in
455
+ 5 dB steps. An example of the noisy and filtered ECG is
456
+ shown in Fig. 3. The average and standard deviation of the
457
+ SNR improvement are reported for each noise level, for the
458
+ proposed and benchmark methods in Fig. 4. Accordingly, the
459
+ proposed posterior-based filter improves the SNR for every
460
+ level of noise tested and outperforms the prior-based and the
461
+ benchmark filter for all tested levels of noise.
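+ For reference, the improvement is the output SNR minus the input SNR with respect to the clean record; a small helper for generating the noisy inputs and scoring a filter is sketched below (illustrative, not the evaluation code of this study).
+ import numpy as np
+ def snr_db(clean, other):
+     return 10.0 * np.log10(np.sum(clean ** 2) / np.sum((clean - other) ** 2))
+ def add_noise_at_snr(clean, target_db, rng=np.random.default_rng(0)):
+     noise = rng.standard_normal(len(clean))
+     noise *= np.sqrt(np.sum(clean ** 2) / (np.sum(noise ** 2) * 10 ** (target_db / 10)))
+     return clean + noise
+ # SNR improvement of a filter:  snr_db(clean, filtered) - snr_db(clean, noisy)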
462
+ E. Clinical parameters preservation
463
+ The accuracy of QT-interval estimation is considered to test
464
+ the quality of the proposed methods for clinical ECG parame-
465
+ ters. For this, the QT-interval estimation error (∆QT) between
466
+ the QT-interval estimated from the filtered ECG and the QT-
467
+ interval estimated from the noiseless ECG is measured and
468
+ compared between the benchmark and the proposed method at
469
+ variable input noise levels. The QT-interval estimation method
470
+ used is adopted from [11]. Fig. 5 shows the median and the
471
+ interquartile range (IQR) of ∆QT for the benchmark wavelet
472
+ and the proposed filter, measured over QTDB. Accordingly,
473
+ compared with the benchmark method, the GP posterior filter
474
+ reduces the median error for all levels of input noise.
475
+ IV. DISCUSSION AND CONCLUSION
476
+ In this work we addressed the problem of ECG denoising
477
+ via a data-driven GP model, with beat-wise computa-
478
+ tions. Compared with the existing non-parametric ECG filters,
479
+ the proposed filter makes no ad hoc assumptions about the GP
480
+ model and can be used for ECG records of arbitrary length,
481
+ since the computational cost has been significantly reduced
482
+ as compared with conventional GP filters. The proposed filter
483
+ is efficient in terms of SNR improvement, outperforming the
484
+ benchmark performances for all tested noise levels (Fig. 4) and
485
+ also clinically, with an improved QT-interval estimation error
486
+ compared with the benchmark wavelet denoiser, for all tested
487
+ levels of noise (Fig. 5). Another advantage of the proposed
488
+ filter is its Bayesian formulation, which allows us to quantify
489
+ the filter’s uncertainty (via the estimated variances). It also
490
+
491
+ [Figure 3 about here: 4 s excerpts of the sel100 recording showing the noisy measurement x, the GP prior estimate, the GP posterior estimate and the wavelet output; panels (a) input SNR = 0 dB (ΔSNR: prior 12.0, posterior 13.3, wavelet 7.1), (b) input SNR = 5 dB (ΔSNR: 7.3, 10.6, 6.1), (c) input SNR = 10 dB (ΔSNR: 2.4, 8.1, 5.1); axes: time [s] vs. amplitude [mV].]
601
+ Fig. 3. The sel100 recording from the PhysioNet QTDB [6]. From top to bottom the measurements x vs. the prior estimate (11), the posterior estimate
602
+ (12), and the wavelet denoiser (Section III-C), at different input SNR levels. The post-filtering SNR improvement is noted in each case.
603
+ [Figure 4 about here: mean and standard deviation of the SNR improvement [dB] vs. input SNR [dB] (−5 to 30 dB) for the wavelet benchmark, the GP prior and the GP posterior filters.]
626
+ Fig. 4. Mean and standard deviation SNR improvement using the proposed
627
+ GP filter and the benchmark wavelet denoiser [3] across all samples of the
628
+ PhysioNet QTDB [6], in leads I and II, with 5 repetitions using different noise
629
+ instances per record.
630
+ [Figure 5 about here: QT estimation error [ms] vs. input SNR [dB] (−5 to 30 dB) for the wavelet benchmark and the GP filter.]
649
+ Fig. 5.
650
+ The median and the interquartile range for ∆QT estimations
651
+ corresponding to the proposed and benchmark filters across all samples of
652
+ the PhysioNet QTDB [6].
653
+ provides a framework that allows for synthetic ECG generation
654
+ via data-driven learned parameters, which can be used in
655
+ generative models for producing synthetic ECG records for
656
+ data-greedy machine learning and deep learning applications.
657
+ In future studies, the fundamental assumption of the model,
658
+ namely the same underlying Gaussian distribution for all the
659
+ beats in the phase domain can be relaxed, by clustering the
660
+ beats and assuming different underlying distributions for the
661
+ beats in each cluster. Also, comparison with expert annotated
662
+ QT-interval (and other clinical parameters) is required and
663
+ statistical hypothesis testing should be performed to investigate
664
+ if the differences are statistically insignificant. The proposed
665
+ filter requires the R-peaks for aligning the ECG beats in the
666
+ phase-domain, which requires investigating to what extent the
667
+ filtering performance is susceptible to mis-detection of the
668
+ R-peaks and morphological variations due to ectopic beats.
669
+ The Python code corresponding to Algorithm 1 and the
670
+ reported results are available in [7].
671
+ V. ACKNOWLEDGEMENTS
672
+ The authors acknowledge support from the National Insti-
673
+ tute of Biomedical Imaging and Bioengineering under the NIH
674
+ grant R01EB030362, and the National Center for Advancing
675
+ Translational Sciences under the NIH Award UL1TR002378.
676
+ REFERENCES
677
+ [1] B. Rivet, M. Niknazar, and C. Jutten, “Non parametric modelling of
678
+ ECG: Applications to denoising and single sensor fetal ECG extraction,”
679
+ in LVA/ICA 2012 - 10th International Conference on Latent Variable
680
+ Analysis and Signal Separation, vol. LNCS 7191.
681
+ Tel-Aviv, Israel:
682
+ Springer, Mar 2012, pp. 470–477.
683
+ [2] M. Niknazar, B. Rivet, and C. Jutten, “Fetal ECG extraction from a
684
+ single sensor by a non-parametric modeling,” in 2012 Proceedings of
685
+ the 20th European Signal Processing Conference, 2012, pp. 949–953.
686
+ [3] R. Sameni, “Online filtering using piecewise
687
+ smoothness priors:
688
+ Application to normal and abnormal electrocardiogram denoising,”
689
+ Signal Processing, vol. 133, pp. 52–63, Apr. 2017. [Online]. Available:
690
+ https://doi.org/10.1016/j.sigpro.2016.10.019
691
+ [4] P. Laguna, R. Jane, O. Meste, P. Poon, P. Caminal, H. Rix, and
692
+ N. Thakor, “Adaptive filter for event-related bioelectric signals using an
693
+ impulse correlated reference input: comparison with signal averaging
694
+ techniques,” IEEE Transactions on Biomedical Engineering, vol. 39,
695
+ no. 10, pp. 1032–1044, 1992.
696
+ [5] R. Sameni, M. B. Shamsollahi, C. Jutten, and G. D. Clifford, “A
697
+ nonlinear bayesian filtering framework for ECG denoising,” Biomedical
698
+ Engineering, IEEE Transactions on, vol. 54, no. 12, pp. 2172–2185,
699
+ December 2007. [Online]. Available: https://doi.org/10.1109/TBME.
700
+ 2007.897817
701
+ [6] P. Laguna, R. Mark, A. Goldberg, and G. Moody, “A database for
702
+ evaluation of algorithms for measurement of QT and other waveform
703
+ intervals in the ECG,” in Computers in Cardiology 1997.
704
+ IEEE, 1997.
705
+ [Online]. Available: https://doi.org/10.1109/cic.1997.648140
706
+ [7] M.
707
+ Dumitru,
708
+ Data
709
+ Driven
710
+ Gaussian
711
+ Process
712
+ filter
713
+ for
714
+ ECG,
715
+ 2022. [Online]. Available: https://github.com/alphanumericslab/OSET/
716
+ tree/master/UnderDevelopment/DataDrivenGPFilter
717
+ [8] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine
718
+ Learning, ser. Adaptive computation and machine learning.
719
+ The MIT
720
+ Press, 2005.
721
+ [9] J. Pan and W. J. Tompkins, “A Real-Time QRS Detection Algorithm,”
722
+ Biomedical Engineering, IEEE Transactions on, vol. BME-32, no. 3, pp.
723
+ 230–236, 1985.
724
+ [10] R. Sameni, The Open-Source Electrophysiological Toolbox (OSET),
725
+ version
726
+ 3.14,
727
+ 2018.
728
+ [Online].
729
+ Available:
730
+ https://github.com/
731
+ alphanumericslab/OSET
732
+ [11] Q. Li, M. Dumitru, and E.A. Perez Alday, et al., “QT-Interval Estimation
733
+ Improved with Fusion of Multiple Automated Algorithms,” in Interna-
734
+ tional Society for Computerized Electrocardiology (ISCE), April 6-10,
735
+ 2022, Las Vegas, NV, 2022.
736
+
2dE0T4oBgHgl3EQfugHS/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,424 @@
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf,len=423
2
+ page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
3
+ page_content='02607v1 [eess.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
4
+ page_content='SP] 6 Jan 2023 EXTENDED VERSION OF A POSTER PRESENTED AT THE 46TH ISCE CONFERENCE, APR 6–10, 2022, LAS VEGAS, US 1 A Data-Driven Gaussian Process Filter for Electrocardiogram Denoising Mircea Dumitru, Qiao Li, Erick Andres Perez Alday, Ali Bahrami Rad, Gari D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
5
+ page_content=' Clifford, Reza Sameni* Abstract—Objective: Gaussian Processes (GP)-based filters, which have been effectively used for various applications includ- ing electrocardiogram (ECG) filtering can be computationally demanding and the choice of their hyperparameters is typically ad hoc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
6
+ page_content=' Methods: We develop a data-driven GP filter to address both issues, using the notion of the ECG phase domain — a time- warped representation of the ECG beats onto a fixed number of samples and aligned R-peaks, which is assumed to follow a Gaussian distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
7
+ page_content=' Under this assumption, the computation of the sample mean and covariance matrix is simplified, enabling an efficient implementation of the GP filter in a data-driven manner, with no ad hoc hyperparameters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
8
+ page_content=' The proposed filter is evaluated and compared with a state-of-the-art wavelet-based filter, on the PhysioNet QT Database.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
9
+ page_content=' The performance is evaluated by measuring the signal-to-noise ratio (SNR) improvement of the filter at SNR levels ranging from –5 to 30 dB, in 5 dB steps, using additive noise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
10
+ page_content=' For a clinical evaluation, the error between the estimated QT-intervals of the original and filtered signals is measured and compared with the benchmark filter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
11
+ page_content=' Results: It is shown that the proposed GP filter outperforms the benchmark filter for all the tested noise levels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
12
+ page_content=' It also outperforms the state- of-the-art filter in terms of QT-interval estimation error bias and variance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
13
+ page_content=' Conclusion: The proposed GP filter is a versatile technique for preprocessing the ECG in clinical and research applications, is applicable to ECG of arbitrary lengths and sampling frequencies, and provides confidence intervals for its performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
14
+ page_content=' Index Terms—ECG Bayesian filter, Gaussian processes, ECG denoising, ECG wavelet denoising, QT-interval estimation I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
15
+ page_content=' INTRODUCTION Electrocardiogram (ECG) denoising is a recurrent problem in traditional and wearable cardiac monitors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
16
+ page_content=' The problem has been addressed by various approaches, including model- based and non-model-based filters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
17
+ page_content=' A powerful non-parametric framework for ECG filtering is via Gaussian process (GP) models [1], [2], which considers the ECG beats as GPs with common parameters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
18
+ page_content=' The choice of the beats GP hyperparam- eters, namely the mean and kernel functions is non-evident and ad hoc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
19
+ page_content=' For GP models with no beat assumptions, beside the ambiguity in parameter selection, the GP filter implementation involves the inversion of large covariance matrices, which precludes the use of this framework for long ECG records.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
20
+ page_content=' In this paper, ECG filtering is addressed via a data-driven non-parametric GP model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
21
+ page_content=' The novelty of the proposed filter is that it requires no ad hoc GP model hyperparameters and it is computationally efficient, making it suitable for any length ECG records;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
22
+ page_content=' it is based on the assumption that each phase The authors are with the Department of Biomedical Informatics, School of Medicine, Emory University.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
23
+ page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
24
+ page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
25
+ page_content=' Clifford is also with the Biomedical Engineering Department, Georgia Institute of Technology.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
26
+ page_content=' Corresponding author: R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
27
+ page_content=' Sameni (email: rsameni@dbmi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
28
+ page_content='emory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
29
+ page_content='edu).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
30
+ page_content=' domain beat — a time-warped (stretched or squeezed) repre- sentation of the ECG beats onto a fixed number of samples and aligned R-peaks — is an ensemble of an underlying GP.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
31
+ page_content=' The mean and the kernel function are set via the phase domain sample mean and covariance matrix, computed via the available ensembles, which are transformed back to the time-domain and used to derive the posterior mean using the Bayesian formalism.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
32
+ page_content=' This proposed filter is data-driven, does not presume any parametric model for the underlying GP, and is computation- ally efficient.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
33
+ page_content=' The filter is evaluated in terms of signal-to-noise ratio (SNR) improvement, using as benchmark a wavelet-based ECG de- noiser that was demonstrated in [3] to outperform adaptive filters [4], Tikhonov regularization and Extended Kalman filters [5], in terms of SNR improvement.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
34
+ page_content=' The proposed filter’s clinical performance is evaluated by measuring the QT-interval error between the clean ECG and its corresponding filtered version.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
35
+ page_content=' II.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
36
+ page_content=' GAUSSIAN PROCESS-BASED ECG FILTERING A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
37
+ page_content=' The mathematical model The ECG measurement x(t) is assumed to be an additive mixture of a clean ECG s(t), assumed to be a GP contami- nated by additive white noise: x(t) = s(t) + n(t), t ∈ {t1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
38
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
39
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
40
+ page_content=' tN} ∆= TN, (1) where n(t) ∼ N(0, vn), vn denotes the noise variance and N denotes the number of measurements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
41
+ page_content=' The signal x(t) is assumed to be baseline-wander (BW) and powerline noise removed, which are relatively straightforward, with classical filtering pipelines (cf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
42
+ page_content=' Section III-A).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
43
+ page_content=' Therefore, the filter design objective is focused on in-band ECG noise removal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
44
+ page_content=' For the beat i, Ti = � ti1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
45
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
46
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
47
+ page_content=' tiRi .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
48
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
49
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
50
+ page_content=' tiNi � denotes the set of time samples, ti1 representing the first sample, tiRi the sample corresponding to the R-peak and tiNi the last sample.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
51
+ page_content=' We further define xi = [x(t)]i∈Ti, si = [s(t)]i∈Ti, ni = [n(t)]i∈Ti as vectorial representations of the measurement, clean ECG and noise, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
52
+ page_content=' Therefore, xi = si + ni.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
53
+ page_content=' Next, we define matrices Θi ∈ RT ×Ni to map the time domain beats xi, si and ni to the phase domain beats ξi = Θixi, ςi = Θisi, ηi = Θini, (2) with aligned R-peaks and the same number of samples T (Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
54
+ page_content=' 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
55
+ page_content=' The Θi matrices are defined by considering T knots 2 EXTENDED VERSION OF A POSTER PRESENTED AT THE 46TH ISCE CONFERENCE, APR 6–10, 2022, LAS VEGAS, US 0 25 50 75 100 125 150 175 200 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
56
+ page_content='4 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
57
+ page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
58
+ page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
59
+ page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
60
+ page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
61
+ page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
62
+ page_content='8 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
63
+ page_content='0 amplitude [mv] x[1] x[2] x[3] x[4] x[5] x[6] 0 50 100 150 200 time [samples] −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
64
+ page_content='4 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
65
+ page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
66
+ page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
67
+ page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
68
+ page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
69
+ page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
70
+ page_content='8 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
71
+ page_content='0 amplitude [mV] ξ[1] ξ[2] ξ[3] ξ[4] ξ[5] ξ[6] Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
72
+ page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
73
+ page_content=' Time-domain measurements beats (top) and the corresponding phase domain ECG beats (bottom), with the same number T of samples for the first 6 beats of sel100 record from QTDB [6] with 0 dB Gaussian additive noise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
74
+ page_content=' Transformation matrices Θi are defined via (3).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
75
Fig. 2. Corner detail example of the transformation matrix $\Theta_i$ (left), $\Theta_i^T$ (middle) and the corresponding (diagonal) Gramian $G_i = \Theta_i^T \Theta_i$ (right).
equidistantly distributed in the interval $[1, N_i]$ and assigning
$$
\Theta_i(j,k) =
\begin{cases}
1, & \text{if } j-1 \le (k-1)\dfrac{N_i-1}{T-1} < j,\\
0, & \text{otherwise,}
\end{cases}
\tag{3}
$$
with $j = 1, \ldots, N_i - 1$, $k = 1, \ldots, T$ and $T \ge \max_i \{N_i\}$. With this choice, the corresponding Gramian matrices $G_i$ are diagonal (Fig. 2),
$$
G_i = \Theta_i^T \Theta_i = \mathrm{diag}\,[g_i]
\quad\text{and}\quad
\mathrm{diag}\!\left(\Theta_i \Theta_i^T\right) = 1_T,
\tag{4}
$$
with $g_i \in \mathbb{R}^{N_i}$ and $1_T \in \mathbb{R}^{T}$. Therefore, $G_i$ is invertible and the back transformation from the phase to the time domain is given by $\Psi_i = G_i^{-1} \Theta_i^T$.
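To make the construction concrete, the following is a minimal NumPy sketch of (3) and a check of the two properties in (4). The $T \times N_i$ orientation (one row per phase sample) and all names are our assumptions, chosen so that $G_i = \Theta_i^T \Theta_i$ is $N_i \times N_i$ and $\mathrm{diag}(\Theta_i \Theta_i^T) = 1_T$, consistent with the statements above.

```python
import numpy as np

def transformation_matrix(N_i: int, T: int) -> np.ndarray:
    """Sketch of the phase-domain transformation matrix of eq. (3).

    Assumption: Theta_i is T x N_i, so that a time-domain beat of length N_i
    is mapped onto the common phase-domain length T. Each phase sample k is
    assigned to the time sample j satisfying j-1 <= (k-1)(N_i-1)/(T-1) < j.
    """
    Theta = np.zeros((T, N_i))
    for k in range(T):                       # 0-based phase-domain index
        v = k * (N_i - 1) / (T - 1)          # (k-1)(N_i-1)/(T-1) in 1-based notation
        j = int(np.floor(v))                 # time-domain sample it falls into
        Theta[k, j] = 1.0
    return Theta

# Quick check of eq. (4) for one beat length
N_i, T = 180, 220                            # requires T >= N_i
Theta = transformation_matrix(N_i, T)
G = Theta.T @ Theta
assert np.allclose(G, np.diag(np.diag(G)))           # G_i is diagonal
assert np.allclose(np.diag(Theta @ Theta.T), 1.0)    # diag(Theta_i Theta_i^T) = 1_T
Psi = np.diag(1.0 / np.diag(G)) @ Theta.T            # back transformation Psi_i = G_i^{-1} Theta_i^T
```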
From (1) and (2), the ECG beats satisfy $\xi_i = \varsigma_i + \eta_i$. As shown in Fig. 1, in the phase domain the beats have been normalized in length and the R-peaks are aligned. Therefore, the phase-domain sample variations are only due to the stochastic inter-beat variations of the ECG beats and noise. As our working model, we assume the phase-domain beats $\xi_i$ to be ensembles of an underlying GP,
$$
\xi_i \sim \mathcal{N}(\mu_\xi, K_\xi).
\tag{5}
$$
Moreover, from the time-domain noise assumption and (2), the phase-domain noise beats also have a zero-mean normal distribution, $\eta_i \sim \mathcal{N}(0, v_n \Theta_i \Theta_i^T)$. Therefore, the phase-domain ECG beats follow $\varsigma_i \sim \mathcal{N}(\mu_\xi, K_\xi - v_n \Theta_i \Theta_i^T)$, where the model parameters $\mu_\xi$ and $K_\xi$ can be estimated by the sample mean $\bar{\mu}_\xi := B^{-1}\sum_{i=1}^{B}\xi_i$ and the sample covariance $\bar{K}_\xi := B^{-1}\sum_{i=1}^{B}(\xi_i - \bar{\mu}_\xi)(\xi_i - \bar{\mu}_\xi)^T$, where $B$ is the number of beats. Therefore, the time-domain (clean) ECG beats follow a normal distribution $s_i \sim \mathcal{N}(\mu_{s_i}, K_{s_i})$ with parameters
$$
\mu_{s_i} = \Psi_i \bar{\mu}_\xi, \qquad
K_{s_i} = \Psi_i\!\left(\bar{K}_\xi - \hat{v}_n \Theta_i \Theta_i^T\right)\!\Psi_i^T,
\tag{6}
$$
where $\hat{v}_n$ represents the noise variance estimate, and the covariance matrix corresponding to the time-domain beats $x_i$ is given by
$$
K_{x_i} = \Psi_i \bar{K}_\xi \Psi_i^T.
\tag{7}
$$
Finally, the filtered beats are defined as the time-domain posterior mean, using (6) and (7):
$$
\hat{s}_i = \mu_{s_i} + K_{s_i} K_{x_i}^{-1}(x_i - \mu_{s_i}).
\tag{8}
$$
In the sequel, we refer to $\mu_{s_i}$ and $\hat{s}_i$ as the prior-based and posterior-based GP filter results.
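A minimal sketch of the full-covariance filter (5)-(8) could look as follows, assuming the noisy beats have already been segmented and the matrices $\Theta_i$ built as in (3); the function and variable names are illustrative and not the authors' code.

```python
import numpy as np

def gp_posterior_filter_full(beats, Thetas, v_n):
    """Hedged sketch of the full-covariance GP filter, eqs. (5)-(8).

    beats  : list of 1-D noisy time-domain beats x_i (length N_i each)
    Thetas : list of T x N_i transformation matrices from eq. (3)
    v_n    : noise variance estimate (hat v_n)
    """
    # Phase-domain beats, all mapped to the common length T
    Xi = np.stack([Th @ x for Th, x in zip(Thetas, beats)])
    mu_xi = Xi.mean(axis=0)                       # sample mean (bar mu_xi)
    K_xi = np.cov(Xi, rowvar=False, bias=True)    # sample covariance (bar K_xi), 1/B normalization

    filtered = []
    for Th, x in zip(Thetas, beats):
        Psi = np.diag(1.0 / np.diag(Th.T @ Th)) @ Th.T      # back transformation Psi_i
        mu_s = Psi @ mu_xi                                  # eq. (6), prior mean
        K_s = Psi @ (K_xi - v_n * (Th @ Th.T)) @ Psi.T      # eq. (6), covariance
        K_x = Psi @ K_xi @ Psi.T                            # eq. (7)
        # eq. (8): posterior mean; in practice K_x is often ill-conditioned,
        # which is exactly what motivates the diagonal variant below
        filtered.append(mu_s + K_s @ np.linalg.solve(K_x, x - mu_s))
    return filtered
```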
B. The GP filter with diagonal covariance matrix

The direct implementation of the filter in (8) requires the inversion of covariance matrices that typically have huge condition numbers. The matrix inversion can be avoided if we consider the diagonal case of $\bar{K}_\xi$:
$$
\bar{k}_\xi = \mathrm{diag}\!\left(\bar{K}_\xi\right), \qquad
k_{\eta_i} \overset{(4)}{=} \hat{v}_n 1_T.
\tag{9}
$$
In this case, the corresponding time-domain matrices are also diagonal and can be computed via
$$
k_{x_i} = \left(\Theta_i^T \bar{k}_\xi\right) \oslash g_i^2, \qquad
k_{s_i} = \left(\Theta_i^T\!\left(\bar{k}_\xi - k_{\eta_i}\right)\right) \oslash g_i^2,
\tag{10}
$$
with $\circ$ and $\oslash$ denoting the Hadamard (element-wise) product and division, respectively, and $g_i^2 := g_i \circ g_i$. The time-domain (prior) mean is computed via
$$
\mu_{s_i} = \left(\Theta_i^T \bar{\mu}_\xi\right) \oslash g_i,
\tag{11}
$$
and the corresponding filter is given by
$$
\hat{s}_i = \mu_{s_i} + k_{s_i} \oslash k_{x_i} \circ (x_i - \mu_{s_i}).
\tag{12}
$$
The overall algorithm for GP ECG filtering is summarized in Algorithm 1 and is available online in our Git repository [7].
Algorithm 1 GP ECG filtering
 1: {t_Ri}_i = RPeakDetector(x)              [Section III-B]
 2: v̂_n = NoiseVarianceEstimator(x)          [Section II-D]
Input: x, {t_Ri}_i    Output: {ŝ_i}_i
 3: function GPDIAG(x, {t_Ri}_i, v̂_n)        ⊲ GP diagonal filter
 4:   for all beats do                        ⊲ phase-domain computations
 5:     compute transformation matrices Θ_i via (3)
 6:     compute the vectors g_i = diag(Θ_i^T Θ_i)
 7:     compute k_η_i via (9)
 8:     compute the phase beats ξ_i via (2)
 9:   end for
10:   compute the phase-domain sample mean μ̄_ξ
11:   compute the phase-domain sample variance vector k̄_ξ
12:   for all beats do                        ⊲ time-domain computations
13:     compute the ECG prior mean μ_s_i via (11)
14:     compute the ECG variance k_s_i via (10)
15:     compute the measurement variance k_x_i via (10)
16:     compute the filtered ECG ŝ_i via (12)
17:   end for
18: end function
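A compact NumPy sketch of Algorithm 1 is given below, assuming the beats have already been segmented (Section III-B) and reusing the transformation_matrix helper sketched after (4); the names and interfaces are ours, not the repository's API [7].

```python
import numpy as np

def gp_diag_filter(beats, v_n, T=None):
    """Hedged sketch of Algorithm 1: the diagonal GP filter of eqs. (9)-(12).

    beats : list of 1-D noisy time-domain beats x_i (lengths N_i)
    v_n   : noise variance estimate (hat v_n)
    """
    if T is None:
        T = max(len(b) for b in beats)                    # T >= max_i N_i

    Thetas = [transformation_matrix(len(b), T) for b in beats]   # eq. (3)
    gs = [np.diag(Th.T @ Th) for Th in Thetas]                   # g_i = diag(Theta_i^T Theta_i)
    Xi = np.stack([Th @ b for Th, b in zip(Thetas, beats)])      # phase-domain beats

    mu_xi = Xi.mean(axis=0)              # phase-domain sample mean
    k_xi = Xi.var(axis=0)                # phase-domain sample variance vector
    k_eta = v_n * np.ones(T)             # eq. (9)

    filtered = []
    for Th, g, x in zip(Thetas, gs, beats):
        mu_s = (Th.T @ mu_xi) / g                        # eq. (11), prior mean
        k_x = (Th.T @ k_xi) / g**2                       # eq. (10), measurement variance
        k_s = (Th.T @ (k_xi - k_eta)) / g**2             # eq. (10), clean-ECG variance
        filtered.append(mu_s + (k_s / k_x) * (x - mu_s)) # eq. (12), posterior mean
    return filtered
```

Note that the whole filter reduces to element-wise operations per beat, so no matrix inversion appears anywhere in the loop.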
C. Computational cost and model selection

The direct implementation of a GP filter (without the hereby proposed phase-domain model) would be as follows [1], [2]:
$$
\hat{s} = \mu_s + K_s K_x^{-1}(x - \mu_s),
\tag{13}
$$
with computational complexity $O(N^3)$, dominated by the inversion of the measurement covariance matrix $K_x$. In this approach, the model's hyperparameters are the mean $\mu_s$, the covariance matrix $K_s$ and the noise variance $v_n$ (or, more generally, the noise covariance matrix), and optimizing them via classical methods (e.g., maximum evidence or leave-one-out cross validation, [8, Ch. 5]) adds to the computational complexity. For long ECGs, the application of this model is not possible. Previous research considered the GP beat-wise formulation and adopted a model-based approach to confine the structure of the covariance matrices [1], [2], but the choice of the particular model-based mean and kernel function families remains ad hoc and difficult to justify. The proposed model infers the GP mean and covariance matrix in a data-driven way, based on the sample mean and covariance matrix from the phase domain, (6) and (7), and in the diagonal case, Algorithm 1, does not require any inversion. The fundamental assumption allowing the data-driven computation is that the phase-domain beats $\xi_i$ are ensembles from the same underlying GP, (5).
D. Hyperparameter selection

The number of phase-domain beat samples $T$ is chosen greater than the longest beat in the time domain; this allows the choice of the transformation and back-transformation matrices such that the time-phase-time transition can be done without (transformation) errors. The noise variance $\hat{v}_n$ can be computed via maximum evidence or, practically, from the baseline segment of the ECG beats, where the heart is electrically silent and only the noise is exhibited in the ECG.
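As a rough illustration of the practical option, the sketch below estimates $\hat{v}_n$ from short windows placed shortly before each R-peak, which we assume to be approximately isoelectric; the window placement and lengths are our assumptions, not taken from the paper.

```python
import numpy as np

def estimate_noise_variance(x, r_peaks, fs, win_ms=80, gap_ms=60):
    """Hedged sketch: estimate hat v_n from presumed electrically silent segments.

    Assumption (ours): a win_ms window that ends gap_ms before each R-peak is
    treated as baseline, and the noise variance is taken as a robust summary of
    the per-window variances after removing each window's local mean.
    """
    win = int(win_ms * fs / 1000)
    gap = int(gap_ms * fs / 1000)
    variances = []
    for r in r_peaks:
        a, b = r - gap - win, r - gap
        if a >= 0:
            seg = x[a:b]
            variances.append(np.var(seg - seg.mean()))
    return float(np.median(variances))   # median is robust to windows overlapping P-waves
```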
III. RESULTS

A. Baseline wander removal

The BW is removed via two successive zero-phase, first-order forward-backward lowpass filters (filtfilt in MATLAB/Python SciPy) with cut-off frequencies set at $f_c = 5.0$ Hz and $f_c = 80.0$ Hz, respectively. While the resulting passband frequency range is rather narrow and eliminates some ECG-related components, it enables us to assess the filtering performance for the dominant ECG frequency band.
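A possible SciPy reading of this preprocessing is sketched below: the zero-phase 5 Hz lowpass output is treated as the baseline-wander estimate and subtracted, and the 80 Hz lowpass limits the signal to the dominant ECG band. This interpretation and the Butterworth design are our assumptions; the actual OSET implementation may differ.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def remove_baseline_wander(x, fs, fc_low=5.0, fc_high=80.0):
    """Hedged sketch of the baseline-wander removal described above."""
    b_lo, a_lo = butter(1, fc_low / (fs / 2), btype="low")
    baseline = filtfilt(b_lo, a_lo, x)       # slow baseline-wander estimate (zero phase)
    x_hp = x - baseline                      # roughly a zero-phase highpass at 5 Hz

    b_hi, a_hi = butter(1, fc_high / (fs / 2), btype="low")
    return filtfilt(b_hi, a_hi, x_hp)        # keep the dominant ECG band
```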
B. R-peak detection and heartbeat segmentation

The proposed filter requires the ECG R-peaks. The beats are defined relative to the R-peaks, segmenting the measurements at the midpoints between successive R-peaks. The R-peak estimation is done using a modified version of the Pan-Tompkins algorithm [9]. Specifically, the version used in this paper estimates the R-peaks by successively applying a bandpass filter, an outlier saturation filter via the hyperbolic tangent function, a square-root moving average filter and a thresholding step.
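The segmentation step can be written in a few lines; the handling of the first and last beat boundaries is our assumption.

```python
import numpy as np

def segment_beats(x, r_peaks):
    """Segment an ECG into beats at the midpoints between successive R-peaks,
    as described above. The record edges are used as the outer boundaries."""
    r = np.asarray(r_peaks)
    mids = (r[:-1] + r[1:]) // 2
    bounds = np.concatenate(([0], mids, [len(x)]))
    return [x[a:b] for a, b in zip(bounds[:-1], bounds[1:])]
```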
C. Evaluation

The PhysioNet QT Database (QTDB) [6] is used to evaluate the developed filter. QTDB consists of 15-minute, two-lead ECGs sampled at $f_s = 250$ Hz. The baseline wander was removed as detailed in Section III-A. The required software for preprocessing and R-peak detection was adopted from the Open-Source Electrophysiological Toolbox (OSET) [10]. The benchmark filter is a wavelet denoiser with a Symlet-5 mother wavelet, soft thresholding, Stein's unbiased risk estimate (SURE) shrinkage rule, rescaling using a single-level noise estimate, and four levels of decomposition. In a previous study, this combination was shown to outperform other ECG filtering schemes [3]. The filter evaluation is measured in terms of SNR improvement and QT-interval estimation error.
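For orientation, a rough PyWavelets stand-in for the benchmark is sketched below with a sym5 wavelet, four decomposition levels and soft thresholding; for simplicity it uses the universal (VisuShrink) threshold with a single-level noise estimate rather than the SURE shrinkage rule of the actual benchmark, so its output is only indicative.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="sym5", level=4):
    """Rough stand-in for the benchmark wavelet denoiser (not the exact SURE rule)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise level from finest detail band
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))           # universal (VisuShrink) threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]
```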
D. SNR improvement performance

The ECG records were contaminated by additive white Gaussian noise at SNR levels ranging from -5 to 30 dB, in 5 dB steps. An example of the noisy and filtered ECG is shown in Fig. 3. The average and standard deviation of the SNR improvement are reported for each noise level, for the proposed and benchmark methods, in Fig. 4. Accordingly, the proposed posterior-based filter improves the SNR for every level of noise tested and outperforms the prior-based and the benchmark filter at all tested noise levels.
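The noise contamination and the reported metric can be reproduced along the following lines; the power-ratio SNR definition used here is the standard one and is our assumption, since the paper does not restate it in this excerpt.

```python
import numpy as np

def add_noise_at_snr(s, snr_db, seed=0):
    """Contaminate a clean ECG s with white Gaussian noise at a target SNR (dB)."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(s**2)
    p_noise = p_signal / 10**(snr_db / 10)
    return s + rng.normal(0.0, np.sqrt(p_noise), size=s.shape)

def snr_improvement(s, x_noisy, s_hat):
    """Delta-SNR in dB: output SNR of the filtered signal minus input SNR."""
    snr_in = 10 * np.log10(np.mean(s**2) / np.mean((x_noisy - s)**2))
    snr_out = 10 * np.log10(np.mean(s**2) / np.mean((s_hat - s)**2))
    return snr_out - snr_in
```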
E. Clinical parameters preservation

The accuracy of QT-interval estimation is considered to test the quality of the proposed methods for clinical ECG parameters. For this, the QT-interval estimation error (ΔQT) between the QT-interval estimated from the filtered ECG and the QT-interval estimated from the noiseless ECG is measured and compared between the benchmark and the proposed method at variable input noise levels. The QT-interval estimation method used is adopted from [11]. Fig. 5 shows the median and the interquartile range (IQR) of ΔQT for the benchmark wavelet and the proposed filter, measured over QTDB. Accordingly, compared with the benchmark method, the GP posterior filter reduces the median error for all levels of input noise.
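The reported statistics can be computed directly from the per-record errors; the QT estimator itself (from [11]) is not reimplemented here.

```python
import numpy as np

def delta_qt_stats(qt_filtered_ms, qt_clean_ms):
    """Median and interquartile range of the QT-interval estimation error (ms)."""
    d = np.asarray(qt_filtered_ms) - np.asarray(qt_clean_ms)
    q1, med, q3 = np.percentile(d, [25, 50, 75])
    return med, q3 - q1
```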
IV. DISCUSSION AND CONCLUSION

In this work we addressed the problem of ECG denoising via a data-driven GP model, with beat-wise computations. Compared with existing non-parametric ECG filters, the proposed filter makes no ad hoc assumptions about the GP model and can be used for ECG records of arbitrary length, since the computational cost has been significantly reduced as compared with conventional GP filters. The proposed filter is efficient in terms of SNR improvement, outperforming the benchmark for all tested noise levels (Fig. 4), and also clinically, with an improved QT-interval estimation error compared with the benchmark wavelet denoiser for all tested levels of noise (Fig. 5). Another advantage of the proposed filter is its Bayesian formulation, which allows us to quantify the filter's uncertainty (via the estimated variances).
Fig. 3. The sel100 recording from the PhysioNet QTDB [6]. From top to bottom: the measurements x vs. the prior estimate (11), the posterior estimate (12), and the wavelet denoiser (Section III-C), at different input SNR levels; the post-filtering SNR improvement is noted in each case. (a) input SNR = 0 dB: GP prior ΔSNR = 12.0 dB, GP posterior 13.3 dB, wavelet 7.1 dB; (b) input SNR = 5 dB: GP prior 7.3 dB, GP posterior 10.6 dB, wavelet 6.1 dB; (c) input SNR = 10 dB: GP prior 2.4 dB, GP posterior 8.1 dB, wavelet 5.1 dB.

Fig. 4. Mean and standard deviation of the SNR improvement using the proposed GP filter and the benchmark wavelet denoiser [3] across all samples of the PhysioNet QTDB [6], in leads I and II, with 5 repetitions using different noise instances per record.

Fig. 5. The median and the interquartile range of the ΔQT estimations corresponding to the proposed and benchmark filters across all samples of the PhysioNet QTDB [6].
It also provides a framework that allows for synthetic ECG generation via data-driven learned parameters, which can be used in generative models for producing synthetic ECG records for data-greedy machine learning and deep learning applications. In future studies, the fundamental assumption of the model, namely the same underlying Gaussian distribution for all the beats in the phase domain, can be relaxed by clustering the beats and assuming different underlying distributions for the beats in each cluster. Also, comparison with expert-annotated QT-intervals (and other clinical parameters) is required, and statistical hypothesis testing should be performed to investigate whether the differences are statistically insignificant. The proposed filter requires the R-peaks for aligning the ECG beats in the phase domain, which requires investigating to what extent the filtering performance is susceptible to mis-detection of the R-peaks and to morphological variations due to ectopic beats. The Python code corresponding to Algorithm 1 and the reported results are available in [7].
V. ACKNOWLEDGEMENTS

The authors acknowledge support from the National Institute of Biomedical Imaging and Bioengineering under the NIH grant R01EB030362, and the National Center for Advancing Translational Sciences under the NIH Award UL1TR002378.
REFERENCES

[1] B. Rivet, M. Niknazar, and C. Jutten, "Non parametric modelling of ECG: Applications to denoising and single sensor fetal ECG extraction," in LVA/ICA 2012 - 10th International Conference on Latent Variable Analysis and Signal Separation, vol. LNCS 7191. Tel-Aviv, Israel: Springer, Mar 2012, pp. 470-477.
[2] M. Niknazar, B. Rivet, and C. Jutten, "Fetal ECG extraction from a single sensor by a non-parametric modeling," in 2012 Proceedings of the 20th European Signal Processing Conference, 2012, pp. 949-953.
[3] R. Sameni, "Online filtering using piecewise smoothness priors: Application to normal and abnormal electrocardiogram denoising," Signal Processing, vol. 133, pp. 52-63, Apr. 2017. [Online]. Available: https://doi.org/10.1016/j.sigpro.2016.10.019
[4] P. Laguna, R. Jane, O. Meste, P. Poon, P. Caminal, H. Rix, and N. Thakor, "Adaptive filter for event-related bioelectric signals using an impulse correlated reference input: comparison with signal averaging techniques," IEEE Transactions on Biomedical Engineering, vol. 39, no. 10, pp. 1032-1044, 1992.
[5] R. Sameni, M. B. Shamsollahi, C. Jutten, and G. D. Clifford, "A nonlinear bayesian filtering framework for ECG denoising," Biomedical Engineering, IEEE Transactions on, vol. 54, no. 12, pp. 2172-2185, December 2007. [Online]. Available: https://doi.org/10.1109/TBME.2007.897817
[6] P. Laguna, R. Mark, A. Goldberg, and G. Moody, "A database for evaluation of algorithms for measurement of QT and other waveform intervals in the ECG," in Computers in Cardiology 1997. IEEE, 1997. [Online]. Available: https://doi.org/10.1109/cic.1997.648140
[7] M. Dumitru, Data Driven Gaussian Process filter for ECG, 2022. [Online]. Available: https://github.com/alphanumericslab/OSET/tree/master/UnderDevelopment/DataDrivenGPFilter
[8] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning, ser. Adaptive computation and machine learning. The MIT Press, 2005.
[9] J. Pan and W. J. Tompkins, "A Real-Time QRS Detection Algorithm," Biomedical Engineering, IEEE Transactions on, vol. BME-32, no. 3, pp.
413
+ page_content=' 230–236, 1985.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
414
+ page_content=' [10] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
415
+ page_content=' Sameni, The Open-Source Electrophysiological Toolbox (OSET), version 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
416
+ page_content='14, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
417
+ page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
418
+ page_content=' Available: https://github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
419
+ page_content='com/ alphanumericslab/OSET [11] Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
420
+ page_content=' Li, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
421
+ page_content=' Dumitru, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
422
+ page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
423
+ page_content=' Perez Alday, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
424
+ page_content=', “QT-Interval Estimation Improved with Fusion of Multiple Automated Algorithms,” in Interna- tional Society for Computerized Electrocardiology (ISCE), April 6-10, 2022, Las Vegas, NV, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dE0T4oBgHgl3EQfugHS/content/2301.02607v1.pdf'}
39A0T4oBgHgl3EQfNf8t/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:afd8d8fc002db53730497678fe603bc496bead7d71ffbb08f6568c5a86473d1e
3
+ size 231731
3NE1T4oBgHgl3EQfSQNJ/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:289920a9e2500b15264c5f89ebb3cfb5a3b83faaeb1bed8a6de3346669751303
3
+ size 7929901
4tE2T4oBgHgl3EQf6ghP/content/2301.04200v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b3d1b651dd5813325b934eb68a61ef52f452d26a9664787694c510282589453f
3
+ size 1184946
4tE2T4oBgHgl3EQf6ghP/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e8427bba55b836bf574e1cc7f5fc1148eb03ff60a60f2c050a3108e5cee101be
3
+ size 171463
59E5T4oBgHgl3EQfPQ7C/content/tmp_files/2301.05504v1.pdf.txt ADDED
@@ -0,0 +1,2254 @@
 
 
 
 
1
+ Combining Dynamic Mode Decomposition with Ensemble Kalman
2
+ Filtering for Tracking and Forecasting
3
+ Stephen A Falconer1, David J.B. Lloyd1, and Naratip Santitissadeekorn1
4
+ 1Department of Mathematics, University of Surrey, Guildford, GU2 7XH, UK
5
+ January 16, 2023
6
+ Abstract
7
+ Data assimilation techniques, such as ensemble Kalman filtering, have been shown to be a
8
+ highly effective and efficient way to combine noisy data with a mathematical model to track
9
+ and forecast dynamical systems. However, when dealing with high-dimensional data, in many
10
+ situations one does not have a model, so data assimilation techniques cannot be applied. In
11
+ this paper, we use dynamic mode decomposition to generate a low-dimensional, linear model
12
+ of a dynamical system directly from high-dimensional data, which is defined by temporal and
13
+ spatial modes, that we can then use with data assimilation techniques such as the ensemble
14
+ Kalman filter.
15
+ We show how the dynamic mode decomposition can be combined with the
16
+ ensemble Kalman filter (which we call the DMDEnKF) to iteratively update the current state
17
+ and temporal modes as new data becomes available. We demonstrate that this approach is able
18
+ to track time varying dynamical systems in synthetic examples, and experiment with the use
19
+ of time-delay embeddings. We then apply the DMDEnKF to real world seasonal influenza-like
20
+ illness data from the USA Centers for Disease Control and Prevention, and find that for short
21
+ term forecasting, the DMDEnKF is comparable to the best mechanistic models in the ILINet
22
+ competition.
23
+ Keywords
24
+ Dynamic mode decomposition; Ensemble Kalman filter; Data-driven modelling; Data assimilation;
25
+ Dynamical systems
26
+ 1
27
+ Introduction
28
+ Data assimilation refers to the collection of methods that integrate vast data sets with sophisticated
29
+ mathematical models, to track and forecast systems that may evolve or change [38]. The majority
30
+ of its applications lie in the earth sciences [51], however due to the generality of its techniques they
31
+ have also been successfully applied in a wide range of areas from medicine [18] to ecology [40].
32
+ The Kalman filter [36] is one such data assimilation technique widely used throughout industry [3]
33
+ that optimally combines predictions from a linear model with Gaussian data. Whilst traditionally
34
+ applied to a model’s state, the parameters of the model can simultaneously be filtered, leading to
35
+ 1
36
+ arXiv:2301.05504v1 [math.DS] 13 Jan 2023
37
+
38
+ what is known as the joint state-parameter estimation problem [33]. If the system being filtered is
39
+ nonlinear, alternative versions of the Kalman filter can be utilized such as the extended Kalman filter
40
+ [53], unscented Kalman filter [59] or ensemble Kalman filter (EnKF) [23]. The EnKF represents
41
+ the distribution of a system’s state with an ensemble of random samples, that can then be used to
42
+ estimate useful statistics like the state’s covariance via the sample covariance or a point estimate
43
+ of the state via the sample mean [23] and is well-suited for high-dimensional problems. All of these
44
+ methods require a model of the system, however if no model exists then one must be generated and
45
+ the most generalizable way to do this is via data-driven modelling.
46
+ Dynamic mode decomposition (DMD) is a data-driven modelling technique for identifying low di-
47
+ mensional, spatial and temporal patterns within a dynamical system directly from high-dimensional
48
+ data [54]. It does this by postulating the state vector is evolved via a linear system and looking for
49
+ a low-dimensional approximation of the eigenvalues (temporal modes) and corresponding eigenvec-
50
+ tors (spatial modes). Spatial modes can be thought of as modes that decompose state variables
51
+ into separate components that evolve together linearly in time. The corresponding temporal modes
52
+ describe whether a spatial mode is growing, decaying or stationary in time. DMD has been used
53
+ to approximate dynamical systems from measurement data in a multitude of fields, ranging from
54
+ epidemiology [49], finance [43] and neuroscience [12]. Due to its popularity, it has been extended
55
+ to systems that are nonlinear in their recorded measurement functions via Extended/Kernel DMD
56
+ [61]/[44], with one such extension Hankel-DMD [2] employing time-delay embeddings of the original
57
+ observables. In the presence of measurement noise, the standard DMD has been shown to induce a
58
+ systematic bias by asymmetrically attributing all noise to the model’s target output measurements
59
+ and none to its inputs during training [15]. This systematic bias, prompted the creation of noise
60
+ handling variants of DMD that directly account for the noise term [15], the Forward Backward
61
+ DMD [15] that performs DMD forwards and backward in time and combines the results, and To-
62
+ tal DMD (TDMD) [31] that minimizes the total least squares error as opposed to minimizing the
63
+ ordinary least squares error.
64
+ The aim of this paper is to develop an algorithm that iteratively improves the temporal modes
65
+ (eigenvalues) and state estimates produced by DMD with the EnKF as new data becomes available.
66
+ This would be highly useful for dynamical systems that make a change from growing or decaying
67
+ behaviour over time. While estimating just the state of the system using the DMD modes can be
68
+ done using a standard Kalman filter, without also filtering the model’s temporal mode, estimates
69
+ are likely to suffer if the system changes over time. Methods already exist that combine DMD with
70
+ the Kalman filter [45] or extended Kalman filter [46], which apply filtering to estimate the entire
71
+ system dynamics matrix. The filtering in our work is instead focused on efficiently tracking the
72
+ system’s temporal modes, and forecasting the system’s future states. DMD produces a linear model
73
+ which makes it a natural fit for the Kalman filter, however when a system’s state and temporal
74
+ modes are estimated simultaneously the filtering process becomes nonlinear. Hence, we need to use
75
+ a filter designed for a nonlinear model, and we chose the EnKF due to its versatility, scalability to
76
+ large dynamical systems, and ease of implementation [52]. While any DMD variant that produces
77
+ temporal modes would be compatible with the DMDEnKF framework, we use TDMD to remain
78
+ consistent with the EnKF’s assumption that noise is present in the data. In tandem, we apply
79
+ the DMDEnKF using a total least squares version of Hankel-DMD, henceforth referred to as the
80
+ Hankel-DMDEnKF, to investigate the effect time-delay embeddings have on our framework.
81
+ 2
82
+
83
+ To demonstrate the DMDEnKF method, we first test it on synthetically generated datasets. Ini-
84
+ tially, on a simple noisy oscillating system with a decreasing period of oscillation, we use the DM-
85
+ DEnKF to track the system’s temporal modes and compare results with the Hankel-DMDEnKF,
86
+ other iterative DMD variants, and “gold standard” filtering methods. Next, we simulate a pan-
87
+ demic and evaluate the DMDEnKF’s ability to track the system’s temporal modes and generate
88
+ multistep ahead forecasts.
89
+ Figure 1: ILI consultations as a percentage of total weekly GP consultations in the US from 2003 to end of
90
+ 2018. The data shows the annual peaks in ILI consultations that vary in size, timing and shape, which
91
+ would make them difficult to model with a simple SIR-type model.
92
+ Finally, we apply the DMDEnKF and Hankel-DMDEnKF to real seasonal influenza-like illness
93
+ (ILI) data in the United States from the Centers for Disease Control and Prevention (CDC) ILINet
94
+ [25] shown in Figure 1, with the aim of investigating their forecasting skills for ILI consultation
95
+ rates. ILI is defined as a fever with a cough or sore throat that has no known cause other than
96
+ influenza [25] and infects between 9 and 35 million people in the US each year [24]. Due to its
97
+ prevalence, a multitude of methods have already been developed to model the spread of ILI [14, 47]
98
+ and the approaches these models take can broadly be classified as either mechanistic or statistical
99
+ [37]. Mechanistic methods [5, 48] make explicit hypotheses about what is driving the spread of an
100
+ infectious disease, before then fitting parameters in the proposed models to the data. They have
101
+ the advantage of being highly interpretable making them useful when trying to understand how
102
+ one can control the spread of a disease [39], however can make assumptions that are oversimplified
103
+ [4]. For example, a simple SIR-type model would struggle to describe specific behaviours like the
104
+ drop in ILI consultations around Christmastime seen in Figure 1.
105
+ Statistical methods [11, 60]
106
+ are generally more versatile as they require fewer domain-specific assumptions, but both methods
107
+ achieve a similar predictive skill in real time on the ILINet dataset [50]. The DMDEnKF attempts
108
+ to find a middle ground between the two methods, remaining versatile by virtue of being purely
109
+ data-driven but also providing some level of interpretability via the associated DMD modes.
110
+ The remainder of this paper will be structured as follows. First, a brief summary of DMD, Hankel-
111
+ DMD and EnKF algorithms for completeness.
130
+ After which, the DMDEnKF algorithm will be
131
+ described in full. We will then apply the DMDEnKF and Hankel-DMDEnKF to synthetic data
132
+ and compare their performance against other pre-existing, iterative DMD variants. Finally, we will
133
+ use the DMDEnKF and Hankel-DMDEnKF on ILINet data to forecast the rate of ILI consultations
134
+ up to 4 weeks into the future and examine their performance.
135
+ 2
136
+ DMDEnKF
137
+ 2.1
138
+ Dynamic Mode Decomposition (DMD)
139
+ Consider an n dimensional state xk ∈ Rn measured at regular time intervals k = 1, ..., m. Assuming
140
+ this time-series data was generated by a linear dynamical system, the consecutive states xk and
141
+ xk+1 are connected via
142
+ xk+1 = Axk
143
+ (2.1)
144
+ for some unknown matrix A ∈ Rn×n. By denoting
145
+ X = [ x1  x2  ...  xm−1 ],   X′ = [ x2  x3  ...  xm ],   (2.2)
176
+ equation (2.1) can be written succinctly over all consecutive data pairs as
177
+ X′ = AX.
178
+ (2.3)
179
+ To minimize the mean squared error term Σ_{k=1}^{m−1} ∥xk+1 − Axk∥_2^2, the standard DMD defines
182
+ A = X′X+,
183
+ (2.4)
184
+ where X+ is the Moore-Penrose pseudoinverse [6] of X. Efficiently solving for the eigendecom-
185
+ position of A is the primary purpose of DMD, as these eigenvalues/eigenvectors correspond to
186
+ spatio-temporal patterns in the data.
187
+ The DMD method starts by applying the Singular Value Decomposition (SVD) to the data matrix
188
+ X, representing it as the matrix multiplication of 2 real-valued, orthonormal matrices (complex and
189
+ unitary if X ∈ Cn×m) U ∈ Rn×n, V ∈ Rm×m and a rectangular diagonal matrix with decreasing
190
+ non-negative real values (Σ ∈ Rn×m) in the form
191
+ X = UΣV∗.
192
+ (2.5)
193
+ The best rank r approximation of a matrix according to the Eckart-Young Theorem [22] is obtained
194
+ by truncating its SVD, hence by truncating equation (2.5) to a suitable rank r [26] we can compress
195
+ the data matrix with minimal loss of information, which we write as
196
+ X ≈ Ur Σr Vr∗.   (2.6)
199
+ By performing this compression, we are implicitly assuming that there exists a low dimensional
200
+ (≤ r), linear structure within the high-dimensional data.
201
+ 4
202
+
203
+ The Moore-Penrose pseudoinverse can be found directly from the SVD computed in equation (2.5)
204
+ as VΣ−1U∗. We use the rank r truncated matrices from equation (2.6) for reasons of efficiency,
205
+ setting
206
+ A = X′X+ ≈ X′ Vr Σr^{−1} Ur∗.   (2.7)
211
+ This approximation of A now acts only on an r dimensional subspace defined by Col(Ur). Hence,
212
+ we can restrict A onto this r dimensional subspace (representing the largest r POD modes of X)
213
+ and denote the restricted A ∈ Rr×r as
214
+ ˜A = Ur∗ A Ur ≈ Ur∗ X′ Vr Σr^{−1}.   (2.8)
220
+ We calculate the eigenvalues (λi), and corresponding eigenvectors (vi) of ˜A, and define
221
+ Λ = diag(λ1, ..., λr),   W = [ v1  v2  ...  vr ].   (2.9)
251
+ Reconstructing the eigenvalues/eigenvectors of the original operator A will provide insights into
252
+ the structure of the system [1] and allow us to propagate it forward in time. The eigenvalues of ˜A
253
+ (Λ) can be shown to be equal to the eigenvalues of the original operator A [34], however recovering
254
+ the original eigenvectors is more involved and can be done using either projected or exact DMD.
255
+ We use the exact DMD method introduced by Tu et al. [34] as it finds the exact DMD modes (Φ)
256
+ for all eigenvectors with non-zero λi, where Φ is defined as
257
+ Φ = X′ Vr Σr^{−1} W.   (2.10)
260
+ DMD modes with zero eigenvalues have no effect on the system’s dynamics, so this restriction of
261
+ exact DMD is of little consequence. This method finds A such that AX = X′ exactly provided
262
+ r ≥ rank(X) and X and X′ are linearly consistent [34].
263
+ With Λ and Φ in hand, we can construct an r dimensional approximation of A, however still need
264
+ to find the initial phase and amplitude of each mode. The standard method [54] for computing this
265
+ vector (b) is to rewrite the initial state x1 in a basis of the DMD modes via
266
+ b = Φ+x1.
267
+ (2.11)
268
+ It is worth noting that there exist alternative methods for example [13, 35] that focus on optimizing
269
+ b over all data points with additional conditions.
270
+ To summarise, the final solution to the discrete system can be written as
271
+ xk = ΦΛkb.
272
+ (2.12)
273
+ In the remainder of the paper, we call Λ the temporal modes and Φ the spatial modes.
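
As a concrete illustration of equations (2.5)-(2.12), here is a minimal NumPy sketch of exact DMD. It is our own illustrative code, not the authors' implementation; the function name exact_dmd, the toy data and the choice of truncation rank r are assumptions made for the example.

```python
import numpy as np

def exact_dmd(X, Xprime, r):
    """Exact DMD: returns temporal modes Lam, spatial modes Phi and amplitudes b."""
    # Rank-r truncated SVD of the input snapshots, equation (2.6)
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :r], S[:r], Vh[:r, :].conj().T
    # Low-dimensional operator restricted to the leading POD subspace, equation (2.8)
    Atilde = Ur.conj().T @ Xprime @ Vr / Sr
    Lam, W = np.linalg.eig(Atilde)          # temporal modes, equation (2.9)
    Phi = Xprime @ Vr / Sr @ W              # exact DMD modes, equation (2.10)
    b = np.linalg.pinv(Phi) @ X[:, 0]       # amplitudes from the first snapshot, equation (2.11)
    return Lam, Phi, b

# Toy usage on snapshots of a 2D rotation; equation (2.12) reconstructs the state at step k.
theta = np.pi / 16
A = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
snapshots = [np.array([1.0, 0.0])]
for _ in range(50):
    snapshots.append(A @ snapshots[-1])
data = np.column_stack(snapshots)
Lam, Phi, b = exact_dmd(data[:, :-1], data[:, 1:], r=2)
x10 = (Phi @ (Lam**10 * b)).real            # approximates the true state at k = 10
```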
274
+ 5
275
+
276
+ 2.1.1
277
+ Hankel-DMD
278
+ Hankel-DMD first augments the original, measured state xk ∈ Rn, by appending to it measurements
279
+ of the state at the previous d − 1 time steps
280
+ h(xk) = [ xk^T  xk−1^T  ...  xk−(d−1)^T ]^T,   (2.13)
288
+ to form a new state h(xk) ∈ Rdn. This is known as a time-delay embedding, and we refer to d
289
+ as the delay-embedding dimension. Taking time-delay embeddings, h(xk), to be our new states,
290
+ matrices X and X′ from equation (2.2) now become
291
+ X = [ h(xd)  h(xd+1)  ...  h(xm−1) ],   X′ = [ h(xd+1)  h(xd+2)  ...  h(xm) ],   (2.14)
320
+ With X and X′ defined above, Hankel-DMD proceeds exactly as the standard DMD algorithm,
321
+ generating eigenvalues Λ, DMD modes Φ and their initial states b as described above. The original
322
+ system can be reconstructed/forecast for all time steps from d onwards, by applying equation (2.12)
323
+ and restricting the result to the first n rows.
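
The delay-embedded snapshot matrices of equation (2.14) are straightforward to assemble; the helper below is an illustrative sketch (the name hankel_matrices and its interface are our own choices, not code from the paper).

```python
import numpy as np

def hankel_matrices(data, d):
    """Build the time-delay embedded matrices X and X' of equation (2.14).

    data: array of shape (n, m) whose columns are the states x_1, ..., x_m.
    d: delay-embedding dimension.
    """
    n, m = data.shape
    # Row block i holds the measurements delayed by i steps, newest block on top.
    X = np.vstack([data[:, d - 1 - i: m - 1 - i] for i in range(d)])
    Xprime = np.vstack([data[:, d - i: m - i] for i in range(d)])
    return X, Xprime

# Usage: feed the embedded matrices to the DMD routine sketched above, e.g. with d = 50.
```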
324
+ 2.1.2
325
+ Iterative DMD Variants
326
+ There exist other variants of DMD that are designed to be applied iteratively, and in this paper we
327
+ will compare these with the DMDEnKF in their ability to track a system’s eigenvalues and make
328
+ future state predictions. Streaming DMD [32] is an adaption of the standard DMD algorithm to
329
+ efficiently process new data as it becomes available, and the noise aware variant Streaming TDMD
330
+ [30] is the first variant we wish to compare against. The second method we will use for comparison
331
+ is Windowed DMD [62], where the standard DMD described above is applied over a sliding window
332
+ of the w most recent data snapshots only. The final method we will be comparing against is Online
333
+ DMD [62], specifically the variant of this algorithm that places an exponentially decaying weight ρ
334
+ on the importance of past measurements.
335
+ 2.2
336
+ Ensemble Kalman Filter (EnKF)
337
+ Consider a discrete-time, nonlinear dynamical system with a stochastic perturbation
338
+ xk = F(xk−1) + wk,
339
+ wk ∼ N(0, Qk),
340
+ (2.15)
341
+ where F is a nonlinear function F : Rn → Rn, xk ∈ Rn is the system’s state, wk ∈ Rn is a
342
+ stochastic perturbation and N is the normal distribution with mean 0 and covariance matrix Qk.
343
+ A measurement equation that relates what we observe to the true state of the system is given by
344
+ yk = H(xk) + vk,
345
+ vk ∼ N(0, Rk),
346
+ (2.16)
347
+ where H : Rn → Rl is the system’s observation operator, yk ∈ Rl is an observation of the system,
348
+ vk ∈ Rl is the noise in the observation and N is the normal distribution with mean 0 and covariance
349
+ matrix Rk. We focus on the instance relevant to our use case where H is linear, so can be represented
350
+ by a matrix H ∈ Rl×n.
351
+ 6
352
+
353
+ In general, filtering methods aim to combine information from the state-transition model (2.15) and
354
+ observation model (2.16) to compute the conditional density p(xk|Yk), where Yk = (y1, ..., yk).
355
+ The Kalman filter is the optimal filter if F and H are both linear and the stochastic perturbations
356
+ are normal [36]. The EnKF was developed to deal with the filtering problem where either the linear
357
+ or normal assumption (or both) is violated [23]. It exploits the Kalman formulation to propagate
358
+ an ensemble of the state into a region of high probability in such a way that the ensemble spread
359
+ would be consistent with the linear and normal model.
360
+ To begin the EnKF algorithm, an initial ensemble of N state estimates ˆx(1)0, ..., ˆx(N)0 is required. If
364
+ an ensemble is not available, one can be generated from initial state estimates ˆx0 and covariance
365
+ matrix P0 by taking N independent draws from N(ˆx0, P0).
366
+ Algorithm
367
+ The EnKF then acts as follows [23]:
368
+ Step 1: Propagate forward in time each ensemble member using equation (2.15) for i = 1, ..., N
369
+ via
370
+ ˆx(i)k|k−1 = F(ˆx(i)k−1|k−1) + w(i)k.   (2.17)
+ The notation ˆx(i)k|k−1 denotes the state estimate at time k of the ith ensemble member using only information up to time k − 1, and ˆx(i)k−1|k−1 represents the same ensemble member at time k − 1 using information up to time k − 1. Each w(i)k is independently drawn from N(0, Qk). The
384
+ current covariance matrix can also now be estimated via the sample covariance of the ensemble,
385
+ which we denote as ˆPk|k−1. This can then be used to estimate the Kalman Gain matrix ˆKk as
386
+ ˆKk = ˆPk|k−1HT (HˆPk|k−1HT + Rk)−1.
387
+ (2.18)
388
+ Step 2: Calculate the measurement innovation utilizing equation (2.16).
389
+ From measurement yk, we again use i = 1, ..., N and generate simulated measurements
390
+ y(i)k = yk + v(i)k,   (2.19)
+ where each v(i)k is an independent draw from N(0, Rk). These simulated measurements y(i)k are combined with the ensemble members ˆx(i)k|k−1 from equation (2.17) to define N measurement innovations
+ e(i)k = y(i)k − Hˆx(i)k|k−1.   (2.20)
+ The e(i)k represent samples from the distribution of the distance of the model's prediction from the measured value.
412
+ Step 3: Combine the model estimates in equation (2.17) and measurement innovation of equation
413
+ (2.20) via the estimated Kalman gain from (2.18) to update each ensemble member’s state estimate
414
+ ˆx(i)k|k = ˆx(i)k|k−1 + ˆKk e(i)k.   (2.21)
419
+ We can generate a point estimate for the state ˆxk using the mean of the N updated ensemble
420
+ members. This process then repeats every time a new state measurement becomes available, with
421
+ the updated ensemble from the previous data point becoming the initial ensemble for the new one.
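
A compact sketch of one forecast-and-update cycle (Steps 1-3 above) is given below, assuming a linear observation matrix H and a generic propagation function F; the function name enkf_step and the use of NumPy are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_step(ensemble, y, F, H, Q, R):
    """One EnKF cycle on an ensemble of shape (N, n); returns the updated ensemble."""
    N, n = ensemble.shape
    # Step 1: propagate each member and add model noise, equation (2.17)
    forecast = np.array([F(x) for x in ensemble])
    forecast += rng.multivariate_normal(np.zeros(n), Q, size=N)
    P = np.cov(forecast, rowvar=False)                 # sample covariance of the forecast
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # estimated Kalman gain, equation (2.18)
    # Step 2: perturbed observations and innovations, equations (2.19)-(2.20)
    y_sim = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    innovations = y_sim - forecast @ H.T
    # Step 3: analysis update of every member, equation (2.21)
    return forecast + innovations @ K.T

# The mean of the returned ensemble serves as the point estimate of the state.
```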
422
+ We combine these 2 previously described techniques to form the DMDEnKF. This new, hybrid
423
+ method uses DMD to generate a low dimensional model of a dynamical system that is then itera-
424
+ tively improved by the EnKF as new data emerges.
425
+ 7
426
+
427
+ 2.3
428
+ DMDEnKF
429
+ We now describe how we carry out filtering of the temporal modes and state of the system, while
430
+ keeping the spatial modes found by one’s chosen version of DMD on the “spin-up” fixed. We note
431
+ that once we allow the temporal modes to vary with the spatial modes being fixed, these are no
432
+ longer eigenvalues/eigenvectors, and we then call them temporal modes. Consider an n dimensional
433
+ state xk ∈ Rn measured at regular time intervals k = 1, ..., m and then measured iteratively at times
434
+ k = m + 1, ....
435
+ Algorithm
436
+ Step 1: Perform the chosen version of DMD on the dataset x1, ..., xm, defining X, X′ as before in
437
+ equation (2.2) to obtain the expression
438
+ xk = ΦΛkb,
439
+ (2.22)
440
+ where
441
+ Λ = diag(λ1, ..., λr),   Φ = [ d1  d2  ...  dr ],   b = [ b1  ...  br ]^T,   (2.23)
479
+ and defining λi, di, bi as the ith temporal mode, DMD mode, initial condition triplet of the r
480
+ retained modes. This acts as a spin-up process to generate a model we can then filter using the
481
+ EnKF.
482
+ Step 2: Define the matrices required for the EnKF’s ensemble initialisation, propagation via
483
+ equation (2.15), and measurement using equation (2.16).
484
+ First, rewrite each of the r temporal modes in polar coordinates as
485
+ λi = τieθii,
486
+ (2.24)
487
+ where τi ≥ 0, 0 ≤ θi < 2π and i2 = −1. As xk ∈ Rn, the temporal modes in the DMD model’s
488
+ spectrum will either be real or in a complex conjugate pair. When filtering, we view the temporal
489
+ modes as a time varying parameter.
490
+ However, we must enforce that the real temporal modes
491
+ remain real and complex conjugate pairs remain intact, as this ensures the state output by the
492
+ model will still be real. We do this by defining the filterable model parameters µi as new variables
493
+ for i = 1, ..., r
494
+ µi = τi, if θi = 0 or there is no j < i such that λj∗ = λi;   µi = θi, otherwise.   (2.25)
502
+ Written in this way, these µi’s represent all the possible degrees of freedom in the model’s temporal
503
+ modes under the additional constraint of producing a real state estimate. By maintaining a note
504
+ of the positional indexes of each complex conjugate pair produced in the initial DMD, it is possible
505
+ to recover the λi representation from the µi’s. While this transformation technically requires the
506
+ full list of µi’s, we informally write Λ(µi) = λi to symbolize the reversion from µi’s back to λi’s.
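
A small sketch of this bookkeeping is shown below: eigenvalues are mapped to the filterable parameters µi of equation (2.25) and back. The function names and the pair_of list used to record conjugate pairs are our own illustrative choices, and the sketch assumes any purely real eigenvalue is non-negative, as in the text.

```python
import numpy as np

def eigs_to_params(lams, tol=1e-10):
    """Map eigenvalues to filterable parameters mu_i, recording conjugate-pair structure."""
    mus, pair_of = [], [None] * len(lams)
    for i, lam in enumerate(lams):
        j = next((j for j in range(i) if abs(np.conj(lams[j]) - lam) < tol), None)
        if j is None:
            mus.append(abs(lam))                      # real eigenvalue, or first member of a pair: store tau_i
        else:
            mus.append(np.angle(lam) % (2 * np.pi))   # second member of a pair: store theta_i
            pair_of[i] = j
    return np.array(mus), pair_of

def params_to_eigs(mus, pair_of):
    """The reversion written informally as Lambda(mu_i) = lambda_i in the text."""
    lams = np.array(mus, dtype=complex)               # unpaired entries are kept as real eigenvalues
    for i, j in enumerate(pair_of):
        if j is not None:
            lams[i] = mus[j] * np.exp(1j * mus[i])    # rebuild lambda_i from the stored modulus and argument
            lams[j] = np.conj(lams[i])                # keep the conjugate pair intact
    return lams
```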
507
+ Ensemble initialisation: We can now define an augmented joint parameter state z0 ∈ Rn+r to
508
+ be used as the initial state for the EnKF
509
+ z0 = [ xm^T  µ1  ...  µr ]^T.   (2.26)
520
+ 8
521
+
522
+ We denote the joint parameter state at time m + k as zk ∈ Rn+r. To generate an initial ensemble
523
+ from this state, we first define sample covariance C = (1/m)(X′ − ΦΛΦ+X)(X′ − ΦΛΦ+X)T ,
524
+ which represents the current state uncertainty based on prediction errors in the spin-up DMD. We
525
+ then form the initial covariance matrix
526
+ P0 = [ C  0 ; 0  α2 Ir ],   (2.27)
535
+ where α2 > 0, Ir is the r-dimensional identity matrix, and the α2Ir term determines the initial
536
+ uncertainty in the spin-up DMD’s temporal modes. Take independent draws from N(z0, P0) until
537
+ the ensemble is sufficiently large. The optimal ensemble size will vary from problem to problem,
538
+ adding ensemble members will increase accuracy but at the cost of computational efficiency.
539
+ Propagation: Using the notation z^i_k to signify the ith element of zk, we define the matrix Λzk ∈ R^{r×r} for state zk as
+ Λzk = diag( Λ(z^{n+1}_k), ..., Λ(z^{n+r}_k) ).   (2.28)
561
+ The EnKF’s propagation equation can be written as
562
+ zk+1 = [ ΦΛzkΦ+  0 ; 0  Ir ] zk + wk.   (2.29)
571
+ For convenience, we introduce the notation z^{i:j}_k ∈ R^{j−i+1} to denote the ith through to the jth element of zk, where i ≤ j:
+ z^{i:j}_k = [ z^i_k  ...  z^j_k ]^T.   (2.30)
585
+ Equation (2.29) propagates z^{1:n}_k, representing the state in the DMD framework xm+k, forward in time using the standard DMD equation with the updated temporal modes from Λzk. The vector z^{n+1:n+r}_k represents the current estimate of the temporal modes in their µi representation and is
592
+ unchanged other than the addition of noise by the propagation equation, for although we assume
593
+ the temporal modes vary in time no direction of drift in the parameters is explicitly foreknown.
594
+ The vector wk ∈ Rn+r is a normally distributed variable wk ∼ N(0, Qk), and this represents the
595
+ uncertainty within the model of the system. We construct Qk as follows,
596
+ Qk = [ α1 In  0 ; 0  α2 Ir ],   (2.31)
605
+ where α1 and α2 are constants determined by the user such that α2 ≪ α1. This construction
606
+ with Qk a diagonal matrix assumes model errors for each element of zk are uncorrelated with
607
+ one another.
608
+ The condition α2 ≪ α1 ensures that the state of the DMD system z^{1:n}_k changes significantly faster than its temporal modes z^{n+1:n+r}_k, as parameters by definition should vary
614
+ slowly in time. Furthermore, for the temporal mode’s moduli being filtered, it prevents the strictly
615
+ positive modulus dropping below 0.
616
+ Measurement: We write the EnKF’s measurement equation as
617
+ yk = [ In  0 ; 0  0 ] zk + vk,   (2.32)
626
+ 9
627
+
628
+ where yk ∈ Rn are observations of the DMD state xm+k, and vk ∈ Rn is a normally distributed
629
+ variable ∼ N(0, Rk) representing the noise in the measurements. We assume new measurements
630
+ yk to be available for the full DMD state z^{1:n}_k but not its temporal modes z^{n+1:n+r}_k, as this is
635
+ consistent with the format of the data used to generate the spin-up DMD model. We also assume
636
+ uncorrelated measurement noise on each dimension of the state, so choose a diagonal matrix Rk.
637
+ Step 3 State measurements xm+k at times k = 1, ... are being iteratively generated. By setting
638
+ yk = xm+k as each new measurement arrives, we can iteratively apply the EnKF to produce a
639
+ hybrid estimate for zk that combines model predictions from zk−1 and noisy measurement yk.
640
+ A brief summary of how the EnKF does this is provided in Section 2.2, and a more expansive
641
+ description can be found at [42].
642
+ Step 4: The state of the original system xm+k can be reconstructed from zk by simply taking its first n elements z^{1:n}_k. Predictions p steps ahead at time m + k can also be forecast from zk via
+ xm+k+p = Φ Λzk^p Φ+ z^{1:n}_k.   (2.33)
649
+ The Hankel-DMDEnKF is defined algorithmically in exactly the same way, with the only difference
650
+ being that Hankel-DMD is applied over the “spin-up” period as opposed to standard DMD.
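
To make Steps 2-4 concrete, the sketch below shows how the joint state zk could be advanced with equation (2.29) and used for the p-step forecast of equation (2.33). It assumes the spin-up quantities Phi and its pseudoinverse, the µ-parameterisation helpers sketched earlier, and an EnKF update routine already exist; all names are illustrative rather than the authors' code.

```python
import numpy as np

def propagate_joint_state(z, Phi, Phi_pinv, pair_of, n, Q, rng):
    """Equation (2.29): advance the DMD state using the current temporal modes."""
    x, mus = z[:n], z[n:]
    Lam_z = np.diag(params_to_eigs(mus, pair_of))     # Lambda_{z_k}, equation (2.28)
    x_next = (Phi @ Lam_z @ Phi_pinv @ x).real        # state moves; temporal modes only pick up noise
    return np.concatenate([x_next, mus]) + rng.multivariate_normal(np.zeros(len(z)), Q)

def forecast(z, Phi, Phi_pinv, pair_of, n, p):
    """Equation (2.33): p-step-ahead prediction from the current joint state."""
    x, mus = z[:n], z[n:]
    Lam_z = np.diag(params_to_eigs(mus, pair_of))
    return (Phi @ np.linalg.matrix_power(Lam_z, p) @ Phi_pinv @ x).real

# In the filter, the measurement operator observes only the first n entries of z, as in equation (2.32).
```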
651
+ 3
652
+ Synthetic Applications
653
+ 3.1
654
+ Comparison against other iterative DMD variants
655
+ To test the DMDEnKF, we first apply it to data generated from a synthetic system with time
656
+ varying eigenvalues, which we aim to track. The dynamics of this system are governed by the 2
657
+ dimensional rotation matrix, where the angle of rotation θk increases linearly from π/64 to π/8
658
+ over the course of 500 time steps. The evolution of the state xk of the system can hence be written
659
+ as
660
+ xk+1 = [ cos(θk)  −sin(θk) ; sin(θk)  cos(θk) ] xk,   x1 = [ 1  0 ]^T,   (3.1)
+ where θk = π/64 + (k−1)(7π/64)/499 and k = (1, ..., 500).
678
+ We assume noisy measurement values yk to be available for the state at each time step, such that
679
+ yk = xk + vk,
680
+ vk ∼ N(0, σ2I2),
681
+ (3.2)
682
+ where each experiment σ = 0.05 or 0.5 to simulate a low or high level of measurement noise
683
+ respectively.
684
+ The 500 values of yk (shown in Figure 2) are used to train the DMDEnKF and
685
+ Hankel-DMDEnKF, with the first 100 time steps being used for the spin-up process described in
686
+ Step 1 of the DMDEnKF algorithm to produce the output described in equation (2.23).
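
For reference, one run of this synthetic experiment (equations (3.1)-(3.2)) can be generated with a few lines of NumPy; the variable names and the random seed below are our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)
steps, sigma = 500, 0.5                     # use sigma = 0.05 for the low-noise experiment
thetas = np.pi / 64 + np.arange(steps) * (7 * np.pi / 64) / (steps - 1)

x = np.array([1.0, 0.0])
states = [x]
for theta in thetas[:-1]:                   # equation (3.1): rotate by the slowly increasing angle
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    x = R @ x
    states.append(x)
states = np.array(states)

measurements = states + sigma * rng.standard_normal(states.shape)   # equation (3.2)
spin_up, stream = measurements[:100], measurements[100:]            # spin-up DMD, then filtering
```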
687
+ We will also train the iterative variants of DMD described at the end of Section 2.1 (Streaming
688
+ TDMD1, Windowed DMD and Online DMD) on this dataset to compare their ability to track the
689
+ 1As the synthetic dataset is small, it is computationally tractable to apply batch methods to the data. Hence,
690
+ instead of applying the true Streaming TDMD algorithm, we use batch TDMD over all data up to the current
691
+ time step as a proxy for Streaming TDMD utilizing code from the PyDMD library [19].
692
+ As Streaming TDMD
693
+ approximates the results of TDMD with the only differences occurring due to additional data compression steps in
694
+ Streaming TDMD’s algorithm, we believe this to be an acceptable substitution.
695
+ 10
696
+
697
+ Figure 2: Time series for a synthetic system with a linearly increasing eigenvalue argument, showing the
698
+ state’s first dimension with no, low (σ = 0.05) and high (σ = 0.5) measurement noise.
699
+ system’s time varying eigenvalues against that of the DMDEnKF. Within the Windowed DMD
700
+ algorithm, we replace DMD with TDMD to allow for this method to effectively handle the noise
701
+ in the data, henceforth referring to this amalgamation of the two methods as Windowed TDMD.
702
+ To implement Online DMD, we use code made available by its creators here [29]. Computational
703
+ parameters were set as follows; window size w = 10 for Windowed TDMD, exponential decay
704
+ rate ρ = 0.9 for Online DMD, delay-embedding dimension d = 50 for the Hankel-DMDEnKF and
705
+ spin-up time steps m = 100 for the DMDEnKF as previously stated.
706
+ At each time step k, the system’s true eigenvalues can be written in modulus-argument form as
707
+ λk = 1e±θki,
708
+ (3.3)
709
+ and for each time step where the models are defined their estimates of the system’s eigenvalues can
710
+ also be written as
711
+ ˆλk = ˆτke±ˆθki.
712
+ (3.4)
713
+ We start by comparing the errors in each method’s estimate of the constant eigenvalue modulus
714
+ (ˆτk−1). A thousand runs of the synthetic data were generated for each value of σ, and the difference
715
+ of each method’s eigenvalue modulus and argument from their true values at every time step after
716
+ the spin-up period were collected. When any of the methods failed to identify the eigenvalues as a
717
+ complex conjugate pair at a given time step in a run, the dominant eigenvalue’s modulus was used
718
+ for modulus error calculations. The average errors in the eigenvalue modulus estimates are shown
719
+ in Table 1.
720
+ For all levels of measurement noise, Streaming TDMD estimated the eigenvalue modulus the most
721
+ accurately. This is due to the method’s assumption of a stationary system, hence assigning an
722
+ equal weight to the importance of each data point, which works well in the case of estimating a
723
+ constant parameter. At low levels of measurement noise as seen in the first column of Table 1,
724
+ Windowed TDMD, Online DMD, the DMDEnKF and Hankel-DMDEnKF all performed similarly
725
+ 11
726
+
727
+ 2
728
+ dimension
729
+ 0
730
+ 1
731
+ XkTrue State
732
+ yk for = 0.05
733
+ -2
734
+ yk for o = 0.5
735
+ 0
736
+ 100
737
+ 200
738
+ 300
739
+ 400
740
+ 500
741
+ TimestepsIterative DMD
742
+ Mean Eigenvalue Modulus Error
743
+ Variant
744
+ σ = 0.05
745
+ σ = 0.5
746
+ Windowed TDMD
747
+ 9.82 × 10−3
748
+ 1.39
749
+ Online DMD
750
+ 6.04 × 10−3
751
+ 3.06 × 10−1
752
+ Streaming TDMD
753
+ 2.31 × 10−4
754
+ 2.50 × 10−3
755
+ DMDEnKF
756
+ 8.07 × 10−3
757
+ 1.89 × 10−2
758
+ Hankel-DMDEnKF
759
+ 9.49 × 10−3
760
+ 1.38 × 10−2
761
+ Table 1: Mean absolute errors in the synthetic system’s eigenvalue modulus estimates produced by each
762
+ iterative DMD variant over all time steps over the course of all 1000 runs. Measurement noise is set to
763
+ either low levels with σ = 0.05 (left) or high levels with σ = 0.5 (right). Streaming TDMD scored
764
+ significantly lower errors than all other methods, and as noise levels increased, errors in Windowed TDMD
765
+ and Online DMD grew significantly larger than those produced by the DMDEnKF and Hankel-DMDEnKF.
766
+ well with mean eigenvalue modulus errors below 0.01. As errors in the eigenvalue modulus grow
767
+ exponentially when forecasting future states, these 4 methods could produce acceptable short term
768
+ forecasts but would quickly diverge from the true state as the forecast horizon was extended. At
769
+ high levels of noise shown in the second column of Table 1, Windowed TDMD and Online DMD’s
770
+ eigenvalue modulus estimates degrade significantly, making them unsuitable for forecasting in this
771
+ scenario. The errors in the DMDEnKF and Hankel-DMDEnKF remain fairly small, however are
772
+ still an order of magnitude greater than those produced by Streaming TDMD.
773
+ A typical trajectory of the eigenvalue argument estimates (ˆθk) for each method over the course
774
+ of one run from the end of the spin-up period onwards can be seen in Figures 3a and 3c. The
775
+ error distributions for each method’s eigenvalue argument estimates (ˆθk − θk) over all 1000 runs
776
+ are plotted in Figures 3b and 3d.
777
+ At low levels of noise as seen in Figures 3a and 3b, all 5 methods on average underestimated the
778
+ eigenvalue argument of the system. This is to be expected as the eigenvalue argument is increasing
779
+ with time, meaning that all but the last data pair available to each method would have been
780
+ generated using an argument smaller than its current value. Streaming TDMD exhibited the worst
781
+ performance, again due to its equal weighting of every data point, however in this instance being a
782
+ negative quality as it hampers the model’s ability to adapt to fresh data that reflects the changing
783
+ parameter. Windowed TDMD, Online DMD, the DMDEnKF and Hankel-DMDEnKF all performed
784
+ similarly. Online DMD produced a tighter error distribution, but with a slightly larger bias than
785
+ Windowed TDMD. This suggests that Online DMD’s soft thresholding reduces the model volatility
786
+ caused by measurement noise compared to the hard cut-off employed by Windowed TDMD. For
787
+ this same reason however, Online DMD is slower to adapt to new measurements than Windowed
788
+ TDMD, leading to a larger bias below the system’s true eigenvalue argument. The DMDEnKF
789
+ and Hankel-DMDEnKF performed very similar to Windowed TDMD at this noise level, however
790
+ tweaks to the magnitude of the DMDEnKF’s system uncertainty matrix can be made to balance
791
+ the speed of model innovation with its volatility and produce distributions closer to that of Online
792
+ DMD if required.
793
+ At higher noise levels shown in Figures 3c and 3d, the performance of Windowed TDMD and Online
794
+ DMD significantly degrades. Placing a larger weight on more recent samples allowed these methods
795
+ 12
796
+
797
+ (a) Eigenvalue argument trajectory for σ = 0.05.
798
+ (b) Error distribution for σ = 0.05.
799
+ (c) Eigenvalue argument trajectory for σ = 0.5.
800
+ (d) Error distribution for σ = 0.5.
801
+ Figure 3:
802
+ Estimates of the synthetic system’s eigenvalue argument produced by each iterative DMD
803
+ variant. Presented are typical trajectories of the eigenvalue argument at each time step over the course of 1
804
+ experiment’s run (left) and error distributions of the difference between the true system’s eigenvalue
805
+ argument and the estimated eigenvalue argument over all time steps over the course of all 10 runs (right).
806
+ Measurement noise is set to either low levels with σ = 0.05 (top) or high levels with σ = 0.5 (bottom). The
807
+ DMDEnKF and Hankel-DMDEnKF experience similar errors to Online DMD and Windowed TDMD at
808
+ low measurement noise, but track the eigenvalue argument much more accurately than them for high
809
+ measurement noise.
810
+ to quickly adapt to changes in the system’s parameters, however as the noise increases this induces
811
+ an extreme volatility in their respective models.
812
+ The performance of Streaming TDMD is not
813
+ largely changed from the low noise case, still lagging behind the true system values but somewhat
814
+ insulated from the noise by its symmetric treatment of all data points. Here the benefit of explicit
815
+ inclusion of measurement noise in the DMDEnKF framework becomes apparent, as at this noise
816
+ level the DMDEnKF and Hankel-DMDEnKF are the only techniques tested capable of producing
817
+ an accurate eigenvalue argument estimate.
818
+ Furthermore, here we see the first significant difference in the performance of the DMDEnKF and
819
+ Hankel-DMDEnKF, as the DMDEnKF’s error distribution has a thin tail extending down to −π/8
820
+ which is not present in the error distribution of the Hankel-DMDEnKF. These additional errors are
821
+ caused by the spin-up of the DMD for the DMDEnKF method occasionally failing to identify the
822
+ system’s eigenvalues as a complex conjugate pair (empirically, this happens ∼ 3% of the time), due
823
+ to the increased noise in the data. When this happens, the DMDEnKF catastrophically fails, as
824
+ the EnKF is unable to generate complex eigenvalues from real ones regardless of how many future
825
+ time steps are filtered due to its formulation in equation (2.25). This failure of the DMDEnKF can
826
+ be mitigated in the following way. If the errors produced by the DMDEnKF during the filtering
827
+ stage are deemed too large (e.g. exceed a given threshold) for a prolonged period of time, then the
828
+ spin-up DMD process can be rerun on an extended dataset consisting of the original spin up data,
829
+ plus the newly available data used so far in the filtering step. By including more data in the spin-up
905
+ process, the spin-up DMD model is more likely to successfully capture the signal component in the
906
+ data as opposed to measurement noise, and hence produce eigenvalues with the same structure as
907
+ those of the true system. Time-delay embeddings make the SVD step in the DMD algorithm more
908
+ robust to measurement noise [16]. Hence, while the Hankel-DMDEnKF is similarly restricted by
909
+ the eigenvalues it can produce at the filtering stage, in all 1000 runs of the synthetic data the spin
910
+ up Hankel-DMD was able to identify the system’s eigenvalues to be a complex conjugate pair, so
911
+ this was not an issue for the Hankel-DMDEnKF.
912
+ 3.2
913
+ Comparing against DMD with a particle filter
914
+ Having compared the performance of the DMDEnKF against other iterative DMD variants, we
915
+ now focus on evaluating the filtering component of the algorithm. Since the linear DMD model
916
+ acts nonlinearly in the filter when applied to both the model’s state and eigenvalues, we compare
917
+ the EnKF filter with a particle filter. Particle filters [27] have been show to converge to the optimal
918
+ filter as the number of particles tends to infinity for general nonlinear models with non-Gaussian
919
+ noise [17]. However, particle filters are restricted to low dimensional systems only, as the number of
920
+ particles required scales approximately exponentially with the dimension of the state [57]. Hence,
921
+ we compare the DMDEnKF and Hankel-DMDEnKF with a DMD plus particle filter which we will
922
+ take to be the “gold standard” estimation to assess how well the EnKF does with the nonlinear
923
+ filtering problem.
924
+ We use the same synthetic system (3.1) with a linearly increasing eigenvalue argument as in the
925
+ previous subsection to generate data with high levels of measurement noise (σ = 0.5); a trajectory
926
+ of which can be seen in Figure 2. Again, the time-delay embedding dimension d = 50 for the
927
+ Hankel-DMDEnKF, and the first 100 time steps are used to train a spin-up DMD model, with the
928
+ next 400 used to filter the state and spin-up model’s eigenvalues.
929
+ The DMDEnKF’s filter state thus has dimension 4 (2 state dimensions and 2 temporal modes),
930
+ while the Hankel-DMDEnKF’s filter state is of dimension 102 (100 state dimensions and 2 temporal
931
+ modes). To generate a “gold standard” solution, at the filtering step we use a particle filter with
932
+ 10,000 particles, applying multinomial importance resampling [20] every time the effective sample
933
+ size falls below half the number of particles to avoid sample degeneracy [21]. For the DMDEnKF
934
+ and Hankel-DMDEnKF at the filtering step, we run the EnKF with varying numbers of ensemble
935
+ members (N), to see if as N increases their estimates mean and covariance will tend to that of
936
+ the particle filter ensemble. We generated 1000 runs of the synthetic data to apply the particle
937
+ filter/EnKF with each value of N to and collected the errors in the eigenvalue argument estimates
938
+ for each method at every time step.
939
+ As can be seen in Figure 4a, the DMD particle filter with 10,000 particles produces an extremely
940
+ tight error distribution that is slightly biased to produce estimates below that of the true eigen-
941
+ value’s argument. This is to be expected, as mentioned in the previous subsection, due to the
942
+ system’s eigenvalue argument constantly increasing. There is also a thin tail in the error distribu-
943
+ tion that extends down to −π/8. This is again a result of the spin up DMD sometimes failing to
944
+ identify a complex conjugate eigenvalue pair, trapping the particle filter in the faulty assumption
945
+ that the eigenvalues are real.
946
+ 14
947
+
948
+ (a) Error distributions.
949
+ (b) Mean squared errors.
950
+ Figure 4: Error distributions (left) and mean squared errors (right) for estimates of the synthetic system’s
951
+ eigenvalue arguments produced by the DMDEnKF and Hankel-DMDEnKF with varying numbers of
952
+ ensemble members (N) against those produced by a particle filter with 10,000 particles. Increasing N
953
+ quickly leads to error levels in the DMDEnKF and Hankel-DMDEnKF that are similar to those produced by
954
+ their respective “gold standards”.
955
+ For low numbers of ensemble members (N = 5), the DMDEnKF and Hankel-DMDEnKF are centred
956
+ at a similar value to the “gold standard”. However, they produce a far larger spread with long tails
957
+ in both directions that imply a lack of robustness with this few ensemble members. With only a
958
+ small increase to N = 10, both methods become more stable, as although they still have a larger
959
+ variance than the particle filter, the long positive tails from N = 5 have been eliminated. A similar
960
+ pattern occurs as we move to N = 20, with more ensemble members resulting in a tighter error
961
+ distribution. At this point, the Hankel-DMDEnKF’s distribution can be distinguished from that
962
+ of the DMDEnKF and DMD particle filter by its aforementioned lack of a persistent thin negative
963
+ tail. By N = 40, the main peaks of the DMDEnKF, Hankel-DMDEnKF and “gold standard” are
964
+ almost indistinguishable on the graphical scale, with the DMDEnKF and DMD particle filter both
965
+ sharing a thin negative tail.
966
+ Figure 4b shows how the mean squared error for the eigenvalue arguments predicted by the DM-
967
+ DEnKF and Hankel-DMDEnKF are affected by varying the number of ensemble members. For
968
+ the DMDEnKF, errors initially sharply decline as N is increased, however on this small synthetic
969
+ system returns diminish quickly after N = 20. By N = 50, we achieve a mean squared error with
970
+ the DMDEnKF only ∼ 3% larger than that of the “gold standard”, despite using 200 times fewer
971
+ particles. When comparing the Hankel-DMDEnKF to the “gold standard”, the errors in the DMD
972
+ particle filter’s eigenvalue estimates are skewed by the runs in which the spin up DMD was unable
973
+ to identify a complex conjugate eigenvalue pair, as Hankel-DMD did not encounter this problem
974
+ on these synthetic examples. To attempt to fairly compare the filtering methods, we remove all
975
+ runs in which the spin-up DMD failed in this way, before again calculating the mean squared error
976
+ for the DMD particle filter and recording it in Figure 4b. A similar pattern of reducing errors with
977
+ diminishing returns can be seen for the Hankel-DMDEnKF as ensemble size is increased, and by
978
+ N = 50 its mean squared error is within 5% of the newly calculated DMD particle filter’s score.
979
+ Our results show that in this simple, synthetic case at least, the EnKF is an efficient and effective
980
+ solution to the nonlinear filtering problem that arises within the DMDEnKF framework.
981
+ 3.3
1043
+ Tracking a synthetically generated pandemic
1044
+ Lastly, we test the DMDEnKF’s performance on synthetic data designed to simulate a simple
1045
+ pandemic with a state xk representing the level of infection in 3 different population classes. The
1046
+ system’s dynamics are governed by a matrix A ∈ R3×3 that we randomly generate with non-
1047
+ negative elements each being drawn from the Uniform distribution U[0, 1). The (i, j)th element of
1048
+ A represents how the level of infection in class j at time k will affect the level of infection in class i
1049
+ at time k + 1. To control whether the synthetic pandemic is spreading or dying off, we then define
1050
+ a new matrix ˆA = A/λ1, where λ1 is the largest eigenvalue of A, thus ensuring the spectral radius
1052
+ ρ(ˆA) = 1. By introducing a constant γ, we can replace A with γ ˆA causing the state to grow if
1053
+ γ > 1 or decay for γ < 1. To simulate a pandemic, we linearly decrease γ from 1.01 to 0.99 over the
1054
+ course of the experiment’s 1000 time steps. The initial state used is a vector of ones. The system
1055
+ that generates the synthetic data can be written as
1056
+ xk+1 = γk ˆAxk,    x1 = [1, 1, 1]^T,    (3.5)
+ where the state xk ∈ R3, γk = 1.01 − 0.02(k−1)/999
1067
+ and k = (1, ..., 1000). We assume not to have access
1068
+ to the true state of the system xk but instead noisy measurements yk defined by
1069
+ yk = xk + vk,    (3.6)
1071
+ The constant σ that governs the level of measurement noise is set to σ = 0.05 to represent low
1072
+ noise and σ = 0.5 for high noise as in (3.2). Figure 5 shows the values of the system’s three state
1073
+ dimensions and the respective available measurements over the course of one run.
1074
+ Figure 5:
1075
+ Time series for a synthetic system that simulates a pandemic, showing all 3 dimensions of the
1076
+ state with no, low (σ = 0.05) and high (σ = 0.5) measurement noise.
1077
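+ As an illustration, a minimal sketch of how a synthetic trajectory of (3.5) and (3.6) could be generated is given below; the function and variable names are ours, and we assume, as in (3.2), isotropic Gaussian measurement noise with standard deviation σ.
+
+ import numpy as np
+
+ def generate_pandemic_data(sigma, n_steps=1000, seed=0):
+     rng = np.random.default_rng(seed)
+     A = rng.uniform(0.0, 1.0, size=(3, 3))                # non-negative random dynamics
+     A_hat = A / np.max(np.abs(np.linalg.eigvals(A)))      # rescale so the spectral radius is 1
+     gammas = np.linspace(1.01, 0.99, n_steps)             # gamma_k decreases linearly over the run
+     x = np.ones(3)                                        # x_1 = (1, 1, 1)^T
+     states = [x]
+     measurements = [x + sigma * rng.standard_normal(3)]   # y_k = x_k + v_k  (3.6)
+     for k in range(n_steps - 1):
+         x = gammas[k] * A_hat @ x                         # x_{k+1} = gamma_k * A_hat * x_k  (3.5)
+         states.append(x)
+         measurements.append(x + sigma * rng.standard_normal(3))
+     return np.array(states), np.array(measurements)
+
+ # e.g. one low-noise and one high-noise run, as used in this subsection
+ x_low, y_low = generate_pandemic_data(sigma=0.05)
+ x_high, y_high = generate_pandemic_data(sigma=0.5)
+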
+ All five DMD variants tested had their computational parameters set to the same values as those
1078
+ used in the synthetic experiments in Section 3.1. The only small difference was that Streaming
1079
+ TDMD, Windowed TDMD, the DMDEnKF and Hankel-DMDEnKF truncated the data by remov-
1080
+ ing the smallest singular value to reduce model instability caused by what was often a very low
1081
+ signal-to-noise ratio in this direction. Online DMD did not apply any truncation to the data as the
1103
+ method was not designed to do so; however, it did not appear to suffer from any stability issues as
1104
+ a consequence.
1105
+ The first 100 measurements (y1, ..., y100) were used to initialize the models, and as each new data
1106
+ point (y100, ..., y1000) was successively fed into the models, they produced 50 step ahead forecasts
1107
+ (ˆx150, ..., ˆx1050).
1108
+ We generate 1000 data points, whereas a standard flu season lasts around 20
+ weeks. For this reason, we chose to forecast 50 steps ahead, so that one forecast horizon corresponds
+ to roughly 1 week on this more realistic timescale. The relative prediction errors ˆek = ∥xk − ˆxk∥/∥xk∥ could then be calculated
1113
+ for k = (150, ..., 1000) and the mean of these errors was the main metric we used to evaluate the
1114
+ forecasting skill of each method over the course of one run.
1115
+ A thousand runs were performed
1116
+ for both low and high levels of noise and the empirical cumulative distributions of 50 step ahead
1117
+ forecast mean run relative errors for low noise (σ = 0.05) can be seen in Figure 6.
1118
+ Figure 6:
1119
+ Cumulative error distributions of the mean run relative errors for the 50 step ahead forecasts of
1120
+ each iterative DMD variant. Mean relative errors were calculated over all time steps for each run of the
1121
+ experiment, with the results from 1000 runs under low levels of measurement noise (σ = 0.05) displayed.
1122
+ Forecast errors had a wide range for some methods, due to exponentially compounding errors caused by
1123
+ forecasting 50 steps ahead. The DMDEnKF, Hankel-DMDEnKF and Online DMD produced errors orders
1124
+ of magnitude smaller than those of Streaming TDMD and Windowed TDMD.
1125
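+ The sketch below spells out this error calculation for a single run. For simplicity it iterates a fixed one-step matrix, whereas in practice each method updates its model as new data arrives; the zero-based indexing and function names are ours.
+
+ import numpy as np
+
+ def n_step_forecast(A_fit, y, n=50):
+     # iterate a fitted one-step model n times, starting from the measurement y
+     x_hat = np.array(y, dtype=float)
+     for _ in range(n):
+         x_hat = A_fit @ x_hat
+     return x_hat
+
+ def mean_run_relative_error(A_fit, measurements, true_states, horizon=50, first=150):
+     errors = []
+     for k in range(first, len(true_states)):
+         x_hat = n_step_forecast(A_fit, measurements[k - horizon], n=horizon)
+         e_k = np.linalg.norm(true_states[k] - x_hat) / np.linalg.norm(true_states[k])
+         errors.append(e_k)
+     return float(np.mean(errors))   # the mean run relative error summarised in Figure 6
+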
+ The first noteworthy feature of the cumulative error distributions is the wide range in some methods'
1126
+ forecast errors. This is a result of the 50 step ahead forecasts being produced by training each
1127
+ model to forecast 1 step ahead, then applying the trained model to the data 50 times. As such,
1128
+ forecast errors compound exponentially and small errors over a 1-step forecast horizon can become
1129
+ vast after 50 iterations. Inspecting the individual methods, we see Windowed TDMD to be the
1130
+ worst performing method. This is due to its aforementioned instability under measurement noise
1131
+ caused by considering only a small subset of the data at a time. This instability could be reduced
1132
+ by increasing the window size (w) computational parameter, however as w increases the model’s
1133
+ ability to track a system that changes with time diminishes. Streaming TDMD had the second-
1134
+ largest errors, caused by the method’s assumption of a stationary system hindering its ability to
1135
+ correctly adapt to the system's changing eigenvalues as new data became available. In the majority
1158
+ of cases, Online DMD, the DMDEnKF and Hankel-DMDEnKF all performed similarly well. All
1159
+ three methods exhibited cumulative error distributions tightly concentrated around a low error
1160
+ value; however, in a few runs the DMDEnKF became unstable and produced large errors. It is
+ clear even at low levels of noise that the forecasting performance of Online DMD, the DMDEnKF
+ and Hankel-DMDEnKF is far superior on this type of system to that of Windowed TDMD and
1163
+ Streaming TDMD. Hence, we now focus exclusively on these top three performing methods to allow
1164
+ for a thorough comparison of them on an appropriate error scale.
1165
+ (a) σ = 0.05
1166
+ (b) σ = 0.5
1167
+ Figure 7: Error distributions of the mean run relative errors for the 50 step ahead forecasts of Online
1168
+ DMD, the DMDEnKF and Hankel-DMDEnKF, attained over 1000 runs under low σ = 0.05 (left) and high
1169
+ σ = 0.5 (right) levels of measurement noise. Similar errors were found at both noise levels, with Online
1170
+ DMD performing better at low measurement noise and the DMDEnKF/Hankel-DMDEnKF performing
1171
+ better at high measurement noise.
1172
+ In Figure 7 for Online DMD, the DMDEnKF and Hankel-DMDEnKF, we plot the distributions of
1173
+ 50 step ahead forecast mean run relative errors at both low and high levels of measurement noise.
1174
+ At low levels of noise, Online DMD’s errors peak at a lower level than those of the DMDEnKF
1175
+ and Hankel-DMDEnKF, however as noise levels increase we see the peaks switch sides, and the
1176
+ DMDEnKF/Hankel-DMDEnKF become the better performing methods. At both noise levels, the
1177
+ peak in the DMDEnKF’s error distribution is centred at the same value as the Hankel-DMDEnKF’s
1178
+ peak, however it is less dense due to the additional probability mass stored in the long tail of the
1179
+ DMDEnKF’s error distribution, which is not present in that of the Hankel-DMDEnKF.
1180
+ These disproportionately large errors in the DMDEnKF distribution’s tail occur when the spin-up
1181
+ DMD process fails to produce a model similar enough to the system’s true dynamics. As briefly
1182
+ touched upon in the first synthetic example, if the spin-up DMD model is sufficiently inaccurate
1183
+ then it can stop the EnKF from effectively assimilating new data, leading to the catastrophic
1184
+ failure of the DMDEnKF. In this example, as the signal-to-noise ratio in the direction of the
1185
+ second-largest singular value was often low, an unfortunate random draw of the system dynamics
1186
+ (A) and measurement noise (vk) in the spin-up period could produce large errors in DMD’s second
1187
+ eigenvalue. Empirically, using the interquartile range method to detect outlying forecast errors,
1188
+ this DMDEnKF failure occurred 5.5% of the time for σ = 0.05. The errors would persist in the
1189
+ filtering step as new data was processed, whereas other methods were able to recover from poor
1190
+ initial model estimates more effectively. The quality of the model produced by the initial DMD is
1191
+ dependent on the quality and volume of the spin-up data, hence this problem was exacerbated and
1192
+ occurred much more regularly at higher noise levels (21.9% of the time for σ = 0.5). It could be
1193
+ mitigated somewhat by increasing the number of time steps in the spin-up stage as described at
1228
+ the end of Section 3.1; however, similarly to Windowed TDMD, as the system is assumed to be time
+ varying there likely exists a point of negative returns once the spin-up period becomes too long, due
+ to the stationarity assumption of batch DMD becoming progressively more violated.
1231
+ Unlike the DMDEnKF, the Hankel-DMDEnKF and Online DMD do not suffer from a long tail in
1232
+ their error distributions, and perform consistently well over all 1000 runs. At both noise levels,
1233
+ their error distributions have a similar variance, with the Hankel-DMDEnKF’s errors being slightly
1234
+ more tightly grouped than those of Online DMD. Hence, the average error is the main factor when
+ differentiating between the methods' performance in this example, meaning Online DMD is the
+ preferred method at low noise and the Hankel-DMDEnKF (or the DMDEnKF, provided the spin-up
+ DMD does not catastrophically fail) is more accurate at high noise. As both methods possess useful
1238
+ yet differing attributes, we generate a typical data trajectory (one for which the DMDEnKF does
1239
+ not fail) for both low and high measurement noise. We then investigate how each model’s 50 step
1240
+ ahead forecasts and dominant eigenvalue estimates change over the course of each run, as shown
1241
+ in Figure 8.
1242
+ (a) 50 step ahead forecasts for σ = 0.05.
1243
+ (b) Dominant eigenvalue’s modulus for σ = 0.05.
1244
+ (c) 50 step ahead forecasts for σ = 0.5.
1245
+ (d) Dominant eigenvalue’s modulus for σ = 0.5.
1246
+ Figure 8: Typical trajectories of the 50 step ahead forecasts for the value of the state’s first dimension (left)
1247
+ and estimates of the dominant eigenvalue’s current modulus (right) under low (σ = 0.05) and high
1248
+ (σ = 0.5) levels of measurement noise for Online DMD, the DMDEnKF and Hankel-DMDEnKF over the
1249
+ course of 1 run. Online DMD forecasts 50 steps ahead more accurately at low noise, and the
1250
+ DMDEnKF/Hankel-DMDEnKF more accurately at high noise; however, when the signal-to-noise ratio is low
1251
+ (at the start and end of the experiment) Online DMD’s eigenvalue estimates become unstable.
1252
+ First, observing the low noise forecasts in Figure 8a, it is clear Online DMD produces forecasts
1253
+ that are more robust and closer to the true state’s value than those of the DMDEnKF and Hankel-
1254
+ DMDEnKF. This was to be expected, by virtue of Online DMD’s lower average errors in the error
1255
+ distributions of Figure 7a at this noise level. As noise is increased, the forecasts in Figure 8c show
1256
+ the DMDEnKF and Hankel-DMDEnKF becoming the more accurate methods; however, Online
+ DMD's forecasts remain fairly stable, and still appear to be a viable forecasting option.
1330
+ Analysing the eigenvalue estimates in Figures 8b and 8d, we see that over the middle section of data
1331
+ where k = (250, ..., 750), Online DMD is able to track the dominant eigenvalue effectively. However,
1332
+ at the beginning and end of the dataset, when the states, and hence the signal component of each new
+ data point, are small relative to the measurement noise, Online DMD's eigenvalue estimates become
1334
+ progressively more unstable. In the low noise case this is not a problem, as Online DMD’s estimates
1335
+ are significantly more accurate than those of the DMDEnKF/Hankel-DMDEnKF, so even in the
1336
+ poorly performing sections of the data it’s estimates still better/match those of the DMDEnKF.
1337
+ For higher noise however, Online DMD provides significantly less robust estimates of the dominant
1338
+ eigenvalue at the start and end of the datasets than those generated by the DMDEnKF and Hankel-
1339
+ DMDEnKF. In the epidemiological context of an infectious disease outbreak, which this synthetic
1340
+ example attempts to mimic, scientists will often try to calculate the basic reproduction number
1341
+ (R0) [58] using noisy data from the small number of initial infections. If R0 > 1 the number of
1342
+ infections will grow exponentially if left unchecked, and if R0 < 1 the number of infections will
1343
+ decay naturally to 0.
1344
+ Within this example, using the DMDEnKF/Hankel-DMDEnKF one can
1345
+ quickly determine that initially R0 > 1 and take any required action thanks to the stability of it’s
1346
+ early eigenvalue estimates, whereas it takes significantly longer and a higher level of infection for
1347
+ Online DMD to consistently determine if R0 is above or below the growth/decay threshold.
1348
+ 4
1349
+ Seasonal Influenza-like Illness Application
1350
+ 4.1
1351
+ Problem setup
1352
+ DMD based methods have previously been applied to infectious disease data [49]. In this case,
1353
+ DMD modes can be viewed as stationary, spatial modes used to create a reduced order model in
1354
+ which only the amplitudes and frequencies are time varying [9]. Hence, modelling influenza-like
1355
+ illness (ILI) data is a prime potential application for the DMDEnKF/Hankel-DMDEnKF.
1356
+ The CDC’s ILINet data [25] we will be using records the number of ILI General Practitioner (GP)
1357
+ consultations in the US each week, alongside the number of total GP consultations which can be
1358
+ used to normalize the ILI data. We use a subset of the data from the start of 2003, the first year
1359
+ when data is available all year round, to the end of 2018 as seen in Figure 1. We then split each
1360
+ week’s data into demographics, consisting of 4 age groups (0-4, 5-24, 25-24, 65+) and 10 Health
1361
+ and Human Services (HHS) regions. Each region consists of the following locations:
1362
+ • Region 1 - Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont.
1363
+ • Region 2 - New Jersey, New York, Puerto Rico, and the U.S. Virgin Islands.
1364
+ • Region 3 - Delaware, District of Columbia, Maryland, Pennsylvania, Virginia, and West
1365
+ Virginia.
1366
+ • Region 4 - Alabama, Florida, Georgia, Kentucky, Mississippi, North Carolina, South Carolina,
1367
+ and Tennessee.
1368
+ • Region 5 - Illinois, Indiana, Michigan, Minnesota, Ohio, and Wisconsin.
1369
+ • Region 6 - Arkansas, Louisiana, New Mexico, Oklahoma, and Texas.
1372
+ • Region 7 - Iowa, Kansas, Missouri, and Nebraska.
1373
+ • Region 8 - Colorado, Montana, North Dakota, South Dakota, Utah, and Wyoming.
1374
+ • Region 9 - Arizona, California, Hawaii, and Nevada.
1375
+ • Region 10 - Alaska, Idaho, Oregon, and Washington.
1376
+ Whilst ILI consultation data is available over all 40 of these strata, total GP consultation data
1377
+ is only provided by region. To generate an age breakdown for a region’s total consultations we
1378
+ linearly interpolate using census data to approximate the US population’s age demographics for
1379
+ a given week. We then allocate the region’s total consultations to each age group based on the
1380
+ proportion of the total population they represent. This method assumes that all age groups have
1381
+ a similar likelihood of attending the GP’s, which may be flawed but we believe it to be sufficient
1382
+ for the purpose of demonstrating the DMDEnKF on real-world data.
1383
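+ A minimal sketch of this proportional allocation is given below; the weekly totals and age shares shown are hypothetical numbers used purely for illustration.
+
+ import numpy as np
+
+ def split_total_consultations(total_by_region, age_proportions):
+     # total_by_region: (n_regions,) weekly total GP consultations per region
+     # age_proportions: (n_ages,) interpolated population shares for the week, summing to 1
+     # each region's total is allocated to age groups in proportion to their population share
+     return np.outer(total_by_region, age_proportions)
+
+ # e.g. two hypothetical regions and the four age groups 0-4, 5-24, 25-64, 65+
+ totals = np.array([12000.0, 8500.0])
+ shares = np.array([0.06, 0.26, 0.53, 0.15])
+ totals_by_age = split_total_consultations(totals, shares)   # shape (2, 4)
+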
+ 4.2
1384
+ Building the spin-up DMD model
1385
+ The format of the ILI data used in the DMDEnKF is thus a 40 dimensional vector for each week,
1386
+ recording ILI consultations as a percentage of total GP consultations over every demographic. This
1387
+ data exists in R40+; however, DMD computes modes in R40. Hence, to ensure the model's estimates
1389
+ are non-negative, we first transform the data by adding a small constant (c = 1) then taking
1390
+ the natural logarithm of each element. For the Hankel-DMDEnKF, this transformed data is then
1391
+ delay-embedded with the previous 99 time steps (d = 100) to form a state in R4000. We use data
1392
+ up to the end of 2012 as training data for the spin-up DMD processes of the DMDEnKF/Hankel-
1393
+ DMDEnKF detailed in Step 1 of the DMDEnKF algorithm, and then filter the remaining years
1394
+ from 2013-2018. The transformed, centred data with the split where the spin-up process ends and
1395
+ the filtering begins marked is shown in Figure 9.
1396
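+ As an illustration of this preprocessing, a minimal sketch is given below; the ordering of the stacked weeks within each delay-embedded state and the array names are our own conventions.
+
+ import numpy as np
+
+ def log_transform(ili_percentages, c=1.0):
+     # add a small constant then take the natural logarithm of each element
+     return np.log(ili_percentages + c)
+
+ def delay_embed(X, d=100):
+     # X: (n_weeks, 40) transformed data; each state stacks a week with its previous d - 1 weeks,
+     # giving Hankel states in R^(40 * d) (R^4000 for d = 100) for the Hankel-DMDEnKF
+     n_weeks, _ = X.shape
+     states = [X[k - d + 1:k + 1][::-1].reshape(-1) for k in range(d - 1, n_weeks)]
+     return np.array(states)
+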
+ We initially choose to truncate to 8 DMD modes for the DMDEnKF to demonstrate the method.
1397
+ We discuss the effect of changing the truncation on the DMDEnKF method below, but at approximately
+ 8 DMD modes the additional variance in the data that is retained by keeping more
+ modes diminishes significantly. This is evidenced by the "elbow" seen in the cumulative variance
1400
+ plot of Figure 14a, at the point where the graph transitions from rapidly increasing in variance
1401
+ with r to a more gradual ascent. We also truncate the Hankel-DMDEnKF to 8 DMD modes, to
1402
+ allow for a more direct comparison between the two variants.
1403
+ The spectrum and dominant DMD/Hankel-DMD modes associated with each frequency identified
1404
+ by the spin-up DMD processes can be seen in Figure 10. All eigenvalues shown in Figures 10a
1405
+ and 10c had a modulus of ∼ 1, meaning in both cases each mode was expected to persist in the
1406
+ data without growing exponentially. The major difference between the two methods' spectra is that
1407
+ Hankel-DMD identifies the most dominant mode to have a period of one year, whereas DMD does
1408
+ not detect any modes with this period. Annual peaks in ILI consultations occurring at a relatively
1409
+ similar time each year indicates that the data contains a strong mode of period one year, and this
1410
+ is supported by Fourier analysis [10] which also identifies one year as the dominant period. Hence,
1411
+ DMD is missing the yearly mode present in the data which Hankel-DMD is able to detect, and
1412
+ this is likely due to Hankel-DMD's aforementioned enhanced robustness to measurement noise.
+ Figure 9: ILI consultations as a percentage of total weekly GP consultations across 4 age brackets and the
+ 10 HHS regions in the US, log transformed and centred. The peaks of varying size, timing and shape in
+ Figure 1 are visible here as vertical red areas of varying width and intensity that encompass most
+ demographics.
1419
+ There are two clear patterns in the structure of the dominant DMD and Hankel-DMD modes seen
1420
+ in Figures 10b and 10d. Firstly, their strata generally move together. This is shown by the vast
1421
+ majority of entries for each DMD mode, and entries within the same delay-embedded week (denoted
1422
+ by a vertical slice through the mode) for Hankel-DMD modes, sharing the same sign. This implies
1423
+ that the percentage of ILI consultations increases and decreases at a similar time across all ages
1424
+ and regions. Secondly, the variance is higher for the younger age groups. This is demonstrated
1425
+ by the absolute value of elements in the top two rows of each region generally being larger than
1426
+ those in the bottom two. From Figure 10b, this is visible trivially for the DMD modes. In Figure
1427
+ 10d, age groups are arranged in ascending order for each region, so this effect is evidenced in the
1428
+ Hankel-DMD modes by the presence of more intensely coloured horizontal lines of width 2, followed
1429
+ by less intensely coloured lines of width 2 repeating over each of the 10 regions. This indicates that
1430
+ there are sharper peaks and deeper troughs in the percentage of ILI consultations for the young,
1431
+ while the rates for those 25 and over remain more stable.
1432
+ 4.3
1433
+ Applying the filter
1434
+ The filtering steps of the DMDEnKF/Hankel-DMDEnKF are then applied over the remaining data
1435
+ using the spatial and temporal modes from the spin-up DMD and spin-up Hankel-DMD respectively.
1436
+ Producing a 4-week ahead ILI forecast for the ILINet data that consistently outperforms a simple
1437
+ historical baseline prediction is difficult even for state-of-the-art models [50]. As such, to test the
1438
+ DMDEnKF/Hankel-DMDEnKF we use a forecast horizon of 4 weeks when making predictions.
1439
+ In Figure 11, the DMDEnKF and Hankel-DMDEnKF’s forecasting of total ILI consultations as
1440
+ 22
1441
+
1442
+ 0-4
1443
+ 2
1444
+ consultations
1445
+ 2
1446
+ 3
1447
+ 65 +
1448
+ centred % ILI
1449
+ Demographic
1450
+ 25.64
1451
+ 0-4
1452
+ 65+
1453
+ Re
1454
+ 25.64
1455
+ 0
1456
+ 65 ±
1457
+ Log transformed,
1458
+ Region
1459
+ 2
1460
+ 0-4
1461
+ 6
1462
+ 25.64
1463
+ -2
1464
+ 65 +
1465
+ 2003
1466
+ 2005
1467
+ 2007
1468
+ 2009
1469
+ 2011
1470
+ Split
1471
+ 2015
1472
+ 2017
1473
+ 2019
1474
+ 201
1475
+ Date(a) DMD Eigenvalue Spectrum.
1476
+ (b) Dominant DMD Modes.
1477
+ (c) Hankel-DMD Eigenvalue Spectrum.
1478
+ (d) Dominant Hankel-DMD Modes.
1479
+ Figure 10: Eigenvalue Spectrum (left) and DMD modes in descending order of dominance (right) generated
1480
+ by the DMD (top)/Hankel-DMD (bottom) applied to the data in Figure 9 up to the spin-up date. In both
1481
+ cases, all eigenvalues lie approximately on the unit circle, and dominant modes feature the same sign for
1482
+ most demographics with a magnitude that varies with age. The DMD modes are more interpretable, but
1483
+ Hankel-DMD identifies the mode with period 1 year, which DMD does not.
1484
+ [Figure 10 panels: the DMD modes have periods of approximately 1.9, 0.6, 4.4 and 22.5 years, while the Hankel-DMD modes have periods of approximately 1.0, 13.5, 1.6 and 0.6 years.]
+ a percentage of total weekly GP consultations can be seen in full. Up until 2012, we generate
1601
+ 4-week ahead reconstructions using the spin-up DMD models only, estimating each state by taking
1602
+ the state measurement from 4 weeks prior, then iteratively applying the model to it 4 times. The
1603
+ 4-week ahead DMD reconstruction in Figure 11a captures more fluctuations in the data than that
1604
+ of Hankel-DMD; however, these high frequency fluctuations can also indicate the effect of noise in
+ the measurements. The Hankel-DMD reconstruction shown in Figure 11b is much less sensitive
+ to noise, although it fails to identify the sharper peaks in the data, which suggests it may be over-
1607
+ smoothing. From 2012 onwards the filtering begins, and forecasts are generated as described in the
1608
+ DMDEnKF algorithm using equation (2.33). The DMDEnKF forecasts become significantly more
1609
+ stable, while the Hankel-DMDEnKF forecasts improve in capturing the true shape of the data;
+ however, both suffer from some degree of lag in their predictions.
1611
+ During this second section of the data, the models are producing actual forecasts, as the DM-
1612
+ DEnKFs only have access to data up to 4 weeks prior to the prediction target’s date. Hence, it
1613
+ is in this section of the data we compare the models’ performance against that of the historical
1614
+ baseline. The historical baseline prediction was created in a similar manner to that used in [8],
1615
+ taking ILI consultation rates from the same week of every previous year in the data (excluding the
1616
+ pandemic year of 2009) and then producing a probability distribution for the current week’s con-
1617
+ sultations via Gaussian kernel density estimation (KDE) [55]. KDE bandwidths were determined
1618
+ using Silverman’s rule of thumb [56], and when point estimates were required they were taken as
1619
+ the median of the distribution. The results of the comparisons can be seen in Figure 12 and Table
1620
+ 2. Here it is worth noting that although we use data dated 4 weeks prior to the prediction date,
1621
+ in reality this data is often subject to revisions so the ILINet data as it currently stands would not
1622
+ necessarily be available in real time [50].
1623
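+ A minimal sketch of this historical baseline is given below, using SciPy's Gaussian KDE with Silverman's bandwidth rule; the grid from which the approximate median is read off is an illustrative choice of ours.
+
+ import numpy as np
+ from scipy.stats import gaussian_kde
+
+ def historical_baseline(past_rates_same_week):
+     # past_rates_same_week: ILI rates for this calendar week in every previous year,
+     # with the 2009 pandemic year already excluded
+     kde = gaussian_kde(past_rates_same_week, bw_method="silverman")
+     grid = np.linspace(0.0, 15.0, 1501)                # grid of candidate ILI consultation rates
+     cdf = np.cumsum(kde(grid))
+     cdf /= cdf[-1]
+     point_estimate = grid[np.searchsorted(cdf, 0.5)]   # approximate median of the KDE
+     return kde, point_estimate
+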
+ 4.4
1624
+ Evaluating the DMDEnKF’s performance
1625
+ Figure 12 demonstrates graphically the 4-week ahead DMDEnKF, Hankel-DMDEnKF and historical
+ baseline forecasts. The DMDEnKFs more successfully capture the shape and height of each flu sea-
+ son's peak, but tend to predict the peaks late, whilst the historical baseline consistently under-
+ predicts the peak rates but is fairly accurate on the timings. The Hankel-DMDEnKF's forecasts are
+ smoother than those of the DMDEnKF, but do not capture smaller details within the shape of
1630
+ the peaks. We also plot the 95% confidence intervals for the DMDEnKF and Hankel-DMDEnKF’s
1631
+ forecasts in Figure 12, generated using the ensemble that is maintained and propagated in the
1632
+ EnKF framework. At all times, the real data lies within the DMDEnKF’s confidence interval,
1633
+ which is not true for the Hankel-DMDEnKF. The DMDEnKF’s confidence interval is significantly
1634
+ wider than that of the Hankel-DMDEnKF, and this is due to Hankel-DMD’s robustness to noise,
1635
+ meaning that when the ensemble is propagated through the model, a large amount of probability
1636
+ mass is concentrated in a small area of the state space. This then leads to the Hankel-DMDEnKF
1637
+ underestimating the uncertainty in the system, and hence some real data values falling outside the
1638
+ boundaries of it’s 95% confidence interval.
1639
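+ The confidence intervals plotted in Figure 12 are derived from the forecast ensemble; the percentile-based construction sketched below is one natural way to do this, given as an illustrative assumption rather than the exact construction used here.
+
+ import numpy as np
+
+ def forecast_confidence_interval(ensemble_forecasts, level=0.95):
+     # ensemble_forecasts: (n_members,) forecasts of the ILI rate for one target week,
+     # obtained by propagating each EnKF ensemble member forward through the model
+     lower = np.percentile(ensemble_forecasts, 100 * (1 - level) / 2)
+     upper = np.percentile(ensemble_forecasts, 100 * (1 + level) / 2)
+     return lower, upper
+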
+ To numerically compare performance, we used metrics designed for the Forecast the Influenza
1640
+ Season Collaborative Challenge (FluSight), in which multiple teams would submit predictions about
1641
+ the weekly ILINet consultation rates for the upcoming flu season at a national and HHS regional
1642
+ level [7], [8]. The FluSight challenge evaluated models' abilities to generate 1-4-week ahead forecasts
1643
+ (a) DMDEnKF 4-week ahead forecast.
1646
+ (b) Hankel-DMDEnKF 4-week ahead forecast.
1647
+ Figure 11: ILI consultations as a percentage of total weekly GP consultations forecast 4 weeks ahead using
1648
+ the DMDEnKF (top) and Hankel-DMDEnKF (bottom). The DMD reconstruction captures the shape of the
1649
+ data well but is unstable, whereas the Hankel-DMD reconstruction is less sensitive to noise but
1650
+ over-smooths. The DMDEnKF and Hankel-DMDEnKF forecasts help reduce these issues present in their
1651
+ respective reconstructions, but both suffer from some degree of lag in their predictions.
1652
+ Figure 12: ILI consultations as a percentage of total weekly GP consultations, forecast 4 weeks ahead using
1690
+ the DMDEnKF (top) and Hankel-DMDEnKF (bottom). A 95% confidence interval for each forecast, and
1691
+ historical baseline predictions are also shown. The Hankel-DMDEnKF forecasts are smoother than those of
1692
+ the DMDEnKF, but both forecasts contain some lag. The real data always lies within the DMDEnKF’s
1693
+ confidence interval but not the Hankel-DMDEnKF’s, however this is likely due to the DMDEnKF’s
1694
+ confidence interval being significantly wider than that of the Hankel-DMDEnKF.
1695
+ Forecast                            Log Score    Mean Squared Error
+ Historical Baseline                 0.28         1.24
+ 1-week ahead DMDEnKF                0.49         0.33
+ 2-week ahead DMDEnKF                0.38         0.61
+ 3-week ahead DMDEnKF                0.32         0.87
+ 4-week ahead DMDEnKF                0.27         1.16
+ 1-week ahead Hankel-DMDEnKF         0.41         0.49
+ 2-week ahead Hankel-DMDEnKF         0.33         0.70
+ 3-week ahead Hankel-DMDEnKF         0.29         0.97
+ 4-week ahead Hankel-DMDEnKF         0.23         1.26
1776
+ Table 2: The log scores and mean squared errors for the DMDEnKF and Hankel-DMDEnKF with differing
1777
+ forecast horizons, and the historical baseline prediction. The DMDEnKF achieves a higher log score and a
+ lower mean squared error than the historical baseline for forecast horizons up to 3 weeks ahead, and attains a
+ similar level of forecast skill 4 weeks ahead. The Hankel-DMDEnKF consistently underperforms against the DMDEnKF
1780
+ in both metrics over these short forecast horizons. Scores are calculated over the 6 flu seasons from
1781
+ 2012/13 to 2017/18.
1782
+ known as short-term targets over the course of a flu season, as well as other longer term targets,
1783
+ known as seasonal targets, before the season had begun. The DMDEnKF is primarily intended to
1784
+ be a tool for tracking and short-term forecasting, hence we focus on forecasting these short-term
1785
+ targets only. For this purpose we used two different metrics: the log probability measure (log score),
+ slightly adjusted from the FluSight challenge as used in [50], and the mean squared error, due to its
+ popular use in regression problems. The log score represents the geometric average probability of
+ each model's prediction being accurate, with accuracy deemed as a forecast within ±0.5 of the
1789
+ true ILI consultation rate. The higher the log score, the better the forecast. Metrics are calculated
1790
+ from week 40 to week 20 of the following year to prioritize evaluation of forecasts during the flu
1791
+ season, and we use the 6 full seasons from 2012/13 to 2017/18.
1792
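+ For concreteness, a minimal sketch of this log score is given below; representing each week's forecast distribution by samples and flooring very small probabilities are our own illustrative choices rather than details taken from the FluSight definition.
+
+ import numpy as np
+
+ def log_score(forecast_samples, truths, window=0.5, floor=1e-10):
+     # forecast_samples: (n_weeks, n_samples) draws from each week's forecast distribution
+     # truths: (n_weeks,) observed ILI consultation rates
+     probs = np.mean(np.abs(forecast_samples - truths[:, None]) <= window, axis=1)
+     probs = np.maximum(probs, floor)               # avoid taking log(0) for badly missed weeks
+     return float(np.exp(np.mean(np.log(probs))))   # geometric average probability of accuracy
+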
+ The results for the historical baseline prediction and DMDEnKF/Hankel-DMDEnKF’s forecasts at
1793
+ a national level can be seen in Table 2. As one would expect, the accuracy of both DMDEnKFs
1794
+ degrade as they make predictions further into the future. The DMDEnKF achieves a higher log score
1795
+ and mean squared error than the historical baseline for forecast horizons up to 4 weeks ahead, where
1796
+ it attains a similar level of forecast skill. For forecasts of 5 or more weeks ahead, the DMDEnKF is
1797
+ unable to outperform the historical baseline in either metric. The Hankel-DMDEnKF consistently
1798
+ underperforms against the DMDEnKF in both metrics over these short forecast horizons. The top
1799
+ 3 statistical models and top 3 mechanistic models in the FluSight challenge achieved log scores of
1800
+ 0.32 and 0.3 respectively for their 4-week ahead forecasts, hence the DMDEnKF has lower (but
1801
+ comparable) forecasting skill than current state of the art ILI models. As the forecast horizon is
1802
+ extended up to 12 weeks ahead, the DMDEnKF’s forecast scores continue to decrease monotonically,
1803
+ whereas the Hankel-DMDEnKF’s log scores for 9-12 weeks ahead are no worse than those for 5-8
1804
+ weeks ahead. As such, the DMDEnKF is preferred for short-term forecasting, while the Hankel-
1805
+ DMDEnKF is considered superior when forecasting over longer timescales.
1806
+ Figure 13 shows the log scores for the 4-week ahead DMDEnKF forecast, and how these compare to
1807
+ (a) DMDEnKF 4-week ahead forecast.
1810
+ (b) DMDEnKF 4-week ahead forecast - historical
1811
+ baseline.
1812
+ Figure 13: Log scores over all ages and regions for the DMDEnKF’s 4-week ahead forecast (left), followed
1813
+ by those same scores with the log scores of the historical baseline prediction subtracted (right). The
1814
+ Hankel-DMDEnKF scored similarly to the DMDEnKF across all ages and regions, so we do not include its
1815
+ breakdown to avoid redundancy. In the top figure, the generally increasing intensity of red as one moves
1816
+ down the age groups shows the DMDEnKF performing more accurately for older age groups. The bottom
1817
+ figure’s varying areas of red/blue shows the DMDEnKF/historical baseline vary in superiority of forecasting
1818
+ skill depending on the age and region being forecast, with the historical baseline scoring more highly for
1819
+ most regions.
1820
+ the scores attained by the historical baseline prediction at an age and regional level. The Hankel-
1821
+ DMDEnKF scored similarly to the DMDEnKF across all ages and regions, so its breakdown is
1822
+ rather similar to that of the DMDEnKF, and we do not include it to avoid redundancy. In the
1823
+ DMDEnKF’s log scores, we see a major pattern in the older age groups scoring higher and hence
1824
+ being better predicted than the younger demographics. This pattern does not persist when the
1825
+ historical baseline’s scores are removed, indicating it is a more general trait of the data as opposed
1826
+ to a specific quality of the DMDEnKF’s modelling technique. There is also a significant difference in
1827
+ the predictability from region to region. For example, region 1 was the most predictable region for
1828
+ both the DMDEnKF and historical baseline, which is consistent with the findings in [50]. However,
1829
+ the DMDEnKF improved on the historical baseline’s forecast for only two of the four age groups
1830
+ in this region. In [50] it was found that the most overall improvement gained by forecasting for a
1831
+ region using a model as opposed to the historical baseline prediction also occurred in region 1, so
1832
+ one would expect to see improvements by the DMDEnKF over the historical baseline in all four age
1833
+ groups. As log score is heavily influenced by the amount of uncertainty in a forecast, it is possible
1834
+ that the covariance matrices used in the DMDEnKF were too large for this region. Hence, setting
1835
+ the magnitude of the DMDEnKF’s variance on a region by region basis could lead to better results
1836
+ and more accurate forecasts. Region 6 was the worst forecast region by the DMDEnKF, and the
1837
+ historical baseline predictions were slightly more accurate. Again, this is consistent with [50] where
1838
+ region 6 was the lowest scoring region for the models. In that work however, region 6 experienced
1839
+ the second most improvement by using a model over the historical baseline prediction. Hence, for
1840
+ this region a larger variance within the DMDEnKF may have been more appropriate to account
1841
+ for its extra unpredictability, further supporting the idea of varying the variance by region in the
1842
+ future.
1843
+ 4.5
1913
+ Varying the truncation rank
1914
+ Having analysed the DMDEnKF and Hankel-DMDEnKF’s ILI forecasting with 8 DMD modes,
1915
+ we now investigate the effect different truncation ranks (r) have on their performance in Figure
1916
+ 14. From Figure 14a, the subjective process of identifying an “elbow” in the data could lead an
1917
+ observer to determine a suitable rank for truncation as low as 4 or as high as 12. Application of
1918
+ the algorithm of Gavish and Donoho for identifying the optimal truncation threshold [26] also finds
1919
+ the truncation rank to be 12, hence we will focus on investigating values of r in the interval from
1920
+ 4 to 12.
1921
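+ The cumulative variance curves of Figures 14a and 14c can be computed directly from the singular values of the training data; a minimal sketch is given below, in which centring the data first is our own illustrative choice.
+
+ import numpy as np
+
+ def cumulative_variance(X):
+     # X: data matrix with one snapshot per column; returns the fraction of total variance
+     # captured by the leading r singular values, for every r
+     s = np.linalg.svd(X - X.mean(axis=1, keepdims=True), compute_uv=False)
+     return np.cumsum(s ** 2) / np.sum(s ** 2)
+
+ # e.g. the smallest r capturing at least 90% of the variance:
+ # r = int(np.searchsorted(cumulative_variance(X), 0.90)) + 1
+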
+ (a) % of total variance in the data.
1922
+ (b) DMDEnKF log score/mean squared errors for
1923
+ r = 4, ..., 12.
1924
+ (c) % of total variance in the delay-embedded data.
1925
+ (d) Hankel-DMDEnKF log score/mean squared errors
1926
+ for r = 4, ..., 12.
1927
+ Figure 14: On the left, the % of the total variance in the data (top) and delay-embedded data (bottom),
1928
+ dependent on the number of singular values that are retained (r). An “elbow” in the data occurs around
1929
+ r = 8 where we choose to truncate, however determining the exact position of the “elbow” is subjective and
1930
+ could be considered anywhere from r = 4 to r = 12. On the right, the log score and mean squared errors for
1931
+ 4-step ahead forecasts generated using the DMDEnKF (top) and Hankel-DMDEnKF (bottom) with differing
1932
+ values of r. In both cases, log score is maximised and mean squared error minimised for r = 8.
1933
+ Figures 14b and 14d show how the metrics we use to measure the DMDEnKF and Hankel-
1934
+ DMDEnKF’s forecasting skill vary with r. An ideal forecast will have a high log score reflecting
1935
+ a relatively tight and accurate probability distribution, with a low mean squared error indicating
1936
+ a point estimate close to the true percentage of ILI consultations. For both methods, log score
1937
+ is maximised and mean squared error minimised by r = 8, indicating this is the optimal rank to
1938
+ truncate at for our models. For r = 4, we have the simplest model tested; hence, it has a low degree
+ of uncertainty, resulting in a relatively high log score, but it is too simple to properly model the
+ system and so receives a high mean squared error. By increasing r, we allow the DMDEnKFs more
1941
+ freedom to capture complexity within the system, resulting in a more accurate representation of
1942
+ the true dynamics and hence a generally lower mean squared error. When the number of eigen-
1943
+ values is increased too far however, it begins modelling elements of the noise in the system which
2044
+ Log Scorevalues is increased too far however, it begins modelling elements of the noise in the system which
2045
+ negatively impacts future predictions, as seen in the increase in mean squared errors for r > 8.
2046
+ The additional freedom afforded to the DMDEnKF by increasing r also means the model contains
2047
+ more parameters, each of which have an associated degree of uncertainty. This causes the overall
2048
+ forecast’s probability distribution to become more spread out, and when no longer offset by the
2049
+ increased model accuracy up to r = 8, reduces the forecasts log score.
2050
+ 5
2051
+ Conclusion
2052
+ To conclude, we have defined two new algorithms, the DMDEnKF and Hankel-DMDEnKF, that
2053
+ combine dynamic mode decomposition and Hankel dynamic mode decomposition respectively with
2054
+ ensemble Kalman filtering, to update state and temporal mode estimates of a dynamical system
2055
+ as new data becomes available. When applied to simple, synthetic systems with a time varying
2056
+ parameter and low measurement noise, the DMDEnKFs performed similarly to other iterative DMD
2057
+ variants tested in tracking the system’s time varying parameter and forecasting future states. As
2058
+ measurement noise was increased, the DMDEnKFs outperformed the other methods tested in both
2059
+ metrics, and the Hankel-DMDEnKF produced more stable forecasts than those of the DMDEnKF.
2060
+ Both DMDEnKFs achieved similar performance levels to their equivalent DMD Particle Filters
2061
+ (an alteration to the DMDEnKF algorithms where the ensemble Kalman filters were switched for
2062
+ Particle Filters), while requiring significantly fewer ensemble members. When forecasting influenza-
2063
+ like illness across age groups and HHS regions in the US using data from the CDC, the DMDEnKF
2064
+ produced more accurate forecasts than a historical baseline prediction up to 3 weeks ahead, and
2065
+ forecasts approximately as accurate 4 weeks ahead. The Hankel-DMDEnKF produced less accurate
2066
+ forecasts for these short-term targets than the DMDEnKF; however, in general its forecasts were
2067
+ more stable. Also, the Hankel-DMDEnKF was able to identify the presence of a mode with period 1
2068
+ year, which is strongly visible in the data, yet not identified by the DMDEnKF. Both DMDEnKFs
2069
+ exhibited lower forecasting skill than current state-of-the-art influenza-like illness models.
2070
+ A natural extension of the DMDEnKF would be to apply extended/kernel DMD in the spin-up
2071
+ DMD phase, allowing the algorithm to be used more effectively on dynamical systems that act
2072
+ nonlinearly in their measured states. Instead of taking the observed values alone as the system’s
2073
+ state xk, these variants use for the state a collection of functions on the observables g(xk), which
2074
+ often increases the state’s dimension n. The EnKF is well suited to this pairing, as it scales more
2075
+ computationally efficiently in the state dimension than other Kalman filtering methods [23]. The
2076
+ best choice of the collection of functions g(xk) as an embedding for nonlinear systems so that DMD
2077
+ may be effectively utilized is an interesting area of future work. Many methods have been developed
+ that propose ways of generating g(xk), for example using deep learning [41] or reservoir computing
+ [28], and these remain promising avenues to explore.
2080
+ Code availability
2081
+ Codes used to produce the results in this paper are available at https://github.com/falconical/DMDEnKF.
2082
+ Data availability statement
2085
+ All data used to produce the results in this paper will be made available upon reasonable request.
2086
+ Acknowledgements
2087
+ This work was supported by the UKRI, whose Doctoral Training Partnership Studentship helped
+ fund Stephen Falconer's PhD. He would also like to thank Nadia Smith and Spencer Thomas
+ from the National Physical Laboratory for their valuable discussions.
2090
+ References
2091
+ [1] H. Abdi, The eigen-decomposition: Eigenvalues and eigenvectors, Encyclopedia of measure-
2092
+ ment and statistics, (2007).
2093
+ [2] H. Arbabi and I. Mezi´c, Ergodic theory, dynamic mode decomposition, and computation of
2094
+ spectral properties of the koopman operator, SIAM Journal on Applied Dynamical Systems, 16
2095
+ (2017), pp. 2096–2126.
2096
+ [3] F. Auger, M. Hilairet, J. Guerrero, E. Monmasson, T. Orlowska-Kowalska,
2097
+ and S. Katsura, Industrial applications of the kalman filter: A review, IEEE Transactions
2098
+ on Industrial Electronics, 60 (2013), p. 5458.
2099
+ [4] R. Baker, J.-M. Pe˜na, J. Jayamohan, and A. J´erusalem, Mechanistic models versus
2100
+ machine learning, a fight worth fighting for the biological community?, Biology Letters, 14
2101
+ (2018), p. 20170660.
2102
+ [5] D. Balcan, V. Colizza, B. Gonçalves, H. Hu, J. J. Ramasco, and A. Vespignani,
2103
+ Multiscale mobility networks and the spatial spreading of infectious diseases, Proceedings of
2104
+ the National Academy of Sciences, 106 (2009), pp. 21484–21489.
2105
+ [6] J. C. A. Barata and M. S. Hussein, The moore–penrose pseudoinverse: A tutorial review
2106
+ of the theory, Brazilian Journal of Physics, 42 (2011), p. 146–165.
2107
+ [7] M. Biggerstaff, D. Alper, M. Dredze, S. Fox, I. C.-H. Fung, K. S. Hickmann,
2108
+ B. Lewis, R. Rosenfeld, J. Shaman, M.-H. Tsou, et al., Results from the centers
2109
+ for disease control and prevention’s predict the 2013–2014 influenza season challenge, BMC
2110
+ infectious diseases, 16 (2016), pp. 1–10.
2111
+ [8] M. Biggerstaff, M. Johansson, D. Alper, L. C. Brooks, P. Chakraborty, D. C.
2112
+ Farrow, S. Hyun, S. Kandula, C. McGowan, N. Ramakrishnan, R. Rosenfeld,
2113
+ J. Shaman, R. Tibshirani, R. J. Tibshirani, A. Vespignani, W. Yang, Q. Zhang,
2114
+ and C. Reed, Results from the second year of a collaborative effort to forecast influenza
2115
+ seasons in the united states, Epidemics, 24 (2018), pp. 26–33.
2116
+ [9] D. A. Bistrian, G. Dimitriu, and I. M. Navon, Processing epidemiological data using
2117
+ dynamic mode decomposition method, AIP Conference Proceedings, 2164 (2019), p. 080002.
2118
+ [10] R. N. Bracewell and R. N. Bracewell, The Fourier transform and its applications,
2121
+ vol. 31999, McGraw-Hill New York, 1986.
2122
+ [11] L. Brooks, D. Farrow, S. Hyun, R. Tibshirani, and R. Rosenfeld, Flexible modeling
2123
+ of epidemics with an empirical bayes framework, PLOS Computational Biology, 11 (2014).
2124
+ [12] B. W. Brunton, L. A. Johnson, J. G. Ojemann, and J. N. Kutz, Extracting spa-
2125
+ tial–temporal coherent patterns in large-scale neural recordings using dynamic mode decompo-
2126
+ sition, Journal of Neuroscience Methods, 258 (2016), p. 1–15.
2127
+ [13] K. Chen, J. Tu, and C. Rowley, Variants of dynamic mode decomposition: Boundary
2128
+ condition, koopman, and fourier analyses, Journal of Nonlinear Science, 22 (2012).
2129
+ [14] J.-P. Chretien, D. George, J. Shaman, R. Chitale, and F. McKenzie, Influenza
2130
+ forecasting in human populations: A scoping review, PloS one, 9 (2014), p. e94130.
2131
+ [15] S. T. M. Dawson, M. S. Hemati, M. O. Williams, and C. W. Rowley, Characterizing
2132
+ and correcting for the effect of sensor noise in the dynamic mode decomposition, Experiments
2133
+ in Fluids, 57 (2016).
2134
+ [16] A. de Cheveign´e and J. Z. Simon, Denoising based on time-shift pca, Journal of Neuro-
2135
+ science Methods, 165 (2007), pp. 297–305.
2136
+ [17] P. Del Moral, Nonlinear filtering:
2137
+ Interacting particle resolution, Comptes Rendus de
2138
+ l’Académie des Sciences - Series I - Mathematics, 325 (1997), pp. 653–658.
2139
+ [18] M. D’Elia, L. Mirabella, T. Passerini, M. Perego, M. Piccinelli, C. Vergara, and
2140
+ A. Veneziani, Applications of variational data assimilation in computational hemodynamics,
2141
+ Modeling, Simulation and Applications, 5 (2012).
2142
+ [19] N. Demo, M. Tezzele, and G. Rozza, PyDMD: Python Dynamic Mode Decomposition,
2143
+ The Journal of Open Source Software, 3 (2018), p. 530.
2144
+ [20] R. Douc and O. Capp´e, Comparison of resampling schemes for particle filtering, in ISPA
2145
+ 2005. Proceedings of the 4th International Symposium on Image and Signal Processing and
2146
+ Analysis, 2005., IEEE, 2005, pp. 64–69.
2147
+ [21] A. Doucet and A. Johansen, A tutorial on particle filtering and smoothing: Fifteen years
2148
+ later, Handbook of Nonlinear Filtering, 12 (2009).
2149
+ [22] C. Eckart and G. Young, The approximation of one matrix by another of lower rank,
2150
+ Psychometrika, 1 (1936), pp. 211–218.
2151
+ [23] G. Evensen, The ensemble kalman filter: Theoretical formulation and practical implementa-
2152
+ tion, Ocean dynamics, 53 (2003), pp. 343–367.
2153
+ [24] S. I. Flu, Estimated influenza illnesses, medical visits, hospitalizations, and deaths averted by
2154
+ vaccination in the united states, Prevent, (2007), pp. 2006–2007.
2155
+ [25] Centers for Disease Control and Prevention, CDC U.S. influenza surveillance system: Pur-
2156
+ pose and methods. https://www.cdc.gov/flu/weekly/overview.htm, 2021. Accessed: 2021-
2157
+ 04-27.
2158
+ [26] M. Gavish and D. L. Donoho, The optimal hard threshold for singular values is 4/sqrt(3),
2161
+ 2014.
2162
+ [27] N. Gordon, D. Salmond, and A. F. M. Smith, Novel approach to nonlinear/non-gaussian
2163
+ bayesian state estimation, IEE Proceedings F (Radar and Signal Processing), 140 (1993),
2164
+ pp. 107–113(6).
2165
+ [28] M. Gulina and A. Mauroy, Two methods to approximate the koopman operator with a reser-
2166
+ voir computer, Chaos: An Interdisciplinary Journal of Nonlinear Science, 31 (2021), p. 023116.
2167
+ [29] H. Zhang and C. W. Rowley, Online DMD GitHub repository. https://github.com/haozhg/odmd,
2168
+ 2020. Accessed: 2021-01-19.
2169
+ [30] M. Hemati, E. Deem, M. Williams, C. W. Rowley, and L. N. Cattafesta, Improving
2170
+ separation control with noise-robust variants of dynamic mode decomposition, in 54th AIAA
2171
+ Aerospace Sciences Meeting, 2016, p. 1103.
2172
+ [31] M. S. Hemati, C. W. Rowley, E. A. Deem, and L. N. Cattafesta, De-biasing the dy-
2173
+ namic mode decomposition for applied koopman spectral analysis of noisy datasets, Theoretical
2174
+ and Computational Fluid Dynamics, 31 (2017), p. 349–368.
2175
+ [32] M. S. Hemati, M. O. Williams, and C. W. Rowley, Dynamic mode decomposition for
2176
+ large and streaming datasets, Physics of Fluids, 26 (2014), p. 111701.
2177
+ [33] R. Isermann and M. M¨unchhof, State and Parameter Estimation by Kalman Filtering,
2178
+ Springer Berlin Heidelberg, Berlin, Heidelberg, 2011, pp. 539–551.
2179
+ [34] T. Jonathan H., R. Clarence W., L. Dirk M., B. Steven L., and K. J. Nathan, On
2180
+ dynamic mode decomposition: Theory and applications, Journal of Computational Dynamics,
2181
+ 1 (2014), p. 391.
2182
+ [35] M. R. Jovanovi´c, P. J. Schmid, and J. W. Nichols, Sparsity-promoting dynamic mode
2183
+ decomposition, Physics of Fluids, 26 (2014), p. 024103.
2184
+ [36] R. E. Kalman, A New Approach to Linear Filtering and Prediction Problems, Journal of
2185
+ Basic Engineering, 82 (1960), pp. 35–45.
2186
+ [37] S. Kandula, T. Yamana, S. Pei, W. Yang, H. Morita, and J. Shaman, Evaluation of
2187
+ mechanistic and statistical methods in forecasting influenza-like illness, Journal of The Royal
2188
+ Society Interface, 15 (2018), p. 20180174.
2189
+ [38] K. J. H. Law, A. M. Stuart, and K. C. Zygalakis, Data assimilation: A mathematical
2190
+ introduction, 2015.
2191
+ [39] J. Lessler and D. Cummings, Mechanistic models of infectious disease and their impact on
2192
+ public health, American Journal of Epidemiology, 183 (2016), p. kww021.
2193
+ [40] Y. Luo, K. Ogle, C. Tucker, S. Fei, C. Gao, S. LaDeau, J. S. Clark, and D. S.
2194
+ Schimel, Ecological forecasting and data assimilation in a data-rich era, Ecological Applica-
2195
+ tions, 21 (2011), pp. 1429–1442.
2196
+ 33
2197
+
2198
+ [41] B. Lusch, J. N. Kutz, and S. L. Brunton, Deep learning for universal linear embeddings
2199
+ of nonlinear dynamics, Nature Communications, 9 (2018).
2200
+ [42] J. Mandel, A brief tutorial on the ensemble kalman filter, 2009.
2201
+ [43] J. Mann and J. N. Kutz, Dynamic mode decomposition for financial trading strategies, 2015.
2202
+ [44] I. G. K. ”Matthew O. Williams”, ”Clarence W. Rowley”, A kernel-based method for
2203
+ data-driven koopman spectral analysis, Journal of Computational Dynamics, 2 (2015), p. 247.
2204
+ [45] T. Nonomura, H. Shibata, and R. Takaki, Dynamic mode decomposition using a kalman
2205
+ filter for parameter estimation, AIP Advances, 8 (2018), p. 105106.
2206
+ [46] T. Nonomura, H. Shibata, and R. Takaki, Extended-kalman-filter-based dynamic mode
2207
+ decomposition for simultaneous system identification and denoising, PloS one, 14 (2019),
2208
+ p. e0209836.
2209
+ [47] E. O. Nsoesie, J. S. Brownstein, N. Ramakrishnan, and M. V. Marathe, A system-
2210
+ atic review of studies on forecasting the dynamics of influenza outbreaks, Influenza and Other
2211
+ Respiratory Viruses, 8 (2014), pp. 309–316.
2212
+ [48] D. Osthus, K. S. Hickmann, P. C. Caragea, D. Higdon, and S. Y. D. Valle, Fore-
2213
+ casting seasonal influenza with a state-space SIR model, The Annals of Applied Statistics, 11
2214
+ (2017), pp. 202 – 224.
2215
+ [49] J. Proctor and P. Welkhoff, Discovering dynamic patterns from infectious disease data
2216
+ using dynamic mode decomposition, International health, 7 (2015), pp. 139–45.
2217
+ [50] N. G. Reich, L. C. Brooks, S. J. Fox, S. Kandula, C. J. McGowan, E. Moore,
2218
+ D. Osthus, E. L. Ray, A. Tushar, T. K. Yamana, M. Biggerstaff, M. A. Johansson,
2219
+ R. Rosenfeld, and J. Shaman, A collaborative multiyear, multimodel assessment of seasonal
2220
+ influenza forecasting in the united states, Proceedings of the National Academy of Sciences,
2221
+ 116 (2019), pp. 3146–3154.
2222
+ [51] R. H. Reichle, Data assimilation methods in the earth sciences, Advances in water resources,
2223
+ 31 (2008), pp. 1411–1418.
2224
+ [52] R. H. Reichle, J. P. Walker, R. D. Koster, and P. R. Houser, Extended versus
2225
+ ensemble kalman filtering for land data assimilation, Journal of Hydrometeorology, 3 (01 Dec.
2226
+ 2002), pp. 728 – 740.
2227
+ [53] M. Ribeiro and I. Ribeiro, Kalman and extended kalman filters: Concept, derivation and
2228
+ properties, 04 2004.
2229
+ [54] P. Schmid and J. Sesterhenn, Dynamic Mode Decomposition of numerical and experi-
2230
+ mental data, in APS Division of Fluid Dynamics Meeting Abstracts, vol. 61 of APS Meeting
2231
+ Abstracts, Nov. 2008, p. MR.007.
2232
+ [55] S. J. Sheather, Density estimation, Statistical Science, 19 (2004), pp. 588–597.
2233
+ [56] B. W. Silverman, Density estimation for statistics and data analysis, Routledge, 2018.
2234
+ 34
2235
+
2236
+ [57] C. Snyder, T. Bengtsson, P. Bickel, and J. L. Anderson, Obstacles to high-
2237
+ dimensional particle filtering, Monthly Weather Review, 136 (2008), pp. 4629–4640.
2238
+ [58] P. Van den Driessche, Reproduction numbers of infectious disease models, Infectious Disease
2239
+ Modelling, 2 (2017), pp. 288–303.
2240
+ [59] E. A. Wan and R. Van Der Merwe, The unscented kalman filter for nonlinear estimation,
2241
+ in Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications,
2242
+ and Control Symposium (Cat. No.00EX373), 2000, pp. 153–158.
2243
+ [60] Z. Wang, P. Chakraborty, S. R. Mekaru, J. S. Brownstein, J. Ye, and N. Ra-
2244
+ makrishnan, Dynamic poisson autoregression for influenza-like-illness case count prediction,
2245
+ in Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery
2246
+ and Data Mining, KDD ’15, New York, NY, USA, 2015, Association for Computing Machinery,
2247
+ p. 1285–1294.
2248
+ [61] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley, A data–driven approximation of
2249
+ the koopman operator: Extending dynamic mode decomposition, Journal of Nonlinear Science,
2250
+ 25 (2015), p. 1307–1346.
2251
+ [62] H. Zhang, C. W. Rowley, E. A. Deem, and L. N. Cattafesta, Online dynamic mode
2252
+ decomposition for time-varying systems, 2017.
2253
+ 35
2254
+
59E5T4oBgHgl3EQfPQ7C/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
6dE3T4oBgHgl3EQfpwoB/content/tmp_files/2301.04644v1.pdf.txt ADDED
@@ -0,0 +1,2684 @@
1
+ DOES PROGRESS ON IMAGENET TRANSFER
2
+ TO REAL-WORLD DATASETS?
3
+ Alex Fang
4
+ University of Washington
5
6
+ Simon Kornblith∗
7
+ Google Research, Brain Team
8
9
+ Ludwig Schmidt∗
10
+ University of Washington, Allen Institute for AI
11
12
+ ABSTRACT
13
+ Does progress on ImageNet transfer to real-world datasets? We investigate this
14
+ question by evaluating ImageNet pre-trained models with varying accuracy (57% -
15
+ 83%) on six practical image classification datasets. In particular, we study datasets
16
+ collected with the goal of solving real-world tasks (e.g., classifying images from
17
+ camera traps or satellites), as opposed to web-scraped benchmarks collected for
18
+ comparing models. On multiple datasets, models with higher ImageNet accuracy
19
+ do not consistently yield performance improvements. For certain tasks, interven-
20
+ tions such as data augmentation improve performance even when architectures
21
+ do not. We hope that future benchmarks will include more diverse datasets to
22
+ encourage a more comprehensive approach to improving learning algorithms.
23
+ 1
24
+ INTRODUCTION
25
+ ImageNet is one of the most widely used datasets in machine learning. Initially, the ImageNet com-
26
+ petition played a key role in re-popularizing neural networks with the success of AlexNet in 2012.
27
+ Ten years later, the ImageNet dataset is still one of the main benchmarks for state-of-the-art com-
28
+ puter vision models (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016; Liu
29
+ et al., 2018; Howard et al., 2019; Touvron et al., 2021; Radford et al., 2021). As a result of Ima-
30
+ geNet’s prominence, the machine learning community has invested tremendous effort into develop-
31
+ ing model architectures, training algorithms, and other methodological innovations with the goal of
32
+ increasing performance on ImageNet. Comparing methods on a common task has important benefits
33
+ because it ensures controlled experimental conditions and results in rigorous evaluations. But the
34
+ singular focus on ImageNet also raises the question whether the community is over-optimizing for
35
+ this specific dataset.
36
+ As a first approximation, ImageNet has clearly encouraged effective methodological innovation be-
37
+ yond ImageNet itself. For instance, the key finding from the early years of ImageNet was that
38
large convolutional neural networks (CNNs) can succeed on contemporary computer vision datasets
39
+ by leveraging GPUs for training. This paradigm has led to large improvements in other computer
40
+ vision tasks, and CNNs are now omnipresent in the field. Nevertheless, this clear example of trans-
41
+ fer to other tasks early in the ImageNet evolution does not necessarily justify the continued focus
42
+ ImageNet still receives. For instance, it is possible that early methodological innovations transferred
43
+ more broadly to other tasks, but later innovations have become less generalizable. The goal of our
44
paper is to investigate this possibility specifically for neural network architectures and their transfer
45
+ to real-world data not commonly found on the Internet.
46
+ When discussing the transfer of techniques developed for ImageNet to other datasets, a key ques-
47
+ tion is what other datasets to consider. Currently there is no comprehensive characterization of the
48
+ many machine learning datasets and transfer between them. Hence we restrict our attention to a
49
+ limited but well-motivated family of datasets. In particular, we consider classification tasks derived
50
+ from image data that were specifically collected with the goal of classification in mind. This is in
51
+ ∗Equal contribution
52
+ 1
53
+ arXiv:2301.04644v1 [cs.CV] 11 Jan 2023
54
+
55
+ contrast to many standard computer vision datasets – including ImageNet – where the constituent
56
+ images were originally collected for a different purpose, posted to the web, and later re-purposed for
57
benchmarking computer vision methods. Concretely, we study six datasets spanning tasks from leaf disease
classification and melanoma detection to categorizing animals in camera trap images. Since
59
+ these datasets represent real-world applications, transfer of methods from ImageNet is particularly
60
+ relevant.
61
+ We find that on four out of our six real-world datasets, ImageNet-motivated architecture improve-
62
+ ments after VGG resulted in little to no progress (see Figure 1). Specifically, when we fit a line to
63
+ downstream model accuracies as a function of ImageNet accuracy, the resulting slope is less than
64
+ 0.05. The two exceptions where post-VGG architectures yield larger gains are the Caltech Camera
65
+ Traps-20 (CCT-20) (Beery et al., 2018) dataset (slope 0.11) and the Human Protein Atlas Image
66
+ Classification (Ouyang et al., 2019) dataset (slope 0.29). On multiple other datasets, we find that
67
+ task-specific improvements such as data augmentations or extra training data lead to larger gains
68
+ than using a more recent ImageNet architecture. We evaluate on a representative testbed of 19 Im-
69
ageNet models, ranging from the seminal AlexNet (Krizhevsky et al., 2012) through VGG (Simonyan
70
+ & Zisserman, 2015) and ResNets (He et al., 2016) to the more recent and higher-performing Effi-
71
+ cientNets (Tan & Le, 2019) and ConvNexts (Liu et al., 2022) (ImageNet top-1 accuracies 56.5% to
72
+ 83.4%). Our testbed includes three Vision Transformer models to cover non-CNN architectures.
73
+ Interestingly, our findings stand in contrast to earlier work that investigated the aforementioned
74
+ image classification benchmarks such as CIFAR-10 (Krizhevsky & Hinton, 2009), PASCAL VOC
75
+ 2007 (Everingham et al., 2010), and Caltech-101 (Fei-Fei et al., 2004) that were scraped from the
76
+ Internet. On these datasets, Kornblith et al. (2019) found consistent gains in downstream task ac-
77
+ curacy for a similar range of architectures as we study in our work. Taken together, these findings
78
+ indicate that ImageNet accuracy may be a good predictor for other web-scraped datasets, but less
79
+ informative for real-world image classification datasets that are not sourced through the web. On the
80
+ other hand, the CCT-20 data point shows that even very recent ImageNet models do help on some
81
+ downstream tasks that do not rely on images from the web. Overall, our results highlight the need
82
+ for a more comprehensive understanding of machine learning datasets to build and evaluate broadly
83
+ useful data representations.
84
+ 2
85
+ RELATED WORK
86
+ Transferability of ImageNet architectures. Although there is extensive previous work investigat-
87
+ ing the effect of architecture upon the transferability of ImageNet-pretrained models to different
88
+ datasets, most of this work focuses on performance on datasets collected for the purpose of bench-
89
+ marking. Kornblith et al. (2019) previously showed that ImageNet accuracy of different models is
90
+ strongly correlated with downstream accuracy on a wide variety of web-scraped object-centric com-
91
+ puter vision benchmark tasks. Later studies have investigated the relationship between ImageNet
92
+ and transfer accuracy for self-supervised networks (Ericsson et al., 2021; Kotar et al., 2021; Nayman
93
+ et al., 2022), adversarially trained networks (Salman et al., 2020), or networks trained with different
94
+ loss functions (Kornblith et al., 2021), but still evaluate primarily on web-scraped benchmark tasks.
95
+ The Visual Task Adaptation Benchmark (VTAB) (Zhai et al., 2019) comprises a more diverse set of
96
+ tasks, including natural and non-natural classification tasks as well as non-classification tasks, but
97
+ nearly all consist of web-scraped or synthetic images. In the medical imaging domain, models have
98
+ been extensively evaluated on real-world data, with limited gains from newer models that perform
99
+ better on ImageNet (Raghu et al., 2019; Bressem et al., 2020; Ke et al., 2021).
100
+ Most closely related to our work, Tuggener et al. (2021) investigate performance of 500 CNN archi-
101
+ tectures on yet another set of datasets, several of which are not web-scraped, and find that accuracy
102
+ correlates poorly with ImageNet accuracy when training from scratch, but correlations are higher
103
+ when fine-tuning ImageNet-pretrained models. Our work differs from theirs in our focus solely on
104
+ real-world datasets (e.g., from Kaggle competitions) and in that we perform extensive tuning in order
105
+ to approach the best single-model performance obtainable on these datasets whereas Tuggener et al.
106
+ (2021) instead devote their compute budget to increasing the breadth of architectures investigated.
107
+ Transferability of networks trained on other datasets. Other work has evaluated transferability
108
+ of representations of networks trained on datasets beyond ImageNet. Most notably, Abnar et al.
109
+ (2022) explore the relationship between upstream and downstream accuracy for models pretrained
110
+ on JFT and ImageNet-21K and find that, on many tasks, downstream accuracy saturates with up-
111
+ 2
112
+
113
Figure 1: Overview of transfer performance across models from ImageNet to each of the datasets we study
(Caltech Camera Traps-20, APTOS 2019 Blindness, Human Protein Atlas, SIIM-ISIC Melanoma, Cassava Leaf
Disease, EuroSAT). Each panel plots the task metric (accuracy, quadratic weighted kappa, macro F1 score, or
area under ROC) against ImageNet top-1 accuracy; the legend covers AlexNet, MobileNetV3-small, VGG-13 BN,
DeiT-tiny, ResNet-50, ResNet-152, DeiT-small, PNASNet-5, Inception-ResNet v2, VGG-16 BN, EfficientNet B0,
EfficientNet B4, DenseNet-121, ResNeXt-50-32x4d, ShuffleNetV2x1.0, ConvNext-tiny, ShuffleNetV2x0.5,
SqueezeNet 1.1, and ViT-B/16. Although there seem to be strong linear trends between ImageNet accuracy and
the target metrics (green), these trends become less certain when we restrict the models to those above 70%
ImageNet accuracy (blue). Versions with error bars and spline interpolation can be found in Appendix B.
+ stream accuracy. However, they evaluate representational quality using linear transfer rather than
231
+ end-to-end fine-tuning. Other studies have investigated the impact of relationships between pre-
232
+ training and fine-tuning tasks (Zamir et al., 2018; Mensink et al., 2021) or the impact of scaling the
233
+ model and dataset (Goyal et al., 2019; Kolesnikov et al., 2020).
234
+ Another direction of related work relates to the effect of pretraining data on transfer learning. Huh
235
+ et al. (2016) look into the factors that make ImageNet good for transfer learning. They find that
236
+ fine-grained classes are not needed for good transfer performance, and that reducing the dataset size
237
+ and number of classes only results in slight drops in transfer learning performance. Though there is
238
+ a common goal of exploring what makes transfer learning work well, our work differs from this line
239
+ of work by focusing on the fine-tuning aspect of transfer learning.
240
+ Other studies of external validity of benchmarks. Our study fits into a broader literature inves-
241
+ tigating the external validity of image classification benchmarks. Early work in this area identified
242
+ lack of diversity as a key shortcoming of the benchmarks of the time (Ponce et al., 2006; Torralba
243
+ & Efros, 2011), a problem that was largely resolved with the introduction of the much more di-
244
+ verse ImageNet benchmark (Deng et al., 2009; Russakovsky et al., 2015). More recent studies have
245
+ investigated the extent to which ImageNet classification accuracy correlates with accuracy on out-
246
+ of-distribution (OOD) data (Recht et al., 2019; Taori et al., 2020) or accuracy as measured using
247
+ higher-quality human labels (Shankar et al., 2020; Tsipras et al., 2020; Beyer et al., 2020).
248
+ As in previous studies of OOD generalization, transfer learning involves generalization to test sets
249
+ that differ in distribution from the (pre-)training data. However, there are also key differences be-
250
+ tween transfer learning and OOD generalization. First, in transfer learning, additional training data
251
+ from the target task is used to adapt the model, while OOD evaluations usually apply trained models
252
+ to a new distribution without any adaptation. Second, OOD evaluations usually focus on settings
253
+ with a shared class space so that evaluations without adaptation are possible. In contrast, transfer
254
+ learning evaluation generally involves downstream tasks with classes different from those in the pre-
255
+ training dataset. These differences between transfer learning and OOD generalization are not only
256
+ conceptual but also lead to different empirical phenomena. Miller et al. (2021) has shown that in-
257
+ 3
258
+
259
+ distribution accuracy improvements often directly yield out-of-distribution accuracy improvements
260
+ as well. This is the opposite of our main experimental finding that ImageNet improvements do not
261
+ directly yield performance improvements on many real-world downstream tasks. Hence our work
262
+ demonstrates an important difference between OOD generalization and transfer learning.
263
+ 3
264
+ DATASETS
265
+ As mentioned in the introduction, a key choice in any transfer study is the set of target tasks on which
266
+ to evaluate model performance. Before we introduce our suite of target tasks, we first describe three
267
+ criteria that guided our dataset selection: (i) diverse data sources, (ii) relevance to an application,
268
+ and (iii) availability of well-tuned baseline models for comparison.
269
+ 3.1
270
+ SELECTION CRITERIA
271
+ Prior work has already investigated transfer of ImageNet architectures to many downstream
272
+ datasets (Donahue et al., 2014; Sharif Razavian et al., 2014; Chatfield et al., 2014; Simonyan &
273
+ Zisserman, 2015). The 12 datasets used by Kornblith et al. (2019) often serve as a standard evalu-
274
+ ation suite (e.g., in (Salman et al., 2020; Ericsson et al., 2021; Radford et al., 2021)). While these
275
+ datasets are an informative starting point, they are all object-centric natural image datasets, and do
276
+ not represent the entire range of image classification problems. There are many applications of com-
277
+ puter vision; the Kaggle website alone lists more than 1,500 datasets as of May 2022. To understand
278
+ transfer from ImageNet more broadly, we selected six datasets guided by the following criteria.
279
+ Diverse data sources. Since collecting data is an expensive process, machine learning researchers
280
+ often rely on web scraping to gather data when assembling a new benchmark. This practice has led to
281
+ several image classification datasets with different label spaces such as food dishes, bird species, car
282
+ models, or other everyday objects. However, the data sources underlying these seemingly different
283
+ tasks are actually often similar. Specifically, we surveyed the 12 datasets from Kornblith et al.
284
+ (2019) and found that all of these datasets were harvested from the web, often via keyword searches
285
+ in Flickr, Google image search, or other search engines (see Appendix K). This narrow range of
286
+ data sources limits the external validity of existing transfer learning experiments. To get a broader
287
+ understanding of transfer from ImageNet, we focus on scientific, commercial, and medical image
288
+ classification datasets that were not originally scraped from the web.
289
+ Application relevance. In addition to the data source, the classification task posed on a given set of
290
+ images also affects how relevant the resulting problem is for real-world applications. For instance,
291
+ it would be possible to start with real-world satellite imagery that shows multiple building types
292
+ per image, but only label one of the building types for the purpose of benchmarking (e.g., to avoid
293
+ high annotation costs). The resulting task may then be of limited value for an actual application
294
+ involving the satellite images that requires all buildings to be annotated. We aim to avoid such
295
+ pitfalls by limiting our attention to classification tasks that were assembled by domain experts with
296
+ a specific application in mind.
297
+ Availability of baselines. If methodological progress does not transfer from ImageNet to a given
298
+ target task, we should expect that, as models perform better on ImageNet, accuracy on the target
299
+ task saturates. However, observing such a trend in an experiment is not sufficient to reach a conclu-
300
+ sion regarding transfer because there is an alternative explanation for this empirical phenomenon.
301
+ Besides a lack of transfer, the target task could also simply be easier than the source task so that
302
+ models with sub-optimal source task accuracy already approach the Bayes error rate. As an illus-
303
+ trative example, consider MNIST as a target task for ImageNet transfer. A model with mediocre
304
ImageNet accuracy is already sufficient to get 99% accuracy on MNIST, but this saturation is not
evidence that ImageNet progress fails to transfer to MNIST; the models have simply
hit the MNIST performance ceiling.
307
+ More interesting failures of transfer occur when ImageNet architectures plateau on the target task,
308
+ but it is still possible to improve accuracy beyond what the best ImageNet architecture can achieve
309
+ without target task-specific modifications. In order to make such comparisons, well-tuned base-
310
+ lines for the target task are essential. If improving ImageNet accuracy alone is insufficient to reach
311
+ these well-tuned baselines, we can indeed conclude that architecture transfer to this target task is
312
+ limited. In our experiments, we use multiple datasets from Kaggle competitions since the resulting
313
+ leaderboards offer well-tuned baselines arising from a competitive process.
314
+ 4
315
+
316
+ 3.2
317
+ DATASETS STUDIED
318
Table 1: We examine a variety of real-world datasets that cover different types of tasks.
Dataset               | # of classes | Train size | Eval size | Eval metric              | Kaggle
Caltech Camera Traps  | 15           | 14,071     | 15,215    | Accuracy                 | no
APTOS 2019 Blindness  | 5            | 2,930      | 732       | Quadratic weighted kappa | yes
Human Protein Atlas   | 28           | 22,582     | 5,664     | Macro F1 score           | yes
SIIM-ISIC Melanoma    | 2            | 46,372     | 11,592    | Area under ROC           | yes
Cassava Leaf Disease  | 5            | 17,118     | 4,279     | Accuracy                 | yes
EuroSAT               | 10           | 21,600     | 5,400     | Accuracy                 | no
Figure 2: Sample images from each of the datasets (Caltech Camera Traps-20, APTOS 2019 Blindness Detection,
Human Protein Atlas Image Classification, SIIM-ISIC Melanoma Classification, Cassava Leaf Disease
Classification, EuroSAT).
372
+ The datasets studied in this work are practical and cover a variety of applications. We choose four
373
+ of the most popular image classification competitions on Kaggle, as measured by number of com-
374
+ petitors, teams, and submissions. Each of these competitions is funded by an organization with the
375
+ goal of advancing performance on that real-world task. Additionally, we supplement these datasets
376
+ with Caltech Camera Traps (Beery et al., 2018) and EuroSAT (Helber et al., 2019) to broaden the
377
types of applications studied. Details for each dataset can be found in Table 1 (see footnote 1).
378
+ 4
379
+ MAIN EXPERIMENTS
380
+ We run our experiments across 19 model architectures, including both CNNs and Vision Transform-
381
+ ers (ViT and DeiT). They range from 57% to 83% ImageNet top-1 accuracy, allowing us to observe
382
+ the relationship between ImageNet performance and target dataset performance. In order to get the
383
+ best performance out of each architecture, we do extensive hyperparameter tuning over learning
384
+ rate, weight decay, optimizer, and learning schedule. Details about our experiment setup can be
385
+ found in Appendix C. We now present our results for each of the datasets we investigated. Figure 1
386
+ summarizes our results across all datasets, with additional statistics in Table 2. Appendix A contains
387
+ complete results for all datasets across the hyperparameter grids.
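A minimal sketch of this kind of grid search is shown below; the learning rates, weight decays, and optimizer names are illustrative placeholders rather than the exact grid from Appendix C, and finetune_and_evaluate is a hypothetical stand-in for a full fine-tuning run.

```python
from itertools import product

# Illustrative grid only; the values actually used are listed in Appendix C.
learning_rates = [1e-4, 3e-4, 1e-3, 3e-3]
weight_decays = [0.0, 1e-4]
optimizers = ["sgd", "adamw"]

def finetune_and_evaluate(arch, lr, wd, opt):
    """Hypothetical stand-in: fine-tune `arch` on the target task and return
    its validation metric (accuracy, QWK, macro F1, or AUROC)."""
    raise NotImplementedError

def best_config(arch):
    # Exhaustively try every combination and keep the best validation score.
    results = {(lr, wd, opt): finetune_and_evaluate(arch, lr, wd, opt)
               for lr, wd, opt in product(learning_rates, weight_decays, optimizers)}
    return max(results, key=results.get)
```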
388
+ 4.1
389
+ CALTECH CAMERA TRAPS
390
Table 2: We summarize the blue regression lines from Figure 1, calculated on models above 70% ImageNet
accuracy, with their correlation and slope. Slope is calculated so that all metrics have a range from 0 to 100.
Dataset               | Correlation | Slope
Caltech Camera Traps  | 0.17        | 0.11
APTOS 2019 Blindness  | 0.06        | 0.01
Human Protein Atlas   | 0.26        | 0.29
SIIM-ISIC Melanoma    | 0.44        | 0.05
Cassava Leaf Disease  | 0.12        | 0.02
EuroSAT               | 0.05        | 0.00
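One way to reproduce numbers like those in Table 2, assuming per-model results are available, is an ordinary least-squares fit restricted to models above 70% ImageNet top-1 accuracy; the rescaling convention and the linregress call below are our illustrative choices, not necessarily the exact computation behind the table.

```python
import numpy as np
from scipy.stats import linregress

def trend(imagenet_top1, target_metric, cutoff=70.0, metric_range=(0.0, 1.0)):
    """Correlation and slope of a target metric vs. ImageNet top-1 accuracy.

    Only models above `cutoff` ImageNet accuracy are used, and the target metric
    is rescaled to a 0-100 range so slopes are comparable across tasks.
    Metrics already in percent can pass metric_range=(0.0, 100.0).
    """
    imagenet_top1 = np.asarray(imagenet_top1, dtype=float)
    target = np.asarray(target_metric, dtype=float)
    lo, hi = metric_range
    target = 100.0 * (target - lo) / (hi - lo)
    keep = imagenet_top1 > cutoff
    fit = linregress(imagenet_top1[keep], target[keep])
    return fit.rvalue, fit.slope
```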
415
+ Beery et al. (2018) created Caltech Camera
416
+ Traps-20 (CCT-20) using images taken from
417
+ camera traps deployed to monitor animal pop-
418
+ ulations. The images contain 15 different ani-
419
+ mal classes, as well as an empty class that we
420
+ remove for our experiments 2. The dataset con-
421
+ tains two sets of validation and test sets which
422
+ differ by whether they come from locations that
423
+ are the same as or different from the training set
424
+ locations. While one of the goals of the dataset
425
+ is to study generalization to new environments,
426
+ here we only study the sets from the same locations. Although CCT-20 is not a Kaggle competition,
427
+ it is a subset of the iWildCam Challenge 2018, whose yearly editions have been hosted on Kaggle.
428
+ We see in Figure 1 (top-left) an overall positive trend between ImageNet performance and CCT-
429
+ 20 performance. The overall trend is unsurprising, given the number of animal classes present in
430
+ ImageNet. But despite the drastic reduction in the number of classes when compared to ImageNet,
431
+ 1Dataset download links and PyTorch datasets and splits can be found at https://github.com/
432
+ mlfoundations/imagenet-applications-transfer.
433
+ 2Empty class is removed for the classification experiments in Table 1 of Beery et al. (2018)
434
+ 5
435
+
436
CCT-20 has its own set of challenges. Animals are often pictured at difficult angles, and sometimes
437
+ are not even visible in the image because a sequence of frames triggered by activity all have the
438
same label. Despite these challenges, an even higher-performing model still does better on this task:
we train a CLIP ViT L/14-336px model (85.4% ImageNet top-1) with additional augmentation to
440
+ achieve 83.4% accuracy on CCT-20.
441
+ 4.2
442
+ APTOS 2019 BLINDNESS DETECTION
443
+ This dataset was created for a Kaggle competition run by the Asia Pacific Tele-Ophthalmology
444
+ Society (APTOS) with the goal of advancing medical screening for diabetic retinopathy in rural
445
+ areas (Asia Pacific Tele-Ophthalmology Society, 2019). Images are taken using fundus photography
446
+ and vary in terms of clinic source, camera used, and time taken. Images are labeled by clinicians on
447
+ a scale of 0 to 4 for the severity of diabetic retinopathy. Given the scaled nature of the labels, the
448
+ competition uses quadratic weighted kappa (QWK) as the evaluation metric. We create a local 80%
449
+ to 20% random class-balanced train/validation split, as the competition test labels are hidden.
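For reference, quadratic weighted kappa can be computed with scikit-learn as in the sketch below; the label and prediction arrays are hypothetical placeholders rather than our model outputs.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical severity grades (0-4) for five validation images.
y_true = np.array([0, 2, 4, 1, 3])
y_pred = np.array([0, 2, 3, 1, 3])

# Quadratic weighting penalizes large disagreements (e.g., grade 0 vs. 4)
# much more than off-by-one errors, which is why QWK is preferred to accuracy here.
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"QWK: {qwk:.3f}")
```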
450
We find that models after VGG do not show significant improvement. As on CCT-20, DeiT
and EfficientNet models perform slightly worse, while deeper models within the same architecture family
help performance slightly. We also find that accuracy follows a similar trend to QWK, despite accuracy being
an inferior metric in the context of this dataset.
454
+ When performance stagnates, one might ask whether we have reached a performance limit for our
455
+ class of models on the dataset. To answer this question, we compare with the Kaggle leaderboard’s
456
+ top submissions. The top Kaggle submission achieves 0.936 QWK on the private leaderboard (85%
457
+ of the test set) (Xu, 2019). They do this by using additional augmentation, using external data,
458
+ training on L1-loss, replacing the final pooling layer with generalized mean pooling, and ensembling
459
+ a variety of models trained with different input sizes. The external data consists of 88,702 images
460
+ from the 2015 Diabetic Retinopathy Detection Kaggle competition.
461
+ Even though performance saturates with architecture, we find that additional data augmentation and
462
+ other interventions still improve accuracy. We submitted our ResNet-50 and ResNet-152 models
463
+ with additional interventions, along with an Inception-ResNet v2 (Szegedy et al., 2017b) model with
464
+ hyperparameter tuning. We find that increasing color and affine augmentation by itself can account
465
+ for a 0.03 QWK point improvement. Once we train on 512 input size, additional augmentation, and
466
+ additional data, our ResNet-50 and Inception-ResNet v2 both achieve 0.896 QWK on the private
467
+ leaderboard, while ResNet-152 achieves 0.890 QWK, once again suggesting that better ImageNet
468
+ architectures by themselves do not lead to increased performance on this task.
469
+ As a comparison, the ensemble from the top leaderboard entry included a single model Inception-
470
+ ResNet v2 trained with additional interventions that achieves 0.927 QWK. We submitted the original
471
+ models we trained to Kaggle as well, finding that the new models trained with additional interven-
472
+ tions do at least 0.03 QWK points better. See Appendix F for additional experimental details. Both
473
+ this result and the gap between our models and the top leaderboard models show that there exist
474
+ interventions that do improve task performance.
475
+ 4.3
476
+ HUMAN PROTEIN ATLAS IMAGE CLASSIFICATION
477
+ The Human Protein Atlas runs the Human Protein Atlas Image Classification competition on Kaggle
478
+ to build an automated tool for identifying and locating proteins from high-throughput microscopy
479
+ images (Ouyang et al., 2019). Images can contain multiple of the 28 different proteins, so the
480
+ competition uses the macro F1 score. Given the multi-label nature of the problem, this requires
481
+ thresholding for prediction. We use a 73% / 18% / 9% train / validation / test-validation split created
482
+ by a previous competitor (Park, 2019). We report results on the validation split, as we find that the
483
+ thresholds selected for the larger validation split generalize well to the smaller test-validation split.
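To make the thresholding step concrete, the sketch below selects one threshold per class on validation data and computes the macro F1 score; the threshold grid and the greedy per-class search are our illustrative assumptions, not the exact procedure used in the competition.

```python
import numpy as np
from sklearn.metrics import f1_score

def select_thresholds(probs, labels, grid=np.linspace(0.05, 0.95, 19)):
    """Pick one threshold per class that maximizes its F1 on validation data.

    probs, labels: arrays of shape (num_images, num_classes); labels are 0/1.
    """
    num_classes = probs.shape[1]
    thresholds = np.zeros(num_classes)
    for c in range(num_classes):
        scores = [f1_score(labels[:, c], probs[:, c] >= t) for t in grid]
        thresholds[c] = grid[int(np.argmax(scores))]
    return thresholds

def macro_f1(probs, labels, thresholds):
    # Binarize each class with its own threshold, then average F1 over classes.
    preds = (probs >= thresholds).astype(int)
    return f1_score(labels, preds, average="macro")
```

Choosing thresholds per class rather than a single global cutoff is a common way to cope with rare classes in multi-label problems.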
484
+ We find a slightly positive trend between task performance and ImageNet performance, even when
485
+ ignoring AlexNet and MobileNet. This is surprising because ImageNet is quite visually distinct from
486
+ human protein slides. These results suggest that models with more parameters help with downstream
487
+ performance, especially for tasks that have a lot of room for improvement.
488
+ Specific challenges for this dataset are extreme class imbalance, multi-label thresholding, and gen-
489
+ eralization from the training data to the test set. Competitors were able to improve performance
490
+ beyond the baselines we found by using external data as well as techniques such as data cleaning,
491
+ 6
492
+
493
+ additional training augmentation, test time augmentation, ensembling, and oversampling (Dai, 2019;
494
+ Park, 2019; Shugaev, 2019). Additionally, some competitors modified commonly-used architectures
495
+ by substituting pooling layers or incorporating attention (Park, 2019; Zheng, 2019). Uniquely, the
496
+ first place solution used metric learning on top of a single DenseNet121 (Dai, 2019). These tech-
497
+ niques may be useful when applied to other datasets, but are rarely used in a typical workflow.
498
+ 4.4
499
+ SIIM-ISIC MELANOMA CLASSIFICATION
500
+ The Society for Imaging Informatics in Medicine (SIIM) and the International Skin Imaging Collab-
501
+ oration (ISIC) jointly ran this Kaggle competition for identifying Melanoma (SIIM & ISIC, 2020),
502
+ a serious type of skin cancer. Competitors use images of skin lesions to predict the probability that
503
+ each observed image is malignant. Images come from the ISIC Archive, which is publicly available
504
+ and contains images from a variety of countries. The competition provided 33,126 training images,
505
+ plus an additional 25,331 images from previous competitions. We split the combined data into an
506
+ 80% to 20% class-balanced and year-balanced train/validation split. Given the imbalanced nature of
507
+ the data (8.8% positive), the competition uses area under ROC curve as the evaluation metric.
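As a pointer for the metric, AUROC can be computed directly from predicted malignancy probabilities; the label and probability arrays below are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical labels (1 = malignant) and predicted probabilities.
y_true = np.array([0, 0, 1, 0, 1, 0])
y_prob = np.array([0.05, 0.20, 0.90, 0.40, 0.65, 0.10])

# AUROC depends only on how well malignant cases are ranked above benign ones,
# so it is less sensitive to the 8.8% positive rate than raw accuracy.
print(roc_auc_score(y_true, y_prob))
```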
508
+ We find only a weak positive correlation (0.44) between ImageNet performance and task perfor-
509
+ mance, with a regression line with a normalized slope of close to zero (0.05). But if we instead look
510
+ at classification accuracy, Appendix H shows that there is a stronger trend for transfer than that of
511
+ area under ROC curve, as model task accuracy more closely follows the same order as ImageNet
512
+ performance. This difference shows that characterizing the relationship between better ImageNet
513
+ models and better transfer performance is reliant on the evaluation metric as well. We use a rela-
514
+ tively simple setup to measure the impact of ImageNet models on task performance, but we know we
515
+ can achieve better results with additional strategies. The top two Kaggle solutions used models with
516
+ different input size, ensembling, cross-validation and a significant variety of training augmentation
517
+ to create a stable model that generalized to the hidden test set (Ha et al., 2020; Pan, 2020).
518
+ 4.5
519
+ CASSAVA LEAF DISEASE CLASSIFICATION
520
+ The Makerere Artificial Intelligence Lab is an academic research group focused on applications
521
+ that benefit the developing world. Their goal in creating the Cassava Leaf Disease Classification
522
+ Kaggle competition (Makerere University AI Lab, 2021) was to give farmers access to methods
523
+ for diagnosing plant diseases, which could allow farmers to prevent these diseases from spreading,
524
+ increasing crop yield. Images were taken with an inexpensive camera and labeled by agricultural
525
+ experts. Each image was classified as healthy or as one of four different diseases. We report results
526
+ using a 80%/20% random class-balanced train/validation split of the provided training data.
527
+ Once we ignore models below 70% ImageNet accuracy, the relationship between the performance on
528
+ the two datasets has both a weak positive correlation (0.12) and a near-zero normalized slope (0.02).
529
While these are natural images similar to portions of ImageNet, it is notable that ImageNet contains
very few plant classes (e.g., buckeye, hip, rapeseed). Still, judging by perceived visual similarity to
ImageNet, it is surprising that leaf disease classification does not correlate positively with ImageNet,
while the microscopy-based Human Protein Atlas task does. Our results are supported
533
+ by Kaggle competitors: the first place solution found that on the private leaderboard, EfficientNet
534
+ B4 (Tan & Le, 2019), MobileNet, and ViT (Dosovitskiy et al., 2021b) achieve 89.5%, 89.4%, and
535
+ 88.8% respectively (Hanke, 2021). Their ensemble achieves 91.3% on the private leaderboard.
536
+ 4.6
537
+ EUROSAT
538
+ Helber et al. (2019) created EuroSAT from Sentinel-2 satellite images to classify land use and land
539
+ cover. Past work has improved performance on the dataset through additional training time tech-
540
+ niques (Naushad et al., 2021) and using 13 spectral bands (Yassine et al., 2021). We use RGB
541
+ images and keep our experimental setup consistent to compare across a range of models. Since
542
+ there is no set train/test split, we create a 80%/20% class-balanced split.
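A class-balanced split of this kind can be produced with a stratified split, for example as below; the file names, labels, and random seed are placeholders.

```python
from sklearn.model_selection import train_test_split

# Placeholder file list and labels standing in for the EuroSAT RGB images.
image_paths = [f"img_{i}.jpg" for i in range(100)]
labels = [i % 10 for i in range(100)]  # ten land-use classes

# An 80%/20% split that preserves class proportions on both sides.
train_paths, val_paths, train_labels, val_labels = train_test_split(
    image_paths, labels, test_size=0.2, stratify=labels, random_state=0)
```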
543
+ All models over 60% ImageNet accuracy achieve over 98.5% EuroSAT accuracy, and the majority
544
+ of our models achieve over 99.0% EuroSAT accuracy. There are certain tasks where using better
545
+ ImageNet models does not improve performance, and this would be the extreme case where perfor-
546
+ mance saturation is close to being achieved. While it is outside the scope of this study, a next step
547
+ would be to investigate the remaining errors and find other methods to reduce this last bit of error.
548
+ 7
549
+
550
+ 5
551
+ ADDITIONAL STUDIES
552
+ 5.1
553
+ AUGMENTATION ABLATIONS
554
+ In our main experiments, we keep augmentation simple to minimize confounding factors when com-
555
paring models. However, it is possible that pre-training and fine-tuning with different combinations of
augmentations may yield different results. This is an important point because different architectures
557
+ may have different inductive biases and often use different augmentation strategies at pre-training
558
+ time. To investigate these effects, we run additional experiments on CCT-20 and APTOS to explore
559
+ the effect of data augmentation on transfer. Specifically, we take ResNet-50 models pre-trained with
560
+ standard crop and flip augmentation, AugMix (Hendrycks et al., 2020), and RandAugment (Cubuk
561
+ et al., 2020), and then fine-tune on our default augmentation, AugMix, and RandAugment. We also
562
+ study DeiT-tiny and Deit-small models by fine-tuning on the same three augmentations mentioned
563
+ above. We choose to examine DeiT models because they are pre-trained using RandAugment and
564
+ RandErasing (Zhong et al., 2020). We increase the number of epochs we fine-tune on from 30 to 50
565
+ to account for augmentation. Our experimental results are found in Appendix G.
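The fine-tuning pipelines in this ablation can be expressed with torchvision transforms roughly as in the sketch below; the 224-pixel crop and the ImageNet normalization statistics are conventional defaults we assume here, not necessarily the exact settings of our training code.

```python
from torchvision import transforms

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def finetune_transform(augmentation="default"):
    # Default augmentation: random resized crop plus horizontal flip.
    ops = [transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip()]
    if augmentation == "augmix":
        ops.append(transforms.AugMix())
    elif augmentation == "randaugment":
        ops.append(transforms.RandAugment())
    ops += [transforms.ToTensor(),
            transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD)]
    return transforms.Compose(ops)
```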
566
+ In our ResNet-50 experiments, both AugMix and RandAugment improve performance on ImageNet,
567
+ but while pre-training with RandAugment improves performance on downstream tasks, pre-training
568
+ with AugMix does not. Furthermore, fine-tuning with RandAugment usually yields additional per-
569
+ formance gains when compared to our default fine-tuning augmentation, no matter which pre-trained
570
+ model is used. For DeiT models, we found that additional augmentation did not significantly in-
571
+ crease performance on the downstream tasks. Thus, as with architectures, augmentation strategies
572
+ that improve accuracy on ImageNet do not always improve accuracy on real-world tasks.
573
+ 5.2
574
+ CLIP MODELS
575
+ A natural follow-up to our experiments is to change the source of pre-training data. We exam-
576
+ ine CLIP models from Radford et al. (2021), which use diverse pre-training data and achieve high
577
+ performance on a variety of downstream datasets. We fine-tune CLIP models on each of our down-
578
+ stream datasets by linear probing then fine-tuning (LP-FT) (Kumar et al., 2022).3 Our results are
579
+ visualized by the purple stars in Appendix I Figure 8. We see that by using a model that takes larger
580
+ images we can do better than all previous models, and even without the larger images, ViT-L/14
581
+ does better on four out of the six datasets. While across all CLIP models the change in pre-training
582
+ data increases performance for CCT-20, the effect on the other datasets is more complicated. When
583
+ controlling for architecture changes by only looking at ResNet-50 and ViT/B16, we see that the ad-
584
+ ditional pre-training data helps for CCT-20, HPA, and Cassava, the former two corresponding to the
585
+ datasets that empirically benefit most from using better ImageNet models. Additional results can be
586
+ found in Appendix I, while additional fine-tuning details can be found in Appendix J.
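A minimal sketch of LP-FT for a generic torch image encoder is shown below: first train only a linear head on frozen features, then unfreeze the encoder and fine-tune end to end at a lower learning rate. The epoch counts and learning rates are illustrative assumptions, and `encoder` stands in for the CLIP image tower.

```python
import torch
import torch.nn as nn

def run_epochs(model, loader, loss_fn, opt, epochs, device):
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()

def lp_ft(encoder, feature_dim, num_classes, train_loader,
          lp_epochs=10, ft_epochs=5, lp_lr=1e-3, ft_lr=1e-5, device="cuda"):
    # Assumes `encoder` maps images to (batch, feature_dim) features.
    head = nn.Linear(feature_dim, num_classes).to(device)
    model = nn.Sequential(encoder, head).to(device)
    loss_fn = nn.CrossEntropyLoss()

    # Stage 1: linear probe on frozen features.
    for p in encoder.parameters():
        p.requires_grad_(False)
    opt = torch.optim.AdamW(head.parameters(), lr=lp_lr)
    run_epochs(model, train_loader, loss_fn, opt, lp_epochs, device)

    # Stage 2: fine-tune everything at a much smaller learning rate.
    for p in encoder.parameters():
        p.requires_grad_(True)
    opt = torch.optim.AdamW(model.parameters(), lr=ft_lr)
    run_epochs(model, train_loader, loss_fn, opt, ft_epochs, device)
    return model
```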
587
+ 6
588
+ DISCUSSION
589
+ Alternative explanations for saturation. Whereas Kornblith et al. (2019) reported a high degree
590
+ of correlation between ImageNet and transfer accuracy, we find that better ImageNet models do not
591
+ consistently transfer better on our real-world tasks. We believe these differences are related to the
592
+ tasks themselves. Here, we rule out alternative hypotheses for our findings.
593
Comparison of dataset statistics suggests that the number of classes and dataset size also do not
594
+ explain the differences from Kornblith et al. (2019). The datasets we study range from two to 28
595
+ classes. Although most of the datasets studied in Kornblith et al. (2019) have more classes, CIFAR-
596
+ 10 has 10. In Appendix E, we replicate CIFAR-10 results from Kornblith et al. (2019) using our
597
+ experimental setup, finding a strong correlation between ImageNet accuracy and transfer accuracy.
598
+ Thus, the number of classes is likely not the determining factor. Training set sizes are similar
599
+ between our study and that of Kornblith et al. (2019) and thus also do not seem to play a major role.
600
+ A third hypothesis is that it is parameter count, rather than ImageNet accuracy, that drives trends.
601
+ We see that VGG BN models appear to outperform their ImageNet accuracy on multiple datasets,
602
+ and they are among the largest models by parameter count. However, in Appendix L, we find that
603
+ model size is also not a good indicator of improved transfer performance on real world datasets.
604
+ 3We use LP-FT because, in past experiments, we have found that LP-FT makes hyperparameter tuning
605
+ easier for CLIP models, but does not significantly alter performance when using optimal hyperparameters.
606
+ 8
607
+
608
Differences between web-scraped datasets and real-world images. We conjecture that it is possi-
609
+ ble to perform well on most, if not all, web-scraped target datasets simply by collecting a very large
610
+ amount of data from the Internet and training a very large model on it. Web-scraped target datasets
611
+ are by definition within the distribution of data collected from the web, and a sufficiently large model
612
+ can learn that distribution. In support of this conjecture, recent models such as CLIP (Radford et al.,
613
+ 2021), ALIGN (Jia et al., 2021), ViT-G (Zhai et al., 2022), BASIC (Pham et al., 2021), and CoCa (Yu
614
+ et al., 2022) are trained on very large web-scraped datasets and achieve high accuracy on a variety of
615
+ web-scraped benchmarks. However, this strategy may not be effective for non-web-scraped datasets,
616
+ where there is no guarantee that we will train on data that is close in distribution to the target data,
617
+ even if we train on the entire web. Thus, it makes sense to distinguish these two types of datasets.
618
+ There are clear differences in image distribution between the non-web-scraped datasets we consider
619
and web-scraped datasets considered by previous work. In Figure 3 and Appendix M, we compute
623
+ Fr´echet inception distance (FID) (Heusel et al.,
624
+ 2017) between ImageNet and each of the datasets
625
+ we study in this work as well as the ones found in
626
+ Kornblith et al. (2019). The real-world datasets are
627
+ further away from ImageNet than those found in Ko-
628
+ rnblith et al. (2019), implying that there is a large
629
+ amount of distribution shift between web-scraped
630
+ datasets and real-world datasets. However, FID is
631
+ only a proxy measure and may not capture all fac-
632
+ tors that lead to differences in transferability.
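For reference, FID reduces to a closed form over the means and covariances of Inception features; the sketch below assumes the features have already been extracted (e.g., 2048-dimensional pool3 activations) and follows the standard formulation of Heusel et al. (2017) rather than reproducing our exact script.

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_a, feats_b):
    """FID between two feature matrices of shape (num_images, feat_dim)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # numerical noise can add tiny imaginary parts
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```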
633
+ Whereas web-scraped data is cheap to acquire, real-
634
+ world data can be more expensive. Ideally, progress
635
+ in computer vision architectures should improve per-
636
+ formance not just on web-scraped data, but also on
637
+ real-world tasks. Our results suggest that the latter has not happened. Gains in ImageNet accuracy
638
+ over the last decade have primarily come from improving and scaling architectures, and past work
639
+ has shown that these gains generally transfer to other web-scraped datasets, regardless of size (Sun
640
+ et al., 2017; Kornblith et al., 2019; Mahajan et al., 2018; Xie et al., 2020; Kolesnikov et al., 2020).
641
+ However, we find that improvements arising from architecture generally do not transfer to non-web-
642
+ scraped tasks. Nonetheless, data augmentation and other tweaks can provide further gains on these
643
+ tasks.
644
+ Recommendations towards better benchmarking. While it is unclear whether researchers have
645
+ over-optimized for ImageNet, our work suggests that researchers should explicitly search for meth-
646
+ ods that improve accuracy on real-world non-web-scraped datasets, rather than assuming that meth-
647
+ ods that improve accuracy on ImageNet will provide meaningful improvements on real-world
648
+ datasets as well. Just as there are methods that improve accuracy on ImageNet but not on the tasks
649
+ we investigate, there may be methods that improve accuracy on our tasks but not ImageNet. The
650
+ Kaggle community provides some evidence for the existence of such methods; Kaggle submissions
651
+ often explore architectural improvements that are less common in traditional ImageNet pre-trained
652
+ models. To measure such improvements on real-world problems, we suggest simply using the aver-
653
+ age accuracy across our tasks as a benchmark for future representation learning research.
654
+ Further analysis of our results shows consistencies in the accuracies of different models across the
655
+ non-web-scraped datasets, suggesting that accuracy improvements on these datasets may translate
656
+ to other datasets. For each dataset, we use linear regression to predict model accuracies on the target
657
+ dataset as a linear combination of ImageNet accuracy and accuracy averaged across the other real-
658
+ world datasets. We perform an F-test to determine whether the average accuracy on other real-world
659
+ datasets explains significant variance beyond that explained by ImageNet accuracy. We find that this
660
+ F-test is significant on all datasets except EuroSAT, where accuracy may be very close to ceiling
661
+ (see further analysis in Appendix N.1). Additionally, in Appendix N.2 we compare the Spearman
662
+ rank correlation (i.e., the Pearson correlation between ranks) between each dataset and the accuracy
663
+ averaged across the other real-world datasets to the Spearman correlation between each dataset and
664
+ ImageNet. We find that the correlation with the average over real-world datasets is higher than
665
+ the correlation with ImageNet and statistically significant for CCT-20, APTOS, HPA, and Cassava.
666
+ Thus, there is some signal in the average accuracy across the datasets that we investigate that is not
667
+ captured by ImageNet top-1 accuracy.
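The nested-regression F-test and the rank-correlation comparison can be carried out roughly as below; the arrays are placeholders for per-model results, and the code is a sketch of the standard restricted-vs-full linear model comparison rather than our exact analysis script.

```python
import numpy as np
from scipy import stats

def rss(X, y):
    """Residual sum of squares of an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid), X.shape[1]

def nested_f_test(imagenet_acc, other_avg_acc, target_acc):
    """Does average accuracy on the other real-world tasks explain variance
    beyond ImageNet top-1 accuracy?"""
    rss_restricted, _ = rss(np.column_stack([imagenet_acc]), target_acc)
    rss_full, p_full = rss(np.column_stack([imagenet_acc, other_avg_acc]),
                           target_acc)
    n, q = len(target_acc), 1  # one extra regressor in the full model
    f_stat = ((rss_restricted - rss_full) / q) / (rss_full / (n - p_full))
    p_value = stats.f.sf(f_stat, q, n - p_full)
    return f_stat, p_value

# Rank correlations, as in Appendix N.2:
# rho_other = stats.spearmanr(target_acc, other_avg_acc).correlation
# rho_imagenet = stats.spearmanr(target_acc, imagenet_acc).correlation
```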
668
+ 9
669
+
670
Figure 3: FID scores vs. ImageNet for the datasets we study in this work (red) and the web-scraped datasets
studied by Kornblith et al. (2019) (blue). (Histogram; x-axis: FID, y-axis: number of datasets, legend: Web
(count) vs. Non-web (count).)
Where do our findings leave ImageNet? We suspect that most of the methodological innovations
687
+ that help on ImageNet are useful for some real-world tasks, and in that sense it has been a successful
688
+ benchmark. However, the innovations that improve performance on industrial web-scraped datasets
689
+ such as JFT (Sun et al., 2017) or IG-3.5B-17k (Mahajan et al., 2018) (e.g., model scaling) may be
690
+ almost entirely disjoint from the innovations that help with the non-web-scraped real-world tasks
691
+ studied here (e.g., data augmentation strategies). We hope that future benchmarks will include more
692
+ diverse datasets to encourage a more comprehensive approach to improving learning algorithms.
693
+ 7
694
+ ACKNOWLEDGEMENTS
695
+ We would like to thank Samuel Ainsworth, Sara Beery, Gabriel Ilharco, Pieter-Jan Kindermans,
696
+ Sarah Pratt, Matthew Wallingford, Ross Wightman, and Mitchell Wortsman for valuable conversa-
697
+ tions while working on this project. We would especially like to thank Sarah Pratt for help with
698
+ early experimentation and brainstorming.
699
+ We would also like to thank Hyak computing cluster at the University of Washington and the Google
700
+ TPU Research Cloud program for access to compute resources that allowed us to run our experi-
701
+ ments.
702
+ This work is in part supported by the NSF AI Institute for Foundations of Machine Learning (IFML),
703
+ Open Philanthropy, Google, and the Allen Institute for AI.
704
+ REFERENCES
705
+ Samira Abnar, Mostafa Dehghani, Behnam Neyshabur, and Hanie Sedghi. Exploring the limits of
706
+ large scale pre-training. In International Conference on Learning Representations, 2022. URL
707
+ https://openreview.net/forum?id=V3C8p78sDa.
708
+ Asia Pacific Tele-Ophthalmology Society. Aptos 2019 blindness detection, 2019. URL https:
709
+ //www.kaggle.com/competitions/aptos2019-blindness-detection/
710
+ overview.
711
+ Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In Vittorio Ferrari,
712
+ Martial Hebert, Cristian Sminchisescu, and Yair Weiss (eds.), Computer Vision - ECCV 2018 -
713
+ 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part XVI,
714
+ volume 11220 of Lecture Notes in Computer Science, pp. 472–489. Springer, 2018. doi: 10.1007/
715
+ 978-3-030-01270-0\ 28.
716
+ URL https://doi.org/10.1007/978-3-030-01270-0_
717
+ 28.
718
+ Lucas Beyer, Olivier J H´enaff, Alexander Kolesnikov, Xiaohua Zhai, and A¨aron van den Oord. Are
719
+ we done with imagenet? arXiv preprint arXiv:2006.07159, 2020.
720
+ Keno K Bressem, Lisa C Adams, Christoph Erxleben, Bernd Hamm, Stefan M Niehues, and Janis L
721
+ Vahldiek. Comparing different deep learning architectures for classification of chest radiographs.
722
+ Scientific reports, 10(1):1–16, 2020.
723
+ Ken Chatfield, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Return of the devil in the
724
+ details: Delving deep into convolutional nets. arXiv preprint arXiv:1405.3531, 2014.
725
+ Ekin Dogus Cubuk, Barret Zoph, Jonathon Shlens, and Quoc Le. Randaugment: Practical automated
+ data augmentation with a reduced search space. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia
+ Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information
+ Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS
+ 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/
+ d85b63ef0ccb114d0a3bb7b7d808028f-Abstract.html.
755
+ Shubin Dai. A cnn classifier and a metric learning model, 1st place solution, 2019. URL
+ https://www.kaggle.com/competitions/
+ human-protein-atlas-image-classification/discussion/78109.
773
+ Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hi-
774
+ erarchical image database. In 2009 IEEE conference on computer vision and pattern recognition,
775
+ pp. 248–255. Ieee, 2009.
776
+ 10
777
+
778
+ Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor
779
+ Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In Inter-
780
+ national conference on machine learning, pp. 647–655. PMLR, 2014.
781
+ Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas
782
+ Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszko-
783
+ reit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition
784
+ at scale.
785
+ In 9th International Conference on Learning Representations, ICLR 2021, Virtual
786
+ Event, Austria, May 3-7, 2021. OpenReview.net, 2021a. URL https://openreview.net/
787
+ forum?id=YicbFdNTTy.
788
+ Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas
789
+ Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszko-
790
+ reit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition
791
+ at scale.
792
+ In 9th International Conference on Learning Representations, ICLR 2021, Virtual
793
+ Event, Austria, May 3-7, 2021. OpenReview.net, 2021b. URL https://openreview.net/
794
+ forum?id=YicbFdNTTy.
795
+ Linus Ericsson, Henry Gouk, and Timothy M Hospedales. How well do self-supervised models
796
+ transfer? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recogni-
797
+ tion, pp. 5414–5423, 2021.
798
+ Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John M. Winn, and Andrew Zis-
799
+ serman.
800
+ The pascal visual object classes (VOC) challenge.
801
+ Int. J. Comput. Vis., 88(2):
802
+ 303–338, 2010. doi: 10.1007/s11263-009-0275-4. URL https://doi.org/10.1007/
803
+ s11263-009-0275-4.
804
+ Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training
805
+ examples: An incremental bayesian approach tested on 101 object categories. In IEEE Conference
806
+ on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2004, Washington,
807
+ DC, USA, June 27 - July 2, 2004, pp. 178. IEEE Computer Society, 2004. doi: 10.1109/CVPR.
808
+ 2004.383. URL https://doi.org/10.1109/CVPR.2004.383.
809
+ Priya Goyal, Dhruv Mahajan, Abhinav Gupta, and Ishan Misra. Scaling and benchmarking self-
810
+ supervised visual representation learning. In Proceedings of the ieee/cvf International Conference
811
+ on computer vision, pp. 6391–6400, 2019.
812
+ Qishen Ha, Bo Liu, and Fuxu Liu. Identifying melanoma images using efficientnet ensemble: Win-
813
+ ning solution to the SIIM-ISIC melanoma classification challenge. CoRR, abs/2010.05351, 2020.
814
+ URL https://arxiv.org/abs/2010.05351.
815
+ Jannis Hanke. 1st place solution, 2021. URL https://www.kaggle.com/competitions/
816
+ cassava-leaf-disease-classification/discussion/221957.
817
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
818
+ Deep residual learning for image
819
+ recognition.
820
+ In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR
821
+ 2016, Las Vegas, NV, USA, June 27-30, 2016, pp. 770–778. IEEE Computer Society, 2016. doi:
822
+ 10.1109/CVPR.2016.90. URL https://doi.org/10.1109/CVPR.2016.90.
823
+ Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. Eurosat: A novel dataset
824
+ and deep learning benchmark for land use and land cover classification. IEEE J. Sel. Top. Appl.
825
+ Earth Obs. Remote. Sens., 12(7):2217–2226, 2019. doi: 10.1109/JSTARS.2019.2918242. URL
826
+ https://doi.org/10.1109/JSTARS.2019.2918242.
827
+ Dan Hendrycks, Norman Mu, Ekin Dogus Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshmi-
828
+ narayanan. Augmix: A simple data processing method to improve robustness and uncertainty. In
829
+ 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia,
830
+ April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=
831
+ S1gmrxHFvB.
832
+ Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter.
833
+ Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Isabelle
834
+ 11
835
+
836
+ Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vish-
837
+ wanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30:
838
+ Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long
839
+ Beach, CA, USA, pp. 6626–6637, 2017.
840
+ URL https://proceedings.neurips.cc/
841
+ paper/2017/hash/8a1d694707eb0fefe65871369074926d-Abstract.html.
842
+ Andrew Howard, Ruoming Pang, Hartwig Adam, Quoc V. Le, Mark Sandler, Bo Chen, Weijun
843
+ Wang, Liang-Chieh Chen, Mingxing Tan, Grace Chu, Vijay Vasudevan, and Yukun Zhu. Search-
844
+ ing for mobilenetv3. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV
845
+ 2019, Seoul, Korea (South), October 27 - November 2, 2019, pp. 1314–1324. IEEE, 2019. doi:
846
+ 10.1109/ICCV.2019.00140. URL https://doi.org/10.1109/ICCV.2019.00140.
847
+ Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected
848
+ convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition,
849
+ CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pp. 2261–2269. IEEE Computer Society,
850
+ 2017. doi: 10.1109/CVPR.2017.243. URL https://doi.org/10.1109/CVPR.2017.
851
+ 243.
852
+ Minyoung Huh, Pulkit Agrawal, and Alexei A. Efros. What makes imagenet good for transfer
853
+ learning? CoRR, abs/1608.08614, 2016. URL http://arxiv.org/abs/1608.08614.
854
+ Forrest N. Iandola, Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt
855
+ Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <1mb model size.
856
+ CoRR, abs/1602.07360, 2016. URL http://arxiv.org/abs/1602.07360.
857
+ Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan
858
+ Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning
859
+ with noisy text supervision. In International Conference on Machine Learning, pp. 4904–4916.
860
+ PMLR, 2021.
861
+ Alexander Ke, William Ellsworth, Oishi Banerjee, Andrew Y Ng, and Pranav Rajpurkar. Chextrans-
862
+ fer: performance and parameter efficiency of imagenet models for chest x-ray interpretation. In
863
+ Proceedings of the Conference on Health, Inference, and Learning, pp. 116–124, 2021.
864
+ Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly,
865
+ and Neil Houlsby. Big transfer (bit): General visual representation learning. In European confer-
866
+ ence on computer vision, pp. 491–507. Springer, 2020.
867
+ Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better imagenet models transfer better? In
868
+ Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2661–
869
+ 2671, 2019.
870
+ Simon Kornblith, Ting Chen, Honglak Lee, and Mohammad Norouzi. Why do better loss functions
871
+ lead to less transferable features? Advances in Neural Information Processing Systems, 34, 2021.
872
+ Klemen Kotar, Gabriel Ilharco, Ludwig Schmidt, Kiana Ehsani, and Roozbeh Mottaghi. Contrasting
873
+ contrastive self-supervised representation learning pipelines. In Proceedings of the IEEE/CVF
874
+ International Conference on Computer Vision, pp. 9949–9959, 2021.
875
+ Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Tech-
876
+ nical report, University of Toronto, 2009.
877
+ Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep con-
878
+ volutional neural networks.
879
+ In Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C.
880
+ Burges, L´eon Bottou, and Kilian Q. Weinberger (eds.), Advances in Neural Information Pro-
881
+ cessing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012.
882
+ Proceedings of a meeting held December 3-6, 2012, Lake Tahoe, Nevada, United States, pp.
883
+ 1106–1114, 2012. URL https://proceedings.neurips.cc/paper/2012/hash/
884
+ c399862d3b9d6b76c8436e924a68c45b-Abstract.html.
885
+ Ananya Kumar, Aditi Raghunathan, Robbie Matthew Jones, Tengyu Ma, and Percy Liang. Fine-
886
+ tuning can distort pretrained features and underperform out-of-distribution. In International Con-
887
+ ference on Learning Representations, 2022.
888
+ URL https://openreview.net/forum?
889
+ id=UYneFzXSJWh.
890
+ 12
891
+
892
+ Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan L.
893
+ Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In Vittorio
894
+ Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair Weiss (eds.), Computer Vision - ECCV
895
+ 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part
896
+ I, volume 11205 of Lecture Notes in Computer Science, pp. 19–35. Springer, 2018. doi: 10.1007/
897
+ 978-3-030-01246-5\ 2. URL https://doi.org/10.1007/978-3-030-01246-5_2.
898
+ Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie.
899
+ A convnet for the 2020s. CoRR, abs/2201.03545, 2022. URL https://arxiv.org/abs/
900
+ 2201.03545.
901
+ Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet V2: practical guidelines
902
+ for efficient CNN architecture design. In Vittorio Ferrari, Martial Hebert, Cristian Sminchis-
903
+ escu, and Yair Weiss (eds.), Computer Vision - ECCV 2018 - 15th European Conference, Mu-
904
+ nich, Germany, September 8-14, 2018, Proceedings, Part XIV, volume 11218 of Lecture Notes
905
+ in Computer Science, pp. 122–138. Springer, 2018. doi: 10.1007/978-3-030-01264-9\ 8. URL
906
+ https://doi.org/10.1007/978-3-030-01264-9_8.
907
+ Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li,
908
+ Ashwin Bharambe, and Laurens Van Der Maaten. Exploring the limits of weakly supervised
909
+ pretraining. In Proceedings of the European conference on computer vision (ECCV), pp. 181–
910
+ 196, 2018.
911
+ Makerere University AI Lab.
912
+ Cassava leaf disease classification, 2021.
913
+ URL https://
914
+ www.kaggle.com/competitions/cassava-leaf-disease-classification/
915
+ overview.
916
+ Thomas Mensink, Jasper Uijlings, Alina Kuznetsova, Michael Gygli, and Vittorio Ferrari. Factors
917
+ of influence for transfer learning across diverse appearance domains and task types. IEEE Trans-
918
+ actions on Pattern Analysis and Machine Intelligence, pp. 1–1, 2021. doi: 10.1109/TPAMI.2021.
919
+ 3129870.
920
+ John Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar,
921
+ Percy Liang, Yair Carmon, and Ludwig Schmidt. Accuracy on the line: on the strong correla-
922
+ tion between out-of-distribution and in-distribution generalization. In Marina Meila and Tong
923
+ Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML
924
+ 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Re-
925
+ search, pp. 7721–7735. PMLR, 2021. URL http://proceedings.mlr.press/v139/
926
+ miller21b.html.
927
+ Raoof Naushad, Tarunpreet Kaur, and Ebrahim Ghaderpour. Deep transfer learning for land use
928
+ and land cover classification: A comparative study. Sensors, 21(23):8083, 2021. doi: 10.3390/
929
+ s21238083. URL https://doi.org/10.3390/s21238083.
930
+ Niv Nayman, Avram Golbert, Asaf Noy, Tan Ping, and Lihi Zelnik-Manor. Diverse imagenet models
931
+ transfer better. arXiv preprint arXiv:2204.09134, 2022.
932
+ Wei Ouyang, Casper F. Winsnes, Martin Hjelmare, Anthony J. Cesnik, Lovisa ˚Akesson, Hao Xu,
933
+ Devin P. Sullivan, Shubin Dai, Jun Lan, Park Jinmo, Shaikat M. Galib, Christof Henkel, Kevin
934
+ Hwang, Dmytro Poplavskiy, Bojan Tunguz, Russel D. Wolfinger, Yinzheng Gu, Chuanpeng Li,
935
+ Jinbin Xie, Dmitry Buslov, Sergei Fironov, Alexander Kiselev, Dmytro Panchenko, Xuan Cao,
936
+ Runmin Wei, Yuanhao Wu, Xun Zhu, Kuan-Lun Tseng, Zhifeng Gao, Cheng Ju, Xiaohan Yi,
937
+ Hongdong Zheng, Constantin Kappel, and Emma Lundberg. Analysis of the human protein at-
938
+ las image classification competition. Nature Methods, 16(12):1254–1261, 2019. doi: 10.1038/
939
+ s41592-019-0658-6. URL https://doi.org/10.1038/s41592-019-0658-6.
940
+ Ian Pan.
941
+ [2nd place] solution overview, 2020.
942
+ URL https://www.kaggle.com/
943
+ competitions/siim-isic-melanoma-classification/discussion/
944
+ 175324.
945
+ Jinmo Park.
946
+ 3rd place solution with code., 2019.
947
+ URL https://www.kaggle.com/c/
948
+ human-protein-atlas-image-classification/discussion/77320.
949
+ 13
950
+
951
+ Hieu Pham, Zihang Dai, Golnaz Ghiasi, Hanxiao Liu, Adams Wei Yu, Minh-Thang Luong, Mingx-
952
+ ing Tan, and Quoc V Le.
953
+ Combined scaling for zero-shot transfer learning.
954
+ arXiv preprint
955
+ arXiv:2111.10050, 2021.
956
+ Jean Ponce, Tamara L Berg, Mark Everingham, David A Forsyth, Martial Hebert, Svetlana Lazeb-
957
+ nik, Marcin Marszalek, Cordelia Schmid, Bryan C Russell, Antonio Torralba, et al. Dataset issues
958
+ in object recognition. In Toward category-level object recognition, pp. 29–48. Springer, 2006.
959
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal,
960
+ Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual
961
+ models from natural language supervision. In International Conference on Machine Learning,
962
+ pp. 8748–8763. PMLR, 2021.
963
+ Maithra Raghu, Chiyuan Zhang, Jon Kleinberg, and Samy Bengio. Transfusion: Understanding
964
+ transfer learning for medical imaging. Advances in neural information processing systems, 32,
965
+ 2019.
966
+ Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiers
967
+ generalize to imagenet?
968
+ In International Conference on Machine Learning, pp. 5389–5400.
969
+ PMLR, 2019.
970
+ Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng
971
+ Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual
972
+ recognition challenge. International journal of computer vision, 115(3):211–252, 2015.
973
+ Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do adver-
974
+ sarially robust imagenet models transfer better?
975
+ Advances in Neural Information Processing
976
+ Systems, 33:3533–3545, 2020.
977
+ Vaishaal Shankar, Rebecca Roelofs, Horia Mania, Alex Fang, Benjamin Recht, and Ludwig
978
+ Schmidt. Evaluating machine accuracy on imagenet. In International Conference on Machine
979
+ Learning, pp. 8634–8644. PMLR, 2020.
980
+ Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. Cnn features off-
981
+ the-shelf: an astounding baseline for recognition. In Proceedings of the IEEE conference on
982
+ computer vision and pattern recognition workshops, pp. 806–813, 2014.
983
+ Maxim Shugaev. Pretrained resnet34 with rgby (0.460 public lb), 2019. URL
+ https://www.kaggle.com/code/iafoss/
+ pretrained-resnet34-with-rgby-0-460-public-lb/notebook.
996
+ SIIM and ISIC. Siim-isic melanoma classification, 2020. URL https://www.kaggle.com/
997
+ competitions/siim-isic-melanoma-classification/overview.
998
+ Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image
999
+ recognition. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning
1000
+ Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceed-
1001
+ ings, 2015. URL http://arxiv.org/abs/1409.1556.
1002
+ Andreas Steiner, Alexander Kolesnikov, Xiaohua Zhai, Ross Wightman, Jakob Uszkoreit, and Lucas
1003
+ Beyer. How to train your vit? data, augmentation, and regularization in vision transformers.
1004
+ CoRR, abs/2106.10270, 2021. URL https://arxiv.org/abs/2106.10270.
1005
+ Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable ef-
1006
+ fectiveness of data in deep learning era. In Proceedings of the IEEE international conference on
1007
+ computer vision, pp. 843–852, 2017.
1008
+ Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A. Alemi.
1009
+ Inception-v4,
1010
+ inception-resnet and the impact of residual connections on learning. In Satinder Singh and Shaul
1011
+ Markovitch (eds.), Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence,
1012
+ February 4-9, 2017, San Francisco, California, USA, pp. 4278–4284. AAAI Press, 2017a. URL
1013
+ http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14806.
1014
+ 14
1015
+
1016
+ Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A. Alemi.
1017
+ Inception-v4,
1018
+ inception-resnet and the impact of residual connections on learning. In Satinder Singh and Shaul
1019
+ Markovitch (eds.), Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence,
1020
+ February 4-9, 2017, San Francisco, California, USA, pp. 4278–4284. AAAI Press, 2017b. URL
1021
+ http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14806.
1022
+ Mingxing Tan and Quoc V. Le. Efficientnet: Rethinking model scaling for convolutional neural
1023
+ networks. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th
1024
+ International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, Cali-
1025
+ fornia, USA, volume 97 of Proceedings of Machine Learning Research, pp. 6105–6114. PMLR,
1026
+ 2019. URL http://proceedings.mlr.press/v97/tan19a.html.
1027
+ Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, and Ludwig
1028
+ Schmidt. Measuring robustness to natural distribution shifts in image classification. Advances
1029
+ in Neural Information Processing Systems, 33:18583–18599, 2020.
1030
+ Antonio Torralba and Alexei A Efros. Unbiased look at dataset bias. In CVPR 2011, pp. 1521–1528.
1031
+ IEEE, 2011.
1032
+ Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and
1033
+ Herv´e J´egou. Training data-efficient image transformers & distillation through attention. In Ma-
1034
+ rina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine
1035
+ Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine
1036
+ Learning Research, pp. 10347–10357. PMLR, 2021. URL http://proceedings.mlr.
1037
+ press/v139/touvron21a.html.
1038
+ Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Andrew Ilyas, and Aleksander Madry. From
1039
+ imagenet to image classification: Contextualizing progress on benchmarks. In International Con-
1040
+ ference on Machine Learning, pp. 9625–9635. PMLR, 2020.
1041
+ Lukas Tuggener, J¨urgen Schmidhuber, and Thilo Stadelmann. Is it enough to optimize cnn architec-
1042
+ tures on imagenet? arXiv preprint arXiv:2103.09108, 2021.
1043
+ Ross Wightman, Hugo Touvron, and Herv´e J´egou.
1044
+ Resnet strikes back: An improved training
1045
+ procedure in timm. CoRR, abs/2110.00476, 2021. URL https://arxiv.org/abs/2110.
1046
+ 00476.
1047
+ Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. Self-training with noisy student
1048
+ improves imagenet classification. In Proceedings of the IEEE/CVF conference on computer vision
1049
+ and pattern recognition, pp. 10687–10698, 2020.
1050
+ Saining Xie, Ross B. Girshick, Piotr Doll´ar, Zhuowen Tu, and Kaiming He. Aggregated resid-
1051
+ ual transformations for deep neural networks. In 2017 IEEE Conference on Computer Vision
1052
+ and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pp. 5987–5995.
1053
+ IEEE Computer Society, 2017. doi: 10.1109/CVPR.2017.634. URL https://doi.org/10.
1054
+ 1109/CVPR.2017.634.
1055
+ Guanshuo Xu.
1056
+ 1st place solution summary, 2019.
1057
+ URL https://www.kaggle.com/
1058
+ competitions/aptos2019-blindness-detection/discussion/108065.
1059
+ H. Yassine, K. Tout, and M. Jaber.
1060
+ Improving Lulc Classification from Satellite Imagery Us-
1061
+ ing Deep Learning - Eurosat Dataset.
1062
+ ISPRS - International Archives of the Photogram-
1063
+ metry, Remote Sensing and Spatial Information Sciences, 43B3:369–376, June 2021.
1064
+ doi:
1065
+ 10.5194/isprs-archives-XLIII-B3-2021-369-2021.
1066
+ Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and Yonghui
1067
+ Wu.
1068
+ Coca:
1069
+ Contrastive captioners are image-text foundation models.
1070
+ arXiv preprint
1071
+ arXiv:2205.01917, 2022.
1072
+ Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio
1073
+ Savarese. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE con-
1074
+ ference on computer vision and pattern recognition, pp. 3712–3722, 2018.
1075
+ 15
1076
+
1077
+ Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario
1078
+ Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, et al. The
1079
+ visual task adaptation benchmark. 2019.
1080
+ Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision transformers.
1081
+ In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.
1082
+ 12104–12113, 2022.
1083
+ Kevin Zheng. 39th solution-attention gated resnet18 (single model without cv), 2019. URL
+ https://www.kaggle.com/competitions/
+ human-protein-atlas-image-classification/discussion/77637.
1098
+ Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data aug-
1099
+ mentation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The
1100
+ Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The
1101
+ Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New
1102
+ York, NY, USA, February 7-12, 2020, pp. 13001–13008. AAAI Press, 2020.
1103
+ URL https:
1104
+ //ojs.aaai.org/index.php/AAAI/article/view/7000.
1105
+ 16
1106
+
1107
+ Appendix
1108
+ A
1109
+ DETAILED EXPERIMENT RESULTS
1110
+ Table 3: For each ImageNet pre-trained model, we provide the best performing model when fine-tuned on each
1111
+ dataset across our hyperparameter grid
1112
+ Model
1113
+ ImageNet top-1
1114
+ CCT20
1115
+ APTOS
1116
+ HPA
1117
+ Melanoma
1118
+ Cassava
1119
+ EuroSAT
1120
+ AlexNet
1121
+ 56.5
1122
+ 63.59
1123
+ 0.8835
1124
+ 0.3846
1125
+ 0.9283
1126
+ 82.58
1127
+ 97.93
1128
+ SqueezeNet 1.1
1129
+ 58.2
1130
+ 66.36
1131
+ 0.9021
1132
+ 0.3972
1133
+ 0.9073
1134
+ 85.15
1135
+ 98.07
1136
+ ShuffleNetV2x0.5
1137
+ 60.6
1138
+ 66.37
1139
+ 0.9227
1140
+ 0.5867
1141
+ 0.9289
1142
+ 85.64
1143
+ 98.56
1144
+ MobileNet V3 small
1145
+ 67.7
1146
+ 66.01
1147
+ 0.9230
1148
+ 0.6108
1149
+ 0.9455
1150
+ 85.81
1151
+ 99.15
1152
+ ShuffleNetV2x1.0
1153
+ 69.4
1154
+ 69.27
1155
+ 0.9202
1156
+ 0.6202
1157
+ 0.9418
1158
+ 87.33
1159
+ 98.91
1160
+ VGG-13 BN
1161
+ 71.6
1162
+ 75.06
1163
+ 0.9268
1164
+ 0.6794
1165
+ 0.9529
1166
+ 88.99
1167
+ 98.85
1168
+ DeiT-tiny
1169
+ 72.2
1170
+ 68.77
1171
+ 0.9130
1172
+ 0.5777
1173
+ 0.9510
1174
+ 86.25
1175
+ 99.11
1176
+ VGG-16 BN
1177
+ 73.4
1178
+ 75.93
1179
+ 0.9287
1180
+ 0.6791
1181
+ 0.9531
1182
+ 88.45
1183
+ 98.93
1184
+ DenseNet-121
1185
+ 74.4
1186
+ 74.66
1187
+ 0.9287
1188
+ 0.7019
1189
+ 0.9514
1190
+ 87.80
1191
+ 99.06
1192
+ ResNet-50
1193
+ 76.1
1194
+ 73.96
1195
+ 0.9215
1196
+ 0.6718
1197
+ 0.9524
1198
+ 87.75
1199
+ 99.19
1200
+ ResNeXt-50-32x4d
1201
+ 77.6
1202
+ 73.73
1203
+ 0.9212
1204
+ 0.6906
1205
+ 0.9588
1206
+ 88.15
1207
+ 99.24
1208
+ EfficientNet B0
1209
+ 77.7
1210
+ 71.02
1211
+ 0.9195
1212
+ 0.6942
1213
+ 0.9456
1214
+ 87.63
1215
+ 98.80
1216
+ ResNet-152
1217
+ 78.3
1218
+ 74.05
1219
+ 0.9228
1220
+ 0.6732
1221
+ 0.9562
1222
+ 87.75
1223
+ 99.15
1224
+ ViT-B/16
1225
+ 78.7
1226
+ 72.07
1227
+ 0.9262
1228
+ 0.5852
1229
+ 0.9600
1230
+ 86.63
1231
+ 99.28
1232
+ DeiT-small
1233
+ 79.9
1234
+ 71.41
1235
+ 0.9205
1236
+ 0.6148
1237
+ 0.9583
1238
+ 87.19
1239
+ 99.20
1240
+ Inception-ResNet v2
1241
+ 80.4
1242
+ 70.68
1243
+ 0.9168
1244
+ 0.6882
1245
+ 0.9483
1246
+ 87.84
1247
+ 98.93
1248
+ ConvNext-tiny
1249
+ 82.5
1250
+ 78.51
1251
+ 0.9297
1252
+ 0.6992
1253
+ 0.9628
1254
+ 88.89
1255
+ 99.11
1256
+ PNASNet-5 large
1257
+ 82.9
1258
+ 75.21
1259
+ 0.9271
1260
+ 0.6941
1261
+ 0.9584
1262
+ 87.77
1263
+ 99.17
1264
+ EfficientNet B4
1265
+ 83.4
1266
+ 73.49
1267
+ 0.9211
1268
+ 0.6954
1269
+ 0.9552
1270
+ 88.36
1271
+ 98.70
1272
+ See the following link for experiment results across hyperparameters: https://docs.google.
1273
+ com/spreadsheets/d/1aDeuTH0V1Kid_JMRUt3sF1N76LUCAMDQ007Ykjo3Z4U/
1274
+ edit?usp=sharing.
1275
+ 17
1276
+
1277
+ B
1278
+ MAIN FIGURE VARIATIONS
1279
+ [Figure: six panels (Caltech Camera Traps 20, APTOS 2019 Blindness, Human Protein Atlas, SIIM-ISIC
+ Melanoma, Cassava Leaf Disease, EuroSAT) plotting each task metric against ImageNet top-1 accuracy
+ for the 19 ImageNet pre-trained models; see caption below.]
1401
+ Figure 4: Figure 1 with error bars. Green is linear trend of all models, while blue is linear trend for models
1402
+ above 70% ImageNet accuracy. We use 95% confidence intervals computed with Clopper-Pearson for accuracy
1403
+ metrics and bootstrap with 10,000 trials for other metrics.
1404
+ [Figure: the same six panels as Figure 4; see caption below.]
1524
+ Figure 5: Figure 4 with spline interpolation fits instead of linear fits.
1525
+ 18
1526
+
1527
+ C
1528
+ EXPERIMENT SETUP
1529
+ C.1
1530
+ MODELS
1531
+ Table 4: We examine the effectiveness of transfer learning from a number of models pretrained on ImageNet,
1532
+ including both CNNs and Vision Transformers.
1533
+ Model
1534
+ ImageNet top-1
1535
+ # params
1536
+ Year Released
1537
+ AlexNet (Krizhevsky et al., 2012)
1538
+ 56.5
1539
+ 61M
1540
+ 2012
1541
+ SqueezeNet 1.1 (Iandola et al., 2016)
1542
+ 58.2
1543
+ 1.2M
1544
+ 2016
1545
+ ShuffleNetV2x0.5 (Ma et al., 2018)
1546
+ 60.6
1547
+ 1.4M
1548
+ 2018
1549
+ MobileNet V3 small (Howard et al., 2019)
1550
+ 67.7
1551
+ 2.5M
1552
+ 2019
1553
+ ShuffleNetV2x1.0 (Ma et al., 2018)
1554
+ 69.4
1555
+ 2.3M
1556
+ 2018
1557
+ VGG-13 BN (Simonyan & Zisserman, 2015)
1558
+ 71.6
1559
+ 133M
1560
+ 2014/2015
1561
+ DeiT-tiny (Touvron et al., 2021)
1562
+ 72.2
1563
+ 5.7M
1564
+ 2020
1565
+ VGG-16 BN (Simonyan & Zisserman, 2015)
1566
+ 73.4
1567
+ 138M
1568
+ 2014/2015
1569
+ DenseNet-121 (Huang et al., 2017)
1570
+ 74.4
1571
+ 8.0M
1572
+ 2016
1573
+ ResNet-50 (He et al., 2016)
1574
+ 76.1
1575
+ 26M
1576
+ 2015
1577
+ ResNeXt-50-32x4d (Xie et al., 2017)
1578
+ 77.6
1579
+ 25M
1580
+ 2016
1581
+ EfficientNet B0 (Tan & Le, 2019)
1582
+ 77.7
1583
+ 5.3M
1584
+ 2019
1585
+ ResNet-152 (He et al., 2016)
1586
+ 78.3
1587
+ 60M
1588
+ 2015
1589
+ ViT-B/16 (Dosovitskiy et al., 2021a; Steiner et al., 2021)
1590
+ 78.7
1591
+ 304M
1592
+ 2020
1593
+ DeiT-small (Touvron et al., 2021)
1594
+ 79.9
1595
+ 22M
1596
+ 2020
1597
+ Inception-ResNet v2 (Szegedy et al., 2017a)
1598
+ 80.4
1599
+ 56M
1600
+ 2016
1601
+ ConvNext-tiny (Liu et al., 2022)
1602
+ 82.5
1603
+ 29M
1604
+ 2022
1605
+ PNASNet-5 large (Liu et al., 2018)
1606
+ 82.9
1607
+ 86M
1608
+ 2017
1609
+ EfficientNet B4 (Tan & Le, 2019)
1610
+ 83.4
1611
+ 19M
1612
+ 2019
1613
+ We examine 19 model architectures in this work that cover a diverse range of accuracies on ImageNet
1614
+ in order to observe the relationship between ImageNet performance and target dataset performance.
1615
+ In addition to the commonly used CNNs, we also include data-efficient image transformers (DeiT)
1616
+ due to the recent increase in usage of Vision Transformers. Additional model details are in Table 4.
1617
+ C.2
1618
+ HYPERPARAMETER GRID
1619
+ Hyperparameter tuning is a key part of neural network training, as using suboptimal hyperparameters
1620
+ can lead to suboptimal performance. Furthermore, the correct hyperparameters vary across both
1621
+ models and training data. To get the best performance out of each model, we train each model
1622
+ on AdamW with a cosine decay learning rate schedule, SGD with a cosine decay learning rate
1623
+ schedule, and SGD with a multi-step decay learning rate schedule. We also grid search for optimal
1624
+ initial learning rate and weight decay combinations, searching logarithmically between 10−1 to
1625
+ 10−4 for SGD learning rate, 10−2 to 10−5 for AdamW learning rate, and 10−3 to 10−6 as well as
1626
+ 0 for weight decay. All models are pretrained on ImageNet and then fine-tuned on the downstream
1627
+ task. Additional training details for each dataset can be found in Appendix D. We also run our
1628
+ hyperparameter grid on CIFAR-10 in Appendix E to verify that we find a strong relationship between
1629
+ ImageNet and CIFAR-10 accuracy as previously reported by Kornblith et al. (2019).
1630
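+ A minimal sketch of how this grid can be enumerated (illustrative only; the optimizer and schedule
+ labels are placeholders for the settings described above):
+ import itertools
+
+ # optimizer / learning-rate-schedule combinations described above
+ optim_schedules = [("sgd", "cosine"), ("sgd", "multistep"), ("adamw", "cosine")]
+ sgd_lrs = [1e-1, 1e-2, 1e-3, 1e-4]      # searched logarithmically
+ adamw_lrs = [1e-2, 1e-3, 1e-4, 1e-5]
+ weight_decays = [1e-3, 1e-4, 1e-5, 1e-6, 0.0]
+
+ grid = []
+ for opt, sched in optim_schedules:
+     lrs = sgd_lrs if opt == "sgd" else adamw_lrs
+     for lr, wd in itertools.product(lrs, weight_decays):
+         grid.append({"optimizer": opt, "schedule": sched, "lr": lr, "weight_decay": wd})
+ # 3 optimizer/schedule settings x 4 learning rates x 5 weight decays = 60 runs per model and dataset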
+ D
1631
+ TRAINING DETAILS BY DATASET (IMAGENET MODELS)
1632
+ Experiments on the Cassava Leaf Disease, SIIM-ISIC Melanoma, and EuroSAT datasets were run on
+ TPU v2-8s, while all other datasets were run on NVIDIA A40s.
+ All experiments were run with a mini-batch size of 128.
1635
+ For SGD experiments, we use Nesterov momentum, set momentum to 0.9, and try learning rates of
1636
+ 1e-1, 1e-2, 1e-3, and 1e-4. For AdamW experiments, we try learning rates of 1e-2, 1e-3, 1e-4, 1e-5.
1637
+ For all experiments, we try weight decays of 1e-3, 1e-4, 1e-5, 1e-6, and 0.
1638
+ For all experiments, we use weights that are pretrained on ImageNet. AlexNet, DenseNet, Mo-
1639
+ bileNet, ResNet, ResNext, ShuffleNet, SqueezeNet and VGG models are from torchvision, while
1640
+ ConvNext, DeiT, EfficientNet, InceptionResNet, and PNASNet models are from timm. Addition-
1641
+ ally, we normalize images to ImageNet’s mean and standard deviation.
1642
+ For EuroSAT we random resize crop to 224 with area at least 0.65.
1643
+ For all other datasets, we random resize crop with area at least 0.65 to 224 for DeiT models, and 256
1644
+ for all other models. Additionally, we use horizontal flips. For Human Protein Atlas, Cassava Leaf
1645
+ Disease, and SIIM-ISIC Melanoma, we also use vertical flips.
1646
+ 19
1647
+
1648
+ For SIIM-ISIC Melanoma, we train for 10 epochs, and for the step scheduler decay with factor 0.1
1649
+ at 5 epochs.
1650
+ For all other datasets, we train for 30 epochs, and for the step scheduler decay with factor 0.1 at 15,
1651
+ 20, and 25 epochs.
1652
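+ A sketch of the optimizer and schedule construction implied by the settings above (illustrative;
+ the helper name and arguments are not taken from the original training code):
+ import torch
+
+ def make_optimizer_and_scheduler(model, opt_name, sched_name, lr, wd, epochs=30,
+                                  milestones=(15, 20, 25)):
+     if opt_name == "sgd":
+         # SGD with Nesterov momentum 0.9, as described above
+         optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9,
+                                     nesterov=True, weight_decay=wd)
+     else:
+         optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=wd)
+     if sched_name == "cosine":
+         scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
+     else:
+         # step decay by a factor of 0.1 at the listed epochs (15/20/25 for 30-epoch runs, 5 for Melanoma)
+         scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=list(milestones), gamma=0.1)
+     return optimizer, scheduler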
+ E
1653
+ CIFAR-10 ON HYPERPARAMETER GRID
1654
+ [Figure: CIFAR-10 accuracy plotted against ImageNet top-1 accuracy for the ImageNet pre-trained
+ models; see caption below.]
1687
+ Figure 6: Transfer performance across models from ImageNet to CIFAR-10. Green linear trend is computed
1688
+ across all models, while blue linear trend is restricted to models above 70% ImageNet accuracy. We use 95%
1689
+ confidence intervals computed with Clopper-Pearson.
1690
+ F
1691
+ APTOS 2019 BLINDNESS DETECTION ABLATIONS
1692
+ Scores presented are submissions to the Kaggle leaderboard. All scores are evaluated with quadratic
1693
+ weighted kappa. Within each entry, we first present the private leaderboard score, then the pub-
1694
+ lic leaderboard score. The private leaderboard represents 85% of the test data, while the public
1695
+ leaderboard is the remaining 15%.
1696
+ Models used here are trained using AdamW with a cosine scheduler. We random resize crop to 512,
1697
+ use random rotations, and use color jitter (brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1).
1698
+ We train on all the available training data, no longer using the local train/validation split mentioned
1699
+ in the main text. This includes both the training data in the 2019 competition, as well as data from a
1700
+ prior 2015 diabetic retinopathy competition.
1701
+ Table 5: Comparing various models with additional interventions by evaluating on the Kaggle leaderboard.
1702
+ lr \wd
1703
+ 1.00E-04
1704
+ 1.00E-05
1705
+ 1.00E-06
1706
+ ResNet-50
1707
+ 1.00E-03
1708
+ 0.8610 / 0.6317
1709
+ 0.8570 / 0.6180
1710
+ 0.8548 / 0.6646
1711
+ 1.00E-04
1712
+ 0.8952 / 0.7531
1713
+ 0.8918 / 0.7204
1714
+ 0.8961 / 0.7547
1715
+ ResNet-152
1716
+ 1.00E-03
1717
+ 0.8658 / 0.6812
1718
+ 0.8686 / 0.6612
1719
+ 0.8640 / 0.6554
1720
+ 1.00E-04
1721
+ 0.8898 / 0.7164
1722
+ 0.8836 / 0.6946
1723
+ 0.8859 / 0.6947
1724
+ Inception-Resnet-v2
1725
+ 1.00E-03
1726
+ 0.8933 / 0.7748
1727
+ 0.8905 / 0.7565
1728
+ 0.8960 / 0.7585
1729
+ 1.00E-04
1730
+ 0.8897 / 0.7210
1731
+ 0.8929 / 0.7420
1732
+ 0.8944 / 0.7439
1733
+ 20
1734
+
1735
+ Table 6: Comparing the effect of augmentation on Kaggle leaderboard scores. More augmentation is as de-
1736
+ scribed earlier in this section. Less augmentation only uses random resize crop with at least 0.65 area and
1737
+ horizontal flips.
1738
+ lr \wd
1739
+ 1.00E-04
1740
+ 1.00E-05
1741
+ 1.00E-06
1742
+ ResNet-50
1743
+ less aug
1744
+ 1.00E-03
1745
+ 0.8669 / 0.6405
1746
+ 0.8520 / 0.6013
1747
+ 0.8613 / 0.6269
1748
+ 1.00E-04
1749
+ 0.8525 / 0.6115
1750
+ 0.8570 / 0.6431
1751
+ 0.8483 / 0.6147
1752
+ 1.00E-05
1753
+ 0.8186 / 0.5071
1754
+ 0.8287 / 0.5647
1755
+ 0.8288 / 0.5328
1756
+ ResNet-50
1757
+ more aug
1758
+ 1.00E-03
1759
+ 0.8440 / 0.6432
1760
+ 0.8547 / 0.6856
1761
+ 0.8524 / 0.7125
1762
+ 1.00E-04
1763
+ 0.8948 / 0.7490
1764
+ 0.8972 / 0.7693
1765
+ 0.8999 / 0.7758
1766
+ 1.00E-05
1767
+ 0.8724 / 0.7370
1768
+ 0.8685 / 0.7567
1769
+ 0.8623 / 0.7376
1770
+ G
1771
+ AUGMENTATION ABLATION DETAILS
1772
+ Table 7:
1773
+ We examine the effect of pre-training augmentation and fine-tuning augmentation on downstream
1774
+ transfer performance. The model specifies the architecture and pre-training augmentation, while each column
1775
+ specifies the downstream task and fine-tuning augmentation. We find that augmentation strategies that improve
1776
+ ImageNet accuracy do not always improve accuracy on downstream tasks. Pre-trained augmentation models
1777
+ are from Wightman et al. (2021).
1778
+ Model
1779
+ ImageNet
1780
+ CCT-20
1781
+ CCT-20
1782
+ CCT-20
1783
+ APTOS
1784
+ APTOS
1785
+ APTOS
1786
+ Acc
1787
+ Base Aug
1788
+ AugMix
1789
+ RandAug
1790
+ Base Aug
1791
+ AugMix
1792
+ RandAug
1793
+ ResNet-50
1794
+ 76.1
1795
+ 72.02
1796
+ 72.24
1797
+ 73.57
1798
+ 0.9210
1799
+ 0.9212
1800
+ 0.9250
1801
+ ResNet-50
1802
+ 77.5
1803
+ 71.63
1804
+ 71.53
1805
+ 72.39
1806
+ 0.9239
1807
+ 0.9152
1808
+ 0.9222
1809
+ w/ AugMix
1810
+ ResNet-50
1811
+ 78.8
1812
+ 72.94
1813
+ 73.54
1814
+ 73.76
1815
+ 0.9190
1816
+ 0.9204
1817
+ 0.9302
1818
+ w/ RandAug
1819
+ Deit-tiny
1820
+ 72.2
1821
+ 66.57
1822
+ 66.47
1823
+ 66.95
1824
+ 0.9153
1825
+ 0.9197
1826
+ 0.9172
1827
+ Deit-small
1828
+ 79.9
1829
+ 70.65
1830
+ 69.72
1831
+ 70.07
1832
+ 0.9293
1833
+ 0.9212
1834
+ 0.9277
1835
+ H
1836
+ MELANOMA METRIC COMPARISON
1837
+ [Figure: two panels plotting SIIM-ISIC Melanoma performance against ImageNet top-1 accuracy, one
+ measured by area under ROC and one by accuracy; see caption below.]
1891
+ Figure 7: Comparing transfer performance from ImageNet to Melanoma when using different metrics. Green
1892
+ linear trend is computed across all models, while blue linear trend is restricted to models above 70% ImageNet
1893
+ accuracy. Using accuracy implies that better ImageNet models transfer better; however, ROC is a better metric
1894
+ for this task.
1895
+ 21
1896
+
1897
+ I
1898
+ CLIP EXPERIMENT DETAILS
1899
+ [Figure: the six panels of Figure 4 with the CLIP models (CLIP-RN50, CLIP-RN101, CLIP-B32,
+ CLIP-B16, CLIP-L14, CLIP-L14@336) overlaid; see caption below.]
2020
+ Figure 8:
2021
+ Figure 4 with CLIP models overlaid (purple stars). The best CLIP models do better than all the
2022
+ ImageNet models, but when looking across all CLIP models, the patterns are more complicated.
2023
+ Table 8: For each CLIP pre-trained model, we provide the best performing model when fine-tuned on each
2024
+ dataset across our LP-FT hyperparameter grid
2025
+ Model
2026
+ ImageNet top-1
2027
+ CCT20
2028
+ APTOS
2029
+ HPA
2030
+ Melanoma
2031
+ Cassava
2032
+ EuroSAT
2033
+ CLIP-RN50
2034
+ 73.3
2035
+ 74.45
2036
+ 0.9135
2037
+ 0.7053
2038
+ 0.9350
2039
+ 87.89
2040
+ 98.80
2041
+ CLIP-RN101
2042
+ 75.7
2043
+ 75.19
2044
+ 0.9235
2045
+ 0.6909
2046
+ 0.9378
2047
+ 87.68
2048
+ 99.11
2049
+ CLIP-B32
2050
+ 76.1
2051
+ 70.57
2052
+ 0.9137
2053
+ 0.5338
2054
+ 0.9546
2055
+ 86.28
2056
+ 99.26
2057
+ CLIP-B16
2058
+ 80.2
2059
+ 77.81
2060
+ 0.9213
2061
+ 0.6365
2062
+ 0.9619
2063
+ 87.82
2064
+ 99.24
2065
+ CLIP-L14
2066
+ 83.9
2067
+ 79.99
2068
+ 0.9330
2069
+ 0.6687
2070
+ 0.9717
2071
+ 88.82
2072
+ 99.33
2073
+ CLIP-L14@336
2074
+ 85.4
2075
+ 83.17
2076
+ 0.9337
2077
+ 0.7131
2078
+ 0.9738
2079
+ 89.24
2080
+ 99.48
2081
+ Table 9: We directly compare models pre-trained on ImageNet with models pre-trained on OpenAI’s CLIP
2082
+ data. Specifically, we look at ResNet 50 and ViT B/16.
2083
+ Model          ImageNet top-1   CCT20   APTOS    HPA      Melanoma   Cassava   EuroSAT
+ IN-ResNet-50   76.1             73.96   0.9215   0.6718   0.9524     87.75     99.19
+ CLIP-RN50      73.3             74.45   0.9135   0.7053   0.9350     87.89     98.80
+ IN-ViT-B/16    78.7             72.07   0.9262   0.5852   0.9600     86.63     99.28
+ CLIP-B16       80.2             77.81   0.9213   0.6365   0.9619     87.82     99.24
2123
+ J
2124
+ CLIP FINE-TUNING DETAILS
2125
+ We fine-tune by running a linear probe, followed by end-to-end fine-tuning on the best model from
2126
+ the first part. We keep total epochs consistent with the previous models, with a third of the epochs
2127
+ going toward linear probing. We use AdamW with a cosine decay schedule. During the linear probe,
2128
+ we search over 10−1, 10−2, and 10−3 learning rates, and during fine-tuning, we search over 10−4,
2129
+ 10−5, and 10−6 learning rates. For both parts, we search over 10−3 to 10−6 and 0 for weight decay.
2130
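+ A structural sketch of this linear-probe-then-fine-tune procedure (illustrative; the training loops
+ are omitted and the function and argument names are placeholders):
+ import torch
+
+ def lp_ft(encoder, head, probe_epochs, ft_epochs, probe_lr, ft_lr, wd):
+     # Phase 1: linear probe -- freeze the pre-trained encoder and train only the classification head
+     for p in encoder.parameters():
+         p.requires_grad = False
+     opt = torch.optim.AdamW(head.parameters(), lr=probe_lr, weight_decay=wd)
+     sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=probe_epochs)
+     # ... run probe_epochs of training on frozen features and keep the best head ...
+     # Phase 2: end-to-end fine-tuning initialized from the best linear probe
+     for p in encoder.parameters():
+         p.requires_grad = True
+     opt = torch.optim.AdamW(list(encoder.parameters()) + list(head.parameters()),
+                             lr=ft_lr, weight_decay=wd)
+     sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=ft_epochs)
+     # ... run ft_epochs of end-to-end training ...
+     return encoder, head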
+ 22
2131
+
2132
+ K
2133
+ CREATION INFORMATION FOR DATASETS STUDIED IN KORNBLITH ET AL.
2134
+ (2019)
2135
+ Table 10: We find that the 12 datasets studied in Kornblith et al. (2019) come from web scraping.
2136
+ Dataset
2137
+ Origin
2138
+ Additional information
2139
+ Food-101
2140
+ foodspotting.com
2141
+ Users upload an image of their food and anno-
2142
+ tate the type of food; categories chosen by pop-
2143
+ ularity
2144
+ CIFAR-10
2145
+ TinyImages
2146
+ Web crawl
2147
+ CIFAR-100
2148
+ TinyImages
2149
+ Web crawl
2150
+ Birdsnap
2151
+ Flickr
2152
+ Also used MTurk
2153
+ SUN397
2154
+ Web search engines
2155
+ Also used WordNet
2156
+ Stanford Cars
2157
+ Flickr,
2158
+ Google,
2159
+ Bing
2160
+ Also used MTurk
2161
+ FGVC Aircraft
2162
+ airliners.net
2163
+ Images taken by 10 photographers
2164
+ Pascal VOC 2007 Cls.
2165
+ Flickr
2166
+ N/A
2167
+ Describable Textures
2168
+ Google and Flickr
2169
+ Also used MTurk
2170
+ Oxford-IIT Pets
2171
+ Flickr, Google,
2172
+ Catster, Dogster
2173
+ Catster and Dogster are social websites for col-
2174
+ lecting and discussing pet images
2175
+ Caltech-101
2176
+ Google
2177
+ 97 categories chosen from Webster Collegiate
2178
+ Dictionary categories associated with a drawing
2179
+ Oxford 102 Flowers
2180
+ Mostly collected
2181
+ from web
2182
+ A small number of images acquired by the pa-
2183
+ per authors taking the pictures
2184
+ L
2185
+ RELATIONSHIP BETWEEN MODEL SIZE AND TRANSFER PERFORMANCE
2186
+ [Figure: six panels plotting each task metric against the number of model parameters (in millions);
+ see caption below.]
2309
+ Figure 9: We compare model size with downstream transfer performance. Again we use separate trend lines
2310
+ for all models (green) and only those above 70% ImageNet accuracy (blue). We use 95% confidence intervals
2311
+ computed with Clopper-Pearson for accuracy metrics and bootstrap with 10,000 trials for other metrics.
2312
+ 23
2313
+
2314
+ M
2315
+ FID SCORE DETAILS
2316
+ Table 11: We calculate FID scores between the ImageNet validation set and each of the datasets we study, as
2317
+ well as between the ImageNet validation set and each of the datasets in Kornblith et al. (2019). We found that
2318
+ dataset size affects FID score, so we take a 3,662 subset of each downstream dataset. Note that 3,662 is the size
2319
+ of APTOS, which is the smallest dataset.
2320
+ Dataset                  FID
+ CCT-20                   162.69
+ APTOS                    196.24
+ HPA                      230.70
+ Cassava                  179.24
+ Melanoma                 186.34
+ EuroSAT                  151.85
+ Food-101                 108.35
+ CIFAR-10                 132.53
+ CIFAR-100                120.72
+ Birdsnap                 94.08
+ SUN397                   62.95
+ Stanford Cars            143.35
+ FGVC Aircraft            183.35
+ Pascal VOC 2007 Cls.     39.84
+ Describable Textures     89.13
+ Oxford-IIT Pets          77.27
+ Caltech-101              50.77
+ Oxford 102 Flowers       140.21
2358
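+ For reference, the FID in Table 11 is the Fréchet distance of Heusel et al. (2017) between Gaussian
+ fits to Inception features of two image sets; a minimal sketch of the final computation, assuming
+ feats_a and feats_b are (n, d) arrays of pooled Inception activations for two 3,662-image subsets:
+ import numpy as np
+ from scipy import linalg
+
+ def frechet_distance(feats_a, feats_b):
+     mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
+     cov_a = np.cov(feats_a, rowvar=False)
+     cov_b = np.cov(feats_b, rowvar=False)
+     covmean = linalg.sqrtm(cov_a @ cov_b).real  # drop small imaginary parts from numerical error
+     diff = mu_a - mu_b
+     return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))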
+ N
2359
+ PREDICTIVE POWER OF ACCURACY ON NON-WEB-SCRAPED DATASETS ON
2360
+ NOVEL DATASETS
2361
+ We observe that, on many non-web-scraped datasets, accuracy correlates only weakly with Ima-
2362
+ geNet accuracy. It is thus worth asking whether other predictors might correlate better. In this
2363
+ section, we examine the extent to which accuracy on a given non-web-scraped target dataset can be
2364
+ predicted from the accuracy on the other non-web-scraped target datasets.
2365
+ N.1
2366
+ F-TEST
2367
+ We can further measure the extent to which the average accuracy on the five other datasets adds
+ predictive power beyond that provided by ImageNet by using F-tests.
+ For each target task, we fit a linear regression model that predicts accuracy as a function of either ImageNet
2370
+ non-web-scraped datasets, and a second linear regression model that predicts accuracy as a func-
2371
+ tion of both ImageNet accuracy and the average accuracy on the other five datasets. Since the first
2372
+ model is nested within the second, the second model must explain at least as much variance as the
2373
+ first. The F-test measures whether the increase in explained variance is significant. For these ex-
2374
+ periments, we logit-transform accuracy values and standardize them to zero mean and unit variance
2375
+ before computing the averages, as in the middle column of Table 13.
2376
+ Results are shown in Table 12. The average accuracy across the other five datasets explains variance
2377
+ beyond that explained by ImageNet accuracy alone on five of the six datasets. The only exception
2378
+ is EuroSAT, where the range of accuracies is low (most models get ∼99%) and a significant fraction
2379
+ of the variance among models may correspond to noise. By contrast, ImageNet accuracy explains
2380
+ variance beyond the average accuracy only on two datasets (APTOS and Melanoma). These results
2381
+ indicate that there are patterns in how well different models transfer to non-web-scraped data that
2382
+ are not captured by ImageNet accuracy alone, but are captured by the accuracy on other non-web-
2383
+ scraped datasets.
2384
+ 24
2385
+
2386
+ Table 12: Results of the F-test described in Section N.1. “+Avg. across datasets” tests whether a model that
2387
+ includes both ImageNet accuracy and the average accuracy across the 5 other datasets explains more variance
2388
+ than a model that includes only ImageNet accuracy. “+ImageNet” tests whether a model that includes both
2389
+ predictors explains more variance than a model that includes only the average accuracy across the 5 other
2390
+ datasets. In addition to F and p values, we report adjusted R2 for all models. p-values < 0.05 are bold-faced.
2391
+ +Avg. across datasets
2392
+ +ImageNet
2393
+ Dataset
2394
+ F (1, 16)
2395
+ p-value
2396
+ F (1, 16)
2397
+ p-value
2398
+ Adj. R2
2399
+ (ImageNet-only)
2400
+ Adj. R2
2401
+ (Average-only)
2402
+ Adj. R2
2403
+ (Both predictors)
2404
+ CCT-20
2405
+ 8.2
2406
+ 0.01
2407
+ 0.69
2408
+ 0.42
2409
+ 0.56
2410
+ 0.70
2411
+ 0.69
2412
+ APTOS
2413
+ 31.0
2414
+ 0.00004
2415
+ 4.6
2416
+ 0.047
2417
+ 0.34
2418
+ 0.71
2419
+ 0.76
2420
+ HPA
2421
+ 11.8
2422
+ 0.003
2423
+ 0.84
2424
+ 0.37
2425
+ 0.60
2426
+ 0.76
2427
+ 0.76
2428
+ Melanoma
2429
+ 5.8
2430
+ 0.03
2431
+ 7.8
2432
+ 0.01
2433
+ 0.74
2434
+ 0.71
2435
+ 0.79
2436
+ Cassava
2437
+ 13.2
2438
+ 0.002
2439
+ 0.14
2440
+ 0.71
2441
+ 0.55
2442
+ 0.75
2443
+ 0.74
2444
+ EuroSAT
2445
+ 2.9
2446
+ 0.11
2447
+ 0.72
2448
+ 0.41
2449
+ 0.43
2450
+ 0.52
2451
+ 0.49
2452
+ N.2
2453
+ SPEARMAN CORRELATION
2454
+ Table 13: We measure the Spearman correlation between each dataset with either the average of the 5 other
2455
+ datasets we study, or with ImageNet. Normalization is done by logit transforming accuracies, and then stan-
2456
+ dardizing to zero mean and unit variance. The results suggest that using additional datasets is more predictive
2457
+ of model performance than just using ImageNet.
2458
+ Dataset | Avg of 5 others (unnormalized): ρ, p-value | Avg of 5 others (normalized): ρ, p-value | ImageNet: ρ, p-value
+ CCT-20 | 0.8684, 0.0000 | 0.9263, 0.0000 | 0.5825, 0.0089
+ APTOS | 0.7205, 0.0005 | 0.6950, 0.0010 | 0.3010, 0.2105
+ HPA | 0.7351, 0.0003 | 0.6825, 0.0013 | 0.6491, 0.0026
+ Melanoma | 0.6561, 0.0023 | 0.7807, 0.0000 | 0.7667, 0.0001
+ Cassava | 0.8872, 0.0000 | 0.7442, 0.0003 | 0.5222, 0.0218
+ EuroSAT | 0.3030, 0.2073 | 0.3821, 0.1065 | 0.4734, 0.0406
2512
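+ A minimal sketch (ours, not the authors') of the normalization and Spearman computation described in the Table 13 caption, assuming a hypothetical models-by-datasets score matrix with values in (0, 1):
+ import numpy as np
+ from scipy import stats
+
+ def normalize(col):
+     z = np.log(col / (1.0 - col))       # logit transform
+     return (z - z.mean()) / z.std()     # zero mean, unit variance
+
+ def spearman_vs_others(scores, j):
+     # Correlate dataset j with the mean normalized score of the other datasets.
+     norm = np.apply_along_axis(normalize, 0, scores)
+     avg_others = np.delete(norm, j, axis=1).mean(axis=1)
+     return stats.spearmanr(scores[:, j], avg_others)   # (rho, p-value)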
+ 25
2513
+
2514
+ O
2515
+ PRE-TRAINING AUGMENTATION DETAILS
2516
+ Table 14: For each ImageNet pre-trained model, we provide the augmentation strategy used during pre-training
2517
+ time.
2518
+ Model | Augmentation
+ AlexNet | Resize + Crop + Flip
+ SqueezeNet 1.1 | Resize + Crop + Flip
+ ShuffleNetV2x0.5 | AutoAugment (TrivialAugmentWide) + RandErasing + MixUp + CutMix
+ MobileNet V3 small | AutoAugment (ImageNet/Default) + RandErasing
+ ShuffleNetV2x1.0 | AutoAugment (TrivialAugmentWide) + RandErasing + MixUp + CutMix
+ VGG-13 BN | Resize + Crop + Flip
+ DeiT-tiny | RandAugment + RandErasing
+ VGG-16 BN | Resize + Crop + Flip
+ DenseNet-121 | Resize + Crop + Flip
+ ResNet-50 | Resize + Crop + Flip
+ ResNeXt-50-32x4d | Resize + Crop + Flip
+ EfficientNet B0 | RandAugment
+ ResNet-152 | Resize + Crop + Flip
+ ViT-B/16 | RandAugment + MixUp
+ DeiT-small | RandAugment + RandErasing
+ Inception-ResNet v2 | Inception Preprocessing (Color Distort + Resize + Crop + Flip)
+ ConvNext-tiny | AutoAugment (TrivialAugmentWide) + RandErasing + MixUp + CutMix
+ PNASNet-5 large | Whiten + Resize + Crop + Flip
+ EfficientNet B4 | RandAugment
2558
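+ For orientation only (this is our sketch, not the training code of the models above): the two broad augmentation styles in Table 14 roughly correspond to the following torchvision pipelines; batch-level MixUp/CutMix and model-specific hyperparameters are omitted.
+ from torchvision import transforms
+
+ # "Resize + Crop + Flip": the simple baseline style.
+ simple = transforms.Compose([
+     transforms.RandomResizedCrop(224),
+     transforms.RandomHorizontalFlip(),
+     transforms.ToTensor(),
+ ])
+
+ # Automatic augmentation style, e.g. TrivialAugmentWide plus random erasing.
+ auto = transforms.Compose([
+     transforms.RandomResizedCrop(224),
+     transforms.RandomHorizontalFlip(),
+     transforms.TrivialAugmentWide(),
+     transforms.ToTensor(),
+     transforms.RandomErasing(p=0.1),
+ ])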
+ [Figure 10 plot data: six scatter plots of transfer performance against ImageNet top-1 accuracy, with panels Caltech Camera Traps 20 (accuracy), APTOS 2019 Blindness (quadratic weighted kappa), Human Protein Atlas (macro F1 score), SIIM-ISIC Melanoma (area under ROC), Cassava Leaf Disease (accuracy), and EuroSAT (accuracy), and a legend listing the 19 evaluated models.]
2680
+ Figure 10: Figure 1 with points colored by general pre-training augmentation strategy. Cyan points use simple
2681
+ augmentation (resize, crops, flips, etc.), and red points use automatic augmentation (RandAugment, AutoAug-
2682
+ ment, TrivialAugmentWide).
2683
+ 26
2684
+
6dE3T4oBgHgl3EQfpwoB/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
6tFJT4oBgHgl3EQflyzE/content/2301.11585v1.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d3623a1992e007cc1ea0ff4bef1413a323983518b0c8518d11e4ba1de8851c33
3
+ size 1263253
6tFJT4oBgHgl3EQflyzE/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:89d8147dfcb70ed1bde123d0ee9318cedfea6f4f57b61e605894146bffec34f9
3
+ size 136730
8tA0T4oBgHgl3EQfOv9y/content/2301.02165v1.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:27ceb270b1c38343701dfdd70b9cb62ee5e8b9cf4b3fda23325025a73668db71
3
+ size 1358164
8tA0T4oBgHgl3EQfOv9y/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:197faf22f03be284ac871507d433e8460e86807cc3a9ff81b75f5999f522791a
3
+ size 2162733
8tA0T4oBgHgl3EQfOv9y/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6e002cab116507f26f0698be1fdcf71a419f88975e5458ddb463dd54767e0821
3
+ size 91507
8tAyT4oBgHgl3EQfqPgO/content/2301.00537v1.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d8d3b9973971b156a68ab0537c0bb291f42a8c2867554c44efdde2ce36750b43
3
+ size 988443
8tAyT4oBgHgl3EQfqPgO/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:59a37f4754fdc1ec7c8cd5aea4f774ee02627667fb4fb1b9c1e4022c9fbaf2c4
3
+ size 5439533
8tAyT4oBgHgl3EQfqPgO/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b7cda5b460d1c642f8381d37a056dd57aacd3524b2c73d944ea1eb5a1902de7d
3
+ size 192220
8tAzT4oBgHgl3EQfE_rp/content/tmp_files/2301.01005v1.pdf.txt ADDED
@@ -0,0 +1,1589 @@
1
+ arXiv:2301.01005v1 [math.AG] 3 Jan 2023
2
+ MUMFORD TATE GROUPS AND THE HODGE CONJECTURE
3
+ ANANYO DAN AND INDER KAUR
4
+ Abstract. In this article we study the (cohomological) Hodge conjecture for singular varieties.
5
+ We prove the conjecture for simple normal crossing varieties that can be embedded in a family
6
+ where the Mumford-Tate group remains constant.
7
+ We show how to produce such families.
8
+ Furthermore, we show that for varieties with worse singularities the conjecture can be expressed
9
+ solely in terms of the algebraic classes.
10
+ 1. Introduction
11
+ The underlying field will always be C.
12
+ Recall, the classical Hodge conjecture claims that
13
+ given a smooth projective variety X, every (rational) Hodge class in X is the cohomology class
14
+ of an algebraic cycle in X. The conjecture is known in some cases (see [20, 32] for a survey
15
+ of known results and [6, 30] for related results), but is open in general. A typical strategy has
16
+ been to consider smooth, projective low dimensional varieties that are birational to already
17
+ known cases. This is primarily because the exceptional divisors arising from the resolution of
18
+ the indeterminacy locus satisfy the Hodge conjecture. However, this strategy fails in higher
19
+ dimension. Another approach is to consider families of varieties (e.g. in the case of abelian
20
+ varieties) and then use a Noether-Lefschetz-type argument to conclude that the Hodge classes
21
+ in a very general fiber in the family are powers of the first Chern class of a line bundle. This
22
+ implies the Hodge conjecture for a very general fiber. In this article, we combine ideas from
23
+ both these approaches.
24
+ It is well-known that any smooth projective variety X is birational to a hypersurface Xhyp in
25
+ a projective space. This hypersurface Xhyp is almost always singular. Note that there is a homo-
26
+ logical version of the Hodge conjecture for singular varieties given by Jannsen [13, Conjecture
27
+ 7.2] (see also [18]). He proved that the classical Hodge conjecture is equivalent to the singular
28
+ version (see [13, Theorem 7.9], see also [19]). Therefore, proving the singular Hodge conjecture
29
+ for Xhyp would imply the Hodge conjecture for X.
30
+ In the present article, we give a cohomological formulation of the Hodge conjecture for singular
31
+ varieties. There are obvious reasons why this interpretation has so far been unexplored. Firstly
32
+ for X singular, the classical Chow group is not compatible with pull-back morphisms. In [9,
33
+ Chapter 17] (see also [10, Proposition 4]), Fulton and MacPherson developed the operational
34
+ Chow group, denoted Ap(X) which is compatible with pull-back morphisms and for smooth
35
+ varieties coincides with the classical Chow group. However, even for the operational Chow group,
36
+ we know by [29] that in general, there is no map Ap(X) → H2p(X, Q) with good properties.
37
+ Date: January 4, 2023.
38
+ 2010 Mathematics Subject Classification. 14C15, 14C30, 32S35, 32G20, 14D07, 14C05.
39
+ Key words and phrases. Hodge conjecture, Limit mixed Hodge structures, Operational Chow group, Cycle
40
+ class map, flag Hilbert schemes, singular varieties.
41
+ A.D. is funded by EPSRC grant number EP/T019379/1. I. K. was funded by the DFG, TRR 326 Geometry
42
+ and Arithmetic of Uniformized Structures, project number 444845124 and is currently funded by EPSRC grant
43
+ number EP/W026554/1.
44
+ 1
45
+
46
+ 2
47
+ A. DAN AND I. KAUR
48
+ Nevertheless, by the work of Bloch-Gillet-Soul´e (see [2]) there is a (functorial) cycle class map:
49
+ cl_p : A^p(X) ⊗ Q → Gr^W_{2p} H^{2p}(X, Q).
51
+ Using this we formulate the cohomological singular Hodge conjecture as follows:
52
+ Singular Hodge conjecture. Let X be a projective variety such that the dimension of the
53
+ singular locus is at most p − 1. Then, the image of the cycle class map clp coincides with
54
+ H^{2p}_{Hdg}(X) := Gr^W_{2p} H^{2p}(X, Q) ∩ F^p Gr^W_{2p} H^{2p}(X, C).
58
+ If X is of dimension n and the above conjecture holds for X, then we say that X satisfies
59
+ SHC(p, n). Of course, if X is non-singular then the singular Hodge conjecture is the same as
60
+ the classical Hodge conjecture. In this case, we say that X satisfies HC(p, n). The Lefschetz
61
+ (1, 1)-theorem implies HC(1, n) holds true, for any n.
62
+ Recall, a very general hypersurface of any dimension satisfies the Hodge conjecture (as the
63
+ cohomology ring is generated by the class of the hyperplane section). Therefore we can always
64
+ embed Xhyp in a one parameter family of hypersurfaces such that a general fibre satisfies the
65
+ Hodge conjecture. One then expects that the Hodge classes on Xhyp “spread out” to Hodge
66
+ classes in the family. Since a general member of the family satisfies the Hodge conjecture, we
67
+ know that the Hodge class away from the centre is the cohomology class of an algebraic cycle.
68
+ By the simple operation of taking closure, one can then extend the algebraic cycles on the
69
+ general fiber to the central fiber. One needs to check that the cohomology class of this “new”
70
+ algebraic cycle on the central fiber coincides with the Hodge class we started with. However,
71
+ there are several technical problems. Heuristically, the specialization map is not injective and
72
+ hence Hodge classes need not “spread out”. Even if a Hodge class does spread out, it might
73
+ not restrict to a Hodge class on the general fibre! In this article we study these problems and
74
+ give several examples of families of varieties where these problems can be circumvented. Let us
75
+ make this precise.
76
+ Let X be a singular, projective variety of dimension n and π : X → ∆ be a flat family of
77
+ projective varieties, smooth over ∆∗ with the central fiber X. Fix an integer p. Denote by h
78
+ the universal cover for ∆∗ and by X∞ the pull-back of X to h. By Ehresmann’s theorem, for
79
+ every u ∈ h there is an isomorphism of cohomology groups H2p(X∞, Q) and H2p(Xu, Q). The
80
+ natural Hodge filtration on H^{2p}(X_u, Q) induces a filtration F^p_u on H^{2p}(X_∞, Q). The limit Hodge
+ filtration on H^{2p}(X_∞, Q) arises as the limit of this filtration as the imaginary part of u tends to
+ ∞ (see §2.3 for details). However, there may be rational points H^{2p}(X_∞, Q) ∩ F^p H^{2p}(X_∞, C)
+ of the limit Hodge filtration that do not come from the rational points of the filtration F^p_u.
86
+ The Noether-Lefschetz locus gives examples of this phenomenon even for smooth families (see
87
+ Example 3.3). As a result, H2p(X∞, Q) may contain more Hodge classes than that on a general
88
+ fiber! This means that although a Hodge class on X0 maps to a Hodge class on X∞ via the
89
+ specialization map, it need not extend to a Hodge class on the family.
90
+ The jump in the rank of the Hodge lattice is captured by Mumford-Tate groups (see §3.1
91
+ for the definition). We call π a Mumford-Tate family if the rank of the Mumford-Tate group
92
+ remains “constant in the limit” (see §3.2 for precise definitions). Moreover, we call a singular,
93
+ projective variety MT-smoothable if it can be embedded as a central fiber of a Mumford-Tate
94
+ family where the general fiber satisfies the Hodge conjecture. We prove the following:
95
+ Theorem 1.1. Let X be a projective variety of dimension 4 with strict normal crossings sin-
96
+ gularities. If X is MT-smoothable, then X satisfies SHC(p, 4) for every p.
97
+ In Theorem 5.2 below, we prove Theorem 1.1 for any dimension. Clearly Theorem 1.1 leads
98
+ to the following questions:
99
+
100
+ MUMFORD TATE GROUPS AND THE HODGE CONJECTURE
101
+ 3
102
+ • Question 1: How to find Mumford-Tate families?
103
+ • Question 2: Can we generalize Theorem 1.1 to varieties with worse singularities?
104
+ For an exhaustive answer of Question 1 one would need a complete description of the Noether-
105
+ Lefschetz locus for families of hypersurfaces in all dimensions greater than 3. This problem
106
+ is largely open.
107
+ However in §6, we give a general method to obtain Mumford-Tate families
108
+ from known ones using the theory of correspondences. Recall, that given a coherent sheaf E
109
+ on a product of two smooth, projective varieties X × Y , the i-th Chern class of E induces a
110
+ morphism of pure Hodge structures from H2m−k(X) to H2i−k(Y ) for all integers i and k, where
111
+ m = dim(X) (see §6.2). Let us denote such a morphism by Φ^{(i,k)}_E. We say Y is cohomologically
114
+ generated by (X, E) if the cohomology ring H∗(Y ) is generated (as a ring) by the images of
115
+ morphisms of the form Φ^{(i,k)}_E as i and k vary over all integers (see Definition 6.3).
118
+ Note
119
+ that several examples of cohomologically generated varieties appear in existing literature. For
120
+ example, in [23] Mumford and Newstead proved that the moduli space of stable rank 2 bundles
121
+ with odd degree determinant over a curve C is cohomologically generated by the pair (C, U),
122
+ where U is the universal bundle associated to the moduli space. In [21,22] Markman showed a
123
+ similar result for moduli spaces of sheaves over certain surfaces. In §6 we show how this notion
124
+ of cohomologically generated leads to producing more Mumford-Tate families.
125
+ Theorem 1.2. Let π1 : X ∗ → ∆∗ and π2 : Y∗ → ∆∗ be two smooth, projective families. Assume
126
+ that there exists a coherent sheaf U over X ∗ ×∆∗ Y∗ such that it is flat over ∆∗. Suppose that for
127
+ general t ∈ ∆∗, Yt is cohomologically generated by (Xt, Ut), where Ut := U|Xt×Yt. If the family
128
+ π1 is a (strictly) Mumford-Tate family, then so is the family π2.
129
+ See Theorem 6.5 for the precise formulation. An obvious choice for π1 is a family of smooth
130
+ curves degenerating to a singular curve (with arbitrary singularities). See Proposition 6.1 for a
131
+ proof in the case when the singular curve is nodal.
132
+ Let us turn to Question 2. Suppose X is a singular projective variety of dimension n and p be
133
+ an integer such that dim(Xsing) ≤ p − 1. Suppose φ : X̃ → X is any resolution of singularities
135
+ and E is the exceptional divisor. By [25, Corollary-Definition 5.37], we have an exact sequence
136
+ on cohomology
137
+ H^{2p}(X) → H^{2p}(X̃) → H^{2p}(E).
139
+ We conjecture that taking algebraic cohomology groups preserves the exactness of the sequence:
140
+ Conjecture A. The following sequence is exact:
141
+ H^{2p}_A(X) → H^{2p}_A(X̃) → H^{2p}_A(E).
146
+ Note that, this conjecture does not involve Hodge classes.
147
+ Surprisingly, we prove that if
148
+ X is MT-smoothable, then this conjecture is equivalent to the singular Hodge conjecture. In
149
+ particular,
150
+ Theorem 1.3. Let X be as above. If X satisfies SHC(p, n), then X satisfies Conjecture A.
151
+ Conversely, if HC(p−1, n−1) holds true, X is MT-smoothable and satisfies Conjecture A, then
152
+ X satisfies SHC(p, n).
153
+ See Theorem 5.5 for the precise statement.
154
+ Outline: The paper is organised as follows: in §2 we briefly recall the necessary preliminaries
155
+ on limit mixed Hodge structures and flag Hilbert schemes. In §3 we recall the definition of a
156
+ Mumford-Tate group and introduce Mumford-Tate families. We give both examples and non-
157
+ examples of such families. In §4, we define limit algebraic cohomology groups and limit Hodge
158
+ classes. We recall the preliminaries on Operational Chow group and the Bloch-Gillet-Soul´e cycle
159
+
160
+ 4
161
+ A. DAN AND I. KAUR
162
+ class map. We give the singular Hodge conjecture and prove some of the preliminary results
163
+ which we use later. In §5, we prove the main results of this article. Finally, in §6 we give a
164
+ method to produce Mumford-Tate families.
165
+ 2. Preliminaries
166
+ In this section we briefly recall some of the basics on limit mixed Hodge structures and flag
167
+ Hilbert schemes. Limit mixed Hodge structures play an important role throughout this article.
168
+ See [25, §11] for a detailed treatment of the topic.
169
+ 2.1. Setup. Consider a flat family of projective varieties,
170
+ π : X → ∆,
171
+ smooth over ∆∗ of relative dimension n. Suppose the central fiber X0 := π−1(0) is a reduced,
172
+ simple normal crossings divisor. Denote by π′ : X∆∗ → ∆∗ the restriction of π to the punctured
173
+ disc ∆∗. Denote by X1, ..., Xr the irreducible components of the central fiber X0. For m ≥ 2,
174
+ denote by X(m) the disjoint union of the intersections of m number of irreducible components
175
+ of X0 i.e.,
176
+ X(m) := ⊔_{|I|=m, I=(1≤i_1<i_2<...<i_m≤r)} ( ∩_{k=1}^{m} X_{i_k} ).
186
+ Let e : h → ∆∗ be the exponential map from the upper half plane h to the punctured disc
187
+ ∆∗. Denote by X∞ := X∆∗ ×∆∗ h the base change of X∆∗ to h via the exponential map e.
188
+ 2.2. Monodromy operator. Since h is simply connected, the natural inclusion
189
+ is : Xe(s) ֒→ X∞
190
+ for any s ∈ h, induces an isomorphism of cohomology groups:
191
+ i^*_s : H^{2p}(X_∞, Z) −→ H^{2p}(X_{e(s)}, Z).
194
+ Note that, the morphism i^*_s changes even if e(s) does not. In particular, we have the monodromy
+ operator associated to the family π, given by the composition:
+ T := (i^*_s)^{-1} ∘ i^*_{s+1} : H^{2p}(X_∞, Z) → H^{2p}(X_{e(s)}, Z) → H^{2p}(X_∞, Z).
208
+ See [16, p. 67, (2.4.13)] for further details. Denote by N := −(1/2πi) log(T). Using this operator
209
+ N we will recall the limit Hodge filtration.
210
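+ (A standard remark, added for clarity and not part of the original text: since the central fiber X0 is a reduced simple normal crossings divisor, the monodromy theorem gives that T is unipotent, so the logarithm is the finite sum
+ N = -(1/2πi) log(T) = -(1/2πi) Σ_{k≥1} ((-1)^{k-1}/k) (T - Id)^k,
+ and in particular N is a nilpotent operator.)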
+ 2.3. Limit Hodge filtration. Denote by
211
+ F^•_s H^{2p}(X_∞, C) := (i^*_s)^{-1}(F^• H^{2p}(X_{e(s)}, C))
+ the preimage of the Hodge filtration on H^{2p}(X_{e(s)}, C). The dimension of F^k_s H^{2p}(X_∞, C), denoted
+ m_k, does not depend on the choice of s ∈ h. Consider the Grassmann variety parameterizing
+ m_k-dimensional subspaces of H^{2p}(X_∞, C), denoted Grass(m_k, H^{2p}(X_∞, C)). There is a natural map:
+ h → Grass(m_k, H^{2p}(X_∞, C)) sending s ∈ h to exp(2πisN) F^k_s H^{2p}(X_∞, C).
+ This map is invariant under the translation s ↦ s + 1 and tends to a limit F^k H^{2p}(X_∞, C) as
+ the imaginary part of s tends to ∞, i.e.,
+ F^k H^{2p}(X_∞, C) := lim_{Im(s)→∞} exp(2πisN) F^k_s H^{2p}(X_∞, C).
227
+
228
+ MUMFORD TATE GROUPS AND THE HODGE CONJECTURE
229
+ 5
230
+ See [16, §I.2.6] or [26, p. 254, 255] for further details. Clearly,
231
+ lim_{Im(s)→∞} exp(2πisN)(F^p_s H^{2p}(X_∞, C) ∩ H^{2p}(X_∞, Q)) ⊂ F^p H^{2p}(X_∞, C) ∩ H^{2p}(X_∞, Q).   (2.1)
235
+ This inclusion will play an important role in the definition of the Mumford-Tate family in §3.
236
+ 2.4. Limit weight filtration. One can observe that the decreasing filtration
237
+ F 0H2p(X∞, C) ⊇ F 1H2p(X∞, C) ⊇ ... ⊇ F 2pH2p(X∞, C) ⊇ 0
238
+ need not be a Hodge filtration, i.e., F^k ∩ F̄^{2p+1−k} need not be 0. It was observed by Schmid
240
+ that H2p(X∞, Q) can be equipped with an increasing limit weight filtration W•, arising from
241
+ the monodromy action by T, such that the two filtrations F • and W• together define a mixed
242
+ Hodge structure on H2p(X∞, Q) (see [26, Theorem 6.16]). Steenbrink in [28] retrieved the limit
243
+ weight filtration using a spectral sequence. We recall the E1-terms of the spectral sequence:
244
+ Theorem 2.1 ( [25, Corollary 11.23]). The spectral sequence
245
+ ∞E^{p,q}_1 := ⊕_{k≥max{0,p}} H^{q+2p−2k}(X(2k − p + 1), Q)(p − k),
+ with the differential map d : ∞E^{p−1,q}_1 → ∞E^{p,q}_1 being a combination of the restriction morphism
+ and the Gysin morphism, degenerates at E_2. Moreover, ∞E^{p,q}_1 ⇒ H^{p+q}(X_∞, Q) with the weight
+ filtration given by ∞E^{p,q}_2 = Gr^W_q H^{p+q}(X_∞, Q).
263
+ 2.5. Specialization map. By the identification between H2p(X∞, Z) and H2p(Xs, Z) men-
264
+ tioned above, we get a specialization morphism (see [1, §2]) which is a morphism of mixed
265
+ Hodge structures:
266
+ sp : H2p(X0, Z) → H2p(X∞, Z),
267
+ where H2p(X∞, Q) is equipped with the limit mixed Hodge structure. Using the Mayer-Vietoris
268
+ sequence observe that the weight filtration on H2p(X0, Q) arises from the spectral sequence with
269
+ E1-terms:
270
+ E^{p,q}_1 = H^q(X(p + 1), Q) ⇒ H^{p+q}(X_0, Q),
+ where the differential d : E^{p−1,q}_1 → E^{p,q}_1 is the restriction morphism (see [28, Example 3.5]).
278
+ Note that, the spectral sequence degenerates at E2.
279
+ Remark 2.2. By the definition of E^{j,q}_1 and ∞E^{j,q}_1 given above, we have a natural morphism
+ from E^{j,q}_1 to ∞E^{j,q}_1, which commutes with the respective differential maps d. As a result, this
+ induces a morphism of spectral sequences:
+ φ : E^{p,q}_2 → ∞E^{p,q}_2.   (2.2)
294
+ We now compute the kernel over the weight graded pieces of the specialization morphism:
295
+ Proposition 2.3. For p ≥ 0, we have an exact sequence of the form:
296
+ H^{q−2}(X(p + 2), Q) → E^{p,q}_2 −→^φ ∞E^{p,q}_2,
+ where the first morphism is induced by the Gysin morphism
+ H^{q−2}(X(p + 2), Q) → H^q(X(p + 1), Q) = E^{p,q}_1
+ and φ is as in (2.2).
305
+
306
+ 6
307
+ A. DAN AND I. KAUR
308
+ Proof. Note that the composed morphism
309
+ Hq−2(X(p + 2), Q) → Hq(X(p + 1), Q) → Hq(X(p + 2), Q) is the zero map,
310
+ where the first morphism is simply the Gysin morphism and the second morphism is the restric-
311
+ tion map. Therefore, there is a natural map from Hq−2(X(p + 2), Q) to Ep,q
312
+ 2 . The difference
313
+ between the spectral sequences Ep,q
314
+ 1
315
+ and ∞Ep,q
316
+ 1
317
+ is that the differential map in the latter case also
318
+ allows Gysin morphism. Therefore, the kernel of the morphism φ is isomorphic to the image of
319
+ the Gysin map. This proves the proposition.
320
+
321
+ 2.6. Flag Hilbert schemes. We refer the reader to [27, §4.5] for a detailed study of flag Hilbert
322
+ schemes. Let
323
+ π : X∆∗ → ∆∗
324
+ be a smooth, projective morphism over the punctured disc ∆∗. Fix a relative polarization L on
325
+ X∆∗ inducing a closed immersion of X∆∗ into a relative projective space PN
326
+ ∆∗ for some integer
327
+ N. By the constancy of Hilbert polynomials in flat, projective families, every fiber of π has the
328
+ same Hilbert polynomial (with respect to the polarization L), say Q (see [12, Theorem III.9.9]).
329
+ Recall, given a Hilbert polynomial P, there exists a projective scheme, denoted HilbP,Q, called
330
+ a flag Hilbert scheme parameterizing pairs of the form (Y ⊂ X ⊂ PN), where Y (resp. X) is of
331
+ Hilbert polynomial P (resp. Q).
332
+ The flag Hilbert scheme HilbP,Q is equipped with an universal family Y ⊂ Xuniv with Y, Xuniv
333
+ flat over HilbP,Q and for every s ∈ HilbP,Q, the corresponding fiber Ys (resp. Xs) has Hilbert
334
+ polynomial P (resp. Q) satisfying the universal property: if there exists a closed subscheme
335
+ Z ⊂ X∆∗, flat over ∆∗ with fibers having Hilbert polynomial P, then there exists an unique
336
+ morphism f : ∆∗ → HilbP,Q such that the pull-back of the universal family Y ⊂ Xuniv to ∆∗ is
337
+ isomorphic to Z ⊂ X∆∗ (see [27, Theorem 4.5.1]).
338
+ Lemma 2.4. For every 0 < ǫ ∈ R small enough, there exists sǫ ∈ ∆∗ of distance less than
339
+ ǫ from the origin, such that every closed subvariety Zsǫ of codimension p in Xsǫ extends to a
340
+ ∆∗-flat closed subscheme Z ⊂ X∆∗ such that the fiber Z ∩ Xsǫ over sǫ is isomorphic to Zsǫ.
341
+ Proof. Since the Hilbert polynomial of the fibers of π is Q, by the universal property of Hilbert
342
+ schemes there is a natural morphism
343
+ f : ∆∗ → HilbQ
344
+ such that the pull-back of the universal family on HilbQ to ∆∗ is isomorphic to X∆∗. Let S be
345
+ the set of Hilbert polynomials P of degree n − p such that the image of the natural projection
346
+ morphism from HilbP,Q to HilbQ does not contain the image of f i.e., intersects properly the
347
+ image of f. Clearly, S is a countable set. Note that the union of countably many proper closed
348
+ subsets in ∆∗ does not contain any open subsets. Hence, for every 0 < ǫ ∈ R small enough, there
349
+ exists sǫ ∈ ∆∗ of distance less than ǫ from the origin, such that f(sǫ) does not lie in the image
350
+ of the projection from HilbP,Q to HilbQ, as P varies in the set S. In other words, every closed
351
+ subscheme in Xsǫ extends to a ∆∗-flat closed subscheme of X∆∗. This proves the lemma.
352
+
353
+ 3. Mumford-Tate families
354
+ In this section we introduce the concept of Mumford-Tate families. These are smooth families
355
+ of projective varieties such that the associated limit mixed Hodge structure has “as many” Hodge
356
+ classes as a general fiber in the family. The motivation behind the name is that Mumford-Tate
357
+ groups are determined uniquely by the set of Hodge classes in the associated tensor algebra. Let
358
+ us first recall the definition of the Mumford-Tate group.
359
+
360
+ MUMFORD TATE GROUPS AND THE HODGE CONJECTURE
361
+ 7
362
+ 3.1. Mumford-Tate groups. Denote by S the Weil restriction of scalars for the field extension
363
+ C/R. Let V be a Q-vector space. A pure Hodge structure of weight n on V is given by a non-
364
+ constant homomorphism of R-algebraic groups
365
+ φ : C∗ = S(R) → GL(V )(R)
366
+ such that φ(r) = rnId for all r ∈ R∗ ⊂ S(R) = C∗.
367
+ Let VC := V ⊗Q C.
368
+ To this group
369
+ homomorphism one associates the Hodge decomposition:
370
+ VC =
371
+
372
+ p+q=n
373
+ V p,q where V p,q := {v ∈ VC| φ(z)v = zpzqv for all z ∈ C∗}.
374
+ The Mumford-Tate group associated to the pure Hodge structure (V, φ), denoted MT(V, φ),
375
+ is the smallest Q-algebraic subgroup of GL(V ) whose set of real points contain the image of φ.
376
+ Denote by
377
+ T m,n(V ) := V ⊗m ⊗ Hom(V, Q)⊗n.
378
+ Note that, the Hodge structure on V induces a pure Hodge structure on T m,n(V ). Elements of
379
+ F 0(T m,n(VC)) ∩ T m,n(V )
380
+ are called Hodge tensors. The Mumford-Tate group can be characterized as the largest subgroup of GL(VQ) which
381
+ fixes the Hodge tensors (see [11, §I.B]).
382
+ Example 3.1. We now recall some well-known examples of Mumford-Tate groups.
383
+ (1) Let X be an abelian variety and V = H1(X, Q). The Mumford-Tate group associated
384
+ to the pure Hodge structure on V will be denoted by MT(X). The polarization on X
385
+ corresponds to a non-degenerate alternating form φ : V ⊗ V → Q. Denote by GSp(V, φ)
386
+ the group of symplectic similitudes with respect to the symplectic form φ:
387
+ GSp(V, φ) := {g ∈ GL(V ) | ∃ λ ∈ C∗ such that φ(gv, gw) = λφ(v, w) ∀ v, w ∈ V }.
388
+ Recall, for any abelian variety X, the Mumford-Tate group of X is contained in the
389
+ group of symplectic similitudes, i.e. MT(X) ⊆ GSp(V, φ). An abelian variety is called
390
+ simple if it does not contain an abelian subvariety other than 0 and X. If X is simple
391
+ and dim(X) = p, where p is a prime number, then MT(X) = GSp(V, φ).
392
+ (2) Let c be a positive integer. Let X be a general complete intersection subvariety contained
393
+ in P2m+c of codimension c, for some m ≥ 1. Assume that the degree of X is at least 5.
394
+ Denote by V := Hn(X, Q)prim and φ : V ⊗ V → Q the polarization on V . Let GO(V, φ)
395
+ be the group of orthogonal similitudes with respect to φ:
396
+ GO(V, φ) := {g ∈ GL(V ) | ∃ λ ∈ C∗ such that φ(gv, gw) = λφ(v, w) ∀ v, w ∈ V }.
397
+ Then the Mumford-Tate group of X, MT(X) = GO(V, φ).
398
+ 3.2. Mumford-Tate families. Keep setup as in §2.1. Given any s ∈ h, recall the exponential
399
+ map e from h to ∆∗ and the natural inclusion is from Xe(s) into X∞. Recall,
400
+ π : X∆∗ → ∆∗
401
+ the family of smooth, projective varieties. For any s ∈ h, H2p(Xe(s), Q) is equipped with a natural
402
+ pure Hodge structure. Denote by MTp(Xe(s)) the Mumford-Tate group associated to this pure
403
+ Hodge structure on H2p(Xe(s), Q). We say that π is a Mumford-Tate family of weight p if for any
404
+ class γ ∈ F pH2p(X∞, C) ∩ H2p(X∞, Q) satisfying Nγ = 0, the pullback i∗
405
+ s(γ) ∈ H2p(Xe(s), Q) is
406
+ fixed by MTp(Xe(s)) for a general s ∈ h. We say that π is Mumford-Tate if it is Mumford-Tate
407
+ of all weights.
408
+ Example 3.2. We now give some examples of Mumford-Tate families:
409
+
410
+ 8
411
+ A. DAN AND I. KAUR
412
+ (1) By Lefschetz hyperplane section theorem, for any smooth hypersurface X in P2m for
413
+ m ≥ 2, we have H2p(X, Q) ∼= Q for any 0 ≤ p ≤ 2m − 1. This implies if π parametrizes
414
+ smooth, hypersurfaces in P2m, then π is Mumford-Tate.
415
+ (2) Let π : X → ∆ be a smooth family of prime dimensional abelian varieties such that the
416
+ central fiber π−1(0) is simple. Then π is a Mumford-Tate family. Indeed, since π is a
417
+ smooth family, the local system Vp := R2pπ∗Q has no monodromy over the punctured
418
+ disc. Hence, H2p(X∞, Q) ∼= H2p(X0, Q) as pure Hodge structures, for all p and the local
419
+ system Vp is trivial. By the same argument, R1π∗Q is a trivial local system. A choice
420
+ of the trivialization fixes an identification:
421
+ ψt : V0
422
+
423
+ −→ Vt, where Vt := H1(Xt, Q) for any t ∈ ∆.
424
+ Note that the natural polarizations on V0 and Vt commutes with the identification ψt.
425
+ This induces an isomorphism:
426
+ GSp(Vt, φt) ∼
427
+ −→ GSp(V0, φ0) sending
428
+
429
+ Vt
430
+ g−→
431
+ ∼ Vt
432
+
433
+ to
434
+
435
+ V0
436
+ ψt
437
+ −→
438
+ ∼ Vt
439
+ g−→
440
+ ∼ Vt
441
+ ψ−1
442
+ t
443
+ −−→
444
+
445
+ V0
446
+
447
+ .
448
+ (3.1)
449
+ Now, γ0 ∈ H2p(X∞, Q) = H2p(X0, Q) is a Hodge class if and only if it is fixed by the
450
+ Mumford-Tate group MT(X0). Since X0 is simple, MT(X0) = GSp(V0, φ0). Using the
451
+ identification (3.1), since the Hodge class γ0 is fixed by GSp(V0, φ0), i∗
452
+ s(γ) = φs(γ) is
453
+ fixed by GSp(Vs, φs) for any s ∈ ∆∗. Since MT(Xs) is contained in GSp(Vs, φs), φs(γ)
454
+ is fixed by MT(Xs). Hence, φs(γ) is a Hodge class in H2p(Xs, Q). This proves the claim
455
+ that π is a Mumford-Tate family.
456
+ (3) Let π : X → ∆ be a smooth family of complete intersection subvarieties of codimension
457
+ c and let π−1(0) = X0. Suppose that MT(X0) = GO(Hn(X0, Q)prim, φ). Then π is a
458
+ Mumford-Tate family. The proof for this is the same as that of (2) above with GSp
459
+ replaced by GO.
460
+ Example 3.3. (Examples of non Mumford-Tate families) Recall for d ≥ 4, the Noether-
461
+ Lefschetz theorem states that a very general smooth, degree d surface in P3 has Picard number
462
+ 1. The Noether-Lefschetz locus parametrizes smooth degree d surfaces in P3 with Picard number
463
+ at least 2. See [3–5] for some of its geometric properties. This means that there are smooth families
464
+ π : X → ∆ of hypersurfaces in P3 such that 0 ∈ ∆ lies on the Noether-Lefschetz locus and ∆∗
465
+ does not intersect the Noether-Lefschetz locus. Since π is a smooth family, the local system
466
+ R2π∗Q does not have any monodromy over the punctured disc. Then, H2(X∞, Q) ∼= H2(X0, Q)
467
+ as pure Hodge structures. In particular, by the condition on the central fiber X0, the rank of
468
+ the Hodge lattice in H2(X∞, Q) is at least 2. But the rank of the Hodge lattice in H2(Xs, Q) is
469
+ 1 for any s ∈ ∆∗. Since the pullback morphism i∗
470
+ s is an isomorphism, this implies that there is
471
+ a Hodge class on H2(X∞, Q) that does not pullback to a Hodge class on H2(Xs, Q). Hence, π
472
+ cannot be a Mumford-Tate family.
473
+ 4. A cohomological version of the Hodge conjecture for singular varieties
474
+ In this section we define limit algebraic cohomology classes and limit Hodge classes. We show
475
+ that the limit algebraic cohomology classes are contained in the monodromy invariant limit
476
+ Hodge classes and the converse holds for Mumford-Tate families. In subsection 4.3 and 4.4 we
477
+ recall the necessary preliminaries for the Operational Chow group and the Bloch-Gillet-Soul´e
478
+ cycle class map. In 4.5 we state the Singular Hodge conjecture and in 4.6 we show that the
479
+ cohomology classes of algebraic cycles on a simple normal crossings variety are contained in the
480
+ Hodge classes.
481
+ We begin by recalling the classical Hodge conjecture.
482
+
483
+ MUMFORD TATE GROUPS AND THE HODGE CONJECTURE
484
+ 9
485
+ 4.1. The classical Hodge conjecture. Let X be a smooth, projective variety.
486
+ Given an
487
+ integer p > 0, denote by Zp(X) the free abelian group generated by codimension p algebraic
488
+ subvarieties. There is a natural cycle class map:
489
+ clp : Zp(X) → H2p(X, Z)
490
+ which associates to an algebraic subvariety W ⊂ X of codimension p, the fundamental class
491
+ [W] ∈ H2p(X, Z) (see [31, §11.1.2] for further details) and extend linearly. Furthermore, by [31,
492
+ Proposition 11.20], the image of the cycle class map clp lies in Hp,p(X, C) ∩ H2p(X, Z) i.e., the
493
+ cohomology class of an algebraic variety is a Hodge class. Tensoring the cycle class map by
494
+ rationals gives:
495
+ clp : Zp(X) ⊗Z Q → H2p(X, Q) ∩ Hp,p(X, C).
496
+ We denote by H2p
497
+ Hdg(X) := H2p(X, Q) ∩ Hp,p(X, C) the space of Hodge classes and the space of
498
+ algebraic classes H2p
499
+ A (X) ⊂ H2p(X, Q) is the image of the (rational) cycle class map clp. The
500
+ (rational) Hodge conjecture claims that the (rational) cycle class map clp is surjective for all p
501
+ i.e., the natural inclusion H2p
502
+ A (X) ⊂ H2p
503
+ Hdg(X) is an equality for all p.
504
+ Definition 4.1. Let X be a smooth, projective variety of dimension n. We say that X satisfies
505
+ HC(p, n) if the natural inclusion H2p
506
+ A (X) ⊂ H2p
507
+ Hdg(X) is an equality. We say that X satisfies
508
+ the Hodge conjecture if it satisfies HC(p, n) for every p ≥ 0. We say that HC(p, n) holds true to
509
+ mean that every smooth, projective variety of dimension n satisfies HC(p, n).
510
+ 4.2. Relative cycle class. Let
511
+ π : X∆∗ → ∆∗
512
+ be a smooth, projective morphism of relative dimension n. Let Z ⊂ X∆∗ be a closed subscheme
513
+ of X∆∗, flat over ∆∗ and of relative dimension n − p. The fundamental class of Z defines a
514
+ global section γZ of the local system H2p := R2pπ∗Z such that for every t ∈ ∆∗, the value
515
+ γZ(t) ∈ H2p(Xt, Z) of γZ at the point t is simply the fundamental class of Zt := Z ∩ Xt in Xt
516
+ (see [9, §19.2] and [25, §B.2.9] for details). The pull-back of the local system H2p under the
517
+ exponential map e : h → ∆∗ is a trivial local system with fiber H2p(X∞, Z). The global section
518
+ γZ defines an element of H2p(X∞, Z), which we again denote by γZ, such that for every s ∈ h,
519
+ the image i∗
520
+ s(γZ) is the fundamental class of Z ∩ Xe(s) in Xe(s), where is is the natural inclusion
521
+ of Xe(s) into X∞.
522
+ Definition 4.2. Denote by H2p
523
+ A (X∞) the sub-vector space of H2p(X∞, Q) generated by all such
524
+ elements of the form γZ arising from a ∆∗-flat closed subscheme of relative dimension n − p
525
+ in X∆∗. We call H2p
526
+ A (X∞) the limit algebraic cohomology group. We define the limit Hodge
527
+ cohomology group
528
+ H2p
529
+ Hdg(X∞) := F pH2p(X∞, C) ∩ W2pH2p(X∞, Q).
530
+ Note that, H2p
531
+ Hdg(X∞) need not be monodromy invariant. Recall, N is a morphism of mixed
532
+ Hodge structures from H2p(X∞, Q) to H2p(X∞, Q)(−1). We denote by H2p
533
+ Hdg(X∞)inv the mon-
534
+ odromy invariant part of H2p
535
+ Hdg(X∞) i.e.,
536
+ H2p
537
+ Hdg(X∞)inv := ker
538
+
539
+ H2p
540
+ Hdg(X∞) ֒→ H2p(X∞, Q) N
541
+ −→ H2p
542
+ Hdg(X∞, Q)
543
+
544
+ .
545
+ We now prove that the limit algebraic cohomology group lies in the limit Hodge cohomology
546
+ group. This is the asymptotic version of a classical result in Hodge theory.
547
+ Proposition 4.3. The limit algebraic cohomology group is contained in the monodromy invari-
548
+ ant part of the limit Hodge cohomology group i.e., the natural inclusion H2p
549
+ A (X∞) ⊂ H2p(X∞, Q)
550
+ factors through H2p
551
+ Hdg(X∞)inv.
552
+
553
+ 10
554
+ A. DAN AND I. KAUR
555
+ Proof. Take γ ∈ H2p
556
+ A (X∞). By construction, there exist ∆∗-flat closed subschemes Z1, ..., Zr of
557
+ relative dimension n − p in X∆∗ such that γ = � aiγZi for ai ∈ Q and γZi ∈ H2p
558
+ A (X∞) is as
559
+ defined above, arising from the fundamental class of Zi. By construction, each γZi arises from
560
+ a global section of the local system H2p. Hence, γZi is monodromy invariant i.e., T(γZi) = γZi
561
+ for 1 ≤ i ≤ r. This implies NγZi = 0 for 1 ≤ i ≤ r.
562
+ As the cohomology class of Zi ∩ Xe(s) lies in F pH2p(Xe(s), Q), we have γZi ∈ F p
563
+ s H2p(X∞, Q)
564
+ for all s ∈ h (notations as in §2.3). This implies γZi lies in exp(2πisN)F p
565
+ s H2p(X∞, Q) for every
566
+ s ∈ h. Recall from §2.3 that F pH2p(X∞, Q) contains the limit of exp(2πisN)F p
567
+ s H2p(X∞, C)
568
+ as Im(s) approaches ∞. Hence, γZi ∈ F pH2p(X∞, Q). As γZi is monodromy invariant and a
569
+ rational class, it must lie in W2pH2p(X∞, Q) (use the invariant cycle theorem along with the
570
+ fact that the degree 2p cohomology of the central fiber is of weight at most 2p). Therefore,
571
+ γ ∈ H2p
572
+ Hdg(X∞)inv. This proves the first part of the proposition.
573
+
574
+ We now ask when is H2p
575
+ A (X∞) isomorphic to H2p
576
+ Hdg(X∞)inv? One can naively guess that if the
577
+ general fibers in the family π satisfy the Hodge conjecture then this happens. However, this is
578
+ not enough (see Example 3.3 above). In particular, one needs to additionally assume that the
579
+ family π is Mumford-Tate. We prove:
580
+ Proposition 4.4. Suppose that π is a Mumford-Tate family of weight p. If a general fiber of π
581
+ satisfies HC(p, n), then the inclusion from H2p
582
+ A (X∞) to H2p
583
+ Hdg(X∞)inv is an isomorphism.
584
+ Note that, by general in the statement of the proposition, we mean the complement of finitely
585
+ many proper, closed subvarieties of the punctured disc ∆∗.
586
+ Proof. We need to show that every element in H2p
587
+ Hdg(X∞)inv lies in H2p
588
+ A (X∞).
589
+ Since π is a
590
+ Mumford-Tate family, we have
591
+ H2p
592
+ Hdg(X∞)inv =
593
+ lim
594
+ Im(s)→∞(F p
595
+ s H2p(X∞, Q) ∩ H2p(X∞, Q)inv).
596
+ (4.1)
597
+ It therefore suffices to show that
598
+ lim
599
+ Im(s)→∞(F p
600
+ s H2p(X∞, Q) ∩ H2p(X∞, Q)inv)
601
+ is contained in H2p
602
+ A (X∞).
603
+ By Lemma 2.4 for every 0 < ǫ ∈ R small enough, there exists sǫ ∈ ∆∗ of distance less than ǫ
604
+ from the origin, such that Xsǫ satisfies HC(p, n) and every closed subvariety Zsǫ of codimension
605
+ p in Xsǫ extends to a ∆∗-flat closed subscheme Z ⊂ X∆∗ such that the fiber Z ∩ Xsǫ over sǫ
606
+ is isomorphic to Zsǫ.
607
+ As observed before Definition 4.2, the fundamental class of Z defines
608
+ a section γZ ∈ H2p
609
+ A (X∞) and is monodromy invariant. Since F pH2p(Xsǫ, Q) is isomorphic to
610
+ H2p
611
+ A (Xsǫ), this implies
612
+ H2p(X∞, Q)inv ∩ F p
613
+ sǫH2p(X∞, Q) = (i∗
614
+ sǫ)−1(H2p
615
+ A (Xsǫ)) ⊆ H2p
616
+ A (X∞),
617
+ where isǫ is the natural inclusion of Xe(sǫ) into X∞. Therefore, the limit as Im(s) tends to ∞,
618
+ of H2p(X∞, Q)inv ∩ F p
619
+ s H2p(X∞, Q) is contained in H2p
620
+ A (X∞). This proves the proposition.
621
+
622
+ 4.3. Operational Chow group. Let Y be a quasi-projective variety (possibly singular), of
623
+ dimension say n. Consider a non-singular hyperenvelope of a compactification of Y (see [10,
624
+ §1.4.1] for the definition and basic properties of hyperenvelopes). The hyperenvelope gives rise
625
+ to a cochain complex of motives (see [10, §2.1]). For any positive integer p, one can then obtain
626
+ an abelian group R0CHp(Y ) arising as the cohomology group after applying the functor CHp(−)
627
+
628
+ MUMFORD TATE GROUPS AND THE HODGE CONJECTURE
629
+ 11
630
+ to the cochain complex of motives (see [10, §3.1.4]). Observe that R0CHp(Y ) does not depend
631
+ on the choice of the compactification or the hyperenvelope. Note that,
632
+ Theorem 4.5. Fix a positive integer p. Then, the following holds true for R0CHp(Y ):
633
+ (1) if Y is projective, then R0CHp(Y ) is the operational Chow group Ap(Y ) defined by
634
+ Fulton and MacPherson (see [9, Chapter 17]),
635
+ (2) if Y is non-singular (but not necessarily projective), then Ap(Y ) is the free abelian group
636
+ generated by the codimension p subvarieties in Y , up to rational equivalence,
637
+ (3) if Y is non-singular and Y is a compactification of Y with boundary Z := Y \Y , we then
638
+ have the exact sequence:
639
+ 0 → R0CHp(Y ) → R0CHp(Y ) → R0CHp(Z)
640
+ (4.2)
641
+ (4) if Y is the union of two proper closed subvarieties Y1 and Y2, then we have the exact
642
+ sequence:
643
+ 0 → R0CHp(Y ) → R0CHp(Y1) ⊕ R0CHp(Y2) → R0CHp(Y1 ∩ Y2).
644
+ (4.3)
645
+ Proof.
646
+ (1) This is [10, Proposition 4].
647
+ (2) This is [9, Proposition 17.3.1 and Corollary 17.4].
648
+ (3) This is [10, Theorem 2(iii) and §3.1.1].
649
+ (4) This is [10, Theorem 2(iv) and §3.1.1].
650
+
651
+ Notation 4.6. If Y is quasi-projective but not projective, we denote by Ap
652
+ c(Y ) := R0CHp(Y ),
653
+ the compactly supported operational Chow cohomology.
654
+ Given any compactification Y of Y ,
655
+ Theorem 4.5 implies that we have the following exact sequence
656
+ 0 → Ap
657
+ c(Y ) → Ap(Y ) → Ap(Y \Y )
658
+ (4.4)
659
+ For Y a projective variety, there are natural functorial cycle class maps (see [2] or [17, §2]):
660
+ clp : Ap(Y ) → GrW
661
+ 2pH2p(Y, Q) and clc
662
+ p : Ap
663
+ c(Ysm) → GrW
664
+ 2pH2p
665
+ c (Ysm, Q)
666
+ which agree with the usual cycle class map (see [31, §11.1.2]) if Y is non-singular (here Ysm
667
+ denotes the smooth locus of Y ). For Y projective, define the algebraic cohomology group denoted
668
+ by H2p
669
+ A (Y ) ⊂ GrW
670
+ 2pH2p(Y, Q) to be the image of the cycle class map clp.
671
+ 4.4. Bloch-Gillet-Soul´e Cycle class map. Let Y be a scheme and φ : U → Y , γ : V → U×Y U
672
+ be envelopes. Let pi : V → U denote the compositions of γ with the projections U ×Y U → U.
673
+ Theorem 4.7. ( [2, Theorem A.3]) There is a left-exact sequence of Chow cohomology groups
674
+ 0 → CH∗(Y )
675
+ φ∗
676
+ −→ CH∗(U)
677
+ p∗
678
+ 1−p∗
679
+ 2
680
+ −−−−→ CH∗(V ).
681
+ Using the cycle map over smooth, quasi-projective varieties U and V , Bloch-Gillet-Soul´e uses
682
+ the above theorem to conclude:
683
+ Corollary 4.8. ( [2, Corollary A.4]) On the category of varieties over C, there is a “cycle class”
684
+ natural transformation of contravariant functors to the category of commutative, graded rings:
685
+
686
+ p
687
+ clp :
688
+
689
+ p
690
+ CHp(−) →
691
+
692
+ p
693
+ GrW
694
+ 0 H2p(− , Q(p)).
695
+
696
+ 12
697
+ A. DAN AND I. KAUR
698
+ 4.5. Singular Hodge conjecture. We are now ready to give a formulation of the Hodge
699
+ conjecture for singular varieties. Let Y be a projective variety of dimension n. Fix a positive
700
+ integer p ≤ n. We say that Y satisfies SHC(p, n) if the singular locus of Y is of dimension at
701
+ most p − 1 and the algebraic cohomology group H2p
702
+ A (Y ) coincides with
703
+ H2p
704
+ Hdg(Y ) := GrW
705
+ 2pH2p(Y, Q) ∩ F 2pGrW
706
+ 2pH2p(Y, C).
707
+ In the case when Y is non-singular and projective, this simply is the classical Hodge conjecture
708
+ (in weight p), which we already denote by HC(p, n).
709
+ 4.6. Algebraic cycles on simple normal crossings divisors. We now prove that the coho-
710
+ mology classes of algebraic cycles on a simple normal crossings variety are Hodge classes. This
711
+ is a generalization to the singular case of a classical result in Hodge theory. Recall, X0 is called a
712
+ simple normal crossings variety if X0 is connected, X0 = X1 ∪ ... ∪ Xr with Xi irreducible, non-
713
+ singular for all i and the intersection of any p of the irreducible components of X0 is non-singular
714
+ of codimension p, for any p ≥ 1.
715
+ Lemma 4.9. Let X0 be a simple normal crossings variety. Then, the cycle class map clp from
716
+ Ap(X0) to GrW
717
+ 2pH2p(X0, Q) factors through
718
+ H2p
719
+ Hdg(X0) := F pGrW
720
+ 2pH2p(X0, C) ∩ GrW
721
+ 2pH2p(X0, Q).
722
+ Proof. We use recursion on the components of X0. Let X1, ..., Xr be the irreducible components
723
+ of X0. Denote by Zi := X0\(X1 ∪ ... ∪ Xi), the complement of the components X1, ..., Xi for
724
+ i ≥ 1. Let Z0 := X0. Since Xi, Xj and Xi ∩ Xj are non-singular for all i, j, they have pure
725
+ Hodge structures. Moreover by [25, Theorem 5.39], H2p−1(Xi ∩Zi, Q) is of weight at most 2p−1
726
+ i.e., GrW
727
+ 2pH2p−1(Xi ∩ Zi, Q) = 0 for all 1 ≤ i ≤ r − 1. Therefore for all 1 ≤ i ≤ r − 1, we have
728
+ the following exact sequence of pure Hodge structures:
729
+ 0 → GrW
730
+ 2pH2p(Zi−1, Q) → H2p(Xi, Q) ⊕ GrW
731
+ 2pH2p(Zi, Q) → GrW
732
+ 2pH2p(Xi ∩ Zi, Q)
733
+ (4.5)
734
+ Moreover, by Theorem 4.5, we have the exact sequence:
735
+ 0 → Ap(Zi−1) → Ap(Xi) ⊕ Ap(Zi) → Ap(Xi ∩ Zi)
736
+ (4.6)
737
+ By the functoriality of the cycle class maps clp, we have the following diagram
738
+ 0
739
+ ✲ Ap(Zi−1)
740
+ ✲ Ap(Xi) ⊕ Ap(Zi)
741
+ ✲ Ap(Xi ∩ Zi)
742
+ 0
743
+ ✲ GrW
744
+ 2pH2p(Zi−1, Q)
745
+ clp
746
+
747
+ ✲ H2p(Xi, Q) ⊕ GrW
748
+ 2pH2p(Zi, Q)
749
+ clp
750
+
751
+ ✲ GrW
752
+ 2pH2p(Xi ∩ Zi, Q)
753
+ clp
754
+
755
+ For the base case, consider i = r−1. Note that, Zr−1 = Xr. Since Xr is non-singular, Ap(Zr−1)
756
+ is the usual Chow group. Therefore, clp(Ap(Zr−1)) ⊂ H2p
757
+ Hdg(Zr−1).
758
+ Now for the recursion step. Assume that clp(Ap(Zi)) ⊂ H2p
759
+ Hdg(Zi). Since the exact sequence
760
+ (4.5) is a morphism of pure Hodge structures, the commutativity of the left hand square implies
761
+ that clp(Ap(Zi−1)) ⊂ H2p
762
+ Hdg(Zi−1). This proves the lemma.
763
+
764
+
765
+ MUMFORD TATE GROUPS AND THE HODGE CONJECTURE
766
+ 13
767
+ 5. Main results
768
+ In this section we introduce the concept of MT-smoothable varieties. Consider a simple normal
769
+ crossings variety X (in the sense of §4.6). Denote by X(2) the disjoint union of intersection of
770
+ any 2 irreducible components of X. We prove that if X is MT-smoothable and X(2) satisfies
771
+ HC(p − 1, n − 1) then X satisfies SHC(p, n) (see Theorem 5.2).
772
+ This is a generalization of
773
+ Theorem 1.1 in the introduction. Moreover, if there is an irreducible component Xi of X such
774
+ that the restriction morphism on cohomology is surjective, then Xi satisfies the classical Hodge
775
+ conjecture (see Corollary 5.3). Finally, if the variety has worse singularities than simple normal
776
+ crossings, then we reduce the singular Hodge conjecture to a question solely on the algebraic
777
+ classes (see Theorem 5.5).
778
+ Definition 5.1. Let X be a singular projective variety of dimension n and p be an integer such
779
+ that dim(Xsing) ≤ p − 1. We say that X is MT-smoothable of weight p if there exists a flat,
780
+ projective, Mumford-Tate family
781
+ π0 : Y → ∆
782
+ smooth over ∆∗, containing X as a central fiber and a general fiber satisfying HC(p, n). We call
783
+ π0 a MT-smoothing of weight p of X.
784
+ Given a normal crossings variety X, we prove:
785
+ Theorem 5.2. Let X be a simple normal crossings variety of dimension n. Assume that every
786
+ irreducible component of X(2) satisfies HC(p − 1, n − 1). If X is MT-smoothable of weight p,
787
+ then X satisfies SHC(p, n) i.e.,
788
+ H2p
789
+ A (X, Q) ∼= H2p
790
+ Hdg(X, Q).
791
+ Moreover, for every irreducible component Xi of X, the image of the restriction morphism from
792
+ H2p
793
+ Hdg(X, Q) to H2p
794
+ Hdg(Xi, Q) are cohomology classes of algebraic cycles i.e., the image
795
+ Im(H2p
796
+ Hdg(X, Q) → H2p
797
+ Hdg(Xi, Q))
798
+ is contained in H2p
799
+ A (Xi, Q).
800
+ Proof. Since X is MT-smoothable of weight p, there exists a Mumford-Tate family of weight p
801
+ π : X → ∆
802
+ with central fiber X and general fibers satisfying HC(p, n). By Proposition 4.4 and Lemma 4.9,
803
+ we have a morphism spA from H2p
804
+ A (X) to H2p
805
+ A (X∞) given by the composition:
806
+ spA : H2p
807
+ A (X) ֒→ H2p
808
+ Hdg(X)
809
+ sp
810
+ −→ H2p
811
+ Hdg(X∞)inv ∼= H2p
812
+ A (X∞).
813
+ We claim that spA is surjective. Recall from Definition 4.2, H2p
814
+ A (X∞) is generated as a Q-vector
815
+ space by classes γZ where Z ⊂ X∆∗ is a ∆∗-flat closed subscheme of relative dimension n − p.
816
+ Denote by Z the closure of Z in X. By [9, §6.1], the intersection product Z.Xi of Z with Xi
817
+ is of codimension p in Xi . Denote by γi ∈ H2p(Xi, Q) the cohomology class of the intersection
818
+ product Z.Xi for 1 ≤ i ≤ r. By the associativity of intersection product (see [9, Proposition
819
+ 8.1.1 or Proposition 8.3]), for any pair of integers 1 ≤ i < j ≤ r, the image of γi (resp. γj) under
820
+ the restriction morphisms from H2p(Xi, Q) (resp. H2p(Xj, Q)) to H2p(Xi ∩ Xj, Q) coincides.
821
+ Using (4.5) one can observe that there exists an algebraic cohomology class γ ∈ H2p
822
+ A (X) such
823
+ that the image of γ under the restriction morphism from H2p
824
+ A (X) to H2p
825
+ A (Xi) is γi for 1 ≤ i ≤ r.
826
+ In other words, the cohomology class of Z in H2p(X, Q) (see [25, §B.2.9]) pulls back to γ in
827
+ H2p(X, Q) and to the cohomology class [Z ∩ Xt] ∈ H2p(Xt, Q) over Xt, for any t ∈ ∆∗. This
828
+
829
+ 14
830
+ A. DAN AND I. KAUR
831
+ means that under the specialization morphism sp from H2p(X, Q) to H2p(X∞, Q), γ maps to
832
+ γZ. This proves our claim.
833
+ By Proposition 2.3, the kernel of the specialization morphism
834
+ GrW
835
+ 2pH2p(X, Q) = E0,2p
836
+ 2
837
+ sp
838
+ −→ ∞E0,2p
839
+ 2
840
+ = GrW
841
+ 2pH2p(X∞, Q)
842
+ is isomorphic to the image of the Gysin morphism from H2p−2(X(2), Q) to H2p(X, Q) (as X(2)
843
+ is non-singular, H2p−2(X(2), Q) has a pure Hodge structure of weight 2p − 2). By assumption,
844
+ every irreducible component of X(2) satisfies HC(p − 1, n − 1).
845
+ Then, we get the following
846
+ commutative diagram of exact sequences:
847
+ H2p
848
+ A (X(2))
849
+ ✲ H2p
850
+ A (X)
851
+ spA✲ H2p
852
+ A (X∞)
853
+ ✲ 0
854
+
855
+
856
+ H2p
857
+ Hdg(X(2))
858
+ ∼=
859
+
860
+ ✲ H2p
861
+ Hdg(X)
862
+
863
+
864
+ sp✲ H2p
865
+ Hdg(X∞)inv
866
+ ∼=
867
+
868
+ By diagram chase (or using four lemma for the diagram of exact sequences), we conclude that the
869
+ middle morphism from H2p
870
+ A (X) to H2p
871
+ Hdg(X) is surjective, hence an isomorphism. This proves
872
+ the first part of the theorem. The second part of the theorem follows immediately from the
873
+ following commutative diagram, which arises from the Mayer-Vietoris sequence:
874
+ H2p
875
+ A (X) ⊂
876
+ ✲ H2p
877
+ A (Xi) ⊕ H2p
878
+ A (X\Xi)
879
+
880
+ H2p
881
+ Hdg(X)
882
+ ∼=
883
+
884
+ ⊂✲ H2p
885
+ Hdg(Xi) ⊕ H2p
886
+ Hdg(X\Xi)
887
+
888
+
889
+ This proves the theorem.
890
+
891
+ Corollary 5.3. Notations and hypothesis as in Theorem 5.2. Let X1 be an irreducible com-
892
+ ponent in X such that the complement Xc
893
+ 1 := X\X1 (the closure of X\X1 in X) satisfies:
894
+ Im(H2p
895
+ Hdg(X1) → H2p
896
+ Hdg(Xc
897
+ 1 ∩ X1)) ⊂ Im(H2p
898
+ Hdg(Xc
899
+ 1) → H2p
900
+ Hdg(Xc
901
+ 1 ∩ X1)).
902
+ (5.1)
903
+ Then, X1 satisfies HC(p, n).
904
+ Proof. Using the Mayer-Vietoris sequence we have the following commutative diagram:
905
+ 0
906
+ ✲ H2p
907
+ A (X)
908
+ ✲ H2p
909
+ A (X1) ⊕ H2p
910
+ A (Xc
911
+ 1)
912
+
913
+ 0
914
+ ✲ H2p
915
+ Hdg(X)
916
+ ∼=
917
+
918
+ ⊂✲ H2p
919
+ Hdg(X1) ⊕ H2p
920
+ Hdg(Xc
921
+ 1)
922
+
923
+
924
+ ✲ H2p
925
+ Hdg(Xc
926
+ 1 ∩ X1)
927
+ where the isomorphism of the first vertical arrow follows from Theorem 5.2 and the bottom row
928
+ is exact. If (5.1) is satisfied then for any γ ∈ H2p
929
+ Hdg(X1), there exists γ′ ∈ H2p
930
+ Hdg(Xc
931
+ 1) such that
932
+ their restrictions to X1 ∩ Xc
933
+ 1 agree. In other words, γ ⊕ γ′ maps to zero in H2p
934
+ Hdg(Xc
935
+ 1 ∩ X1).
936
+ By diagram chase, one observes that there exists γA ∈ H2p
937
+ A (X1) which maps to γ. This proves
938
+ H2p
939
+ A (X1) ∼= H2p
940
+ Hdg(X1). In other words, X1 satisfies HC(p, n). This proves the corollary.
941
+
942
+
943
+ One immediately asks whether there are examples where (5.1) is satisfied?
+ Example 5.4. Let X be a projective variety of dimension n with only ordinary double point singularities. Suppose also that X is smoothable. Then, there exists a flat, projective family
+     π0 : Y → ∆,
+ smooth over ∆∗, with X as the central fiber and Y a regular variety. Moreover, there exists a semi-stable reduction of π0:
+     π : X → ∆
+ such that the central fiber X0 := X̃ ∪ E, where E is a disjoint union of quadric hypersurfaces in P^{n+1} and E ∩ X̃ is the intersection of E by hyperplanes in copies of P^{n+1}. If n = 2p for some p, then the n-th rational cohomology of a quadric hypersurface in P^n is isomorphic to Q. This implies that the natural restriction morphism from H^{2p}(E) to H^{2p}(E ∩ X̃) is surjective. In this case, taking X1 := X̃, (5.1) is satisfied.
+ A natural conjecture arises from our observations:
+ Conjecture A. Let X be a singular projective variety, φ : X̃ → X be any resolution of singularities and E be the exceptional divisor. Let p be an integer such that dim(X_sing) ≤ p − 1. We then have an exact sequence on cohomology (see [25, Corollary-Definition 5.37]):
+     H^{2p}(X) → H^{2p}(X̃) → H^{2p}(E).
+ We conjecture that taking algebraic cohomology groups preserves the exactness of the sequence, i.e., the following sequence is exact:
+     H^{2p}_A(X) → H^{2p}_A(X̃) → H^{2p}_A(E).
+ We now observe that this conjecture is closely related to the singular Hodge conjecture (which is equivalent to the Hodge conjecture).
+ Theorem 5.5. Let X be a singular projective variety of dimension n and p be an integer such that dim(X_sing) ≤ p − 1. If X satisfies SHC(p, n), then X satisfies Conjecture A. Conversely, if HC(p − 1, n − 1) holds true, X is MT-smoothable of weight p and satisfies Conjecture A, then X satisfies SHC(p, n).
982
+ Proof. If X satisfies SHC(p, n), then H^{2p}_A(X) ≅ H^{2p}_Hdg(X). Let
+     φ : X̃ → X
+ be a resolution of X and E be the exceptional divisor. We then have the following commutative diagram:
+     H^{2p}_A(X)    ------>  H^{2p}_A(X̃)    ------>  H^{2p}_A(E)
+          | ≅                    |                        |
+     H^{2p}_Hdg(X)  ⊂---->  H^{2p}_Hdg(X̃)  ------>  H^{2p}_Hdg(E)                           (5.2)
+ where the bottom row is exact, injective on the left, and the top row is a complex. To prove Conjecture A, we need to show that the top row is exact in the middle. For this, take γ ∈ H^{2p}_A(X̃) which maps to zero in H^{2p}_A(E). By diagram chase it is easy to check that there exists γ′ ∈ H^{2p}_A(X) which maps to γ. In other words, the top row of (5.2) is exact in the middle. This proves the first part of the theorem.
1024
+ We now assume that X satisfies Conjecture A. Let π0 : Y → ∆ be a MT-smoothing of weight p of X. By the semi-stable reduction theorem (see [15, Chapter II]) there exists a flat, projective family π : X → ∆ which has the same fiber over ∆∗ as π0, X is regular, and the central fiber X0 is a reduced simple normal crossings divisor with one of the irreducible components, say X̃, being proper birational to X. Furthermore, the complement X̃^c := X0\X̃ satisfies:
+     X0\X̃^c ≅ X̃\(X̃^c ∩ X̃) ≅ X\X_sing,
+ i.e., X is isomorphic to Y away from X_sing. Using the Mayer-Vietoris sequence and Conjecture A we have the following commutative diagram of exact sequences:
+     H^{2p}_A(X)   ------>  H^{2p}_A(X̃)                       ------>  H^{2p}_A(X̃ ∩ X̃^c)
+          |                     |                                            | ≅
+     H^{2p}_A(X0)  ------>  H^{2p}_A(X̃) ⊕ H^{2p}_A(X̃^c)     ------>  H^{2p}_A(X̃ ∩ X̃^c)       (5.3)
1068
+ where the first vertical morphism is induced by the pullback from X to X0 and the second one is the natural inclusion. By the snake lemma, this gives rise to the exact sequence:
+     0 → H^{2p}_A(X) → H^{2p}_A(X0) → H^{2p}_A(X̃^c).                                        (5.4)
+ Since X_sing is of dimension at most p − 1, H^i(X_sing) = 0 for i ≥ 2p − 1. Then, the long exact sequences in cohomology associated to the pairs (X, X_sing) and (X0, X̃^c) (see [25, Proposition 5.46 and Corollary B.14]) imply GrW_{2p} H^{2p}_c(U) ≅ GrW_{2p} H^{2p}(X), where U := X\X_sing. Furthermore,
+     0 → GrW_{2p} H^{2p}_c(U, Q) → GrW_{2p} H^{2p}(X0, Q) → GrW_{2p} H^{2p}(X̃^c, Q)
+ is an exact sequence of pure Hodge structures. This gives rise to the exact sequence:
+     0 → H^{2p}_Hdg(X) → H^{2p}_Hdg(X0) → H^{2p}_Hdg(X̃^c)                                   (5.5)
+ of Q-vector spaces. Then, there is a natural morphism of exact sequences from (5.4) to (5.5):
+     0 --->  H^{2p}_A(X)    ------>  H^{2p}_A(X0)    ------>  H^{2p}_A(X̃^c)
+                |                        | ≅                       |
+     0 --->  H^{2p}_Hdg(X)  ------>  H^{2p}_Hdg(X0)  ------>  H^{2p}_Hdg(X̃^c)
+ where the isomorphism of the middle vertical arrow follows from Theorem 5.2. Applying the snake lemma once again, we conclude that the first vertical morphism is surjective. In other words, X satisfies SHC(p, n). This proves the converse and hence the theorem. □
1124
+
1125
+ 6. Examples of Mumford-Tate families
1126
+ In §3 we introduced Mumford-Tate families.
1127
+ For such families, the central fiber displays
1128
+ interesting properties. For example, if the central fiber is smooth, then it is easy to check that it
1129
+ satisfies the Hodge conjecture if a general fiber satisfies the Hodge conjecture. More generally,
1130
+ if the central fiber is a reduced, simple normal crossings divisor, then it satisfies the singular
1131
+ Hodge conjecture if the general fiber satisfies the Hodge conjecture (see Theorem 5.2). In this
1132
+ section we use correspondences to give a general method to produce Mumford-Tate families (see
1133
+ Theorem 6.5). We give examples in Corollary 6.6.
1134
+
1135
1137
+ 6.1. Strict Mumford-Tate families. Let π1 : X ∗ → ∆∗ be a smooth, projective morphism
1138
+ over the punctured disc ∆∗. Recall that π1 is called a Mumford-Tate family if the pullback
1139
+ of every monodromy invariant Hodge class on H2p(X∞, Q) to a general fiber is fixed by the
1140
+ associated Mumford-Tate group, for every p. Here we generalize this condition to the tensor
1141
+ algebra of the cohomology ring H∗(X∞, Q). This is a slightly stronger notion. In particular, it
1142
+ is possible that wedge product of two elements from odd degree cohomology groups become a
1143
+ Hodge class, although they are individually not Hodge classes. This is a common phenomena
1144
+ appearing in the cohomology of abelian varieties, for example. This will play a crucial role below
1145
+ to produce new examples of Mumford-Tate families.
1146
+ In order to study the tensor algebras more effectively, we separate the odd cohomology groups from the even ones. We take the exterior algebra of the odd cohomology groups and the symmetric algebra of the even ones. This is done to preserve compatibility with cup-products. Given two r-tuples of positive integers m := (m1, ..., mr) and k := (k1, ..., kr), denote by
+     T^k_m := ∧^{k1} H^{m1}(X∞, Q) ⊗ ... ⊗ ∧^{kr} H^{mr}(X∞, Q),    if each mi is odd,
+     T^k_m := Sym^{k1} H^{m1}(X∞, Q) ⊗ ... ⊗ Sym^{kr} H^{mr}(X∞, Q),    if each mi is even.
+ Given an r-tuple of even positive integers m := (m1, ..., mr), an l-tuple of odd positive integers n := (n1, ..., nl) and an r (resp. l) tuple of arbitrary positive integers k := (k1, ..., kr) (resp. k′ := (k′1, ..., k′l)), denote by T^{(k,k′)}_{(m,n)} the pure part of T^k_m ⊗ T^{k′}_n, i.e., T^{(k,k′)}_{(m,n)} := GrW_a (T^k_m ⊗ T^{k′}_n), where a := Σ_{i=1}^r mi ki + Σ_{j=1}^l nj k′j. Denote by
+     T_{(m,n)} := ⊕_{(k,k′)} T^{(k,k′)}_{(m,n)},                                             (6.1)
+ where k and k′ range over all r-tuples and l-tuples of positive integers, respectively. Denote by T^s_{(m,n)} the same as T_{(m,n)} with X∞ replaced by Xs, for any s ∈ ∆∗.
1186
+ Note that the Hodge structure on H^m(Xs, Q) is pure for all m, so the “pure part” condition is redundant in this case. Let MT^s_m be the Mumford-Tate group associated to the pure Hodge structure H^m(Xs, Q). Then, the product of the Mumford-Tate groups
+     MT^s_{(m,n)} := MT^s_{m1} × MT^s_{m2} × ... × MT^s_{mr} × MT^s_{n1} × MT^s_{n2} × ... × MT^s_{nl}
+ acts on T^s_{(m,n)}. The family π is called strictly Mumford-Tate with respect to (m, n) if for any Hodge class γ ∈ T_{(m,n)} and s ∈ h general, j∗_s(γ) is fixed by MT^s_{(m,n)}, where
+     j∗_s : T_{(m,n)} → T^s_{(m,n)}
+ is induced by the pullback of the natural inclusion of Xs inside X∞.
1207
+ Proposition 6.1. Let π1 : X → ∆ be a flat, projective family of genus g curves for g ≥ 2. We assume that π1 is smooth over ∆∗ and the central fiber is a very general irreducible nodal curve (in the sense of [7]). Then, π1 is strictly Mumford-Tate with respect to ((0, 2), (1)).
+ Proof. Consider the family of Jacobians associated to the family of curves π1,
+     π2 : J → ∆∗,  i.e., for all t ∈ ∆∗, π2^{-1}(t) = Jac(Xt).
+ By the definition of cohomology of abelian varieties, there is a natural isomorphism of mixed Hodge structures between H^1(X∞, Q) and H^1(J∞, Q). This induces an isomorphism of mixed Hodge structures,
+     ∧^∗ H^1(X∞, Q)  ≅  H^∗(J∞, Q).
+ By [7, Theorem 4.3], we have
+     H^∗_Hdg(J∞, Q) ≅ Q[θ]/(θ^{g+1}),  where g = genus(Xt), t ∈ ∆∗.
+ Note that Sym^∗ H^0(X∞, Q) ≅ Q[T0] and Sym^∗ H^2(X∞, Q) ≅ Q[T1], where T0 and T1 are Hodge classes. Consider the direct sum of vector spaces T_{(0,2),(1)} as in (6.1) associated to the family π1. Then, the space of Hodge classes T_Hdg in T_{(0,2),(1)} is isomorphic to Q[T0, T1, θ]/(θ^{g+1}). Similarly, the set of Hodge classes T^s_Hdg in T^s_{(0,2),(1)} contains Q[T^s_0, T^s_1, θ^s]/((θ^s)^{g+1}), where (−)^s := j∗_s(−). Hence, T^s_0, T^s_1 and θ^s are fixed by the Mumford-Tate group MT^s_{(0,2),(1)}. Therefore, π1 is strictly Mumford-Tate with respect to ((0, 2), (1)). This proves the proposition. □
1239
+
1240
+ 6.2. Cohomologies generated by Chern classes. Let X, Y be smooth, projective varieties of dimension m and n, respectively. Combining the Künneth decomposition with Poincaré duality, we have for every i, k ≥ 0,
+     H^{2i}(X × Y) ≃ ⊕_k (H^{2m−k}(X))^∨ ⊗ H^{2i−k}(Y) ≃ ⊕_k Hom(H^{2m−k}(X), H^{2i−k}(Y)).      (6.2)
+ Let E be a coherent sheaf on the product X × Y and ci(E) be the i-th Chern class of E. Denote by Φ^{(i,k)}_E the projection of ci(E) to the component Hom(H^{2m−k}(X), H^{2i−k}(Y)). By [31, Lemma 11.41], the induced morphism
+     Φ^{(i,k)}_E : H^{2m−k}(X) → H^{2i−k}(Y)  is a morphism of pure Hodge structures.              (6.3)
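+ As an orienting special case (our illustration; this particular choice of (i, k) is ours and is not fixed by the text): if X is a smooth curve, so m = 1, Y is a moduli space of bundles on X and E is a universal sheaf on X × Y, then for (i, k) = (2, 1) the component Φ^{(2,1)}_E of c2(E) is a morphism
+     Φ^{(2,1)}_E : H^1(X, Q) → H^3(Y, Q),
+ which is the kind of map exploited in [23, 24] and in Corollary 6.6 below.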
1261
+ Theorem 6.2. Let π1 : X∗ → ∆∗ and π2 : Y∗ → ∆∗ be two smooth, projective families of relative dimensions m and n, respectively. Assume that there exists a coherent sheaf U over X∗ ×_{∆∗} Y∗ such that it is flat over ∆∗. Then the morphism
+     Φ^{(i,k)}_{Ut} : H^{2m−k}(Xt) → H^{2i−k}(Yt)
+ induces a morphism of (limit) mixed Hodge structures:
+     Φ^{(i,k)}_{U,∞} : H^{2m−k}(X∞) → H^{2i−k}(Y∞).
+ Furthermore, the morphisms Φ^{(i,k)}_{U,∞} and Φ^{(i,k)}_{Ut} commute with pullback to closed fibers, i.e., for any u ∈ h with e(u) = t (where e is the exponential map) we have the following commutative diagram:
+     H^{2m−k}(X∞)  --Φ^{(i,k)}_{U,∞}-->  H^{2i−k}(Y∞)
+        (j′u)∗ | ≅                            | ≅ (ju)∗
+     H^{2m−k}(Xt)   --Φ^{(i,k)}_{Ut}--->  H^{2i−k}(Yt)                                       (6.4)
+ where ju : Yt ֒→ Y∞ and j′u : Xt ֒→ X∞ are the natural inclusions.
1292
+ Proof. Consider the natural projective morphisms:
+     π : X∗ ×_{∆∗} Y∗ → ∆∗,  π1 : X∗ → ∆∗  and  π2 : Y∗ → ∆∗.
+ Consider the local system H^{2i} := R^{2i}π∗Z over ∆∗. We denote by
+     H^i_{X∗} := R^iπ1∗Z  and  H^i_{Y∗} := R^iπ2∗Z.
+ By Künneth decomposition in families (see [14, Ex. II.18]), we have
+     H^{2i} ≃ ⊕_k (H^k_{X∗} ⊗ H^{2i−k}_{Y∗}).
+ Applying Poincaré duality to the local system H^k_{X∗} (see [16, §I.2.6]), we get:
+     H^{2i} ≃ ⊕_k (H^{2m−k}_{X∗})^∨ ⊗ H^{2i−k}_{Y∗} ≃ ⊕_k Hom(H^{2m−k}_{X∗}, H^{2i−k}_{Y∗}).
+ For any i, the i-th Chern class ci(U) defines a global section of H^{2i}. Consider the projection φ of ci(U) to Hom(H^{2m−k}_{X∗}, H^{2i−k}_{Y∗}). Pulling back the morphism φ of local systems on ∆∗ to the upper half plane h and taking global sections, we get the morphism
+     Φ^{(i,k)}_{U,∞} : H^{2m−k}(X∞) → H^{2i−k}(Y∞).
+ Restricting the morphism to the fiber over u ∈ h gives us the morphism Φ^{(i,k)}_{Ut}, where t := e(u). In particular, we have the commutative diagram (6.4).
1336
+ It remains to check that Φ^{(i,k)}_{U,∞} is a morphism of limit mixed Hodge structures. By (6.3), Φ^{(i,k)}_{Ut} is a morphism of pure Hodge structures. Since the limit Hodge filtrations on X∞ and Y∞ arise simply as a limit of these Hodge filtrations, we conclude that Φ^{(i,k)}_{U,∞} preserves the limit Hodge filtrations. It remains to check that Φ^{(i,k)}_{U,∞} preserves the limit weight filtration. Equivalently, using the diagram (6.4) we need to prove that Φ^{(i,k)}_{Ut} preserves the weight filtration, where the weight filtration on Xt and Yt is induced by X∞ and Y∞, respectively (via the isomorphisms j∗_u and (j′_u)∗, respectively). Recall, the weight filtration on Xt and Yt is induced by the log of the monodromy operators (see [25, Lemma-Definition 11.9]):
+     N_X := log(T_X)  and  N_Y := log(T_Y).
+ So, it suffices to check that for all γ ∈ H^{2m−k}(Xt), we have Φ^{(i,k)}_{Ut}(N_X(γ)) = N_Y Φ^{(i,k)}_{Ut}(γ). Since ci(U) is a global section of the local system, it is monodromy invariant. This means the induced morphism φ from H^{2m−k}_{X∗} to H^{2i−k}_{Y∗} commutes with the monodromy operators, i.e., for every t ∈ ∆∗, we have the following commutative diagram:
+     H^{2m−k}(Xt)  --Φ^{(i,k)}_{Ut}-->  H^{2i−k}(Yt)
+        T_X |                               | T_Y
+     H^{2m−k}(Xt)  --Φ^{(i,k)}_{Ut}-->  H^{2i−k}(Yt)                                          (6.5)
+ where T_X and T_Y are the monodromy operators and Φ^{(i,k)}_{Ut} is as in (6.3). This implies for all γ ∈ H^{2m−k}(Xt), we have Φ^{(i,k)}_{Ut}(T_X(γ)) = T_Y Φ^{(i,k)}_{Ut}(γ). Hence,
+     Φ^{(i,k)}_{Ut}(T_X − Id)(γ) = Φ^{(i,k)}_{Ut}(T_X(γ)) − Φ^{(i,k)}_{Ut}(γ) = T_Y(Φ^{(i,k)}_{Ut}(γ)) − Φ^{(i,k)}_{Ut}(γ) = (T_Y − Id)Φ^{(i,k)}_{Ut}(γ).
+ More generally, this implies for all m ≥ 1,
+     Φ^{(i,k)}_{Ut}(T_X − Id)^m(γ) = Φ^{(i,k)}_{Ut}(T_X − Id)(T_X − Id)^{m−1}(γ) = (T_Y − Id)Φ^{(i,k)}_{Ut}(T_X − Id)^{m−1}(γ).
+ Therefore, by recursion we have Φ^{(i,k)}_{Ut}(T_X − Id)^m(γ) = (T_Y − Id)^m Φ^{(i,k)}_{Ut}(γ). Using the logarithmic expansion of N_X and N_Y we conclude:
+     Φ^{(i,k)}_{Ut}(N_X(γ)) = N_Y Φ^{(i,k)}_{Ut}(γ),  for all γ ∈ H^{2m−k}(Xt).
+ This implies that Φ^{(i,k)}_{Ut} preserves the limit weight filtration. This proves the theorem. □
1422
+
1423
+ Definition 6.3. Let X, Y be smooth, projective varieties of dimensions m and n, respectively. Denote by E a coherent sheaf on X ×_k Y. The variety Y is said to be cohomologically generated by (X, E) if there is a collection S_Y(X, E) of pairs of integers (k, i) such that H^∗(Y) is generated as a cohomology ring by the direct sum of the images of
+     Φ^{(i,k)}_E : H^{2m−k}(X) → H^{2i−k}(Y)
+ as the pair (k, i) varies over all the elements in S_Y(X, E). Note that pr1(S_Y(X, E)) need not contain all integers from 0 to 2m. We call S_Y(X, E) an associated indexing set.
+ Notations and Conventions 6.4. We fix the following notations:
+     S_even := {(k, i) ∈ S_Y(X, E) | k even}   and   S_odd := {(k, i) ∈ S_Y(X, E) | k odd},
+     p(S_even) := {2m − k | (k, i) ∈ S_even}   and   p(S_odd) := {2m − k | (k, i) ∈ S_odd},
+     q(S_even) := {2i − k | (k, i) ∈ S_even}   and   q(S_odd) := {2i − k | (k, i) ∈ S_odd}.
1436
+ Theorem 6.5. Let π1 : X∗ → ∆∗ and π2 : Y∗ → ∆∗ be two smooth, projective families of relative dimensions m and n, respectively. Assume that there exists a coherent sheaf U over X∗ ×_{∆∗} Y∗ such that it is flat over ∆∗ and, for general t ∈ ∆∗, Yt is cohomologically generated by (Xt, Ut) by an indexing set S_{Yt}(Xt, Ut) such that π1 is strictly Mumford-Tate with respect to (p(S_even), p(S_odd)). Then, the family π2 is Mumford-Tate.
+ Proof. Let t ∈ ∆∗ be such that Yt is cohomologically generated by (Xt, Ut) with indexing set S_{Yt}(Xt, Ut) such that π1 is strictly Mumford-Tate with respect to (p(S_even), p(S_odd)). Using Ehresmann’s theorem one can check that for any s ∈ ∆∗, Ys is cohomologically generated by (Xs, Us) and we have an equality of indexing sets S_{Yt}(Xt, Ut) = S_{Ys}(Xs, Us). Denote by
+     T_X := T_{(p(S_even), p(S_odd))}  and  T_Y := T_{(q(S_even), q(S_odd))} with X∞ replaced by Y∞.
+ Recall, for any (k, i) ∈ S_{Yt}(Xt, Ut) we have the morphism Φ^{(i,k)}_{U,∞} of mixed Hodge structures from H^{2m−k}(X∞) to H^{2i−k}(Y∞). This induces a morphism of mixed Hodge structures:
+     φ : T_X → T_Y.
+ Recall, the cup-product morphism is a morphism of mixed Hodge structures [8, Lemma 6.16]. So, the composition of the cup-product morphism with φ:
+     Φ : T_X --φ--> T_Y --∪--> H^∗(Y∞, Q)
+ is a morphism of mixed Hodge structures. Given s ∈ ∆∗, denote by (see §6.1)
+     T_{Xs} := T^s_{(p(S_even), p(S_odd))}  and  T_{Ys} := T^s_{(q(S_even), q(S_odd))} with Xs replaced by Ys.
+ As before, we have the following composed morphism of Hodge structures:
+     Φs : T_{Xs} → T_{Ys} --∪--> H^∗(Ys, Q),
+ where the first morphism arises from Φ^{(i,k)}_{Us} as (k, i) ranges over entries in S_{Ys}(Xs, Us). By Theorem 6.2 we then have the following commutative diagram:
+     T_X     ---Φ--->   H^∗(Y∞, Q)
+       j∗_s |               | (j′_s)∗
+     T_{Xs}  ---Φs-->   H^∗(Ys, Q)
+ where js (resp. j′_s) is the natural inclusion of Xs (resp. Ys) into X∞ (resp. Y∞).
+ Take γ ∈ F^p H^{2p}(Y∞, Q), i.e., γ is a Hodge class. We need to prove that (j′_s)∗(γ) is a Hodge class in H^{2p}(Ys, Q). Since Ys is cohomologically generated by (Xs, Us) and Φ is a morphism of mixed Hodge structures, there exists a Hodge class γ′ ∈ T_X such that Φ(γ′) = γ. As π1 is strictly Mumford-Tate with respect to (p(S_even), p(S_odd)), we have that j∗_s(γ′) is fixed by MT^s_{(p(S_even),p(S_odd))}. Hence, j∗_s(γ′) is a Hodge class in T_{Xs}. Since Φs is a morphism of Hodge structures, this means (j′_s)∗(γ) = Φs ◦ j∗_s(γ′) is a Hodge class. Therefore, π2 is a Mumford-Tate family. This proves the theorem. □
1500
+
1501
+ We now use the above theorem to get an explicit example.
+ Corollary 6.6. Let π1 : X → ∆ be a flat, projective family of curves satisfying the hypothesis in Proposition 6.1. Fix an invertible sheaf L on X∗ := π1^{-1}(∆∗) of (relative) odd degree over the punctured disc ∆∗. Let
+     π2 : M(2, L) → ∆∗
+ be a relative moduli space of rank 2 semi-stable sheaves with fixed determinant L over X∗. Then, π2 is a Mumford-Tate family.
+ Proof. Consider the universal bundle U over X∗ ×_{∆∗} M(2, L). It is well-known that for each t ∈ ∆∗, the fiber M(2, L)t := π2^{-1}(t) is cohomologically generated by (Xt, Ut) with the associated indexing set (see [24, Theorem 1]):
+     {(0, 1), (0, 2), (1, 2), (2, 2)}.
+ By Proposition 6.1, π1 is strictly Mumford-Tate. Then, Theorem 6.5 implies that π2 is a Mumford-Tate family. This proves the corollary. □
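+ To unpack Notations and Conventions 6.4 in this example (a routine check, not spelled out in the original text): here m = dim Xt = 1 and the indexing set is {(0, 1), (0, 2), (1, 2), (2, 2)}, so
+     S_even = {(0, 1), (0, 2), (2, 2)},   S_odd = {(1, 2)},
+     p(S_even) = {2 − 0, 2 − 2} = {0, 2},   p(S_odd) = {2 − 1} = {1},
+     q(S_even) = {2 − 0, 4 − 0, 4 − 2} = {2, 4},   q(S_odd) = {4 − 1} = {3}.
+ Thus the hypothesis of Theorem 6.5 asks π1 to be strictly Mumford-Tate with respect to ((0, 2), (1)), which is exactly the conclusion of Proposition 6.1, and the images of the maps Φ^{(i,k)}_{Ut} generate H^∗(M(2, L)t) in degrees 2, 3 and 4.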
1517
+
1518
+ Remark 6.7. In fact, the relative moduli space M(2, L) mentioned in Corollary 6.6 degenerates
1519
+ to a singular variety. A desingularization of this variety satisfies the classical Hodge conjecture.
1520
+ See [7, Theorem 5.2] for details.
1521
+ Acknowledgements
1522
+ This article was motivated by some questions asked by Prof. C. Simpson, after the second
1523
+ author gave a talk on the article [7] at the workshop ‘Moduli of bundles and related structures’
1524
+ held at ICTS, Bengaluru, India. We thank Prof. Simpson for his interest and the organisers
1525
+ for organising the workshop. We also thank Prof. R. Laterveer for his comments on an earlier
1526
+ draft.
1527
+ References
1528
+ [1] S. Basu, A. Dan, and I. Kaur. Degeneration of intermediate Jacobians and the Torelli theorem. Documenta
1529
+ Mathematica, 24:1739–1767, 2019.
1530
+ [2] S. Bloch, H. Gillet, and C. Soul´e. Non-archimedean Arakelov theory. Journal of Algebraic Geometry, 4(4):427–
1531
+ 486, 1995.
1532
+ [3] A. Dan. On a conjecture by Griffiths and Harris concerning certain Noether–Lefschetz loci. Communications
1533
+ in Contemporary Mathematics, 17(5):1550002, 2015.
1534
+ [4] A. Dan. On generically non-reduced components of Hilbert schemes of smooth curves. Mathematische
1535
+ Nachrichten, 290(17-18):2800–2814, 2017.
1536
+ [5] A. Dan. On a conjecture of Harris. Communications in Contemporary Mathematics, 23(07):2050028, 2021.
1537
+ [6] A. Dan and I. Kaur. Semi-regular varieties and variational Hodge conjecture. Comptes Rendus Mathematique,
1538
+ 354(3):297–300, 2016.
1539
+ [7] A. Dan and I. Kaur. Hodge conjecture for the moduli space of semi-stable sheaves over a nodal curve. Annali
1540
+ di Matematica Pura ed Applicata (1923-), pages 1–20, 2022.
1541
+
1542
1544
+ [8] T. Fujisawa. Polarizations on limiting mixed Hodge structures. Journal of Singularities, 8:146–193, 2014.
1545
+ [9] W. Fulton. Intersection theory, volume 2. Springer Science & Business Media, 2013.
1546
+ [10] H Gillet and C Soul´e. Descent, motives and K-theory. Journal f¨ur die reine und angewandte Mathematik,
1547
+ 478:127–176, 1996.
1548
+ [11] M. Green, P. A. Griffiths, and M. Kerr. Mumford-Tate Groups and Domains, Their Geometry and Arithmetic,
1549
+ volume 183 of Annals of Mathematics Studies. Princeton University Press, 2012.
1550
+ [12] R. Hartshorne. Algebraic Geometry. Graduate text in Mathematics-52. Springer-Verlag, 1977.
1551
+ [13] U. Jannsen. Mixed motives and algebraic K-theory, volume 1400. Springer, 2006.
1552
+ [14] M. Kashiwara and P. Schapira. Sheaves on manifolds. Grundlehren der Mathematischen Wissenschaften, 292.
1553
+ [15] G. Kempf, F. Knudsen, D. Mumford, and B. Saint-Donat. Toroidal embeddings 1, volume 339. Springer,
1554
+ 2006.
1555
+ [16] V. S. Kulikov. Mixed Hodge structures and singularities, volume 132. Cambridge University Press, 1998.
1556
+ [17] R. Laterveer. Surjectivity of cycle maps for singular varieties. Geometriae Dedicata, 179(1):265–278, 2015.
1557
+ [18] J. D. Lewis. A generalization of Mumford’s theorem, II. Illinois Journal of Mathematics, 39(2):288–304, 1995.
1558
+ [19] J. D. Lewis. The Hodge conjecture for a certain class of singular varieties. Mathematische Zeitschrift,
1559
+ 224(1):25–31, 1997.
1560
+ [20] J. D. Lewis and B. B. Gordon. A survey of the Hodge conjecture, volume 10. American Mathematical Soc.,
1561
+ 2016.
1562
+ [21] E. Markman. Generators of the cohomology ring of moduli spaces of sheaves on symplectic surfaces. Journal
1563
+ fur die reine und angewandte Mathematik, 544, 2002.
1564
+ [22] E. Markman. Integral generators for the cohomology ring of moduli spaces of sheaves over Poisson surfaces.
1565
+ Advances in Mathematics, 208(2):622–646, 2007.
1566
+ [23] D. Mumford and P. Newstead. Periods of a moduli space of bundles on curves. American Journal of Mathe-
1567
+ matics, 90(4):1200–1208, 1968.
1568
+ [24] P. E. Newstead. Characteristic classes of stable bundles of rank 2 over an algebraic curve. Transactions of
1569
+ the American Mathematical Society, 169:337–345, 1972.
1570
+ [25] C. Peters and J. H. M. Steenbrink. Mixed Hodge structures, volume 52. Springer Science & Business Media,
1571
+ 2008.
1572
+ [26] W. Schmid. Variation of Hodge structure: the singularities of the period mapping. Inventiones mathematicae,
1573
+ 22(3-4):211–319, 1973.
1574
+ [27] E. Sernesi. Deformaions of Algebraic Schemes. Grundlehren der Mathematischen Wissenschaften-334.
1575
+ Springer-Verlag, 2006.
1576
+ [28] J. Steenbrink. Limits of Hodge structures. Inventiones mathematicae, 31:229–257, 1976.
1577
+ [29] B. Totaro. Chow groups, Chow cohomology, and linear varieties. In Forum of Mathematics, Sigma, volume 2.
1578
+ Cambridge University Press, 2014.
1579
+ [30] C. Voisin. A counterexample to the Hodge conjecture extended to K¨ahler varieties. International Mathematics
1580
+ Research Notices, 2002(20):1057–1075, 2002.
1581
+ [31] C. Voisin. Hodge Theory and Complex Algebraic Geometry-I. Cambridge studies in advanced mathematics-76.
1582
+ Cambridge University press, 2002.
1583
+ [32] C. Voisin. Some aspects of the Hodge conjecture. Japanese Journal of Mathematics, 2(2):261–296, 2007.
1584
+ School of Mathematics and Statistics, University of Sheffield, Hicks building, Hounsfield Road,
1585
+ S3 7RH, UK
1586
+ Email address: [email protected]
1587
+ Department of Mathematical Sciences, Loughborough University, LE11 3TU, U.K
1588
+ Email address: [email protected]
1589
+
8tAzT4oBgHgl3EQfE_rp/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
8tFLT4oBgHgl3EQfBi4b/content/tmp_files/2301.11970v1.pdf.txt ADDED
@@ -0,0 +1,1601 @@
1
+ Even if Explanations:
2
+ Prior Work, Desiderata & Benchmarks for Semi-Factual XAI
3
+ Saugat Aryal1,2 , Mark T. Keane1,2
4
+ 1School of Computer Science, University College Dublin, Dublin, Ireland
5
+ 2 Insight Centre for Data Analytics, Dublin, Ireland
6
+
7
8
+
9
+ Abstract
10
+ Recently, eXplainable AI (XAI) research has
11
+ focused on counterfactual explanations as post-
12
+ hoc justifications for AI-system decisions (e.g., a
13
+ customer refused a loan might be told “if you
14
+ asked for a loan with a shorter term, it would have
15
+ been approved”). Counterfactuals explain what
16
+ changes to the input-features of an AI system
17
+ change the output-decision. However, there is a
18
+ sub-type of counterfactual, semi-factuals, that
19
+ have received less attention in AI (though the
20
+ Cognitive
21
+ Sciences
22
+ have
23
+ studied
24
+ them
25
+ extensively). This paper surveys these literatures
26
+ to summarise historical and recent breakthroughs
27
+ in this area. It defines key desiderata for semi-
28
+ factual XAI and reports benchmark tests of
29
+ historical algorithms (along with a novel, na¨ıve
30
+ method) to provide a solid basis for future
31
+ algorithmic developments.
32
+ 1 Introduction
33
+ With the emergence of deep learning there have been rising
34
+ concern about the opacity of Artifical Intelligence (AI)
35
+ systems and their impact on public and private life [Adadi
36
+ and Berrada, 2018; Guidotti et al., 2018]. Currently,
37
+ governments are taking steps to protect people’s rights in
38
+ these areas, to regulate the AI industry and ensure that
39
+ these technologies are not abused (e.g., the EU’s GDPR
40
+ [Goodman and Flaxman, 2017]). Research on eXplainable AI
41
+ (XAI) tries to address these issues using automated
42
+ explanations to improve the transparency of black-box
43
+ models, to facilitate the auditing of datasets and to ensure
44
+ fairness, accountability and trustworthiness [Gunning and
45
+ Aha, 2019; Sokol and Flach, 2019; Birhane et al., 2022].
46
+ Recently, significant research effort have been expended
47
+ on counterfactual explanations for XAI [Byrne, 2019; Miller,
48
+ 2019; Keane et al., 2021; Karimi et al., 2022]; for instance, a
49
+ recent survey paper reports 350 papers on the topic [Verma
50
+ et al., 2022]. In this paper, we survey a less-researched
51
+ special case of the counterfactual using semi-factual
52
+ explanations. In this review, we profile the literature on
53
+ semi-factuals, we define desiderata for this explanation
54
+ method, identify key evaluation metrics and implement
55
+ baselines to provide a solid base for future work.
56
+ Counterfactuals aim to explain algorithmic decisions in a
57
+ post-hoc fashion, as an after-the-fact justification, by
58
+ showing end-users what features could change an
59
+ automated decision (e.g., a customer refused a loan might
60
+ be told “if you asked for a loan with a shorter term, it would
61
+ have been approved”). In XAI, counterfactuals are typically
62
+ used to explain what changes to the input-features of an AI
63
+ system will change the output-decision (e.g., a class change,
64
+ loan-refused to the loan-approved; see also Fig. 1).
65
+ Technically, they could be called “outcome-counterfactuals”
66
+ as they capture changes to the world that change the
67
+ outcome (here, to be consistent with the literature, we will
68
+ mostly call them “counterfactuals”).
69
+ Semi-factuals are a special-case of the counterfactual;
70
+ they differ from outcome-counterfactuals in that they show
71
+ end-users the feature changes that do not change an outcome (e.g., “Even if you asked for a lower car-loan, you would still have been refused the loan” or “Even if you doubled your income, you would still be refused”).
+ Figure 1: A and B are two semi-factuals (in blue) for the query Q (in green), all in the same class (i.e. the negative one), whereas the counterfactual C (in red) is over the decision boundary in the positive class. A is considered to be a better semi-factual than B, because A is further from Q and closer to the decision boundary.
+ They
90
+ are “counterfactual” in that they convey possibilities that
91
+ “counter” what actually occurred, even though the outcome
92
+ does not change (see Fig. 1). Philosophers have argued over whether semi-factuals really differ from outcome-counterfactuals (see [Bennett, 2003; Goodman,
99
+ 1947]), but they have been shown to differ in their
100
+ psychological impacts [McCloy and Byrne, 2002; Parkinson
101
+ and Byrne, 2017].
102
+ We believe that the benefits accruing to counterfactuals also
103
+ accrue to semi-factuals in XAI; namely, that they have many
104
+ legal [Wachter et al., 2017], psychological [Byrne, 2019] and
105
+ technical benefits [Keane et al., 2021]. For example, in
106
+ medicine it is often important to know what changes (e.g.,
107
+ inflammation or cell changes) occur just before an illness
108
+ emerges (e.g., an ulcer or cancer). Similarly, semi-factuals
109
+ can reveal errors in causal models (e.g., a farmer might be
110
+ told “Even if you doubled fertiliser use, your yield would not
111
+ increase” because of soil factors). However, as we shall see,
112
+ semi-factuals also differ significantly in several respects from
113
+ counterfactuals (see desiderata, section 4).
114
+ Outline of Paper & Contributions: In this paper, we systematically review prior work on semi-factuals (henceforth, SFs) in the Cognitive Sciences and AI, beginning
122
+ with a discussion of key examples from the early literature in
123
+ Philosophy and Psychology (see section 2). From this work
124
+ we define desiderata for SFs (section 3). In section 4, we
125
+ report the results of a systematic survey before sketching
126
+ the brief history of semi-factual algorithms for explanation
127
+ (section 5). We then report a benchmarking study
128
+ implementing key historical algorithms along with a newly-
129
+ proposed na¨ıve benchmark (see section 6), before closing
130
+ with some conclusions (see section 7). As such, the paper
131
+ makes several novel contributions to this emerging area of
132
+ XAI, providing:
133
+ • A comprehensive survey of the relevant literature.
134
+ • A first statement of desiderata for semi-factual XAI.
135
+ • A naïve benchmark algorithm, based on the new idea of Most Distant Neighbors (MDNs).
+ • Novel comparative tests of historical benchmarks, to identify the best for future use.
+ • A publicly-available repository of metrics, data, re-
140
+ sults, benchmarks and an annotated bibliography (see
141
+ https://github.com/itsaugat/sf survey).
142
+ 2 Philosophy & Psychology of Semi-Factuals
143
+ Semi-factuals have been studied under different guises in
144
+ Philosophy and Psychology for several decades. In
145
+ Philosophy, counterfactuals (if only...) and semi-factuals
146
+ (even if...) are often compared to conditionals (if...then) with
147
+
148
+ 1 Because Philip is allergic to the ice-cream in both desserts.
149
+ a view to analysing their logic, truth conditions and role in
150
+ causation [Chisholm, 1946; Goodman, 1947; Bennett, 1982;
151
+ Barker, 1991; Bennett, 2003]. For example, [Bennett, 1982]
152
+ and [Barker, 1991] argue about how the words “even” and
153
+ “still” affect the interpretation of examples, such as:
154
+ (1) Even if the United States has used nuclear weapons in
155
+ Vietnam, it would still have lost the war.
156
+ where the semi-factual asserts that even if the military-force
157
+ expended by United States significantly increased, the
158
+ Vietnam War would still have been lost. In AI terms, the
159
+ semifactual says increasing the feature-value of military-
160
+ force would not change the outcome. So, [Iten, 2002]
161
+ proposes “scalar” analyses of even and even if; “Even Neville
162
+ passed the exam” puts Neville low on an academic-ability
163
+ scale).
164
+ In Psychology, as in AI, semi-factual research has grown
165
+ out of counterfactual studies, specifically, from proposals on
166
+ counterfactual thinking in human cognition [Kahneman and
167
+ Tversky, 1982; Byrne, 2007; Handley and Feeney, 2007;
168
+ Epstude and Roese, 2008]. Byrne [2007] proposed a mental
169
+ models theory of semi-factuals that has been tested in
170
+ several psychological studies (see e.g., [McCloy and Byrne,
171
+ 2002; Parkinson and Byrne, 2017]). McCloy & Byrne’s [2002]
172
+ seminal work explicitly compared people’s reasoning using
173
+ matched scenarios for counterfactuals and semi-factuals,
174
+ akin to the case of Philip who has an allergic reaction to an
175
+ icecream sundae:
176
+ (2) If only Philip had not chosen the ice-cream sundae, he
177
+ wouldn’t have had an allergic reaction.
178
+ (Counterfactual)
179
+ (3) Even if Philip had chosen the banana split, he would
180
+ still have had an allergic reaction1. (Semi-factual)
181
+ McCloy & Byrne found that counterfactuals lead people to
182
+ judge the antecedent event (i.e., the choice of dessert) to be
183
+ more causally-related to the outcome, but semi-factuals had
184
+ the opposite effect, leading people to judge the antecedent
185
+ event to be less causally-related to the outcome. So, semi-
186
+ factuals weaken the causal link between the inputs and
187
+ outcome, convincing people that outcome would have
188
+ occurred anyway (people also differ in their emotional
189
+ reactions to these events). In another experiment, they also
190
+ found that counterfactuals lead people to focus on
191
+ alternative antecedents that undo the outcome (e.g., “If only
192
+ Philip had chosen the cheese cake they would not have had
193
+ a reaction”), whereas semi-factuals lead people to focus on
194
+ alternative antecedents that do not undo the outcome (e.g.,
195
+ “Even if Philip had chosen the baked-alaska he would still
196
+ have had a reaction”). Subsequent studies have tested other
197
+ psychological aspects of semi-factuals [Parkinson and Byrne,
198
+ 2017; Moreno-Rios et al., 2008; Espino et al., 2022].
199
+
200
+ Taken together these psychological findings show that
201
+ semi-factuals have very different psychological effects than
202
+ counterfactuals.
203
+ Unlike
204
+ counterfactuals,
205
+ semi-factuals
206
+ convince people of the status quo, they dissuade them from
207
+ questioning outcomes [Green, 2008], and weaken the causal
208
+ link between features and outcomes.
209
+ 3 Desiderata for Semi-Factuals
210
+ Several desiderata are suggested by these analyses of
211
+ semifactuals. These desiderata cover computational (i.e.,
212
+ “what needs to be computed”) and psychological
213
+ requirements (i.e., the response to be elicited in users) and
214
+ are defined as follows.
215
+ Assume (i) a query instance, Q, that has a vector, x, and an
216
+ outcome, y, that occurs when x holds and (ii) a semi-factual
217
+ instance, SF, that has a vector, x′, and an outcome, y′, that
218
+ occurs when x′ holds. SF will be a good explanation of Q if:
219
+ a) Q is factually the case and SF counters some of Q’s facts
220
+ but not Q’s outcome; so the vectors x and x′ differ,
221
+ diff(x, x′), with no outcome change, y = y′
222
+ b) Ideally, SF relies on sparse changes to a key-feature(s), f, of Q, with other features being equal (“equal” need not mean identical values; they may just be within some threshold difference); ideally, one feature change (i.e., diff(x, x′) = 1). A minimal coded reading of desiderata (a) and (b) is sketched after this list.
225
+ c) The
226
+ key-feature(s)
227
+ changed
228
+ should
229
+ be
230
+ plausible/mutable/actionable; that is, the SF produced
231
+ by the change should be within the data-manifold.
232
+ d) People should find the SF convincing even though it
233
+ may seem to be unexpected/surprising/counter-
234
+ intuitive; for instance, they may expect the key-feature
235
+ change to change the outcome, where y ̸= y′
236
+ e) If people accept SF, it will change their perception of
237
+ the causal role of the key-feature(s), f, in the domain.
238
+ So, their causal model of the domain will change (e.g.,
239
+ causes may be updated/deleted/refined).
240
+ f) For fairness and ethical reasons, the asserted differences between Q and SF should not be misleading. For instance, (i) the key-feature should not be a proxy variable, (ii) the change should not just address a small local region in the decision space, (iii) though the change may be unexpected it should not violate the domain’s causality, and (iv) the change assumes ceteris paribus (i.e., “other things being equal”), verifiably so (i.e., the unchanged outcome shown should not depend on subtle interactions with other variables).
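+ To make the computational desiderata (a) and (b) concrete, here is a minimal sketch (ours, not from the paper) of how such a check might be coded; the `model` is assumed to expose a scikit-learn-style `predict`, and the sparsity threshold `max_changes` is an illustrative parameter:
+ import numpy as np
+ 
+ def is_semifactual(model, q, sf, max_changes=1, tol=1e-9):
+     """Desiderata (a) and (b): sf counters some of q's facts (diff(x, x') >= 1),
+     keeps the same predicted outcome (y = y'), and is sparse (few changed features)."""
+     q, sf = np.asarray(q, float), np.asarray(sf, float)
+     n_changed = int(np.sum(np.abs(q - sf) > tol))
+     same_outcome = (model.predict(q.reshape(1, -1))[0] == model.predict(sf.reshape(1, -1))[0])
+     return n_changed >= 1 and same_outcome and n_changed <= max_changes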
251
+ These desiderata present a high bar for semi-factual
252
+ explanation methods; indeed, it is unclear whether any
253
+ current method meets all of them. Furthermore, some of
254
+ them may require further computational specification (e.g., how key-features are selected) and psychological specification in operational definitions for user studies (e.g., for the notions of plausibility, convincingness and surprise).
266
+ 4 Systematic Survey: Even if Explanations
267
+ A systematic search of the AI, Philosophy and Psychology
268
+ literatures on semi-factuals was conducted using a
269
+ bottom-up citation-search and top-down keyword-searches (see Table 1). Ten searches were carried out between October 12th, 2022 and December 19th, 2022, consisting of (i) a bottom-up search checking GoogleScholar citations to three key papers (i.e., [Cummins and Bridge, 2006; Nugent et al., 2009; Kenny and Keane, 2021]), and (ii) nine top-down searches using keywords in GoogleScholar (see Table 1). The papers found (N=1,150) were title-and-abstract screened to
277
+ check whether they were just citing semi-factuals or
278
+ substantially researching them as a topic. Subsequent
279
+ selections then identified the core papers of relevance (see
280
+ here for PRISMA diagram).
281
+ Table 1: Ten searches used in the systematic survey of
282
+ GoogleScholar (12-10-2022 to 19-12-2022) with the number
283
+ of papers found and unique papers reviewed further (n.b.,
284
+ “sf”, “ai” and “xp” are short for “semi-factual”, “artificial
285
+ intelligence” and “explanation”,respectively).
286
+
287
+ 4.1 Survey Results
288
+ Of the 1,150 original results checked, 92 potentially-relevant
289
+ papers were selected to be read in depth from which 62 core
290
+ papers were identified (41 cited here; note, 145 duplicates
291
+ Search Terms
292
+ #
293
+ Papers
294
+ Found
295
+ Unique
296
+ Papers
297
+ *no search terms*
298
+ (citation search of key papers)
299
+ 1
300
+ 108
301
+ 17
302
+ “sf”, “nearest-neighbor”
303
+ 2
304
+ 20
305
+ 3
306
+ “sf”, “ai”
307
+ 3
308
+ 95
309
+ 12
310
+ “sf”, “ai”, “xp”
311
+ 4
312
+ 86
313
+ 12
314
+ “sf”, “xai”
315
+ 5
316
+ 44
317
+ 0
318
+ “ai”, “xp”, (“near-hit” OR
319
+ “nearest-hit”)
320
+ 6
321
+ 230
322
+ 20
323
+ “ai”, “xp”,“nearest-
324
+ likeneighbors”
325
+ 7
326
+ 12
327
+ 0
328
+ “sf”, “xp”, “philosophy”
329
+ 8
330
+ 203
331
+ 11
332
+ “sf”, “xp”, “psychology”
333
+ 9
334
+ 228
335
+ 3
336
+ “xp”, “even if conditionals”,
337
+ “linguistic”, “philosophy”
338
+ 10
339
+ 124
340
+ 14
341
+ Totals
342
+
343
+ 1,150
344
+ 92
345
+
346
+ were removed). As we shall see in the next section on history
347
+ (section 5), from a low base semi-factual research in AI has
348
+ expanded considerably in the last two years. Note, many
349
+ semi-factual papers in Philosophy, Psychology and
350
+ Linguistics were checked but few are specifically relevant to
351
+ explanation (e.g., in Philosophy the focus tends to be on the
352
+ truth conditions of counterfactual statements and the
353
+ linguistic functions of “even” and “still”). Finally, it should
354
+ also be said that many excluded papers were from closely-
355
+ related areas that do not cover semi-factuals per se, but
356
+ which could provide insights for future work; areas that
357
+ include research on (i) case difference learning (e.g.,
358
+ [Hanney and Keane, 1996; Ye et al., 2021]), (ii) feature
359
+ selection using near misses (e.g., [Kira et al., 1992;
360
+ Herchenbach et al., 2022]), (iii) counterfactual explanation
361
+ (e.g., [Keane et al., 2021; Verma et al., 2022]), (iv) flip-points
362
+ in learning (e.g., [Yousefzadeh and O’Leary, 2019]) and (v)
363
+ dynamic critiquing in recommenders (e.g., [Reilly et al.,
364
+ 2004]). These papers are recorded in a publically-available
365
+ annotated biblography (see https://github.com/itsaugat/sf
366
+ survey).
367
+ 5 A Brief History of Semi-Factual XAI
368
+ Though absent in AI, there are long-standing literatures on semi-factuals in Philosophy and Psychology [Bennett, 2003; Byrne, 2007]. Much of the initial work emerged from Case-Based Reasoning (CBR) research on post-hoc, example-based explanations [Sørmo et al., 2005; Keane and Kenny, 2019]. In this AI research, semi-factual explanations have been variously cast as a fortiori arguments [Nugent et al., 2005; Nugent et al., 2009] and precedent-based explanations [Cummins and Bridge, 2006; Bridge and Cummins, 2005]. More recently, Kenny & Keane [2021] re-connected this work to the Cognitive Science literatures by calling them “semi-factuals”. Arguably, there are four distinct phases in the development of semi-factual explanation ideas in AI: (i) initial utility-based proposals, (ii) proximity-based methods, (iii) local-region methods and (iv) the more recent “modern-era” of counterfactually-inspired proposals. In the following subsections, we describe each in turn and the intuitions behind them. We end this section by defining a new benchmark method based on the notion of Most Distant Neighbors (MDNs).
393
+
394
+ 5.1 Semi-Factuals Based on Feature-Utility
395
+ Doyle et al. [2004] appears as the first AI paper in our
396
+ searches to propose using semi-factuals to explain
397
+ automated decisions, under the rubric of a fortiori reasoning. An a fortiori argument is defined as one that uses a stronger version of an already-convincing proposition (i.e., “EU countries cannot afford standing armies, sure even the US can hardly afford its standing army”). Doyle et al. [2004] noted that nearest-neighbor, example-based explanations can often be less convincing than neighbors that have more extreme feature-values within the same class. For example, if patient-x with a moderate temperature is judged to be dischargeable then a semi-factual past case, patient-y with a
407
+ much higher temperature who was discharged is more
408
+ convincing than pointing to another patient with the same
409
+ moderate temperature being discharged [Doyle et al., 2006].
410
+ So, this semi-factual method computes a set of nearest
411
+ neighbours as explanatory cases and then re-ranks them
412
+ using utility functions on selected features to find a more convincing a fortiori case, as follows:
+     Utility(q, x, c) = Σ_{f∈F} w_f · ξ(q_f, x_f, c)                                        (1)
+     SF_Utility(q, x, c) = argmax_x Utility(q, x, c)                                        (2)
+ where q is the query, x is an instance, c is a class label, w_f is a per-feature weight, and ξ() measures the contribution to explanation utility of the
416
+ feature f. The ξ() function uses relative-differences in
417
+ feature-values to assign utilities. For example, for the
418
+ temperature feature, the measure might assign higher utility
419
+ to a 10◦C difference than to a 5◦C difference between a
420
+ query and semi-factual case. This method priorities
421
+ explanatory instances with more convincing feature-values,
422
+ and
423
+ may
424
+ compute
425
+ these
426
+ over
427
+ multiple
428
+ features.
429
+ Furthermore, these utilities are seen as being class-specific
430
+ and, even, user-specific, depending on what a given user
431
+ may find convincing. Furthermore, these utility values often
432
+ decrease as instances approach the decision boundary,
433
+ rather than just being linearly increasing functions.
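+ As a rough sketch of this re-ranking step (ours, not the original implementation; the per-feature utility functions `xi` and weights `w` are assumed to be hand-coded per class, as the text stresses):
+ def utility(q, x, c, w, xi):
+     # Equation (1): weighted sum of per-feature utility contributions for class c
+     return sum(w[f] * xi[f](q[f], x[f], c) for f in w)
+ 
+ def sf_utility(q, c, neighbours, w, xi):
+     # Equation (2): re-rank retrieved same-class neighbours by explanation utility
+     return max(neighbours, key=lambda x: utility(q, x, c, w, xi))
+ 
+ # toy usage: a larger (positive) temperature difference is judged more convincing
+ xi = {"temp": lambda qv, xv, c: max(xv - qv, 0.0)}
+ w = {"temp": 1.0}
+ print(sf_utility({"temp": 37.8}, "discharge",
+                  [{"temp": 38.1}, {"temp": 39.5}], w, xi))   # -> {'temp': 39.5}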
434
+ However, this method was knowledge-intensive, the
435
+ utility values for each feature had to be hand-coded for each
436
+ class (and, presumably, for each end-user). Indeed, in one of
437
+ their user tests, the utility measures had to be re-defined
438
+ half-way through the study to better reflect end-users’
439
+ assessments [Doyle et al., 2006]. This is a major drawback
440
+ for the technique, as it begs the critical question about what
441
+ featuredifferences will actually be more convincing.
442
+ Accordingly, this utility method is not a plausible benchmark,
443
+ though we do use their intuition about feature-differences
444
+ to define a new, useful benchmark method (see section 5.4).
445
+ 5.2 NUN-Related Semi-Factuals
446
+ Cummins & Bridge’s [2006] “Knowledge-Light based
447
+ Explanation-Oriented
448
+ Retrieval”
449
+ (KLEOR)
450
+ approach
451
+ proposed three methods based on similarity to Nearest
452
+ Unlike Neighbors (NUNs). These KLEOR variants use the NUN
453
+ to find the best semi-factual for a given query (n.b., they
454
+ called the NUN, a Nearest Miss). In modern parlance, the
455
+ NUN is the closest counterfactual in the dataset to the query
456
+ (see [Keane and Smyth, 2020]).
457
+ The first variant, Sim-Miss, selects an instance to be the
458
+ semi-factual which is most similar to the NUN but in the
459
+ same class as the query q:
460
+
461
+ Utility(q, ,c) = wfs (qf,&f,c)
462
+ (1)
463
+ fEF
464
+ SFUtility (q, , c) = argmax Utility(q, &, c)
465
+ (2)SFsim-Miss (q, nun, G) = arg max Sim(r, nun)
466
+ (3)where q is the query, x is the instance, G represents the set
467
+ of all instances in the same class as the query, and nun is the
468
+ Nearest Unlike Neighbor, with Sim being Euclidean Distance
469
+ or Cosine Similarity. This variant is the most naieve as it
470
+ assumes an simple decision boundary. The second variant,
471
+ Global-Sim method, is more sophisticated in that it requires
472
+ the semi-factual be closer to q than to the nun (to avoid SFs
473
+ far from the query but close to the NUN):
474
+ using the global similarity between instances. Finally, the
475
+ third variant, Attr-Sim, computes more fine-grained
476
+ similarities for each feature-attribute, ensuring that the
477
+ semi-factual lies between the q and nun across the majority
478
+ of features:
479
+
480
+ where F is the feature-dimension set and a is a
481
+ featureattribute. These methods rely on the interesting
482
+ intuition that a known counterfactual can guide finding a
483
+ good semi-factual explanation. Furthermore, Cummins &
484
+ Bridge also showed, using computational and user
485
+ evaluations, that SFSim-Miss and SFAttr-Sim can do as well as
486
+ SFUtility, without the knowledge engineering overheads of
487
+ the latter, albeit on a single dataset. Accordingly, this
488
+ method is used in the present benchmarking study (see
489
+ section 6).
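+ A minimal sketch of the three KLEOR variants (ours, not the original code), assuming numeric feature vectors and a Euclidean-distance-based Sim:
+ import numpy as np
+ 
+ def sim(a, b):
+     # higher is more similar (negative Euclidean distance)
+     return -np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))
+ 
+ def sim_miss(q, nun, same_class):
+     # Equation (3): same-class instance most similar to the NUN
+     return max(same_class, key=lambda x: sim(x, nun))
+ 
+ def global_sim(q, nun, same_class):
+     # Equation (4): as above, but the candidate must stay closer to q than the NUN is
+     cands = [x for x in same_class if sim(q, x) > sim(q, nun)]
+     return max(cands, key=lambda x: sim(x, nun)) if cands else None
+ 
+ def attr_sim(q, nun, same_class):
+     # Equation (5): also count the attributes on which the candidate is closer to q than the NUN
+     def score(x):
+         between = sum(abs(qa - xa) < abs(qa - na) for qa, xa, na in zip(q, x, nun))
+         return sim(x, nun) + between
+     return max(same_class, key=score)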
490
+ 5.3 Semi-Factuals Near Local-Region Boundaries
491
+ Nugent et al. [2009] proposed another a fortiori method, by
492
+ finding marginal instances in the local region around the
493
+ query. Here, a surrogate model, specifically, logistic
494
+ regression was used to capture the local neighborhood
495
+ around the query, built using perturbations of it (akin to
496
+ LIME [Ribeiro et al., 2016]) . Then, candidate nearest
497
+ neighbors are tested using this local model to give a
498
+ probability, with the marginalprobability instance closest to
499
+ the decision boundary, being chosen as the semi-factual
500
+ explanation, as follows:
501
+ where, C is the set of candidate neighbors and LR() is the
502
+ local logistic regression model providing the probability
503
+ score.
504
+ The intuition here is that good semi-factuals should be
505
+ close to the query’s local decision boundary, while being as
506
+ far as possible from it in this local space (see Fig. 1). So, a
507
+ convincing semi-factual explanation should be locally close
508
+ to the query but as distant from it as possible within this local
509
+ region. Unfortunately, Nugent et al. [2009] did not evaluate
510
+ this method beyond providing indicative outputs, that seem
511
+ to be informative semi-factuals. Accordingly, it is also used
512
+ in the present benchmarking study (see section 6).
513
+
514
+ 5.4 A New Benchmark: Most Distant Neighbors
515
+ Analogies between counterfactual XAI and semi-factuals
516
+ suggest another na¨ıve benchmark that has not been
517
+ proposed before in the literature. Early counterfactual
518
+ methods often used Nearest Unlike Neighbors (NUNs), the
519
+ nearest classdifferent instance in the dataset to the query,
520
+ as counterfactual explanations [Cunningham et al., 2003;
521
+ Wexler et al., 2019]. NUNs are reasonable first-pass at
522
+ counterfactuals that are guaranteed to be within-domain
523
+ (though they have other weaknesses). An analogous
524
+ solution for semi-factual explanations relies on the notion of
525
+ Most Distant Neighbors (MDNs); namely, the most distant
526
+ same-class instance in the dataset to the query on some key-
527
+ feature. MDNs should be good semi-factuals because they
528
+ reflect many of the desiderata and are, by definition, within
529
+ domain.
530
+ To compute MDNs, for a given feature of q, its neighbours
531
+ on the dimension are partitioned into instance-sets that
532
+ have higher values (i.e., HighSet) or lower values (i.e.,
533
+ LowSet) than the query. Each of these sets are ranked-
534
+ ordered separately using the “Semi-Factual Score” (sfs)
535
+ function, a distance messure that prioritises instances that
536
+ are sparse (few feature differences) while also having the
537
+ highest valuedifferences on a key-feature, as follows:
538
+
539
+
540
+ where S is HigherSet or LowerSet and x ∈ S, same() counts
541
+ the features that are equal between q and x, F is the total
542
+ number of features, diff() gives the difference-value of
543
+ keyfeature, f, and diffmax() is the maximum difference-value
544
+ for that key-feature in the HighSet/LowSet. Basically, the
545
+ instance with the highest overall sfs value from the
546
+ HighSet/LowSet is the best candidate for that feature. This
547
+ computation is done for each feature of q, independently,
548
+
549
+ with the best of the best instances (i.e., with the highest sfs value across all features) being chosen as the overall semi-factual for the query (see Algorithm 1):
+ Algorithm 1: MDN Semi-factual
+     Input: query q.   Output: semi-factual SF(q).
+     1:  Initialize I = ∅, F = ∅
+     2:  for each feature f = f1, f2, ..., fn do
+     3:      S := HighSet (or LowSet), the same-class instances whose value on f is higher (or lower) than q_f
+     4:      for x ∈ S do
+     5:          I ← I ∪ {sfs(x)}            (Equation 7)
+     6:      end for
+     7:      F ← F ∪ {max(I)}
+     8:  end for
+     9:  SF(q) ← max(F)
+     10: return SF(q)
585
+ The intuition behind MDNs is that if one can find an instance
+ that has some features in common with the query but is as
+ far from it as possible on a key-feature, then it will make a good
+ semi-factual (see Desiderata). This new method was also
+ added to the benchmarking study to compare it to the historical
590
+ methods.
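+ As a rough illustration of the sfs score (Eq. 7) and Algorithm 1 (a sketch under our reading of the text, not the released implementation; the per-feature tolerance argument is an assumption standing in for the sameness test described in section 6.1):
+
+ import numpy as np
+
+ def sfs(q, x, f, s_set, tol):
+     # same(): count of features on which q and x are (approximately) equal,
+     # normalised by the total number of features F.
+     same = np.sum(np.abs(q - x) <= tol) / q.shape[0]
+     # diff(): difference on the key-feature f, normalised by the maximum
+     # such difference over the HighSet/LowSet S (diffmax).
+     diff_max = np.max(np.abs(s_set[:, f] - q[f]))
+     diff = np.abs(x[f] - q[f]) / diff_max if diff_max > 0 else 0.0
+     return same + diff
+
+ def mdn_semifactual(q, same_class_instances, tol):
+     # same_class_instances should exclude the query itself (leave-one-out).
+     best_score, best_instance = -np.inf, None
+     for f in range(q.shape[0]):
+         # Partition the candidates into HighSet / LowSet on feature f.
+         high = same_class_instances[same_class_instances[:, f] > q[f]]
+         low = same_class_instances[same_class_instances[:, f] <= q[f]]
+         for s_set in (high, low):
+             if len(s_set) == 0:
+                 continue
+             scores = [sfs(q, x, f, s_set, tol) for x in s_set]
+             i = int(np.argmax(scores))
+             if scores[i] > best_score:
+                 best_score, best_instance = scores[i], s_set[i]
+     return best_instance
+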
591
+ 5.5 The Modern Era: Post-2020 Methods
592
+ Kenny & Keane [2021] instigated, what could be called, the
593
+ modern-era of semi-factual AI research when they proposed
594
+ a GAN-based counterfactual method for images, called
595
+ PIECE, that also computed semi-factuals. PIECE finds
596
+ “exceptional” and “normal” features for a given class and
597
+ then modifies the query’s “exceptional” features to create
598
+ instances that have the “normal” features of the
599
+ counterfactual
600
+ class,
601
+ using
602
+ the
603
+ GAN
604
+ to
605
+ generate
606
+ visualisations. As successive exceptionalfeatures are
607
+ changed the generated instances move away from the query
608
+ towards the counterfactual class, with the instance
609
+ generated just before the decision boundary being identified
610
+ as the semi-factual. Kenny & Keane showed that these
611
+ generated semi-factuals were more distant from the query
612
+ than those produced by other perturbation techniques (see
613
+ their Expt.2). In one sense, this solution re-imagines the
614
+ CumminsBridge intuition that good semi-factuals can be
615
+ found somewhere between the query and a counterfactual,
616
+ close to the decision boundary.
617
+ PIECE kicked off a renewed interest in semi-factual XAI as
618
+ researchers have looked to improve on it and to apply semi-
619
+ factuals in different application contexts. So, Zhao et al.
620
+ [2022] have proposed a class-to-class variational encoder
621
+ (C2C-VAR) which is less computationally expensive than
622
+ PIECE that can generate semi-factuals (and counterfactuals).
623
+ Vats et al. [2022] have used StyleGAN2 [Karras et al., 2020]
624
+ to find semi-factual explanations for classifications of
625
+ medical images of ulcers. While these works try to explain
626
+ model capabilities, others have proposed using semifactuals
627
+ to explain model limits. Artelt & Hammer [2022] use semi-
628
+ factuals to explain the “reject option”; that is, the option
629
+ where an AI system rejects inputs because “a prediction with
630
+ an unacceptable lower certainty” can only be made. Their
631
+ perturbation-based optimisation method uses a loss
632
+ function that promotes diverse semi-factuals that are (i) in
633
+ the same class as the query (they are also rejected), (ii)
634
+ sparse (they aim for 1-feature-difference), (iii) “sufficiently
635
+ distant” from the query, and (iv) of higher certainty than the
636
+ query (to make them more convincing). Notably, here, the
637
+ key-feature being varied is the certainty of the instance’s
638
+ prediction. In a similar vein, Lu et al. [2022] argue that semi-
639
+ factuals may be used to explain spurious patterns using
640
+ human-in-the-loop ML. Finally, Mertes et al. [2022] propose
641
+ what appears to be a wholly new type of counterfactual,
642
+ called “alterfactuals”, to explore the“irrelevant feature”
643
+ space of the model; they describe these as semi-factuals that
644
+ “move parallel to the decision boundary, indicating which
645
+ features would not modify the model’s decision”. Other
646
+ proposals have also been made that suffer from a poor
647
+ knowledge of the literature (see e.g., [Fernandez et al., 2022;
648
+ Herchenbach et al., 2022]).
649
+ Finally, from the user perspective, Mueller et al. [2021]
650
+ include a semi-factual module in their cognitive tutorial for
651
+ training users about “cognitively-challenging aspects of an AI
652
+ system” and [Salimi, 2022] reports user-tests for
653
+ trustworthiness after using semi-factuals.
654
+ These recent papers reflect a rapidly-expanding interest in
655
+ semi-factual XAI. In time, these modern-era methods will
656
+ need to be comparatively evaluated relative to the
657
+ benchmarks and metrics proposed here, to determine which
658
+ fare best in explaining predictions to end-users.
659
+ 6 Benchmarking Study
660
+ To provide a firm empirical basis for future work on
661
+ semifactual XAI, we ran a benchmark study of five methods,
662
+ the four historical methods [i.e., the three KLEOR methods
663
+ (SimMiss, Global-Sim, Attr-sim) and the Local-Region one)
664
+ and the newly-proposed MDN method. Standard evaluation
665
+ metrics from prior XAI work were used to compare these
666
+ methods, using the five measures detailed below.
667
+ Query-to-SF Distance: The L2-norm from the Query to the
668
+ SF, where higher scores are better, as the semi-factual
669
+ should be far from the query.
670
+ Query-to-SF kNN (%): This is a measure of the percentage
671
+ of instances (within the whole dataset) in the k-NN set
672
+ surrounding the Query that occur before the SF is included
673
+ (i.e., as k is successively increased up to the appearance of
674
+ the SF); it is an alternative measure for how far the SF is from
675
+ the Query in the dataset, so higher values are better.
676
+ SF-to-Query-Class Distance: A within-distribution measure
677
+ for the closeness of the SF to the distribution of the Query-
678
+ Class using Mahalanobis distance, where lower values
679
+ indicate that the SF is closer to the query-class distribution.
680
+ MDN Distance: The sfs function, a semi-factual-oriented
681
+ distance for comparing Queries and a candidate-SFs, can
682
+ also be used to determine how far the SFs selected by
683
+ historical methods are from the Query; this metric allows us
684
+ to assess whether historical methods find “better” MDNs
685
+ than the MDN-method itself, where higher sfs values
686
+ indicate the SF is a better MDN for the Query
687
+ Sparsity (%): The L0-norm counting the number of feature-
688
+ differences between the Query and SF, divided into three
689
+ levels (i.e., 1-diff, 2-diff and >3-diff) where the percent of SFs
690
+ selected by the method at each level is recorded; obviously,
691
+ methods with higher percentages at lower difference levels
692
+ are better (ideally, high-percentages at the 1-diff level).
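+ For concreteness, two of these measures might be computed as follows (a hedged sketch; the kNN-percentage and Mahalanobis measures would follow the same pattern):
+
+ import numpy as np
+
+ def query_to_sf_distance(q, sf):
+     # L2-norm between Query and SF; higher is better for semi-factuals.
+     return float(np.linalg.norm(q - sf))
+
+ def sparsity_level(q, sf, tol=1e-9):
+     # L0-norm: number of differing features, bucketed into the levels
+     # reported in the study (1-diff, 2-diff, >3-diff).
+     n_diff = int(np.sum(np.abs(q - sf) > tol))
+     return "1-diff" if n_diff == 1 else ("2-diff" if n_diff == 2 else ">3-diff")
+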
693
+
694
+ SF_MDN(q, S) = argmax_{x ∈ S} sfs(x)   (8)
+
+ 6.1 Method
697
+ We performed leave-one-out cross-validation for each of the
698
+ five methods on seven datasets to find a semi-factual for
699
+ every instance in the dataset, treating each as a query. We
700
+ used a 3-NN model to implement the KLEOR variants. For the
+ Local Region method, we considered a minimum of 200
702
+ instances from each class to build the local model for a
703
+ query. In the MDN method, a “20% of the standard
704
+ deviation” threshold was used to determine whether values
705
+ for a given feature were essentially “the same”. The seven
706
+ datasets were benchmark, publicly-available, tabular
707
+ datasets commonly used in the counterfactual literature,
708
+ which were binary-classed: AdultIncome (N=26,540, 12
709
+ features), Blood Alcohol (N=2,000, 5 features), Default
710
+ Credit Card (N=30,000, 23 features), Pima Diabetes (N=392,
711
+ 8 features), German Credit (N=1,000, 20 features), HELOC
712
+ (8,291 instances, 20 features), Lending Club (N=39,239, 8
713
+ features). All the experiments were carried out in Python 3.9
714
+ on an Ubuntu 16.04 machine with a 40-core Intel Xeon(R)
+ processor, with an approximate run-time of 40 hours. All
+ programs, data and results are available at
+ https://github.com/itsaugat/sf survey.
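+ Schematically, the protocol can be read as the following loop (illustrative only; data loading and the individual semi-factual methods are assumed to exist elsewhere):
+
+ import numpy as np
+
+ def evaluate_method(X, y, find_semifactual):
+     # Leave-one-out: treat every instance as a query and search for its
+     # semi-factual among the remaining same-class instances.
+     tol = 0.2 * X.std(axis=0)  # the "20% of the standard deviation" sameness threshold
+     distances = []
+     for i in range(len(X)):
+         q, mask = X[i], np.arange(len(X)) != i
+         same_class = X[mask][y[mask] == y[i]]
+         sf = find_semifactual(q, same_class, tol)
+         distances.append(np.linalg.norm(q - sf))
+     return float(np.mean(distances))  # e.g., mean Query-to-SF distance
+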
724
+
725
+ 6.2 Results & Discussion
726
+ Figure 2 summarises the overall results for the five methods
727
+ (as mean ranks over datasets) on the five benchmark
728
+ measures (Figures 3 and 4 show results by-dataset). The
729
+ summary shows that MDN does best on three of the five
730
+ measures (i.e., Query-to-SF Distance, Query-to-SF kNN,
731
+ MDN Distance), with the Local Region method being a close
732
+ second; performance on the two other metrics (SF-to-Query-Class
+ Distance, Sparsity) requires further interpretation.
739
+ On the Query-to-SF Distance metric (Figure 3a) it can be
740
+ seen that MDN produces the highest Query-to-SF distances
741
+ for 4 of the 7 datasets, showing that it tends to find the
742
+ furthest SF-instances from the query. On the Query-SF kNN
743
+ metric (Figure 3b) MDN again scores the highest in 3 of 7
744
+ datasets with overall percentages that stand out; so, MDN
745
+ finds SFs separated from the Query by many instances. On
746
+ the SF-to-Q-Class Distance measure (Figure 3c) MDN scores
747
+ less well (overall it is ranked 4th); though all these SFs are
748
+ by-definition within distribution (as valid datapoints), MDN
749
+ probably scores lower as it is finding more instances at the
750
+ edges of the distribution. On the MDN-Distance metric
751
+ (Figure 3d) the four historical methods mainly produce lower
752
+ scores across datasets (except for the HELOC dataset)
753
+ showing that the MDN method is finding the furthest SFs
754
+ from the Query in the dataset.
755
+ The one wrinkle in MDN’s performance is on the sparsity
756
+ measure. As a rough reckoning, in Figure 4, the higher the
757
+ blue-portion of the bars [i.e., the % of 1-diff SFs] for a given
758
+ method-dataset pair, the better the performance. In Figure
759
+ 4, we can see that MDN does the worst of all the methods in
760
+ three datasets where 100% of its SFs have >3-
761
+ feature-differences (though in three others it fares better).
762
+ This performance could probably be improved by fine-tuning
763
+ the sfs function [see formula (7)]. Recall, that this function
764
+ has two equally-weighted components, that compute (i)
765
+ same-features and (ii) relative-differences in the key-feature.
766
+ If a higher weight was given to the same-features
767
+ component, then the method should select sparser SFs
768
+ (perhaps also aided by a scoring threshold). For the present
769
+ work, we felt it was better to provide a vanilla sfs function to
770
+ get a clear sense of how a baseline-MDN method might
771
+ work.
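+ For instance, a re-weighted variant of the score (purely hypothetical, with w a new tuning parameter) might look like:
+
+ import numpy as np
+
+ def weighted_sfs(q, x, f, s_set, tol, w=0.7):
+     # w > 0.5 favours the same-features (sparsity) component over the
+     # key-feature distance component of the vanilla sfs score.
+     same = np.sum(np.abs(q - x) <= tol) / q.shape[0]
+     diff_max = np.max(np.abs(s_set[:, f] - q[f]))
+     diff = np.abs(x[f] - q[f]) / diff_max if diff_max > 0 else 0.0
+     return w * same + (1.0 - w) * diff
+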
772
+ Overall, it seems that the MDN and
+ the Local Region methods provide the best candidates for
+ semi-factual baselines. The Local Region method provides
+ reasonable, solid results with decent sparsity, whereas the
+ MDN method shows how far an SF can be from the Query
+ in the dataset (a type of upper limit to beat).
778
+
779
+
780
+ Figure 2: Mean Ranks of Success of the Five Benchmark
+ Methods on Five Different Measures, for the Tested
+ Datasets.
+
+ 7 Conclusion
798
+ In recent years, counterfactual explanations have been
799
+ heavily researched as a significant explanation strategy in
800
+ XAI. Yet, very little attention has been given to an, arguably,
801
+ equally useful method that relies on semi-factuals (where
802
+ changes to input features do not lead to output changes). In
803
+ this paper, from a systematic survey, we aim to remedy this
804
+ deficit and place this topic area on a firm footing with
805
+ defined desiderata, benchmarked methods and suitable
806
+ metrics. In conclusion, several limitations and caveats are to
807
+ be noted.
808
+ With respect to limitations, it is to be noted that in the
809
+ current benchmark study we have concentrated on tabular
810
+ data, largely to respect the focus of historical methods.
811
+ However, the desiderata and evaluation metrics should
812
+ equally apply to image datasets (and possibly time-series
813
+ data), albeit relying more on latent features (as has been
814
+ demonstrated in [Kenny and Keane, 2021]). The paucity of
815
+ user studies is another severe limitation; until some
816
+ carefully-controlled studies are carried out, we do not really
817
+ know how users will respond to these explanations in the AI
818
+ context.
819
+ With respect to caveats, we believe that it is important to
820
+ reiterate the ethical point about the use of semi-factuals (a
821
+ point that also applies to counterfactuals [Asher et al.,
822
+ 2022]). These explanatory methods have significant
823
+ cognitive impacts on people’s understanding of AI systems
824
+ and domains; they convince and dissuade people. But they
825
+ could be misused if certain assumptions are violated (e.g., if
826
+ the SF is not representative of the data). So, future
827
+ implementations of these methods will need to provide
828
+ metrics to audit these assumptions, to ensure they are being
829
+ properly and fairly applied in advice to end-users.
830
+
831
+ Figure 4: Sparsity Results Showing Percentages of 1-diff, 2-
+ diff and >3-diff SFs for each Method across Different
+ Datasets.
+
+ Figure 3: Benchmark Results: Performance of Five Semi-Factual Methods on Seven Tabular Datasets for Four Key Evaluation
+ Measures, the (a) Query-to-SF Distance, (b) Query-to-SF kNN (%), (c) SF-to-Q-Class Distance, (d) MDN Distance Measures.
+
+ Annotated Bibliography
952
+
953
+ * means cited in original shorter paper
954
+ SF_AI means core papers related to SFs in AI
955
+ SF_PSY means articles related to SFs in Psychology
956
+ SF_PHL means papers related to SFs in Philosophy
957
+ CF means related to Counterfactual XAI
958
+ SURV means survey/review article related to XAI
959
+ REL means areas closely related to SF
960
+
961
+
962
+ *SURV [Adadi and Berrada, 2018] Amina Adadi and
963
+ Mohammed Berrada. Peeking inside the black-box: A
964
+ survey on explainable artificial intelligence (xai). IEEE
965
+ Access, 6:52138–52160, 2018.
966
+ SF_AI [Armengol and Plaza, 2006] Eva Armengol and Enric
967
+ Plaza. Symbolic explanation of similarities in case-based
968
+ reasoning. Computing and informatics, 25(2-3):153–171,
969
+ 2006.
970
+ *SF_AI [Artelt and Hammer, 2022] Andre´ Artelt and
971
+ Barbara Hammer. ” even if...”–diverse semifactual
972
+ explanations of reject. arXiv preprint arXiv:2207.01898,
973
+ 2022.
974
+ *CF [Asher et al., 2022] Nicholas Asher, Lucas De Lara,
975
+ Soumya Paul, and Chris Russell. Counterfactual models
976
+ for fair and adequate explanations. Machine Learning and
977
+ Knowledge Extraction, 4(2):316–349, 2022.
978
+ *SF_PHL [Barker, 1991] Stephen Barker. ” even, still” and
979
+ counterfactuals. Linguistics and Philosophy, pages 1–38,
980
+ 1991.
981
+ SF_PHL [Barker, 1994] Stephen J Barker. The consequent-
+ entailment problem for even if. Linguistics and
+ Philosophy, 17(3):249–260, 1994.
990
+ *SF_PHL [Bennett, 1982] Jonathan Bennett. Even if.
991
+ Linguistics and Philosophy, 5(3):403–418, 1982.
992
+ *SF_PHL [Bennett, 2003] Jonathan Bennett. A philosophical
993
+ guide to conditionals. Clarendon Press, 2003.
994
+ *REL [Birhane et al., 2022] Abeba Birhane, Vinay Uday
995
+ Prabhu, and John Whaley. Auditing saliency cropping
996
+ algorithms. In Proceedings of the IEEE/CVF Winter
997
+ Conference on Applications of Computer Vision, pages
998
+ 4051–4059, 2022.
999
+ REL [Bolon-Canedo and Remeseiro, 2020] Veronica
1000
+ BolonCanedo and Beatriz Remeseiro. Feature selection in
1001
+ image analysis: a survey. Artificial Intelligence Review,
1002
+ 53(4):2905–2931, 2020.
1003
+ REL [Booth et al., 2021] Serena Booth, Yilun Zhou, Ankit
1004
+ Shah, and Julie Shah. Bayes-trex: a bayesian sampling
1005
+ approach to model transparency by example. In
1006
+ Proceedings of the AAAI Conference on Artificial
1007
+ Intelligence, volume 35, pages 11423–11432, 2021.
1008
+ SF_PHL [Booth, 2014] Charles Booth. Boundary work in
1009
+ theory and practice: Past, present and future. PhD thesis,
1010
+ University of the West of England, 2014.
1011
+ SF_PSY [Branscombe et al., 1996] Nyla R Branscombe,
1012
+ Susan Owen, Teri A Garstka, and Jason Coleman. Rape
1013
+ and accident counterfactuals: Who might have done
1014
+ otherwise and would it have changed the outcome? 1.
1015
+ Journal of Applied Social Psychology, 26(12):1042–
1016
+ 1067, 1996.
1017
+ SF_PSY [Branscombe et al., 1997] Nyla R Branscombe,
1018
+ Ahogni N’gbala, Diane Kobrynowicz, and Daniel L
1019
+ Wann. Self and group protection concerns influence
+ attributions but they are not determinants of
+ counterfactual mutation focus. British Journal of Social
1028
+ Psychology, 36(4):387–404, 1997.
1029
+ *SF_AI [Bridge and Cummins, 2005] Derek G Bridge and
1030
+ Lisa Cummins. Knowledge lite explanation oriented
1031
+ retrieval. In ExaCt, pages 35–42, 2005.
1032
+ SF_PHL [Butcher, 1983] David Butcher. An incompatible
+ pair of subjunctive conditional modal axioms.
+ Philosophical Studies: An International Journal for
1040
+ Philosophy in the Analytic Tradition, 44(1):71–110,
1041
+ 1983.
1042
+ SF_PSY [Byrne, ] Ruth MJ Byrne. Counterfactuals, causes
1043
+ and exceptions.
1044
+ SF_PSY [Byrne, 2007a] Ruth MJ Byrne. Precis of the
1045
+ rational imagination: How people create alternatives to
1046
+ reality. Behavioral and Brain Sciences, 30(5-6):439– 453,
1047
+ 2007.
1048
+ *SF_PSY [Byrne, 2007b] Ruth MJ Byrne. The rational
1049
+ imagination: How people create alternatives to reality.
1050
+ MIT press, 2007.
1051
+ *CF [Byrne, 2019] Ruth MJ Byrne. Counterfactuals in
1052
+ explainable artificial intelligence (xai): evidence from
1053
+ human reasoning. In Proceedings of the Twenty-Eighth
1054
+ International Joint Conference on Artificial Intelligence,
1055
+ IJCAI- 19, pages 6276–6282, 2019.
1056
+ REL [Carvalho, 2022] Maria Manuel Domingos Carvalho.
1057
+ Towards biometrically-morphed medical case-based
1058
+ explanations. 2022.
1059
+ *SF_PHL [Chisholm, 1946] Roderick M Chisholm. The
1060
+ contrary-to-fact conditional. Mind, 55(220):289–307,
1061
+ 1946.
1062
+ CF [Cho and Shin, 2023] Soo Hyun Cho and Kyung-shik
+ Shin. Feature-weighted counterfactual-based explanation
+ for bankruptcy prediction. Expert Systems with
+ Applications, 216:119390, 2023.
1071
+ REL [Craw et al., 2006] Susan Craw, Nirmalie Wiratunga,
1072
+ and Ray C Rowe. Learning adaptation knowledge to
1073
+ improve case-based reasoning. Artificial intelligence,
1074
+ 170(16- 17):1175–1192, 2006.
1075
+ *SF_AI [Cummins and Bridge, 2006] Lisa Cummins and
+ Derek Bridge. Kleor: A knowledge lite approach to
+ explanation oriented retrieval. Computing and
+ Informatics, 25(2-3):173–193, 2006.
1083
+ *SF_AI [Cunningham et al., 2003] Pa´draig Cunningham,
1084
+ Do´nal Doyle, and John Loughrey. An evaluation of the
1085
+ usefulness of case-based explanation. In International
1086
+ conference on case-based reasoning, pages 122–130.
1087
+ Springer, 2003.
1088
+ CF [Dandl et al., 2020] Susanne Dandl, Christoph Molnar,
1089
+ Martin Binder, and Bernd Bischl. Multi-objective
1090
+ counterfactual explanations. In International Conference
1091
+ on Parallel Problem Solving from Nature, pages 448–
1092
+ 469. Springer, 2020.
1093
+ REL [Dash and Liu, 1997] Manoranjan Dash and Huan Liu.
1094
+ Feature selection for classification. Intelligent data
1095
+ analysis, 1(1-4):131–156, 1997.
1096
+ SF_PHL [Declerck and Reed, 2001] Renaat Declerck and
1097
+ Susan Reed. Some truths and nontruths about even if.
1098
+ Linguistics, 39:203–255, 01 2001.
1099
+ CF [Dhurandhar et al., 2018] Amit Dhurandhar, Pin-Yu
1100
+ Chen, Ronny Luss, Chun-Chen Tu, Paishun Ting,
1101
+ Karthikeyan Shanmugam, and Payel Das. Explanations
1102
+ based on the missing: Towards contrastive explanations
1103
+ with pertinent negatives. Advances in neural information
1104
+ processing systems, 31, 2018.
1105
+ *SF_AI [Doyle et al., 2004] Do´nal Doyle, Pa´draig
1106
+ Cunningham, Derek Bridge, and Yusof Rahman.
1107
+ Explanation oriented retrieval. In European Conference
1108
+ on Case-Based Reasoning, pages 157-168. Springer,
1109
+ 2004.
1110
+ *SF_AI [Doyle et al., 2006] Do´nal Doyle, Pa´draig
1111
+ Cunningham, and Paul Walsh. An evaluation of the
1112
+ usefulness of explanation in a case-based reasoning system
1113
+ for
1114
+ decision
1115
+ support
1116
+ in
1117
+ bronchiolitis
1118
+ treatment.
1119
+ Computational Intelligence, 22(3- 4):269–281, 2006.
1120
+ REL [d’Aquin et al., 2022] Mathieu d’Aquin, Emmanuel
1121
+ Nauer, and Jean Lieber. A factorial study of neural network
1122
+
1123
+ learning from differences for regression. In CaseBased
1124
+ Reasoning Research and Development: 30th International
1125
+ Conference, ICCBR 2022, Nancy, France, September 12–
1126
+ 15, 2022, Proceedings, pages 289–303. Springer, 2022.
1127
+ *SF_PSY [Epstude and Roese, 2008] Kai Epstude and Neal J
1128
+ Roese. The functional theory of counterfactual thinking.
1129
+ Personality and Social Psychology Review, 12(2):168–
1130
+ 192, 5 2008.
1131
+ *SF_PSY [Espino et al., 2022] Orlando Espino, Isabel
1132
+ Orenes, and Sergio Moreno-R´ıos. Inferences from the
1133
+ negation of counterfactual and semifactual conditionals.
1134
+ Memory & Cognition, 50(5):1090–1102, 2022.
1135
+ SF_PSY [Feeney et al., 2011] Aidan Feeney, Simon J
1136
+ Handley, et al. Suppositions, conditionals, and causal
1137
+ claims. Under- standing counterfactuals and causation:
1138
+ Issues in philosophy and psychology, pages 242–262,
1139
+ 2011.
1140
+ CF [Feiman, 2008] Roman Feiman. Possible worlds and
1141
+ counterfactuals:
1142
+ Critique
1143
+ and
1144
+ commentary
1145
+ on
1146
+ complicating causation. Episteme, 19(1):4, 2008.
1147
+ *SF_AI [Ferna´ndez et al., 2022] Rube´n R Ferna´ndez, Isaac
1148
+ Mart´ın de Diego, Javier M Moguerza, and Francisco
1149
+ Herrera. Explanation sets: A general framework for
1150
+ machine learning explainability. Information Sciences,
1151
+ 617:464–481, 2022.
1152
+ REL [Freitas et al., 2008] Alex A Freitas, Daniela C Wieser,
1153
+ and Rolf Apweiler. On the importance of comprehensible
1154
+ classification models for protein function prediction.
1155
+ IEEE/ACM Transactions on Computational Biology and
1156
+ Bioinformatics, 7(1):172–182, 2008.
1157
+ SURV [Gates and Leake, 2021] Lawrence Gates and David
1158
+ Leake. Evaluating cbr explanation capabilities: Survey and
1159
+ next steps. In ICCBR Workshops, pages 40–51, 2021.
1160
+ SF_PHL [Gomes, 2020] Gilberto Gomes. Concessive
1161
+ conditionals
1162
+ without
1163
+ even
1164
+ if
1165
+ and
1166
+ nonconcessive
1167
+ conditionals with even if. Acta Analytica, 35(1):1–21,
1168
+ 2020.
1169
+ REL [Goodman and Flaxman, 2017] Bryce Goodman and
1170
+ Seth Flaxman. European union regulations on algorithmic
1171
+ decision-making and a “right to explanation”. AI
1172
+ magazine, 38(3):50–57, 2017.
1173
+ *SF_PHL [Goodman, 1947] Nelson Goodman. The problem
1174
+ of counterfactual conditionals. The Journal of Philosophy,
1175
+ 44(5):113–128, 1947.
1176
+ *SF_PSY [Green, 2008] David W Green. Persuasion and the
1177
+ contexts of dissuasion: Causal models and informal
1178
+ arguments. Thinking & reasoning, 14(1):28–59, 2008.
1179
+ *SURV [Guidotti et al., 2018] Riccardo Guidotti, Anna
1180
+ Monreale, Salvatore Ruggieri, Franco Turini, Fosca
1181
+ Giannotti, and Dino Pedreschi. A survey of methods for
1182
+ explaining black box models. ACM computing surveys
1183
+ (CSUR), 51(5):1–42, 2018.
1184
+ SF_PHL [Gu¨ngo¨r, ] Hu¨seyin Gu¨ngo¨r. Truthmaking even
1185
+ ifs.
1186
+ *SURV [Gunning and Aha, 2019] David Gunning and David
1187
+ W Aha. Darpa’s explainable artificial intelligence
1188
+ program. AI Magazine, 40(2):44-58, 2019.
1189
+ SF_AI [Hagos et al., 2022] Misgina Tsighe Hagos, Kathleen
+ M Curran, and Brian Mac Namee. Identifying spurious
+ correlations and correcting them with an explanation-based
+ learning. arXiv preprint arXiv:2211.08285, 2022.
1202
+ SF_PSY [Handley and Feeney, 2004] Simon J Handley and
1203
+ Aidan Feeney. Reasoning and pragmatics: The case of
1204
+ even-if. In Experimental pragmatics, pages 228–253.
1205
+ Springer, 2004.
1206
+ *SF_PSY [Handley and Feeney, 2007] Simon J Handley and
1207
+ Aidan Feeney. Semifactual: Byrne’s account of even-if.
1208
+ Behavioral and Brain Sciences, 30(5-6):458–459, 2007.
1209
+ *REL [Hanney and Keane, 1996] Kathleen Hanney and
1210
+ Mark T Keane. Learning adaptation rules from a
1211
+ casebase. In European Workshop on Advances in Case-
1212
+ Based Reasoning, pages 179–192. Springer, 1996.
1213
+ *SF_AI [Herchenbach et al., 2022] Marvin Herchenbach,
1214
+ Dennis Mu¨ller, Stephan Scheele, and Ute Schmid.
1215
+ Explaining image classifications with near misses, near
1216
+ hits and prototypes. In International Conference on
1217
+ Pattern Recognition and Artificial Intelligence, pages
1218
+ 419–430. Springer, 2022.
1219
+ CF [Ho¨ltgen et al., 2021] Benedikt Ho¨ltgen, Lisa Schut, Jan
1220
+ M Brauner, and Yarin Gal. Deduce: Generating
1221
+ counterfactual explanations efficiently. arXiv preprint
1222
+ arXiv:2111.15639, 2021.
1223
+ *SF_PHL [Iten, 2002] Corinne Iten. Even if and even: The
1224
+ case for an inferential scalar account. UCL Working
1225
+ Papers in Linguistics, 14:119, 2002.
1226
+ REL [Jalali et al., 2017] Vahid Jalali, David Leake, and
1227
+ Najmeh Forouzandehmehr. Learning and applying case
1228
+ adaptation rules for classification: An ensemble approach.
1229
+ In IJCAI, pages 4874–4878, 2017.
1230
+ *SF_PSY [Kahneman and Tversky, 1982] Daniel Kahneman
1231
+ and Amos Tversky. The Simulation Heuristic. In Daniel
1232
+ Kahneman, Paul Slovic, and Amos Tversky, editors,
1233
+ Judgment Under Uncertainty: Heuristics and Biases,
1234
+ pages 201–8. Cambridge University Press, New York,
1235
+ 1982.
1236
+ *SURV [Karimi et al., 2022] Amir-Hossein Karimi, Gilles
1237
+ Barthe, Bernhard Scho¨lkopf, and Isabel Valera. A survey
1238
+ of algorithmic recourse: contrastive explanations and
1239
+ consequential recommendations. ACM Computing
1240
+ Surveys, 55(5):1– 29, 2022.
1241
+ *REL [Karras et al., 2020] Tero Karras, Samuli Laine, Miika
1242
+ Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila.
1243
+ Analyzing and improving the image quality of stylegan. In
1244
+ Proceedings of the IEEE/CVF conference on computer
1245
+ vision and pattern recognition, pages 8110–8119, 2020.
1246
+ *SURV [Keane and Kenny, 2019] Mark T Keane and Eoin M
1247
+ Kenny. How case-based reasoning explains neural
1248
+ networks: A theoretical analysis of xai using post-hoc
1249
+ explanation-by-example from a survey of ann-cbr
1250
+ twinsystems. In International Conference on Case-Based
1251
+ Reasoning, pages 155–171. Springer, 2019.
1252
+ *CF [Keane and Smyth, 2020] Mark T Keane and Barry
1253
+ Smyth. Good counterfactuals and where to find them: A
1254
+ case- based technique for generating counterfactuals for
1255
+ explainable ai (xai). In Proceedings of the 28th
1256
+ International Conference on Case-Based Reasoning
1257
+ (ICCBR-20), pages 163–178. Springer, 2020.
1258
+ *CF [Keane et al., 2021] Mark T Keane, Eoin M Kenny, Eoin
1259
+ Delaney, and Barry Smyth. If only we had better counter-
1260
+ factual explanations. In Proceedings of the 30th
1261
+ International Joint Conference on Artificial Intelligence
1262
+ (IJCAI-21), 2021.
1263
+ *SF_AI [Kenny and Keane, 2021] Eoin M. Kenny and Mark
1264
+ T. Keane. On generating plausible counterfactual and semi-
1265
+ factual explanations for deep learning. In Proceedings of
1266
+ the 35th AAAI Conference on Artificial Intelligence (AAAI-
1267
+ 21), pages 11575–11585, 2021.
1268
+ *REL [Kira et al., 1992] Kenji Kira, Larry A Rendell, et al.
1269
+ The feature selection problem: Traditional methods and a
1270
+ new algorithm. In Aaai, volume 2, pages 129–134, 1992.
1271
+ CF [Kuhl et al., 2022] Ulrike Kuhl, Andre´ Artelt, and Barbara
1272
+ Hammer. Keep your friends close and your counterfactuals
1273
+ closer: Improved learning from closest rather than plausible
1274
+ counterfactual explanations in an abstract setting. arXiv
1275
+ preprint arXiv:2205.05515, 2022.
1276
+ REL [Kukar et al., 1996] Matjazˇ Kukar, Igor Kononenko, and
1277
+ Toma Silvester. Machine learning in prognosis of the
1278
+ femoral neck fracture recovery. Artificial intelligence in
1279
+ medicine, 8(5):431-451, 1996.
1280
+
1281
+ REL [Liao et al., 2018] Chieh-Kang Liao, Alan Liu, and Yu-
1282
+ Sheng Chao. A machine learning approach to case
1283
+ adaptation. In 2018 IEEE First International Conference
1284
+ on Artificial Intelligence and Knowledge Engineering
1285
+ (AIKE), pages 106–109. IEEE, 2018.
1286
+ REL [Lin and Shaw, 1997] Fu-Ren Lin and Michael J Shaw.
1287
+ Active training of backpropagation neural networks using
1288
+ the learning by experimentation methodology. Annals of
1289
+ Operations Research, 75:105–122, 1997.
1290
+ REL [Lou and Obradovic, 2012] Qiang Lou and Zoran
1291
+ Obradovic. Margin-based feature selection in incomplete
1292
+ data. In Proceedings of the AAAI Conference on Artificial
1293
+ Intelligence, volume 26, pages 1040–1046, 2012.
1294
+ *SF_AI [Lu et al., 2022] Jinghui Lu, Linyi Yang, Brian Mac
1295
+ Namee, and Yue Zhang. A rationale-centric framework
1296
+ for human-in-the-loop machine learning. arXiv preprint
1297
+ arXiv:2203.12918, 2022.
1298
+ REL [Luss et al., 2021] Ronny Luss, Pin-Yu Chen, Amit
1299
+ Dhurandhar,
1300
+ Prasanna
1301
+ Sattigeri,
1302
+ Yunfeng
1303
+ Zhang,
1304
+ Karthikeyan Shanmugam, and Chun-Chen Tu.
1305
+ Leveraging latent features for local explanations. In
1306
+ Proceedings of the 27th ACM SIGKDD Conference on
1307
+ Knowledge Discovery & Data Mining, pages 1139–1149,
1308
+ 2021.
1309
+ SF_PHL [Lycan, 1991] William G Lycan. Even” and” even
1310
+ if. Linguistics and Philosophy, pages 115–150, 1991.
1311
+ REL [McCarthy et al., 2005] Kevin McCarthy, James Reilly,
1312
+ Lorraine McGinty, and Barry Smyth. Experiments in
1313
+ dynamic critiquing. In Proceedings of the 10th
1314
+ international conference on Intelligent user interfaces,
1315
+ pages 175–182, 2005.
1316
+ *SF_PSY [McCloy and Byrne, 2002] Rachel McCloy and
1317
+ Ruth MJ Byrne. Semifactual “even if” thinking. Thinking
1318
+ & Reasoning, 8(1):41–67, 2002.
1319
+ SF_AI [McSherry, 2004] David McSherry. Explaining the
1320
+ pros and cons of conclusions in cbr. In European
1321
+ Conference on Case-Based Reasoning, pages 317–330.
1322
+ Springer, 2004.
1323
+ *SF_AI [Mertes et al., 2022] Silvan Mertes, Christina Karle,
1324
+ Tobias Huber, Katharina Weitz, Ruben Schlagowski, and
1325
+ Elisabeth
1326
+ Andre´.
1327
+ Alterfactual
1328
+ explanations–the
1329
+ relevance of irrelevance for explaining ai systems. arXiv
1330
+ preprint arXiv:2207.09374, 2022.
1331
+ SF_PHL [Kvart, 2001] Igal Kvart. The counterfactual analysis
1332
+ of cause. Synthese, 127(3):389–427, 2001.
1333
+ CF [Laugel et al., 2019] Thibault Laugel, Marie-Jeanne
1334
+ Lesot, Christophe Marsala, Xavier Renard, and Marcin
1335
+ Detyniecki. The dangers of post-hoc interpretability:
1336
+ Unjustified counterfactual explanations. arXiv preprint
1337
+ arXiv:1907.09294, 2019.
1338
+ REL [Leake et al., 2022] David Leake, Zachary Wilkerson,
1339
+ and David Crandall. Extracting case indices from
1340
+ convolutional neural networks: A comparative study. In
1341
+ Case- Based Reasoning Research and Development: 30th
1342
+ International Conference, ICCBR 2022, Nancy, France,
1343
+ September 12–15, 2022, Proceedings, pages 81–95.
1344
+ Springer, 2022.
1345
+ *SURV [Miller, 2019] Tim Miller. Explanation in artificial
1346
+ intelligence: Insights from the social sciences. Artificial
1347
+ Intelligence, 267:1–38, 2019.
1348
+ REL [Montenegro et al., 2021] Helena Montenegro, Wilson
1349
+ Silva, and Jaime S Cardoso. Privacy-preserving
1350
+ generative
1351
+ adversarial
1352
+ network
1353
+ for
1354
+ case-based
1355
+ explainability in medical image analysis. IEEE Access,
1356
+ 9:148037–148047, 2021.
1357
+ *SF_PSY [Moreno-Rios et al., 2008] Sergio Moreno-Rios,
1358
+ Juan A Garcia-Madruga, and Ruth MJ Byrne. Inferences
1359
+ from
1360
+ semifactual
1361
+ ‘even
1362
+ if’conditionals.
1363
+ Acta
1364
+ Psychologica, 128(2):197–209, 2008.
1365
+ CF [Mothilal et al., 2020] Ramaravind K Mothilal, Amit
1366
+ Sharma, and Chenhao Tan. Explaining machine learning
1367
+ classifiers through diverse counterfactual explanations. In
1368
+ Proceedings of the 2020 conference on fairness,
1369
+ accountability, and transparency, pages 607-617, 2020.
1370
+ *SF_AI [Mueller et al., 2021] Shane Mueller, Yin-Yin Tan,
1371
+ Anne Linja, Gary Klein, and Robert Hoffman. Authoring
1372
+ guide for cognitive tutorials for artificial intelligence:
1373
+ Purposes and methods. 2021.
1374
+ REL [Nimmy et al., 2023] Sonia Farhana Nimmy, Omar K
1375
+ Hussain, Ripon K Chakrabortty, Farookh Khadeer Hussain,
1376
+ and Morteza Saberi. Interpreting the antecedents of a
1377
+ predicted output by capturing the interdependencies among
1378
+ the system features and their evolution over time.
1379
+ Engineering Applications of Artificial Intelligence,
1380
+ 117:105596, 2023.
1381
+ *SF_AI [Nugent et al., 2005] Conor Nugent, Pa´draig
1382
+ Cunningham, and Do´nal Doyle. The best way to instil
1383
+ confidence is by being right. In International Conference
1384
+ on Case-Based Reasoning, pages 368–381. Springer, 2005.
1385
+ *SF_AI [Nugent et al., 2009] Conor Nugent, Do´nal Doyle,
1386
+ and Pa´draig Cunningham. Gaining insight through
1387
+ casebased explanation. Journal of Intelligent Information
1388
+ Systems, 32(3):267–295, 2009.
1389
+ SF_AI [Olsson et al., 2014] Tomas Olsson, Daniel Gillblad,
1390
+ Peter Funk, and Ning Xiong. Explaining probabilistic fault
1391
+ diagnosis and classification using case-based reasoning. In
1392
+ International Conference on Case-Based Reasoning, pages
1393
+ 360–374. Springer, 2014.
1394
+ REL [Olsson et al., 2014] Tomas Olsson, Daniel Gillblad,
1395
+ Peter Funk, and Ning Xiong. Case-based reasoning for
1396
+ explaining probabilistic machine learning. International
1397
+ Journal of Computer Science and Information
1398
+ Technology, 6:87–101, 2014.
1399
+ *SF_PSY [Parkinson and Byrne, 2017] Mary Parkinson and
1400
+ Ruth MJ Byrne. Counterfactual and semi-factual thoughts
1401
+ in moral judgements about failed attempts to harm.
1402
+ Thinking & Reasoning, 23(4):409–448, 2017.
1403
+ SF_PHL [Paul, 2009] Laurie Ann Paul. Counterfactual
1404
+ theories. The Oxford handbook of causation, pages 158–
1405
+ 184, 2009.
1406
+ CF [Pedapati et al., 2020] Tejaswini Pedapati, Avinash
1407
+ Balakrishnan,
1408
+ Karthikeyan
1409
+ Shanmugam,
1410
+ and
1411
+ Amit
1412
+ Dhurandhar. Learning global transparent models consistent
1413
+ with local contrastive explanations. Advances in neural
1414
+ information processing systems, 33:3592–3602, 2020.
1415
+ SF_AI [Rabold et al., 2022] Johannes Rabold, Michael
1416
+ Siebers, and Ute Schmid. Generating contrastive
1417
+ explanations for inductive logic programming based on a
1418
+ near miss approach. Machine Learning, 111(5):1799–
1419
+ 1820, 2022.
1420
+ REL [Raman and Ioerger, 2003] Baranidharan Raman and
1421
+ Thomas R Ioerger. Enhancing learning using feature and
1422
+ example selection. Journal of Machine Learning Research
1423
+ (submitted for publication), 2003.
1424
+ *REL [Reilly et al., 2004] James Reilly, Kevin McCarthy,
1425
+ Lorraine McGinty, and Barry Smyth. Dynamic critiquing.
1426
+ In European Conference on Case-Based Reasoning, pages
1427
+ 763– 777. Springer, 2004.
1428
+ *REL [Ribeiro et al., 2016] Marco Tulio Ribeiro, Sameer
1429
+ Singh, and Carlos Guestrin. ” why should i trust you?”
1430
+ explaining the predictions of any classifier. In Proceedings
1431
+ of the 22nd ACM SIGKDD international conference on
1432
+ knowledge discovery and data mining, pages 1135–1144,
1433
+ 2016.
1434
+ REL [Ribeiro et al., 2018] Marco Tulio Ribeiro, Sameer
1435
+ Singh, and Carlos Guestrin. Anchors: High-precision
1436
+ model- agnostic explanations. In Proceedings of the AAAI
1437
+ conference on artificial intelligence, volume 32, 2018.
1438
+ REL [Rissland and Skalak, 1989a] Edwina L Rissland and
1439
+ David B Skalak. Combining case-based and rule-based
1440
+ reasoning: A heuristic approach. In IJCAI, pages 524–
1441
+ 530, 1989.
1442
+ REL [Rissland and Skalak, 1989b] Edwina L Rissland and
1443
+ David B Skalak. Interpreting statutory predicates. In
1444
+
1445
+ Proceedings of the 2nd international conference on
1446
+ Artificial intelligence and law, pages 46–53, 1989.
1447
+ SURV [Rissland, 2006] Edwina L Rissland. Ai and
1448
+ similarity. IEEE Intelligent Systems, 21(03):39–49, 2006.
1449
+ SURV [Rissland, 2009] Edwina L Rissland. Black swans,
1450
+ gray cygnets and other rare birds. In International
1451
+ Conference on Case-Based Reasoning, pages 6–13.
1452
+ Springer, 2009.
1453
+ *SF_AI [Salimi, 2022] Pedram Salimi. Addressing trust and
1454
+ mutability issues in xai utilising case based reasoning.
1455
+ ICCBR Doctoral Consortium 2022, 1613:0073, 2022.
1456
+ SF_PSY[Santamar´ıa et al., 2005] Carlos Santamar´ıa,
1457
+ Orlando Espino, and Ruth MJ Byrne. Counterfactual and
1458
+ semifactual conditionals prime alternative possibilities.
1459
+ Journal of Experimental Psychology: Learning, Memory,
1460
+ and Cognition, 31(5):1149, 2005.
1461
+ SF_PHL [Sarasvathy, 2021] Saras D Sarasvathy. Even-if:
1462
+ Sufficient, yet unnecessary conditions for worldmaking.
1463
+ Organization Theory, 2(2):26317877211005785, 2021.
1464
+ CF [Schleich et al., 2021] Maximilian Schleich, Zixuan
1465
+ Geng, Yihong Zhang, and Dan Suciu. Geco: quality
1466
+ counterfactual explanations in real time. arXiv preprint
1467
+ arXiv:2101.01292, 2021.
1468
+ *CF [Sokol and Flach, 2019] Kacper Sokol and Peter A
1469
+ Flach. Counterfactual explanations of machine learning
1470
+ predictions: opportunities and challenges for ai safety.
1471
+ SafeAI@ AAAI, 2019.
1472
+ *SURV [Sørmo et al., 2005] Frode Sørmo, Jo¨rg Cassens,
1473
+ and Agnar Aamodt. Explanation in case-based reasoning–
1474
+ perspectives and goals. Artificial Intelligence Review,
1475
+ 24(2):109–143, 2005.
1476
+ REL [Sun, 2007] Yijun Sun. Iterative relief for feature
1477
+ weighting: algorithms, theories, and applications. IEEE
1478
+ transactions
1479
+ on
1480
+ pattern
1481
+ analysis
1482
+ and
1483
+ machine
1484
+ intelligence, 29(6):1035– 1051, 2007.
1485
+ SF_PHL [Tellings, 2017] Jos Tellings. Still as an additive
1486
+ particle in conditionals. In Semantics and Linguistic
1487
+ Theory, vol- ume 27, pages 1–21, 2017.
1488
+ REL [Urbanowicz et al., 2018] Ryan J Urbanowicz, Melissa
1489
+ Meeker, William La Cava, Randal S Olson, and Jason H
1490
+ Moore. Relief-based feature selection: Introduction and
1491
+ review. Journal of biomedical informatics, 85:189–203,
1492
+ 2018.
1493
+ *SF_AI [Vats et al., 2022] Anuja Vats, Ahmed Mohammed,
1494
+ Marius Pedersen, and Nirmalie Wiratunga. This changes
1495
+ to that: Combining causal and non-causal explanations to
1496
+ generate disease progression in capsule endoscopy. arXiv
1497
+ preprint arXiv:2212.02506, 2022.
1498
+ *SURV [Verma et al., 2022] Sahil Verma, Varich
1499
+ Boonsanong, Minh Hoang, Keegan E. Hines, John P.
1500
+ Dickerson, and Chirag Shah. Counterfactual explanations
1501
+ and algorithmic recourses for machine learning: A review,
1502
+ 2022.
1503
+ SF_PSY [Vidal and Baratgin, 2017] Mathieu Vidal and Jean
1504
+ Baratgin. A psychological study of unconnected
1505
+ conditionals. Journal of Cognitive Psychology, 29(6):769–
1506
+ 781, 2017.
1507
+ SF_PHL [Vidal, 2017] Mathieu Vidal. A compositional
1508
+ semantics for ‘even if’conditionals. Logic and Logical
1509
+ Philosophy, 26(2):237–276, 2017.
1510
+ *CF [Wachter et al., 2017] Sandra Wachter, Brent
1511
+ Mittelstadt,
1512
+ and
1513
+ Chris
1514
+ Russell.
1515
+ Counterfactual
1516
+ explanations without opening the black box. Harv. JL &
1517
+ Tech., 31:841, 2017.
1518
+ *CF [Wexler et al., 2019] James Wexler, Mahima Pushkarna,
1519
+ Tolga Bolukbasi, Martin Wattenberg, Fernanda Vie´gas,
1520
+ and Jimbo Wilson. The what-if tool. IEEE transactions on
1521
+ visualization and computer graphics, 26(1):56–65, 2019.
1522
+ CF [Wijekoon et al., 2022] Anjana Wijekoon, Nirmalie
1523
+ Wiratunga, Ikechukwu Nkisi-Orji, Chamath
1524
+ Palihawadana, David Corsar, and Kyle Martin. How close
1525
+ is too close? the role of feature attributions in discovering
1526
+ counterfactual explanations. In Case-Based Reasoning
1527
+ Research
1528
+ and
1529
+ Development:
1530
+ 30th
1531
+ International
1532
+ Conference, ICCBR 2022, Nancy, France, September 12–
1533
+ 15, 2022, Proceedings, pages 33–47. Springer, 2022.
1534
+ SF_AI [Winston, 1970] Patrick H Winston. Learning
1535
+ structural descriptions from examples. 1970.
1536
+ CF [Wiratunga et al., 2021] Nirmalie Wiratunga, Anjana
1537
+ Wi- jekoon, Ikechukwu Nkisi-Orji, Kyle Martin,
1538
+ Chamath Pal- ihawadana, and David Corsar. Discern:
1539
+ Discovering counterfactual explanations using relevance
1540
+ features from neighbourhoods. In 2021 IEEE 33rd
1541
+ International Conference on Tools with Artificial
1542
+ Intelligence (ICTAI), pages 1466–1473. IEEE, 2021.
1543
+ SF_PSY [Yang, 2022] Yujing Yang. On the study of chinese
1544
+ double-false counterfactual conditionals. In
1545
+ 2021
1546
+ International Conference on Social Development and
1547
+ Media Commu- nication (SDMC 2021), pages 1553–
1548
+ 1563. Atlantis Press, 2022.
1549
+ SF_AI [Ye et al., 2020] Xiaomeng Ye, David Leake,
1550
+ William Huibregtse, and Mehmet Dalkilic. Applying
1551
+ classto-class siamese networks to explain classifications
1552
+ with supportive and contrastive cases. In International
1553
+ Conference on Case-Based Reasoning, pages 245–260.
1554
+ Springer, 2020.
1555
+ *REL [Ye et al., 2021] Xiaomeng Ye, Ziwei Zhao, David
1556
+ Leake, Xizi Wang, and David Crandall. Applying the
1557
+ case difference heuristic to learn adaptations from deep
1558
+ network features. arXiv preprint arXiv:2107.07095,
1559
+ 2021.
1560
+ *REL
1561
+ [Yousefzadeh and O’Leary, 2019] Roozbeh
1562
+ Yousefzadeh and Dianne P O’Leary. Interpreting neural
1563
+ networks
1564
+ using
1565
+ flip
1566
+ points.
1567
+ arXiv
1568
+ preprint
1569
+ arXiv:1903.08789, 2019.
1570
+ REL [Zheng et al., 2019] Jianyang Zheng, Hexing Zhu,
1571
+ Fangfang Chang, and Yunlong Liu. An improved relief
1572
+ feature selection algorithm based on monte-carlo tree
1573
+ search. Systems Science & Control Engineering,
1574
+ 7(1):304–310, 2019.
1575
+ *SF_AI [Zhao et al., 2022] Ziwei Zhao, David Leake,
1576
+ Xiaomeng Ye, and David Crandall. Generating
1577
+ counterfactual images: Towards a c2c-vae approach.
1578
+ 2022.
1579
+
1580
+
1581
+
1582
+
1583
+
1584
+
1585
+
1586
+
1587
+
1588
+
1589
+
1590
+
1591
+
1592
+
1593
+
1594
+
1595
+
1596
+
1597
+
1598
+
1599
+
1600
+
1601
+
8tFLT4oBgHgl3EQfBi4b/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
9dE1T4oBgHgl3EQfCQJ2/content/tmp_files/2301.02862v1.pdf.txt ADDED
@@ -0,0 +1,1049 @@
 
 
 
 
 
 
 
 
 
1
+ arXiv:2301.02862v1 [math.MG] 7 Jan 2023
2
+ AN INTEGER PARALLELOTOPE WITH SMALL SURFACE AREA
3
+ ASSAF NAOR AND ODED REGEV
4
+ ABSTRACT. We prove that for any n ∈ N there is a convex body K ⊆ R^n whose surface area is at most n^{1/2+o(1)},
7
+ yet the translates of K by the integer lattice Zn tile Rn.
8
+ 1. INTRODUCTION
9
+ Given n ∈ N and a lattice Λ ⊆ Rn, a convex body K ⊆ Rn is called a Λ-parallelotope (e.g., [12]) if the
10
+ translates of K by elements of Λ tile R^n, i.e., R^n = Λ + K = ⋃_{x∈Λ}(x + K), and the interior of (x + K) ∩ (y + K)
12
+ is empty for every distinct x, y ∈ Λ. One calls K a parallelotope (parallelogon if n = 2 and parallelohedron
13
+ if n = 3; some of the literature calls a parallelotope in Rn and n-dimensional parallellohedron; e.g., [1, 11])
14
+ if it is a Λ-parallelotope for some lattice Λ ⊆ Rn. We call a Zn-parallelotope an integer parallelotope.
15
+ The hypercube [−1/2, 1/2]^n is an integer parallelotope whose surface area equals 2n. By [16, Corollary A.2],
18
+ for every n ∈ N there exists an integer parallelotope K ⊆ Rn whose surface area is smaller than 2n by a
19
+ universal constant factor. Specifically, the surface area of the integer parallelotope K that was considered
20
+ in [16] satisfies vol_{n−1}(∂K) ⩽ σ(n + O(n^{2/3})), where σ = 2∑_{s=1}^{∞} (s/e)^s/(s^{3/2} s!) ⩽ 1.23721. To the best of our
22
+ knowledge, this is the previously best known upper bound on the smallest possible surface area of an
23
+ integer parallelotope. The main result of the present work is the following theorem:
24
+ Theorem 1. For every n ∈ N there exists an integer parallelotope whose surface area is n^{1/2+o(1)}.
27
+ Because the covolume of Zn is 1, the volume of any integer parallelotope K ⊆ Rn satisfies voln(K ) = 1.
28
+ Consequently, by the isoperimetric inequality we have1
29
+ vol_{n−1}(∂K) ⩾ vol_{n−1}(S^{n−1}) / vol_n(B^n)^{(n−1)/n} ≳ √n,   (1)
37
+ where Bn def
38
+ = {(x1,...,xn) ∈ Rn : x2
39
+ 1 +···+ x2
40
+ n ⩽ 1} denotes the Euclidean ball and Sn−1 def
41
+ = ∂Bn.
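+ (Aside, added for convenience: the right-hand side of (1) is of order √n because vol_{n−1}(S^{n−1}) = n·vol_n(B^n) and vol_n(B^n)^{1/n} ≍ 1/√n, i.e.,
+ \[
+ \frac{\mathrm{vol}_{n-1}(S^{n-1})}{\mathrm{vol}_n(B^n)^{\frac{n-1}{n}}}
+ = n\,\mathrm{vol}_n(B^n)^{\frac{1}{n}} \asymp n\cdot\frac{1}{\sqrt{n}} = \sqrt{n}.)
+ \]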
42
+ Thanks to (1), Theorem 1 is optimal up to the implicit lower order factor. It remains open to determine
43
+ whether this lower-order factor could be removed altogether, namely to answer the following question:
44
+ Question 2. For every n ∈ N, does there exist an integer parallelotope K ⊆ Rn with voln−1(∂K ) ≍ �n?
45
+ Question 2 goes back to [24], though such early investigations were (naturally, from the perspective of
46
+ crystallography) focused on n = 3 and asked for the exact value of the smallest possible surface area of
47
+ a parallelohedron; see Conjecture 7.5 in [5] and the historical discussion in the paragraph that precedes
48
+ it. The corresponding question about precisely determining the minimum perimeter when n = 2 was
49
+ answered in [7] (its solution for general parallelogons rather than integer parallelogons is due to [17]; see
50
+ also [22], which treats tiles that need not be convex). Finding the exact minimum when n = 3 remains
51
+ A.N. was supported by NSF grant DMS-2054875, BSF grant 201822, and a Simons Investigator award. O.R. was supported by
52
+ NSF grant CCF-1320188 and a Simons Investigator award.
53
+ 1We use the following conventions for asymptotic notation, in addition to the usual O(·),o(·),Ω(·),Θ(·) notation. For a,b > 0,
54
+ by writing a ≲ b or b ≳ a we mean that a ⩽ Cb for a universal constant C > 0, and a ≍ b stands for (a ≲ b)∧(b ≲ a). If we need
55
+ to allow for dependence on parameters, we indicate it by subscripts. For example, in the presence of an auxiliary parameter ε,
56
+ the notation a ≲ε b means that a ⩽ C(ε)b, where C(ε) > 0 may depend only on ε, and analogously for a ≳ε b and a ≍ε b.
57
+ 1
58
+
59
+ open; we will not review the substantial literature on this topic, referring instead to the monograph [4]
60
+ (see also [28] for an exact solution of a different isoperimetric-type question for parallelohedra).
61
+ The higher dimensional asymptotic nature of Question 2 differs from the search for exact minimizers
62
+ in lower dimensions on which the literature has focused, but it is a natural outgrowth of it and it stands
63
+ to reason that it was considered by researchers who worked on this topic over the past centuries. Never-
64
+ theless, we do not know of a published source that mentions Question 2 prior to the more recent interest
65
+ in this topic that arose due to its connection to theoretical computer science that was found in [16] and
66
+ were pursued in [33, 25, 3, 26, 6]; specifically, Question 2 appears in [6, Section 6].
67
+ In [25] it was proved that Question 2 has a positive answer if one drops the requirement that the tiling
68
+ set is convex, i.e., by [25, Theorem 1.1] for every n ∈ N there is a compact set Ω ⊆ Rn such that Rn = Zn+Ω,
69
+ the interior of (x + Ω) ∩ (y + Ω) is empty for every distinct x, y ∈ Zn, and voln−1(∂Ω) ≲ �n; see also the
70
+ proof of this result that was found in [3]. The lack of convexity of Ω is irrelevant for the applications to
71
+ computational complexity that were found in [16]. The proofs in [25, 3] produce a set Ω that is decidedly
72
+ non-convex. Our proof of Theorem 1 proceeds via an entirely different route and provides a paralletotope
73
+ whose surface area comes close to the guarantee of [25] (prior to [25], the best known upper bound on
74
+ the smallest possible surface area of a compact Zn-tiling set was the aforementioned 1.23721n of [16]).
75
+ While it could be tempting to view the existence of the aforementioned compact set Ω as evidence
76
+ for the availability of an integer parallelotope with comparable surface area, this is a tenuous hope be-
77
+ cause the convexity requirement from a parallelotope imposes severe restrictions. In particular, by [30]
78
+ for every n ∈ N there are only finitely many combinatorial types of parallelotopes in Rn.2 In fact, by com-
79
+ bining [10, Section 6] with [30, 36] we see that K ⊆ Rn is a parallelotope if and only if K is a centrally
80
+ symmetric polytope, all of the (n − 1)-dimensional faces of K are centrally symmetric, and the orthog-
81
+ onal projection of K along any of its (n − 2)-dimensional faces is either a parallelogram or a centrally
82
+ symmetric hexagon.
83
+ Of course, Theorem 1 must produce such a constrained polytope. To understand how this is achieved,
84
+ it is first important to stress that this becomes a straightforward task if one only asks for a parallelotope
85
+ with small surface area rather than for an integer parallelotope with small surface area. Namely, it follows
86
+ easily from the literature that for every n ∈ N there exist a rank n lattice Λ ⊆ Rn whose covolume is 1 and
87
+ a Λ-parallelotope K ⊆ Rn that satisfies voln−1(∂K ) ≲ �n. Indeed, by [34] there is a rank n lattice Λ ⊆ Rn
88
+ of covolume 1 whose packing radius is at least c�n, where c > 0 is a universal constant. Let K be the
89
+ Voronoi cell of Λ, namely K consists of the points in Rn whose (Euclidean) distance to any point of Λ is
90
+ not less than their distance to the origin. Then, K is a Λ-parallelotope, voln(K ) = 1 since the covolume of
91
+ Λ is 1, and K ⊇ c�nBn since the packing radius of Λ is at least c�n. Consequently, the surface area of K is
92
+ at most c−1�n by the following simple lemma that we will use multiple times in the proof of Theorem 1:
93
+ Lemma 3. Fix n ∈ N and R > 0. Suppose that a convex body K ⊆ Rn satisfies K ⊇ RBn. Then,
94
+ voln−1(∂K )
95
+ voln(K )
96
+ ⩽ n
97
+ R .
98
+ Lemma 3 is known (e.g., [19, Lemma 2.1]); for completeness we will present its short proof in Section 2.
99
+ Even though the packing radius of Zn is small, the above observation drives our inductive proof of
100
+ Theorem 1, which proceeds along the following lines. Fix m ∈ {1,...,n−1} and let V be an m-dimensional
101
+ subspace of Rn. If the lattice V ⊥ ∩Zn has rank n −m and its packing radius is large, then Lemma 3 yields
102
+ a meaningful upper bound on the (n −m −1)-dimensional volume of the boundary of the Voronoi cell of
103
+ V ⊥ ∩Zn. We could then consider the lattice Λ ⊆ V which is the orthogonal projection of Zn onto V , and
104
+ inductively obtain a Λ-parallelotope (residing within V ) for which the (m −1)-dimensional volume of its
105
+ boundary is small. By considering the product (with respect to the identification of Rn with V ⊥ ×V ) of
106
+ the two convex bodies thus obtained, we could hope to get the desired integer parallelotope.
107
+ 2Thus, just for the sake concreteness (not important for the present purposes): Since antiquity it was known that there are 2
108
+ types of parallelogons; by [13] there are 5 types of parallelohedra; by [8, 35] there are 52 types of 4-dimensional parallelotopes.
109
+
111
+ There are obvious obstructions to this plan. The subspace V must be chosen so that the lattice V ⊥∩Zn
112
+ is sufficiently rich yet it contains no short nonzero vectors. Furthermore, the orthogonal projection Λ
113
+ of Zn onto V is not Zm, so we must assume a stronger inductive hypothesis and also apply a suitable
114
+ “correction” to Λ so as to be able to continue the induction. It turns out that there is tension between how
115
+ large the packing radius of V ⊥∩Zn could be, the loss that we incur due to the aforementioned correction,
116
+ and the total cost of iteratively applying the procedure that we sketched above. Upon balancing these
117
+ constraints, we will see that the best choice for the dimension m of V is m = n·exp(−Θ(√log n)). The rest
+ of the ensuing text will present the details of the implementation of this strategy.
121
+ 2. PROOF OF THEOREM 1
122
+ Below, for each n ∈ N the normed space ℓn2 = (Rn, ∥·∥ℓn2) will denote the standard Euclidean space, i.e.,
+     ∀x = (x1,...,xn) ∈ Rn,   ∥x∥ℓn2 := √(x1² + ··· + xn²).
+ The standard scalar product of x, y ∈ Rn will be denoted 〈x, y〉 := x1y1 + ··· + xnyn. The coordinate basis of
137
+ Rn will be denoted e1,...,en, i.e., for each i ∈ {1,...,n} the ith entry of ei is 1 and the rest of the coordinates
138
+ of ei vanish. We will denote the origin of Rn by 0 = (0,...,0). For 0 < s ⩽ n, the s-dimensional Hausdorff
139
+ measure on Rn that is induced by the ℓn2 metric will be denoted by vols(·). In particular, if K ⊆ Rn is a
141
+ convex body (compact and with nonempty interior), then the following identity holds (see, e.g., [27]):
142
+     voln−1(∂K) = lim_{δ→0+} [voln(K + δBn) − voln(K)]/δ.        (2)
148
+ If V is a subspace of Rn, then its orthogonal complement (with respect to the ℓn2 Euclidean structure)
150
+ will be denoted V ⊥ and the orthogonal projection from Rn onto V will be denoted ProjV . When treating
151
+ a subset Ω of V we will slightly abuse notation/terminology by letting ∂Ω be the boundary of Ω within V ,
152
+ and similarly when we will discuss the interior of Ω we will mean its interior within V . This convention
153
+ results in suitable interpretations of when K ⊆ V is a convex body or a parallelohedron (with respect to a
154
+ lattice of V ). The variant of (2) for a convex body K ⊆ V becomes
155
+     voldim(V)−1(∂K) = lim_{δ→0+} [voldim(V)(K + δ(V ∩ Bn)) − voldim(V)(K)]/δ.        (3)
165
+ Proof of Lemma 3. Since K ⊇ RBn, for every δ > 0 we have
166
+     K + δBn ⊆ K + (δ/R)·K = (1 + δ/R)·( (R/(R+δ))·K + (δ/(R+δ))·K ) = (1 + δ/R)·K,        (4)
+ where the last step of (4) uses the fact that K is convex. Consequently,
+     voln−1(∂K) = lim_{δ→0+} [voln(K + δBn) − voln(K)]/δ        (by (2))
+               ⩽ lim_{δ→0+} [(1 + δ/R)^n − 1]/δ · voln(K) = (n/R)·voln(K).        (by (4))
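+ (The following short numerical sketch is ours and is not part of the argument; it only assumes the Steiner
+ formula for a box. It evaluates the difference quotient in (2) for the cube [−R,R]^n ⊇ RBn and checks that
+ voln−1(∂K)/voln(K) comes out to n/R, so the cube attains equality in Lemma 3.)
+
+ import math
+
+ def cube_steiner_volume(n, R, delta):
+     # Steiner formula for the cube [-R, R]^n:
+     # vol(K + delta*B^n) = sum_k C(n, k) * (2R)^(n-k) * omega_k * delta^k,
+     # where omega_k is the volume of the k-dimensional Euclidean unit ball.
+     total = 0.0
+     for k in range(n + 1):
+         omega_k = math.pi ** (k / 2) / math.gamma(k / 2 + 1)
+         total += math.comb(n, k) * (2 * R) ** (n - k) * omega_k * delta ** k
+     return total
+
+ n, R = 6, 0.75
+ vol = (2 * R) ** n
+ for delta in (1e-2, 1e-4, 1e-6):
+     quotient = (cube_steiner_volume(n, R, delta) - vol) / delta   # approximates vol_{n-1}(∂K) via (2)
+     print(delta, quotient / vol, n / R)                           # the ratio converges to n/R
+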
201
+
202
+ The sequence {Q(n)}_{n=1}^∞ that we introduce in the following definition will play an important role in
204
+ the ensuing reasoning:
205
+ Notation 4. For each n ∈ N let Q(n) be the infimum over those Q ⩾ 0 such that for every lattice Λ ⊆ Zn of
206
+ rank n there exists a Λ-parallelotope K ⊆ Rn that satisfies
207
+     voln−1(∂K)/voln(K) ⩽ Q.        (5)
211
+ As voln(K ) = 1 for any integer parallelotope K ⊆ Rn, Theorem 1 is a special case of the following result:
212
+ Theorem 5. There exists a universal constant C ⩾ 1 such that Q(n) ≲ √n·e^{C√log n} for every n ∈ N.
214
+ The following key lemma is the inductive step in the ensuing proof of Theorem 5 by induction on n:
215
+
217
+ Lemma 6. Fix m,n,s ∈ N with s ⩽ m ⩽ n. Suppose that B ∈ Mm×n(Z) is an m-by-n matrix all of whose
218
+ entries are integers such that B has rank m and any s of the columns of B are linearly independent. Then,
219
+     Q(n) ⩽ 2(n − m)/√s + Q(m)·∥B∥ℓn2→ℓm2 ,
+ where ∥·∥ℓn2→ℓm2 denotes the operator norm from ℓn2 to ℓm2.
229
+ The fact that Theorem 5 treats any sublattice of Zn of full rank (recall how Q(n) is defined), even though
230
+ in Theorem 1 we are interested only in Zn itself, provides a strengthening of the inductive hypothesis
231
+ that makes it possible for our proof of Lemma 6 to go through. If Λ is an arbitrary full rank sublattice of
232
+ Zn, then a Λ-parallelotope K ⊆ Rn need no longer satisfy voln(K ) = 1, so the inductive hypothesis must
233
+ incorporate the value of voln(K ), which is the reason why we consider the quantity voln−1(∂K )/voln(K )
234
+ in (5). Observe that this quantity is not scale-invariant, so it might seem somewhat unnatural to study it,
235
+ but it is well-suited to the aforementioned induction thanks to the following simple lemma:
236
+ Lemma 7. Fix m,n ∈ N and an m-dimensional subspace V of Rn. Let O ⊆ V ⊥ be an open subset of V ⊥ and
237
+ let G ⊆ V be an open subset of V . Then, for Ω = O +G we have
238
+     voln−1(∂Ω)/voln(Ω) = voln−m−1(∂O)/voln−m(O) + volm−1(∂G)/volm(G).        (6)
+ Furthermore, if T : Rm → V is a linear isomorphism and K ⊆ Rm is a convex body, then
+     volm−1(∂TK)/volm(TK) ⩽ (volm−1(∂K)/volm(K))·∥T⁻¹∥(V,∥·∥ℓn2)→ℓm2 ,        (7)
+ where ∥·∥(V,∥·∥ℓn2)→ℓm2 is the operator norm from V, equipped with the norm inherited from ℓn2, to ℓm2.
260
+ Proof. For (6), note that since O ⊥ G we have voln(Ω) = voln−m(O)volm(G), and ∂Ω = (∂O +G)∪(O +∂G)
261
+ where voln−1((∂O +G)∩(O +∂G)) = 0, so voln−1(∂Ω) = voln−m−1(∂O)volm(G)+voln−m(O)volm−1(∂G).
262
+ For (7), denote ρ = ∥T⁻¹∥(V,∥·∥ℓn2)→ℓm2 , so that T⁻¹(V ∩ Bn) ⊆ ρBm. Consequently,
+     ∀δ ∈ R,   TK + δ(V ∩ Bn) = T(K + δT⁻¹(V ∩ Bn)) ⊆ T(K + δρBm).
+ By combining this inclusion with (3), we see that
+     volm−1(∂TK) ⩽ lim_{δ→0+} [volm(T(K + δρBm)) − volm(TK)]/δ
+                 ⩽ det(T)·lim_{δ→0+} [volm(K + δρBm) − volm(K)]/δ
+                 = det(T)·volm−1(∂K)·ρ        (by (2))
+                 = (volm(TK)/volm(K))·volm−1(∂K)·ρ.
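+ (A quick sanity check of (6), ours and purely illustrative: take O and G to be boxes in V ⊥ and V respectively,
+ for which the boundary-volume ratios are explicit; the side lengths below are arbitrary.)
+
+ def box_vol(sides):
+     v = 1.0
+     for a in sides:
+         v *= a
+     return v
+
+ def box_surface(sides):
+     # (k-1)-dimensional boundary measure of a k-dimensional box with the given side lengths
+     return sum(2.0 * box_vol(sides[:i] + sides[i + 1:]) for i in range(len(sides)))
+
+ # O lives in V^perp (first n-m coordinates), G lives in V (last m coordinates); Omega = O + G is a box in R^n.
+ O_sides = [1.0, 2.0, 0.5]          # a box in R^{n-m}, with n-m = 3
+ G_sides = [3.0, 0.25]              # a box in R^m, with m = 2
+ Omega_sides = O_sides + G_sides
+ lhs = box_surface(Omega_sides) / box_vol(Omega_sides)
+ rhs = box_surface(O_sides) / box_vol(O_sides) + box_surface(G_sides) / box_vol(G_sides)
+ print(lhs, rhs)                    # the two numbers agree, as predicted by (6)
+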
287
+
288
+ Remark 8. We stated Lemma 7 with K being a convex body since that is all that we need herein. However,
289
+ the proof does not rely on its convexity in an essential way; all that is needed is that K is a body in Rm whose
290
+ boundary is sufficiently regular so that the identity (2) holds (with n replaced by m).
291
+ Any matrix B as in Lemma 6 must have a row with at least n/m nonzero entries. Indeed, otherwise the
292
+ total number of nonzero entries of B would be less than m(n/m) = n, so at least one of the n columns of B
+ would have to vanish, in contradiction to the assumed linear independence (as s ⩾ 1). Thus, there exists
+ j ∈ {1,...,m} such that at least ⌈n/m⌉ of the entries of B∗ej ∈ Rn do not vanish. Those entries are integers,
+ so ∥B∗ej∥ℓn2 ⩾ √⌈n/m⌉. Hence, the quantity ∥B∥ℓn2→ℓm2 = ∥B∗∥ℓm2→ℓn2 in Lemma 6 cannot be less than √⌈n/m⌉.
305
+ Question 9. Given m,n ∈ N and C > 1, what is the order of magnitude of the largest s = s(m,n,C) ∈ N for
306
+ which there exists B ∈ Mm×n(Z) such that any s of the columns of B are linearly independent and
307
+     ∥B∥ℓn2→ℓm2 ⩽ C·√(n/m).
312
+ The following lemma is a step towards Question 9 that we will use in the implementation of Lemma 6:
313
+
315
+ Lemma 10. Suppose that m,n ∈ N satisfy 4 ⩽ m ⩽ n and n ⩾ (m log m)/4. There exist s ∈ N with s ≳ m²/n
+ and B ∈ Mm×n(Z) of rank m such that any s of the columns of B are linearly independent and
+     ∥B∥ℓn2→ℓm2 ≲ √(n/m).
+ Lemma 10 suffices for our purposes, but it is not sharp. We will actually prove below that in the setting
+ of Lemma 10 for every 0 < ε ⩽ 1 there exist s ∈ N with s ≳ m^{1+ε}/n^ε = m(m/n)^ε ⩾ m²/n and B ∈ Mm×n(Z)
+ of rank m such that any s of the columns of B are linearly independent and ∥B∥ℓn2→ℓm2 ≲_ε √(n/m).
329
+ While Question 9 arises naturally from Lemma 6 and it is interesting in its own right, fully answering
330
+ Question 9 will not lead to removing the o(1) term in Theorem 1 altogether; the bottleneck in the ensuing
331
+ reasoning that precludes obtaining such an answer to Question 2 (if true) is elsewhere.
332
+ Proof of Theorem 5 assuming Lemma 6 and Lemma 10. We will proceed by induction on n. In preparation
+ for the base of the induction, we will first record the following estimate (which is sharp when the
+ lattice is Zn). The Voronoi cell of a rank n sublattice Λ of Zn, namely the set
+     K = { x ∈ Rn : ∀y ∈ Λ, ∥x∥ℓn2 ⩽ ∥x − y∥ℓn2 },
+ is a Λ-parallelotope that satisfies K ⊇ ½Bn. Indeed, if y ∈ Λ∖{0}, then ∥y∥ℓn2 ⩾ 1 since y ∈ Zn∖{0}. Hence,
+     ∀x ∈ ½Bn,   ∥x − y∥ℓn2 ⩾ ∥y∥ℓn2 − ∥x∥ℓn2 ⩾ ∥x∥ℓn2 .
352
+ By Lemma 3, it follows that voln−1(∂K )/voln(K ) ⩽ 2n. This gives the (weak) a priori bound Q(n) ⩽ 2n.
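+ (The following two-dimensional illustration is ours; it assumes scipy is available and is not needed for the
+ proof. For a full rank sublattice Λ = CZ² ⊆ Z² it computes the Voronoi cell of the origin and checks the two
+ facts just used: the packing radius is at least 1/2, and the perimeter/area ratio is at most 2n = 4.)
+
+ import numpy as np
+ from scipy.spatial import ConvexHull, Voronoi
+
+ C = np.array([[2, 1], [0, 1]])                       # an invertible integer matrix; Lambda = C Z^2 is a sublattice of Z^2
+ coeffs = range(-5, 6)
+ pts = np.array([i * C[:, 0] + j * C[:, 1] for i in coeffs for j in coeffs], dtype=float)
+ origin = int(np.where((pts == 0).all(axis=1))[0][0])
+
+ vor = Voronoi(pts)
+ region = vor.regions[vor.point_region[origin]]
+ assert -1 not in region                              # the cell of the origin is bounded (it is surrounded by lattice points)
+ cell = ConvexHull(vor.vertices[region])              # in 2D, ConvexHull.volume is the area and ConvexHull.area is the perimeter
+
+ nonzero = pts[np.any(pts != 0, axis=1)]
+ packing_radius = np.linalg.norm(nonzero, axis=1).min() / 2
+ print(packing_radius >= 0.5)                         # the Voronoi cell contains (1/2)B^2, as in the text
+ print(cell.area / cell.volume, 4)                    # perimeter/area is at most 2n = 4, consistent with Lemma 3
+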
353
+ Fix n ∈ N and suppose that there exists m ∈ N satisfying 4 ⩽ m ⩽ n and n ⩾ (m logm)/4. By using
354
+ Lemma 6 with the matrix B from Lemma 10 we see that there is a universal constant κ ⩾ 4 for which
355
+     Q(n) ⩽ κ·( n^{3/2}/m + Q(m)·√(n/m) ).        (8)
366
+ We will prove by induction on n ∈ N the following upper bound on Q(n), thus proving Theorem 5:
367
+     Q(n) ⩽ 4κ·√n·e^{√(2(log n)·log(2κ))}.        (9)
+ If n ⩽ 4κ², then by the above discussion Q(n) ⩽ 2n ⩽ 4κ√n, so that (9) holds. If n > 4κ², then define
+     m := ⌈ n·e^{−√(2(log n)·log(2κ))} ⌉.        (10)
+ It is straightforward to verify that this choice of m satisfies 4 ⩽ m < n and n ⩾ (m log m)/4 (with room to
+ spare). Therefore (8) holds. Using the induction hypothesis, it follows that
+     Q(m)·√(n/m) ⩽ 4κ·√n·e^{√(2(log m)·log(2κ))}
+                ⩽ 4κ·√n·e^{√(2(log n − √(2(log n)·log(2κ)))·log(2κ))}        (by (10))
+                ⩽ 4κ·√n·e^{(√(2 log n) − √(log(2κ)))·√(log(2κ))}
+                = 2·√n·e^{√(2(log n)·log(2κ))},        (11)
+ where the penultimate step of (11) uses the inequality √(a − b) ⩽ √a − b/(2√a), which holds for every
+ a,b ∈ R with a ⩾ b; in our setting a = log n and b = √(2(log n)·log(2κ)), and a > b because we are now
+ treating the case n > 4κ². A substitution of (11) into (8), while using that m ⩾ ½·n·exp(−√(2(log n)·log(2κ)))
+ holds thanks to (10), gives (9), thus completing the proof of Theorem 5.
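+ (To see numerically how the choice (10) balances the two terms of (8), here is a sketch of ours that iterates
+ the recursion bottom-up with the illustrative value κ = 4 and the a priori bound Q(n) ⩽ 2n; it is not part of
+ the proof.)
+
+ import math
+
+ KAPPA = 4.0                                    # illustrative value of the universal constant in (8)
+ N = 3000
+ Q = [0.0] * (N + 1)
+ for n in range(1, N + 1):
+     best = 2.0 * n                             # a priori bound from the Voronoi-cell argument above
+     for m in range(4, n):                      # admissible projection dimensions, as in the proof
+         if n >= m * math.log(m) / 4:
+             best = min(best, KAPPA * (n ** 1.5 / m + Q[m] * math.sqrt(n / m)))
+     Q[n] = best
+
+ for n in (10, 100, 1000, 3000):
+     target = math.sqrt(n) * math.exp(math.sqrt(2 * math.log(n) * math.log(2 * KAPPA)))
+     print(n, Q[n], target)                     # the recursive bound stays below a constant multiple of (9)
+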
428
+
429
+ We will next prove Lemma 6, which is the key recursive step that underlies Theorem 1.
430
+ Proof of Lemma 6. We will start with the following two elementary observations to facilitate the ensuing
431
+ proof. Denote the span of the rows of B by V = B∗Rm ⊆ Rn and notice that dim(V ) = m as B is assumed
432
+ to have rank m. Suppose that Λ is a lattice of rank n that is contained in Zn. Firstly, we claim that the rank
433
+ of the lattice V ⊥ ∩Λ equals n −m. Indeed, we can write V ⊥ ∩Λ = C(Zn ∩C−1V ⊥) where C is an invertible
434
+ matrix with integer entries, i.e., C ∈ Mn(Z) ∩GLn(Q), such that Λ = CZn. Furthermore, V ⊥ = Ker(B), so
435
+ the dimension over Q of Qn ∩V ⊥ equals n − m. As C−1 ∈ GLn(Q), it follows that C−1V ⊥ contains n − m
438
+ linearly independent elements of Zn. Secondly, we claim that the orthogonal projection ProjV Λ of Λ
439
+ onto V is a discrete subset of V , and hence is a lattice; its rank will then be dim(V ) = m because we
440
+ are assuming that span(Λ) = Rn, so span(ProjV Λ) = ProjV (span(Λ)) = ProjV (Rn) = V . We need to check
441
+ that for any {x1,x2,...} ⊆ Λ such that limi→∞ProjV xi = 0 there is i0 ∈ N such that ProjV xi = 0 whenever
442
+ i ∈ {i0,i0 +1,...}. Indeed, as V ⊥ = Ker(B) we have Bx = BProjV x for every x ∈ Rn, so limi→∞Bxi = 0. But,
443
+ Bxi ∈ Zm for every i ∈ N because B ∈ Mm×n(Z) and xi ∈ Λ ⊆ Zn. Consequently, there is i0 ∈ N such that
444
+ Bxi = 0 for every i ∈ {i0,i0 +1,...}, i.e., xi ∈ Ker(B) = V ⊥ and hence ProjV xi = 0.
445
+ Let K1 ⊆ V ⊥ be the Voronoi cell of V ⊥ ∩Λ, namely K1 = {x ∈ V ⊥ : ∀y ∈ V ⊥ ∩Λ, ∥x∥ℓn2 ⩽ ∥x − y∥ℓn2}. If
+ y = (y1,..., yn) ∈ V ⊥ = Ker(B), then y1Be1 +··· + ynBen = 0. By the assumption on B, this implies that if
+ also y ̸= 0, then |{i ∈ {1,...,n} : yi ̸= 0}| > s. Consequently, as the entries of elements of Λ are integers,
+     ∀y ∈ (V ⊥ ∩Λ)∖{0},   ∥y∥ℓn2 > √s.
+ Hence, if x ∈ (√s/2)·(V ⊥ ∩Bn), then
+     ∀y ∈ (V ⊥ ∩Λ)∖{0},   ∥x − y∥ℓn2 ⩾ ∥y∥ℓn2 − ∥x∥ℓn2 > √s − √s/2 = √s/2 ⩾ ∥x∥ℓn2 .
+ This means that K1 ⊇ (√s/2)·(V ⊥ ∩Bn), and therefore by Lemma 3 we have
+     voln−m−1(∂K1)/voln−m(K1) ⩽ (n − m)/(√s/2) = 2(n − m)/√s.        (12)
484
+ Next, fix i ∈ {1,...,m}. By the definition of V , the i’th row B∗ei of B belongs to V , so
485
+     ∀(x,i) ∈ Rn ×{1,...,m},   〈x, B∗ei〉 = 〈ProjV x, B∗ei〉.        (13)
+ Since all of the entries of B are integers, it follows that
+     ∀(x,i) ∈ Zn ×{1,...,m},   〈BProjV x, ei〉 = 〈ProjV x, B∗ei〉 = 〈x, B∗ei〉 ∈ Z,
+ where the second equality uses (13).
493
+ In other words, BProjV Zn ⊆ Zm, and hence the lattice BProjV Λ is a subset of Zm. Furthermore, B is
494
+ injective on V because Ker(B) = V ⊥, so BProjV Zn is a rank m sublattice of Zm. By the definition of Q(m),
495
+ it follows that there exists a BProjV Λ-parallelotope K2⁰ ⊆ Rm such that
+     volm−1(∂K2⁰)/volm(K2⁰) ⩽ Q(m).        (14)
+ Because V ⊥ = Ker(B) and the rank of B is m = dim(V ), the restriction B|V of B to V is an isomorphism
+ between V and Rm. Letting T : Rm → V denote the inverse of B|V , define K2 = TK2⁰. By combining (the
+ second part of) Lemma 7 with (14), we see that
+     volm−1(∂K2)/volm(K2) ⩽ Q(m)·∥B∥ℓn2→ℓm2 .        (15)
513
+ Let K = K1 +K2 ⊆ Rn. By combining (the first part of) Lemma 7 with (12) and (15), we have
514
+     voln−1(∂K)/voln(K) ⩽ 2(n − m)/√s + Q(m)·∥B∥ℓn2→ℓm2 .
521
+ Hence, the proof of Lemma 6 will be complete if we check that K is a Λ-parallelotope. Our construction
522
+ ensures by design that this is so, as K1 is a (V ⊥ ∩Λ)-parallelotope and K2 is a ProjV Λ-parallelotope; veri-
523
+ fying this fact is merely an unravelling of the definitions, which we will next perform for completeness.
524
+ Fix z ∈ Rn. As Rm = BProjV Λ + K2⁰, there is x ∈ Λ with BProjV z ∈ BProjV x + K2⁰. Apply T to this inclusion
527
+ and use that TB|V is the identity mapping to get ProjV z ∈ ProjV x +K2. Next, V ⊥ = K1 +V ⊥ ∩Λ since K1
528
+ is the Voronoi cell of V ⊥ ∩Λ, so there is y ∈ V ⊥ ∩Λ such that ProjV ⊥z −ProjV ⊥x ∈ y +K1. Consequently,
529
+ z = ProjV ⊥z +ProjV z ∈ ProjV ⊥x + y +K1 +ProjV x +K2 = x + y +K ∈ Λ+K . Hence, Λ+K = Rn.
530
+
532
+ It remains to check that for every w ∈ Λ∖{0} the interior of K does not intersect w +K . Indeed, by the
533
+ definition of K , if k belongs to the interior of K , then k = k1+k2, where k1 belongs to the interior of K1 and
534
+ k2 belongs to the interior of K2. Since B is injective on K2 ⊆ V , it follows that Bk2 belongs to the interior
535
+ of BK2 = K2⁰. If ProjV w ̸= 0, then BProjV w ∈ BProjV Λ∖{0}, so because K2⁰ is a BProjV Λ-parallelotope,
+ Bk2 ∉ BProjV w + K2⁰. By applying T to this inclusion, we see that k2 ∉ ProjV w + K2, which implies that
540
+ k ∉ w +K . On the other hand, if ProjV w = 0, then w ∈ (V ⊥ ∩Λ)∖{0}. Since K1 is a V ⊥ ∩Λ-parallelotope,
541
+ it follows that k1 ∉ w +K1, so k ∉ w +K .
542
+
543
+ To complete the proof of Theorem 5, it remains to prove Lemma 10. For ease of later reference, we first
544
+ record the following straightforward linear-algebraic fact:
545
+ Observation 11. Fix m,n,s ∈ N with s ⩽ m ⩽ n. Suppose that there exists A ∈ Mm×n(Z) such that any s
546
+ of the columns of A are linearly independent. Then, there also exists B ∈ Mm×n(Z) such that any s of the
547
+ columns of B are linearly independent, B has rank m, and
548
+     ∥B∥ℓn2→ℓm2 ⩽ √( 1 + ∥A∥²ℓn2→ℓm2 ) .        (16)
557
+ Proof. Let r ∈ {1,...,m} be the rank of A. By permuting the rows of A, we may assume that its first r rows,
558
+ namely A∗e1,...,A∗er ∈ Rn are linearly independent. Also, since we can complete A∗e1,...,A∗er to a
559
+ basis of Rn by adding n−r vectors from {e1,...,en} ⊆ Rn, by permuting the columns of A, we may assume
560
+ that the vectors A∗e1,...,A∗er,er+1,...,em ∈ Rn are linearly independent. Let B ∈ Mm×n(Z) be the matrix
561
+ whose rows are A∗e1,...,A∗er,er+1,...,em, so that B has rank m by design. Also,
562
+     ∀x ∈ Rn,   ∥Bx∥²ℓm2 = Σ_{i=1}^{r} (Ax)i² + Σ_{j=r+1}^{m} xj² ⩽ ( ∥A∥²ℓn2→ℓm2 + 1 )·∥x∥²ℓn2 .
+ Therefore (16) holds. It remains to check that any s of the columns of B are linearly independent. Indeed,
+ fix S ⊆ {1,...,n} with |S| = s and {αj}j∈S ⊆ R such that Σ_{j∈S} αjBij = 0 for every i ∈ {1,...,m}. In particular,
+ Σ_{j∈S} αjAij = 0 for every i ∈ {1,...,r}. If k ∈ {r+1,...,m}, then since the k'th row of A is in the span of the
+ first r rows of A, there exist βk1,...,βkr ∈ R such that Akj = Σ_{i=1}^{r} βkiAij for every j ∈ {1,...,n}. Conse-
+ quently, Σ_{j∈S} αjAkj = Σ_{i=1}^{r} βki·Σ_{j∈S} αjAij = 0. This shows that Σ_{j∈S} αjAij = 0 for every i ∈ {1,...,m}. By
+ the assumed property of A, this implies that αj = 0 for every j ∈ S.
598
+
599
+ The following lemma is the main existential statement that underlies our justification of Lemma 10:
600
+ Lemma 12. There exists a universal constant c > 0 with the following property. Let d,m,n ⩾ 3 be integers
601
+ that satisfy d ⩽ m ⩽ n and n ⩾ (m logm)/d. Suppose also that s ∈ N satisfies
602
+     s ⩽ (c/d)·( m^d/n² )^{1/(d−2)}.        (17)
612
+ Then, there exists an m-by-n matrix A ∈ Mm×n({0,1}) with the following properties:
613
+ • Any s of the columns of A are linearly independent over the field Z/(2Z);
614
+ • Every column of A has at most d nonzero entries;
615
+ • Every row of A has at most 5dn/m nonzero entries.
616
+ The ensuing proof of Lemma 12 consists of probabilistic reasoning that is common in the literature
617
+ on Low Density Parity Check (LDPC) codes; it essentially follows the seminal work [18]. While similar
618
+ considerations appeared in many places, we could not locate a reference that states Lemma 12.3 A pecu-
619
+ liarity of the present work is that, for the reason that we have seen in the above deduction of Theorem 5
620
+ from Lemma 6 and Lemma 10, we need to choose a nonstandard dependence of m on n; recall (10).
621
+ 3The standard range of parameters that is discussed in the LDPC literature is, using the notation of Lemma 12, either when
622
+ m ≍ n, or when s,d are fixed and the pertinent question becomes how large n can be as m → ∞; sharp bounds in the former
623
+ case are due to [18] and sharp bounds in the latter case are due to [29, 32]. Investigations of these issues when the parameters
624
+ have intermediate asymptotic behaviors appear in [15, 14, 2, 9, 21, 23].
625
+
627
+ In the course of the proof of Lemma 12 we will use the following probabilistic estimate:
628
+ Lemma 13. Let {W(t) = (W(t,1),...,W(t,m))}_{t=0}^∞ be the standard random walk on the discrete hypercube
+ {0,1}m, starting at the origin. Thus, W(0) = 0 and for each t ∈ N the random vector W(t) is obtained from
+ the random vector W(t−1) by choosing an index i ∈ {1,...,m} uniformly at random and setting
+     W(t) = ( W(t−1,1),...,W(t−1,i−1), 1−W(t−1,i), W(t−1,i+1),...,W(t−1,m) ).
+ Then, Prob[W(t) = 0] ⩽ 2(t/m)^{t/2} for every t ∈ N.
638
+ Proof. If t is odd, then Prob[W (t) = 0] = 0, so suppose from now that t is even. Let P ∈ M{0,1}m×{0,1}m(R)
639
+ denote the transition matrix of the random walk W , i.e.,
640
+     ∀f : {0,1}m → R, ∀x ∈ {0,1}m,   Pf(x) = (1/m)·Σ_{i=1}^{m} f(x + ei mod 2).
+ Then, Prob[W(t) = 0] = (P^t)_{00}. By symmetry, all of the 2^m diagonal entries of P^t are equal to each other,
+ so (P^t)_{00} = Trace(P^t)/2^m. For every S ⊆ {1,...,m}, the Walsh function (x ∈ {0,1}m) ↦ (−1)^{Σ_{i∈S} xi} is an
+ eigenvector of P whose eigenvalue equals 1 − 2|S|/m. Consequently,
+     Prob[W(t) = 0] = (1/2^m)·Trace(P^t) = (1/2^m)·Σ_{k=0}^{m} (m choose k)·(1 − 2k/m)^t.        (18)
667
+ Suppose that β1,...,βm are independent {0,1}-valued unbiased Bernoulli random variables, namely,
668
+ Prob[βi = 0] = Prob[βi = 1] = 1/2 for any i ∈ {1,...,m}. By Hoeffding’s inequality (e.g., [37, Theorem 2.2.6]),
669
+     ∀u ⩾ 0,   Prob[ |Σ_{i=1}^{m} (βi − ½)| ⩾ u ] ⩽ 2e^{−2u²/m}.        (19)
+ Observing that the right hand side of (18) is equal to the expectation of (1 − (2/m)·Σ_{i=1}^{m} βi)^t, we see that
+     Prob[W(t) = 0] = (−2/m)^t·E[ ( Σ_{i=1}^{m} (βi − ½) )^t ]        (by (18))
+                    = (2/m)^t·∫_0^∞ t·u^{t−1}·Prob[ |Σ_{i=1}^{m} (βi − ½)| ⩾ u ] du
+                    ⩽ 2t·(2/m)^t·∫_0^∞ u^{t−1}·e^{−2u²/m} du        (by (19))
+                    = 2·(2/m)^{t/2}·(t/2)! ⩽ 2·(2/m)^{t/2}·(t/2)^{t/2} = 2·(t/m)^{t/2}.
749
+
750
+ With Lemma 13 at hand, we can now prove Lemma 12.
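+ (Before turning to that proof, Lemma 13 is easy to check numerically; the following sketch is ours and
+ compares the exact return probability given by (18) with the bound 2(t/m)^{t/2}.)
+
+ import math
+
+ def return_probability(m: int, t: int) -> float:
+     # Exact value of Prob[W(t) = 0] via the eigenvalue expansion (18).
+     return sum(math.comb(m, k) * (1 - 2 * k / m) ** t for k in range(m + 1)) / 2 ** m
+
+ for m in (10, 50, 200):
+     for t in (2, 4, 8, 16):
+         exact = return_probability(m, t)
+         bound = 2 * (t / m) ** (t / 2)
+         assert exact <= bound + 1e-12          # the inequality of Lemma 13
+         print(m, t, exact, bound)
+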
751
+ Proof of Lemma 12. Consider the random matrix A ∈ Mm×n({0,1}) whose columns are independent iden-
752
+ tically distributed copies W1(d),...,Wn(d) of W (d), where W (0) = 0,W (1),W (2),... is the standard ran-
753
+ dom walk on {0,1}m as in Lemma 13. By design, this means that each column of A has at most d nonzero
754
+ entries. Fixing (i, j) ∈ {1,...,m}×{1,...,n}, if Wj(d,i) = 1, then in at least one of the d steps of the random
755
+ walk that generated Wj(d) the ith coordinate was changed. The probability of the latter event equals
756
+ 1−(1−1/m)d. Hence, Prob[Wj(d,i) = 1] ⩽ 1−(1−1/m)d ⩽ d/m and therefore for every fixed S ⊆ {1,...,n},
757
+ the probability that Wj(d,i) = 1 for every j ∈ S is at most (d/m)|S|. Consequently, the probability that all
758
+ of the rows of A have at most ℓ = ⌈4dn/m⌉ nonzero entries is at least
759
+     1 − m·(n choose ℓ)·(d/m)^ℓ ⩾ 1 − m·(en/ℓ)^ℓ·(d/m)^ℓ = 1 − m·(edn/(mℓ))^ℓ ⩾ 1 − m·(e/4)^{4 log m} ⩾ 1/3,
782
+ where the first step is an application of Stirling’s formula, the penultimate step uses ℓ ⩾ 4dn/m and the
783
+ assumption n ⩾ (m logm)/d, and the final step holds because m ⩾ 3.
784
+ It therefore suffices to prove that with probability greater than 2/3 the vectors {Wi (d)}i∈S ⊆ {0,1}m are
785
+ linearly independent over Z/(2Z) for every ∅ ̸= S ⊆ {1,...,n} with |S| ⩽ s, where s ∈ N satisfies (17) and the
786
+ universal constant c > 0 that appears in (17) will be specified later; see (23). So, it suffices to prove that
787
+ with probability greater than 2/3 we have Σ_{i∈S} Wi(d) ̸≡ 0 mod 2 for every ∅ ̸= S ⊆ {1,...,n} with |S| ⩽ s.
789
+
791
+ Hence, letting D denote the number of ∅ ̸= S ⊆ {1,...,n} with |S| ⩽ s that satisfy Σ_{i∈S} Wi(d) ≡ 0 mod 2, it
793
+ suffices to prove that 2/3 < Prob[D = 0] = 1−Prob[D ⩾ 1]. Using Markov’s inequality, it follows that the
794
+ proof of Lemma 12 will be complete if we demonstrate that E[D] < 1/3.
795
+ The expectation of D can be computed exactly. Indeed,
796
+     E[D] = E[ Σ_{S⊆{1,...,n}: 1⩽|S|⩽s} 1_{Σ_{i∈S} Wi(d) ≡ 0 mod 2} ] = Σ_{r=1}^{s} (n choose r)·Prob[W(dr) = 0],        (20)
814
+ where we used the fact that Σ_{i∈S} Wi(d) mod 2 ∈ {0,1}m has the same distribution as W(d|S|) for every
816
+ ∅ ̸= S ⊆ {1,...,n}. By substituting the conclusion of Lemma 13 into (20) we see that
817
+     E[D] ⩽ 2·Σ_{r=1}^{s} (n choose r)·(dr/m)^{dr/2} ⩽ 2·Σ_{r=1}^{s} ( e·d^{d/2}·r^{d/2−1}·n/m^{d/2} )^r,        (21)
841
+ where in the last step we bounded the binomial coefficient using Stirling’s formula. For every r ∈ {1,...,s},
842
+     e·d^{d/2}·r^{d/2−1}·n/m^{d/2} ⩽ e·d^{d/2}·s^{d/2−1}·n/m^{d/2} ⩽ e·d·c^{d/2−1} < 1/7        (the second inequality uses (17)),        (22)
864
+ provided that
865
+     c < inf_{d⩾3} ( 1/(7ed) )^{2/(d−2)} ∈ (0,1).        (23)
874
+ Therefore, when (23) holds we may substitute (22) into (21) to get that E[D] < 2·Σ_{r=1}^{∞} 1/7^r = 1/3.
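+ (The construction in the proof above is easy to simulate. The following sketch is ours and uses numpy with
+ small illustrative parameters m, n, d, s; independence over Z/(2Z) is spot-checked by Gaussian elimination
+ on all s-subsets of the first twelve columns, which is feasible only at this toy scale.)
+
+ import itertools
+ import numpy as np
+
+ def gf2_rank(cols: np.ndarray) -> int:
+     # Gaussian elimination over Z/(2Z); cols has shape (m, k).
+     a = cols.copy() % 2
+     rank = 0
+     for j in range(a.shape[1]):
+         pivots = np.nonzero(a[rank:, j])[0]
+         if pivots.size == 0:
+             continue
+         p = rank + pivots[0]
+         a[[rank, p]] = a[[p, rank]]
+         a[(a[:, j] == 1) & (np.arange(a.shape[0]) != rank)] ^= a[rank]
+         rank += 1
+     return rank
+
+ rng = np.random.default_rng(0)
+ m, n, d, s = 40, 120, 4, 3
+ A = np.zeros((m, n), dtype=int)
+ for j in range(n):                         # each column is the endpoint of a d-step walk on {0,1}^m started at 0
+     for _ in range(d):
+         A[rng.integers(m), j] ^= 1
+
+ print(A.sum(axis=0).max() <= d)            # every column has at most d nonzero entries
+ print(A.sum(axis=1).max(), 5 * d * n / m)  # row weights, compared with the 5dn/m guarantee
+ for S in itertools.combinations(range(12), s):
+     if gf2_rank(A[:, list(S)]) < s:
+         print("found a dependent subset of columns:", S)
+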
879
+
880
+ We can now prove Lemma 10, thus concluding the proof of Theorem 5.
881
+ Proof of Lemma 10. We will prove the following stronger statement (Lemma 10 is its special case ε = 1).
882
+ If 0 < ε ⩽ 2 and m,n ∈ N satisfy 2 + ⌊2/ε⌋ ⩽ m ⩽ n and n ⩾ (m logm)/(2 + ⌊2/ε⌋), then there exist s ∈ N
883
+ with s ≳ ε·m^{1+ε}/n^ε, and B ∈ Mm×n(Z) such that any s of the columns of B are linearly independent, the
884
+ rows of B are linearly independent, and
885
+     ∥B∥ℓn2→ℓm2 ≲ (1/ε)·√(n/m).
891
+ Indeed, apply Lemma 12 with d = 2 + ⌊2/ε⌋ ⩾ 3 (equivalently, d ⩾ 3 is the largest integer such that
892
+ 2/(d −2) ⩾ ε) to deduce that there exist an integer s with
893
+     s ≍ (1/d)·( m^d/n² )^{1/(d−2)} = (m/d)·(m/n)^{2/(d−2)} ≍ ε·m·(m/n)^ε = ε·m^{1+ε}/n^ε,
914
+ and a matrix A ∈ Mm×n({0,1}) ⊆ Mm×n(Z) such that any s of the columns of A are linearly independent
915
+ over Z/(2Z), every column of A has at most d nonzero entries, and every row of A has at most 5dn/m
916
+ nonzero entries. If a set of vectors v1,...,vs ∈ {0,1}m is linearly independent over Z/(2Z), then it is also
917
+ linearly independent over R (e.g., letting V ∈ Mm×s({0,1}) denote the matrix whose columns are v1,...,vs,
918
+ the latter requirement is equivalent to the determinant of V∗V ∈ Ms({0,1}) being an odd integer, so in
919
+ particular it does not vanish). Hence, any s of the columns of A are linearly independent over R. Also,
920
+     ∥A∥ℓn2→ℓm2 ⩽ ( max_{i∈{1,...,m}} Σ_{j=1}^{n} |Aij| )^{1/2} · ( max_{j∈{1,...,n}} Σ_{i=1}^{m} |Aij| )^{1/2} ⩽ √(5dn/m)·√d ≍ (1/ε)·√(n/m),
947
+ where the first step is a standard bound which holds for any m-by-n real matrix (e.g. [20, Corollary 2.3.2]).
948
+ Thus, A has all of the properties that we require from the matrix B in Lemma 10, except that we do not
949
+ know that A has rank m, but Observation 11 remedies this (minor) issue.
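+ (The bound used in the first step of the last display, namely that the operator norm is at most the geometric
+ mean of the maximal row and column ℓ1 norms, is easy to test numerically; the following sketch is ours,
+ with a random sparse 0/1 matrix standing in for A.)
+
+ import numpy as np
+
+ rng = np.random.default_rng(1)
+ for _ in range(5):
+     m, n = rng.integers(5, 40, size=2)
+     A = (rng.random((m, n)) < 0.2).astype(float)     # a random sparse 0/1 matrix, as produced by Lemma 12
+     op = np.linalg.norm(A, 2)                        # operator norm from l2^n to l2^m (largest singular value)
+     bound = np.sqrt(A.sum(axis=1).max() * A.sum(axis=0).max())
+     print(op <= bound + 1e-9, op, bound)
+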
950
+
951
+ We end by asking the following question:
952
+
954
+ Question 14. Fix n ∈ N. Does there exist an integer parallelotope K ⊆ Rn such that the (n−1)-dimensional
955
+ area of the orthogonal projection Projθ⊥K of K along any direction θ ∈ Sn−1 is at most n^{o(1)}?
956
+ An application of Cauchy’s surface area formula (see [27, Section 5.5]), as noted in, e.g., [31, Sec-
957
+ tion 1.6], shows that a positive answer to Question 14 would imply Theorem 1. Correspondingly, a posi-
958
+ tive answer to Question 14 with n^{o(1)} replaced by O(1) would imply a positive answer to Question 2.
959
+ Apart from the intrinsic geometric interest of Question 14, if it had a positive answer, then we would
960
+ deduce using [31] that there exists an integer parallelotope K ⊆ Rn such that the normed space X whose
961
+ unit ball is K has certain desirable nonlinear properties, namely, we would obtain an improved random-
962
+ ized clustering of X and an improved extension theorem for Lipschitz functions on subsets of X; we refer
963
+ to [31] for the relevant formulations since including them here would result in a substantial digression.
964
+ REFERENCES
965
+ [1] A. D. Alexandrov, Convex polyhedra, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2005, Translated from
966
+ the 1950 Russian edition by N. S. Dairbekov, S. S. Kutateladze and A. B. Sossinsky, With comments and bibliography by V.
967
+ A. Zalgaller and appendices by L. A. Shor and Yu. A. Volkov.
968
+ [2] Noga Alon and Uriel Feige, On the power of two, three and four probes, Proceedings of the Twentieth Annual ACM-SIAM
969
+ Symposium on Discrete Algorithms, SIAM, Philadelphia, PA, 2009, pp. 346–354.
970
+ [3] Noga Alon and Bo’az Klartag, Economical toric spines via Cheeger’s inequality, J. Topol. Anal. 1 (2009), 101–111.
971
+ [4] Tomaso Aste and Denis Weaire, The pursuit of perfect packing, second ed., Taylor & Francis, New York, 2008.
972
+ [5] Károly Bezdek, Sphere packings revisited, European J. Combin. 27 (2006), 864–883.
973
+ [6] Mark Braverman and Dor Minzer, Optimal tiling of the Euclidean space using permutation-symmetric bodies, 36th Com-
974
+ putational Complexity Conference, LIPIcs. Leibniz Int. Proc. Inform., vol. 200, Schloss Dagstuhl. Leibniz-Zent. Inform.,
975
+ Wadern, 2021, pp. Art. No. 5, 48.
976
+ [7] Jaigyoung Choe, On the existence and regularity of fundamental domains with least boundary area, J. Differential Geom.
977
+ 29 (1989), 623–663.
978
+ [8] B. Delaunay, Sur la partition régulière de l’espace à 4 dimensions. I, II., Bull. Acad. Sci. URSS 2 (1929), 79–110 (French).
979
+ [9] D. Dellamonica, Jr., P. Haxell, T. Łuczak, D. Mubayi, B. Nagle, Y. Person, V. Rödl, M. Schacht, and J. Verstraëte, On even-degree
980
+ subgraphs of linear hypergraphs, Combin. Probab. Comput. 21 (2012), 113–127.
981
+ [10] N. P. Dolbilin, Properties of faces of parallelohedra, Tr. Mat. Inst. Steklova 266 (2009), 112–126.
982
+ [11] N. P. Dolbilin, Parallelohedra: a retrospective and new results, Trans. Moscow Math. Soc. (2012), 207–220.
983
+ [12] Peter Engel, Geometric crystallography, Handbook of convex geometry, Vol. A, B, North-Holland, Amsterdam, 1993,
984
+ pp. 989–1041.
985
+ [13] E. S. Fedorov, Naˇcala uˇceniya o figurah, Izdat. Akad. Nauk SSSR, Moscow, 1953.
986
+ [14] Uriel Feige, Small linear dependencies for binary vectors of low weight, Building bridges, Bolyai Soc. Math. Stud., vol. 19,
987
+ Springer, Berlin, 2008, pp. 283–307.
988
+ [15] Uriel Feige, Jeong Han Kim, and Eran Ofek, Witnesses for non-satisfiability of dense random 3cnf formulas, 47th Annual
989
+ IEEE Symposium on Foundations of Computer Science (FOCS 2006), 21-24 October 2006, Berkeley, California, USA, Pro-
990
+ ceedings, IEEE Computer Society, 2006, pp. 497–508.
991
+ [16] Uriel Feige, Guy Kindler, and Ryan O’Donnell, Understanding parallel repetition requires understanding foams, 22nd An-
992
+ nual IEEE Conference on Computational Complexity (CCC 2007), 13-16 June 2007, San Diego, California, USA, IEEE Com-
993
+ puter Society, 2007, pp. 179–192.
994
+ [17] László Fejes Tóth, Über das kürzeste Kurvennetz, das eine Kugeloberfläche in flächengleiche konvexe Teile zerlegt, Math.
995
+ Naturwiss. Anz. Ungar. Akad. Wiss. 62 (1943), 349–354.
996
+ [18] R. G. Gallager, Low-density parity-check codes, IRE Trans. IT-8 (1962), 21–28.
997
+ [19] Apostolos Giannopoulos, Alexander Koldobsky, and Petros Valettas, Inequalities for the surface area of projections of convex
998
+ bodies, Canad. J. Math. 70 (2018), 804–823.
999
+ [20] Gene H. Golub and Charles F. Van Loan, Matrix computations, fourth ed., Johns Hopkins Studies in the Mathematical
1000
+ Sciences, Johns Hopkins University Press, Baltimore, MD, 2013.
1001
+ [21] Venkatesan Guruswami, Pravesh K. Kothari, and Peter Manohar, Algorithms and certificates for Boolean CSP refutation:
1002
+ smoothed is no harder than random, STOC ’22—Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of
1003
+ Computing, ACM, New York, [2022] ©2022, pp. 678–689.
1004
+ [22] T. C. Hales, The honeycomb conjecture, Discrete Comput. Geom. 25 (2001), 1–22.
1005
+ [23] Jun-Ting Hsieh, Pravesh K. Kothari, and Sidhanth Mohanty, A simple and sharper proof of the hypergraph Moore bound,
1006
+ Preprint available at https://arxiv.org/abs/2207.10850, 2022.
1007
+ [24] Lord Kelvin, On homogeneous division of space., Lond. R. S. Proc. 55 (1894), 1–16 (English).
1008
+
1010
+ [25] Guy Kindler, Ryan O’Donnell, Anup Rao, and Avi Wigderson, Spherical cubes and rounding in high dimensions, 49th An-
1011
+ nual IEEE Symposium on Foundations of Computer Science, FOCS 2008, October 25-28, 2008, Philadelphia, PA, USA, IEEE
1012
+ Computer Society, 2008, pp. 189–198.
1013
+ [26] Guy Kindler, Anup Rao, Ryan O’Donnell, and Avi Wigderson, Spherical cubes: optimal foams from computational hardness
1014
+ amplification, Commun. ACM 55 (2012), 90–97.
1015
+ [27] Daniel A. Klain and Gian-Carlo Rota, Introduction to geometric probability, Lezioni Lincee. [Lincei Lectures], Cambridge
1016
+ University Press, Cambridge, 1997.
1017
+ [28] Zsolt Lángi, An isoperimetric problem for three-dimensional parallelohedra, Pacific J. Math. 316 (2022), 169–181.
1018
+ [29] Hanno Lefmann, Pavel Pudlák, and Petr Savický, On sparse parity check matrices, Des. Codes Cryptogr. 12 (1997), 107–130.
1019
+ [30] H. Minkowski, Allgemeine Lehrsätze über die convexen Polyeder., Nachr. Ges. Wiss. Göttingen, Math.-Phys. Kl. 1897 (1897),
1020
+ 198–219 (German).
1021
+ [31] Assaf Naor, Extension, separation and isomorphic reverse isoperimetry, Preprint available at
+ https://arxiv.org/abs/2112.11523, 2021.
1033
+ [32] Assaf Naor and Jacques Verstraëte, Parity check matrices and product representations of squares, Combinatorica 28 (2008),
1034
+ 163–185.
1035
+ [33] Ran Raz, A counterexample to strong parallel repetition, SIAM J. Comput. 40 (2011), 771–777.
1036
+ [34] C. A. Rogers, A note on coverings and packings, J. London Math. Soc. 25 (1950), 327–331.
1037
+ [35] M. I. Shtogrin, Regular Dirichlet-Vorono˘ı partitions for the second triclinic group, Izdat. “Nauka”, Moscow, 1973, Trudy Mat.
1038
+ Inst. Steklov. 123 (1973).
1039
+ [36] B. A. Venkov, On a class of Euclidean polyhedra, Vestnik Leningrad. Univ. Ser. Mat. Fiz. Him. 9 (1954), 11–31.
1040
+ [37] Roman Vershynin, High-dimensional probability, Cambridge Series in Statistical and Probabilistic Mathematics, vol. 47,
1041
+ Cambridge University Press, Cambridge, 2018, An introduction with applications in data science, With a foreword by Sara
1042
+ van de Geer.
1043
+ MATHEMATICS DEPARTMENT, PRINCETON UNIVERSITY, FINE HALL, WASHINGTON ROAD, PRINCETON, NJ 08544-1000, USA
1044
+ Email address: [email protected]
1045
+ DEPARTMENT OF COMPUTER SCIENCE, COURANT INSTITUTE OF MATHEMATICAL SCIENCES, NEW YORK UNIVERSITY, 251
1046
+ MERCER STREET, NEW YORK, NY 10012, USA
1047
+ Email address: [email protected]
1048
+
9dE1T4oBgHgl3EQfCQJ2/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
A9E0T4oBgHgl3EQfxwJo/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:35cb629960eaeb8aa7633f20859c8cda2aa55a11b11b5a5aec77f6e9b8ec8b7b
3
+ size 6225965
AdAzT4oBgHgl3EQf_v_f/content/2301.01954v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3a8418a235c91c211771f07201727a0b8e85bd7a1b0fe805d3f59c53b2423b71
3
+ size 1497618
AdAzT4oBgHgl3EQf_v_f/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:eb7be8914d09a68624beecea1fd5c42520d74d3916134553b8efb5be41fe3f3c
3
+ size 13959213
AdAzT4oBgHgl3EQf_v_f/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9d41cb98e16c2e6677d6855db2453d2264ff753b4b4872d798c0b43033f37db6
3
+ size 461695
AdFIT4oBgHgl3EQf-yxb/content/2301.11412v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3d5ec88a40d3f49d358f10ed50a54955448cc9f53df69ea48a76bf3017b0b39b
3
+ size 2756605
AdFIT4oBgHgl3EQf-yxb/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:86e0e6ba9cea2032eabc5a235369252c86655d1243c1a838f5d0075f15458bc8
3
+ size 14549037