jackkuo committed on
Commit dd03284 · verified · 1 Parent(s): 7ac247d

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes.

Files changed (50)
  1. -NAzT4oBgHgl3EQfSvt8/vector_store/index.pkl +3 -0
  2. -NFPT4oBgHgl3EQfZTSq/content/tmp_files/2301.13077v1.pdf.txt +1113 -0
  3. -NFPT4oBgHgl3EQfZTSq/content/tmp_files/load_file.txt +0 -0
  4. .gitattributes +1 -0
  5. 29AzT4oBgHgl3EQffPxu/content/tmp_files/2301.01449v1.pdf.txt +916 -0
  6. 29AzT4oBgHgl3EQffPxu/content/tmp_files/load_file.txt +0 -0
  7. 59AzT4oBgHgl3EQfEfpg/content/tmp_files/2301.00994v1.pdf.txt +784 -0
  8. 59AzT4oBgHgl3EQfEfpg/content/tmp_files/load_file.txt +0 -0
  9. 5NFAT4oBgHgl3EQfmh0R/content/tmp_files/2301.08623v1.pdf.txt +0 -0
  10. 5NFAT4oBgHgl3EQfmh0R/content/tmp_files/load_file.txt +0 -0
  11. AdE1T4oBgHgl3EQfDQNp/content/tmp_files/2301.02874v1.pdf.txt +1770 -0
  12. AdE1T4oBgHgl3EQfDQNp/content/tmp_files/load_file.txt +0 -0
  13. BNAyT4oBgHgl3EQfd_jC/content/tmp_files/2301.00314v1.pdf.txt +2042 -0
  14. BNAyT4oBgHgl3EQfd_jC/content/tmp_files/load_file.txt +0 -0
  15. C9E4T4oBgHgl3EQf5w7z/content/tmp_files/2301.05327v1.pdf.txt +637 -0
  16. C9E4T4oBgHgl3EQf5w7z/content/tmp_files/load_file.txt +0 -0
  17. E9AyT4oBgHgl3EQfrPkW/content/tmp_files/2301.00555v1.pdf.txt +1504 -0
  18. E9AyT4oBgHgl3EQfrPkW/content/tmp_files/load_file.txt +0 -0
  19. FtE1T4oBgHgl3EQfqwWe/content/tmp_files/2301.03347v1.pdf.txt +1020 -0
  20. FtE1T4oBgHgl3EQfqwWe/content/tmp_files/load_file.txt +401 -0
  21. GdE2T4oBgHgl3EQf-gnI/content/tmp_files/2301.04240v1.pdf.txt +0 -0
  22. GdE2T4oBgHgl3EQf-gnI/content/tmp_files/load_file.txt +0 -0
  23. HtAzT4oBgHgl3EQfHvtN/content/tmp_files/2301.01049v1.pdf.txt +1293 -0
  24. HtAzT4oBgHgl3EQfHvtN/content/tmp_files/load_file.txt +419 -0
  25. I9E0T4oBgHgl3EQfSAAQ/content/tmp_files/2301.02214v1.pdf.txt +876 -0
  26. I9E0T4oBgHgl3EQfSAAQ/content/tmp_files/load_file.txt +503 -0
  27. INE0T4oBgHgl3EQfRwA9/content/tmp_files/2301.02211v1.pdf.txt +1107 -0
  28. INE0T4oBgHgl3EQfRwA9/content/tmp_files/load_file.txt +0 -0
  29. KtFIT4oBgHgl3EQfaisW/content/tmp_files/2301.11257v1.pdf.txt +1740 -0
  30. KtFIT4oBgHgl3EQfaisW/content/tmp_files/load_file.txt +0 -0
  31. L9E0T4oBgHgl3EQf0AI4/content/tmp_files/2301.02679v1.pdf.txt +1084 -0
  32. L9E0T4oBgHgl3EQf0AI4/content/tmp_files/load_file.txt +0 -0
  33. MtAzT4oBgHgl3EQfV_zH/content/tmp_files/2301.01295v1.pdf.txt +2891 -0
  34. MtAzT4oBgHgl3EQfV_zH/content/tmp_files/load_file.txt +0 -0
  35. TNAzT4oBgHgl3EQfXfzv/content/tmp_files/2301.01321v1.pdf.txt +1732 -0
  36. UNFAT4oBgHgl3EQf2x7k/content/tmp_files/2301.08717v1.pdf.txt +2145 -0
  37. UNFAT4oBgHgl3EQf2x7k/content/tmp_files/load_file.txt +0 -0
  38. VdAzT4oBgHgl3EQfJ_tt/content/tmp_files/2301.01089v1.pdf.txt +1277 -0
  39. VdAzT4oBgHgl3EQfJ_tt/content/tmp_files/load_file.txt +0 -0
  40. Y9E5T4oBgHgl3EQfDA5Y/content/tmp_files/2301.05401v1.pdf.txt +4445 -0
  41. Y9E5T4oBgHgl3EQfDA5Y/content/tmp_files/load_file.txt +0 -0
  42. YNA0T4oBgHgl3EQfFf_E/content/tmp_files/2301.02034v1.pdf.txt +1557 -0
  43. YNA0T4oBgHgl3EQfFf_E/content/tmp_files/load_file.txt +0 -0
  44. Z9E0T4oBgHgl3EQfnQEE/content/tmp_files/2301.02508v1.pdf.txt +1888 -0
  45. Z9E0T4oBgHgl3EQfnQEE/content/tmp_files/load_file.txt +0 -0
  46. adE1T4oBgHgl3EQfKQPQ/content/tmp_files/2301.02963v1.pdf.txt +598 -0
  47. adE1T4oBgHgl3EQfKQPQ/content/tmp_files/load_file.txt +0 -0
  48. cNE_T4oBgHgl3EQf0Bwx/content/tmp_files/2301.08326v1.pdf.txt +545 -0
  49. cNE_T4oBgHgl3EQf0Bwx/content/tmp_files/load_file.txt +445 -0
  50. dNAzT4oBgHgl3EQfnv3m/content/tmp_files/2301.01587v1.pdf.txt +720 -0
-NAzT4oBgHgl3EQfSvt8/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8537c828041cfaa690de7b0dc6a892c8c05aa9a9bfe07d80cd6e4edb4ab20540
+ size 159975
-NFPT4oBgHgl3EQfZTSq/content/tmp_files/2301.13077v1.pdf.txt ADDED
@@ -0,0 +1,1113 @@
A sluggish random walk with subdiffusive spread

Aniket Zodage^{1,2}, Rosalind J. Allen^{3,4}, Martin R. Evans^{4,5}, Satya N. Majumdar^{5}

1 Department of Physics, Indian Institute of Science Education and Research, Dr. Homi Bhabha Road, Pune 411008, India
2 Department of Physics, UC San Diego, 9500 Gilman Dr., La Jolla, California 92093, USA
3 Theoretical Microbial Ecology, Institute of Microbiology, Faculty of Biological Sciences, Friedrich Schiller University Jena, Buchaer Strasse 6, 07745 Jena, Germany
4 SUPA, School of Physics and Astronomy, University of Edinburgh, Peter Guthrie Tait Road, Edinburgh EH9 3FD, United Kingdom
5 LPTMS, CNRS, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay, France

(Dated: January 31, 2023)

We study a one-dimensional sluggish random walk with space-dependent transition probabilities. Motivated by trap models of slow dynamics, we consider a model in which the trap depth increases logarithmically with distance from the origin. This leads to a random walk with symmetric transition probabilities that decrease with distance |k| from the origin as 1/|k| for large |k|. We show that the typical position after time t scales as t^{1/3}, with a nontrivial scaling function for the position distribution which has a trough (a cusp singularity) at the origin. Biased random motion therefore emerges even though the transition probabilities are symmetric. We also compute the survival probability of the walker in the presence of a sink at the origin and show that it decays as t^{-1/3} at late times. Furthermore, we compute the distribution of the maximum position, M(t), to the right of the origin up to time t, and show that it has a nontrivial scaling function. Finally, we provide a generalisation of this model in which the transition probabilities decay as 1/|k|^α with α > 0.

arXiv:2301.13077v1 [cond-mat.stat-mech] 30 Jan 2023
I. INTRODUCTION

Slow dynamics is a common feature of many physical systems, including glasses, granular media and colloids [1, 2]. Slow dynamics commonly arises when the system becomes trapped for increasing periods of time in deeper and deeper local free energy minima in configuration space. This phenomenon has inspired the study of simplified toy models known as trap models [3-7]. In these models, the many minima of the complex disordered landscape are represented by traps whose depths are taken to be random variables. For a single particle hopping between nearby traps, the mean squared displacement typically grows more slowly than linearly in time; thus the particle's motion is subdiffusive [8-10].

Similar slow dynamics can also arise in an inhomogeneous landscape where the trap depth is position-dependent (as opposed to being random). Here, the hopping dynamics between the traps is an example of a Markov chain with space-dependent transition probabilities [11, 12], or in other words, an inhomogeneous random walk. For such systems, explicit solutions for observables beyond the simple position distribution (such as first-passage probabilities [13-15] or extreme value statistics [16, 17]) are generally hard to obtain.

A classic example of an inhomogeneous random walk is the centrally-biased Gillis model [18-22]. In this model, a single particle hops on a one-dimensional lattice where the hopping probability is asymmetric in a position-dependent manner. Specifically, for k ≠ 0, the hopping probabilities from site k to k ± 1 are (1/2)(1 ∓ ϵ/k), while for k = 0 the hopping probabilities to sites ±1 are both 1/2. Because the hopping probabilities (for k ≠ 0) are asymmetric, the particle undergoes a biased random walk in which the parameter ϵ ∈ [−1, 1] controls the strength of the bias. For ϵ > 0 there is a drift towards the origin, while for ϵ < 0 there is a drift away from the origin. Far from the origin, where |k| ≫ 1, the bias is small and the dynamics tends towards a symmetric random walk which, in the continuum limit, reduces to a particle moving in a logarithmic potential U(k) → 2ϵ ln |k| [19].

The Gillis model [18-22], and its continuous limit of particle motion in a logarithmic potential [19, 23-27], have aroused much interest because of their relevance to vortex dynamics, interactions between tracer particles in a driven fluid, cold atoms trapped in optical lattices, and the nonequilibrium behaviour of systems with long-range interactions [28-32]. These models have the appealing feature of allowing the exact calculation of various observables beyond the position distribution.

Motivated by these works, here we consider the counterpart problem of an inhomogeneous trap model in which the trap depth increases logarithmically with distance from the origin. The dynamics of a particle in this model corresponds to a symmetric random walk with space-dependent hopping probabilities that decrease inversely with distance from the origin. This random walk has the interesting property of being 'sluggish', since the particle's motion slows down as it moves further from the origin. We show that the physics of this model is quite different from the previously studied case of a particle in a logarithmic potential. Instead, in the continuum limit and for |k| ≫ 1, our model corresponds to a particle moving in a potential U(k) ∼ 1/|k| with, additionally, a space-dependent diffusion constant that also decays as 1/|k|. The interplay of these two features leads to an emergent bias in the dynamics away from the origin, even though the hopping probabilities are symmetric. The position distribution has a non-trivial, non-Gaussian form in which distance scales with time as t^{1/3} at late times. Moreover, we show that other observables, such as the survival probability in the presence of an absorbing site and the distribution of the maximum displacement to one side of the origin, can be computed explicitly and exhibit non-trivial scaling behaviour. Finally, we discuss how the model can easily be generalised to higher dimensions and to other space-dependent hopping probabilities, such as a 1/|k|^α decay with exponent α > 0, without losing its solvability.
II. MODEL DEFINITION AND HYDRODYNAMIC LIMIT

As discussed, we consider an ordered array of traps arranged on a one-dimensional lattice, such that the depth of the trap at site k is a ln(|k| + 2) (as illustrated in Fig. 1). The corresponding Arrhenius escape rate from the trap at site k is A(|k| + 2)^{−α} with α = βa, where A is an overall constant and β is the inverse temperature. Without loss of generality we set A = 1. We will mostly focus on the case α = 1, although we briefly discuss α ≠ 1 in Section IX.

We consider the discrete-time dynamics of a particle moving at random on this infinite one-dimensional lattice. The key feature of the dynamics is that, as time progresses, the particle explores sites further and further from the origin, where it gets trapped to a greater and greater extent because of the increasing trap depths. Hence the particle is subject to a diminishing transition probability for exiting the traps.

At each integer time step t the particle's position evolves according to the following rules (illustrated in Fig. 1). From a site k at time t, the particle hops to site k + 1 with probability 1/(|k| + 2), it hops to site k − 1 with equal probability 1/(|k| + 2), or it stays at site k with the complementary probability |k|/(|k| + 2). The time is then updated to t + 1. We note that (in contrast to the Gillis model [18-22]) the hopping probabilities are symmetric for all k; however, they differ from those of a simple random walk everywhere except at the origin, k = 0, where the hopping probability is 1/2 to either of k = ±1.

FIG. 1. Schematic illustration of our model, showing the ordered arrangement of traps and the hopping probabilities 1/(|k| + 2) for a particle to move to a nearest neighbour of site k on the lattice, as well as the probability |k|/(|k| + 2) of remaining at site k.
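As an illustration (our own sketch, not part of the original paper; the function names `step` and `trajectory` are ours), the update rule above can be simulated in a few lines of Python:

```python
import random

def step(k, rng=random):
    """One discrete-time update of the sluggish walk at site k:
    hop to k+1 or k-1, each with probability 1/(|k|+2),
    otherwise stay put with probability |k|/(|k|+2)."""
    p = 1.0 / (abs(k) + 2)
    u = rng.random()
    if u < p:
        return k + 1
    elif u < 2 * p:
        return k - 1
    return k

def trajectory(t_max, k0=0, seed=None):
    """Final position after t_max steps, starting from k0."""
    rng = random.Random(seed)
    k = k0
    for _ in range(t_max):
        k = step(k, rng)
    return k
```

Note that at k = 0 the staying probability vanishes (p = 1/2), so the walker always moves, consistent with the rule stated above.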
Let P(k, t) denote the position distribution at time t for a particle that starts from k0 = 0 at t = 0. The distribution evolves via the forward master equation:

P(k, t+1) = P(k+1, t)/(|k+1| + 2) + P(k−1, t)/(|k−1| + 2) + |k| P(k, t)/(|k| + 2) .   (1)

The initial condition is P(k, 0) = δ_{k,0} and the boundary condition is P(k, t) → 0 as |k| → ∞. The solution P(k, t) is symmetric around k = 0, hence we can focus on k ≥ 0. We first write (1) in the more suggestive form

P(k, t+1) − P(k, t) = P(k+1, t)/(|k+1| + 2) + P(k−1, t)/(|k−1| + 2) − 2 P(k, t)/(|k| + 2) .   (2)

For large k > 0 and large t, we can expand the right-hand side (rhs) of equation (2) as a Taylor series in k and replace the left-hand side (lhs) by a time derivative. Keeping all terms of the same order, this gives the hydrodynamic equation that captures the behaviour of the system at long distances and late times:

∂_t P(k, t) ≈ (1/k) [ ∂²P/∂k² − (2/k) ∂P/∂k + (2/k²) P ] .   (3)

This hydrodynamic equation can be written as a continuity equation,

∂_t P(k, t) = − ∂j(k, t)/∂k ,   (4)

where the space-time dependent current density j(k, t) reads

j(k, t) = − (1/k) ∂P/∂k + (1/k²) P .   (5)

We identify the first term on the rhs of (5) as a diffusive probability current and the second term as a drift away from the origin. The original equation (3) can also be expressed in the standard Fokker-Planck form

∂_t P(k, t) = ∂/∂k [ D(k) ∂P/∂k + (∂U(k)/∂k) P ] ,   (6)

where D(k) = 1/k and U(k) = 1/k. Equation (6) clearly shows the two features of the dynamics that lead to non-trivial behaviour. Firstly, the effective diffusion constant D(k) = 1/k is space-dependent, slowing down the dynamics as the particle moves away from the origin. Additionally, an effective external potential U(k) = 1/k emerges, which is repulsive, pushing the particle away from the origin. This repulsion arises from the microscopic dynamics: the hopping probability from site k to k + 1 is 1/(k + 2), while the reverse event (from k + 1 to k) has probability 1/(k + 3) < 1/(k + 2) for any k > 0. Thus, even though the hopping probabilities out of site k (i.e. to k + 1 or k − 1) are symmetric, the space-dependence of the hopping probability produces an outward bias away from the origin (symmetrically for k < 0), which leads to the drift term in the current in equation (5) of the hydrodynamic description.

For large time and space we expect to see a scaling regime in which the probability distribution becomes a function of the combination k/t^ν (with ν > 0). The following argument suggests that ν takes the value 1/3. Let us assume that after time t, the typical value of the position k is k_typ. The number of steps N taken by the particle will scale as N ∼ t/k_typ, since the typical escape time from a trap is of order k_typ (the inverse of the hopping probability 1/k_typ). Since all steps are of equal length and the hopping probability is symmetric, the position scales with the number of steps as for a simple random walk, k_typ ∼ N^{1/2}. Putting these arguments together we obtain k_typ ∼ N^{1/2} ∼ (t/k_typ)^{1/2}, which implies that the position of the particle scales with time as k_typ ∼ t^{1/3}; hence ν = 1/3.

This scaling can be confirmed by assuming the following scaling form for the probability distribution P(k, t) in the limit where both k and t are large, keeping the ratio k/t^ν (with ν > 0) fixed:

P(k, t) → (1/(b t^ν)) G(k/(b t^ν)) ,   (7)

where G(z) is the scaling function. We have also incorporated an adjustable constant b which can be chosen appropriately. Substituting the scaling form (7) into equation (1), one readily finds that for the leading-order terms to be of the same order we must have ν = 1/3. For convenience we will also choose b = 3^{1/3}. We discuss the precise form of the scaling function G(z) in Section IV.

The scaling k ∼ t^{1/3} also manifests itself in the other observables that we study in this paper, such as the survival probability and the distribution of the maximum position of the random walk. In the next section, for clarity, we summarize our main results. Then, in the following sections, we discuss each result in detail.
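The scaling argument k_typ ∼ t^{1/3} can be probed numerically. The sketch below (our own illustration; the function name is ours) measures the root-mean-square position at two times whose ratio is 8, so that the rms ratio should approach 8^{1/3} = 2:

```python
import random

def rms_position(t, n_walkers, seed=0):
    """Root-mean-square position of n_walkers independent sluggish walks
    after t steps, each started from the origin."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        k = 0
        for _ in range(t):
            p = 1.0 / (abs(k) + 2)   # hop probability to each neighbour
            u = rng.random()
            if u < p:
                k += 1
            elif u < 2 * p:
                k -= 1
        total += k * k
    return (total / n_walkers) ** 0.5

# rms_position(8 * t, ...) / rms_position(t, ...) should tend to 2 for large t,
# whereas a simple random walk would give 8**0.5 ~ 2.83.
```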
III. SUMMARY OF KEY RESULTS

In this paper we derive exact results in the scaling limit for three observables: the position distribution, the survival probability and the distribution of the maximum of the random walk. For clarity, we state these results here; their derivations are presented in the following sections.

Position distribution: in the large-t and large-k limit, with z = k/(3t)^{1/3} fixed, the position distribution P(k, t) of the walker is given by

P(k, t) → (3t)^{−1/3} G(k/(3t)^{1/3}) ,   (8)

where the scaling function G(z) is given by

G(z) = 3^{1/3}/(2 Γ(2/3)) |z| e^{−|z|³/3} .   (9)

This function has a trough at z = 0 and is bimodal, with peaks at z = ±1 (see Fig. 2).

Survival probability: for a walker that starts from k0 > 0, the probability that the trap at k = 0 has not been visited by time t is equivalent to the survival probability Q(k0, t) in the presence of an absorbing site at the origin k = 0. In the scaling limit this is given by

Q(k0, t) ≈ f(k0/(3t)^{1/3}) ,  where  f(z) = 1 − Γ(1/3, z³/3)/Γ(1/3) ,   (10)

(see Fig. 3). This implies that in the long-time limit the survival probability decays as t^{−1/3} (see equation (26)). We also compute the joint distribution of position and survival (equations (31) and (34)).

Distribution of the maximum: the distribution of M(t), the furthest site to the right visited by the walker up to time t, or equivalently the deepest trap visited to the right of the origin up to time t, is given in the scaling limit by

P(M = L, t) → (3t)^{−1/3} g(L/(3t)^{1/3}) ,   (11)

where y = L/(3t)^{1/3} is now the scaling variable. The scaling function g(y) (see Fig. 5) is described in equations (56), (58) and (59).
+ (56),(58) and (59).
238
+ IV.
239
+ SCALING FORM OF THE POSITION DISTRIBUTION
240
+ −3
241
+ −2
242
+ −1
243
+ 0
244
+ 1
245
+ 2
246
+ 3
247
+ k/(3t)
248
+ 1
249
+ 3
250
+ 0.00
251
+ 0.05
252
+ 0.10
253
+ 0.15
254
+ 0.20
255
+ 0.25
256
+ 0.30
257
+ 0.35
258
+ 0.40
259
+ (3t)
260
+ 1
261
+ 3P(k)
262
+ Position Distribution
263
+ Scaling Function G(z)
264
+ FIG. 2. The scaling function G(z) plotted as a function of z. Symbols are obtained from Monte Carlo simulation data for the
265
+ random walk. Starting from k0 = 0 at t = 0, the random walk is numerically evolved up to t = 20000. The symbols show the
266
+ scaled histogram of final positions obtained from n = 150000 runs of the random walk simulation.
267
+ We now derive equations (8) and (9) for the position distribution P(k, t) of the random walk. As discussed earlier,
268
+ the form of equation (3) implies that the correct scaling variable involving k and t is z = k/(bt1/3), and we choose the
269
+ arbitrary constant as b = 31/3 for later convenience. Therefore, to solve the hydrodynamic equation (3), we assume a
270
+ scaling solution at late times and large k of the form
271
+ P(k, t) =
272
+ 1
273
+ (3t)1/3 G
274
+
275
+ k
276
+ (3t)1/3
277
+
278
+ ,
279
+ (12)
280
+ where G(z) is symmetric around z = 0, and is normalized to 1, i.e.,
281
+ � ∞
282
+ −∞ G(z) dz = 1, or equivalently
283
+ � ∞
284
+ 0
285
+ G(z) dz = 1
286
+ 2 .
287
+ (13)
288
+
289
+ 6
290
+ Substituting the scaling ansatz (12) in equation (1) and taking the scaling limit k → ∞, t → ∞ keeping z = k/(3t)1/3
291
+ fixed, we find that G(z), for z > 0, satisfies a second order ordinary differential equation
292
+ G′′(z) +
293
+
294
+ z2 − 2
295
+ z
296
+
297
+ G′(z) +
298
+
299
+ z + 2
300
+ z2
301
+
302
+ G(z) = 0 .
303
+ (14)
304
+ Remarkably, the general solution of this differential equation can be expressed in a simple closed form
305
+ G(z) = c1 z e−z3/3 + c2 z e−z3/3
306
+ � z
307
+ 0
308
+ eu3/3 du ,
309
+ (15)
310
+ where c1 and c2 are arbitrary. However, the second solution (the second term in (15)) behaves, for large z, as 1/z,
311
+ and hence is not normalisable, implying that we must have c2 = 0. The constant c1 can be fixed via the normalization
312
+ constant
313
+ � ∞
314
+ 0
315
+ G(z)dz = 1/2. Using the symmetry around z = 0, the full solution for the scaling distribution (12) is
316
+ then given by
317
+ G(z) =
318
+ 31/3
319
+ 2 Γ(2/3) |z| e−|z|3/3 .
320
+ (16)
321
+ This function is plotted in Fig.
322
+ (2) where we also plot results of Monte Carlo simulations that approach the
323
+ scaling curve. Strikingly, in contrast to a simple random walk (where the scaling variable is z = k/(2t)1/2 and the
324
+ corresponding scaling function is Gaussian with a peak at z = 0), G(z) has a trough at z = 0 where the solution has a
325
+ cusp singularity. The origin of this trough can be traced back to the drift term (away from the origin) in the current
326
+ in equation (5), that leads to a depletion of probability density near the origin at long times. Thus by creating an
327
+ emergent current away from the origin, the sluggish dynamics that is manifested in our model keeps the particle away
328
+ from the origin and produces two peaks (i.e. bimodality) in the probability distribution; these peaks are located at
329
+ z = ±1 or equivalently |k| = (3t)1/3. The distribution of the depth of the trap occupied at time t also follows from
330
+ equation (8) since the trap depth is a ln(|k| + 2).
331
+ V.
332
+ SURVIVAL PROBABILITY
333
+ We now introduce a sink at the origin, such that if the random walker arrives at k = 0, it dies. We consider the
334
+ survival probability of the walker in the presence of this absorbing site at k = 0. The ‘survival probability’ Q(k0, t)
335
+ denotes the probability that the walker is still alive after t steps, given that it starts at site k0 at time zero. Clearly,
336
+ Q(k0, t) is symmetric in k0, so we will consider only k0 ≥ 0, implying the walk that is defined on the positive integers.
337
+ It is convenient to use the backward master equation for the survival probability:
338
+ Q(k0, t + 1) =
339
+ 1
340
+ k0 + 2 Q(k0 + 1, t) +
341
+ 1
342
+ k0 + 2 Q(k0 − 1, t) +
343
+
344
+ 1 −
345
+ 2
346
+ k0 + 2
347
+
348
+ Q(k0, t) ,
349
+ (17)
350
+ for k0 ≥ 1. This equation has a simple interpretation, corresponding to the events that may occur in the first step
351
+ of the walk. In the first step, the walker either hops from site k0 (rightwards to k0 + 1, or leftwards to k0 − 1), or it
352
+ stays at k0. Then, starting from its position at time step 1, it has to survive a further t steps. Summing these three
353
+ possibilities for the first step leads to equation (17), which needs to be solved for k0 ≥ 1 with the boundary conditions
354
+ Q(k0 = 0, t) = 0
355
+ (18)
356
+ Q(k0 → ∞, t) = 1 .
357
+ (19)
358
+ The first boundary condition corresponds to the fact that if the walker starts at the absorbing site k0 = 0 it dies
359
+ immediately. The second condition follows from the fact that if the walker starts far away from the origin, it survives
360
+ with probability 1 as long as t is finite. In the limit of continuous time t and space k the backward equation becomes
361
+
362
+ ∂tQ(k0, t) = 1
363
+ k0
364
+ ∂2
365
+ ∂k2
366
+ 0
367
+ Q(k0, t) .
368
+ (20)
369
+ It is convenient to use a scaling approach to quickly derive the large t asymptotic behaviour of the survival prob-
370
+ ability. We aim to solve equation (20) in the scaling limit introduced in section IV when both k0 and t are large.
371
+ Following the discussion in section IV we expect that Q(k0, t) will satisfy a scaling form
372
+ Q(k0, t) → f
373
+
374
+ k0
375
+ (3t)1/3
376
+
377
+ ,
378
+ (21)
379
+
380
+ 7
381
+ where f(z) is the scaling function. We now substitute the scaling form (21) in equation (20) and expand to leading
382
+ order to obtain the following second order ordinary differential equation in z ≥ 0 for the scaling function
383
+ f ′′(z) = −z2 f ′(z) ,
384
+ (22)
385
+ subject to the two boundary conditions
386
+ f(z = 0) = 0
387
+ and
388
+ f(z → ∞) = 1 ,
389
+ (23)
390
+ which follow from Eqs. (18) and (19) respectively.
391
+ 0
392
+ 2
393
+ 4
394
+ 6
395
+ 8
396
+ 10
397
+ 12
398
+ 14
399
+ k0/(3t)
400
+ 1
401
+ 3
402
+ 0.0
403
+ 0.2
404
+ 0.4
405
+ 0.6
406
+ 0.8
407
+ 1.0
408
+ Q(k0, t)
409
+ k0 = 25, t ∈ [1, 30000]
410
+ k0 = 50, t ∈ [1, 30000]
411
+ k0 = 100, t ∈ [1, 30000]
412
+ k0 = 400, t ∈ [1, 30000]
413
+ Scaling Function f(z)
414
+ FIG. 3. Full curve: the scaling function f(z) plotted as a function of scaling variable z. Symbols are obtained from Monte
415
+ Carlo simulation data for different values of k0. For a given k0, the random walk is numerically evolved over the time window
416
+ shown in the legend. The symbols show histograms obtained from n = 10000 runs of the random walk simulation. These
417
+ histograms are plotted against the scaling variable z = k0/(3t)1/3 . The different intervals of z for different values of k0 are
418
+ chosen for purposes of clarity.
419
+ The solution of equation (22) can be found trivially. Integrating (22) once gives f ′(z) = C e−z3/3. Integrating once
420
+ more, using the boundary conditions (23), leads to the exact solution for the scaling function:
421
+ f(z) =
422
+ � z
423
+ 0 e−x3/3 dx
424
+ � ∞
425
+ 0
426
+ e−x3/3 dx = 1 −
427
+ 1
428
+ Γ(1/3) Γ(1/3, z3/3) ,
429
+ (24)
430
+ where Γ(s, x) =
431
+ � ∞
432
+ x e−t ts−1 dt is the incomplete Gamma function. The scaling function f(z) is plotted in Fig. (3).
433
+ It is linear for small z (t1/3 ≫ k0) and saturates at f = 1 for large z (t1/3 ≪ k0). More precisely, the scaling function
434
+ has the asymptotic behaviours
435
+ f(z) ≈
436
+
437
+
438
+
439
+
440
+
441
+ 32/3
442
+ Γ(1/3) z + O(z4)
443
+ as z → 0
444
+ 1 −
445
+ 32/3
446
+ Γ(1/3) z2 e−z3/3
447
+ as z → ∞ .
448
+ (25)
449
+
450
+ 8
451
+ In particular, for z → 0
452
+ Q(k0, t) ≃
453
+ 31/3
454
+ Γ(1/3)
455
+ k0
456
+ t1/3 .
457
+ (26)
458
+ Equation (26) implies that in the limit t → ∞ the asymptotic behaviour of the survival probability is Q ∼ t−1/3. The
459
+ exponent 1/3 is smaller than the value 1/2 that is obtained for a simple diffusive process, implying that the decay
460
+ is slower than for simple diffusion. Again, the sluggish dynamics results in a significant difference in the dynamical
461
+ properties, compared to those of a simple random walk.
462
+ VI.
463
+ JOINT SURVIVAL AND POSITION DISTRIBUTION
464
+ Next we consider the probability Ps(k, t|k0) that a walker, starting at k0 > 0 at time t = 0, arrives at k at time t,
465
+ having in the meantime avoided the sink at k = 0. This is the joint distribution of survival and position, with the
466
+ subscript s in Ps(k, t|k0) denoting survival.
467
+ For this calculation we use the forward master equation, for k > 0:
468
+ Ps(k, t + 1|k0) =
469
+ 1
470
+ k + 3 Ps(k + 1, t|k0) +
471
+ 1
472
+ k + 1 Ps(k − 1, t|k0) +
473
+ k
474
+ k + 2 Ps(k, t|k0) ,
475
+ (27)
476
+ with the boundary condition Ps(0, t|k0) = 0 and the initial condition, Ps(k, t = 0|k0) = δk,k0. When summed over
477
+ k = 1, 2, · · · , one should recover the survival probability of section V, namely
478
+
479
+
480
+ k=1
481
+ Ps(k, t|k0) = Q(k0, t) .
482
+ (28)
483
+ -4
484
+ -2
485
+ 2
486
+ 4 z
487
+ 0.1
488
+ 0.2
489
+ 0.3
490
+ 0.4
491
+ 0.5
492
+ 0.6
493
+ H(z)
494
+ FIG. 4. The scaling function H(z), given by equation (34), plotted as a function of z.
495
+ For simplicity, we will again work in the scaling limit where t → ∞, k → ∞ and k0 → ∞, keeping z = k/(3t)1/3
496
+ and y = k0/(3t)1/3 fixed. We expect a scaling form
497
+ Ps(k, t|k0) ≈
498
+ 1
499
+ (3t)1/3 W
500
+
501
+ k
502
+ (3t)1/3 ,
503
+ k0
504
+ (3t)1/3
505
+
506
+ ,
507
+ (29)
508
+ such that when integrated over k, we recover the scaling of the survival probability survival probability Q(k0, t) in
509
+ equation (24) with
510
+ � ∞
511
+ 0
512
+ W(z, y) dz = f(y) ,
513
+ (30)
514
+
515
+ 9
516
+ where f(y) is given in equation (24). Here we assume k0 ∼ O(1), so that the second argument of the scaling function
517
+ W in equation (29) approaches zero. From the small argument behaviour of the survival probability in (26), we expect
518
+ that W(z, y → 0) → y H(z). This leads us to the scaling ansatz, valid for any k0 ∼ O(1):
519
+ Ps(k, t|k0) ≈
520
+ k0
521
+ (3t)2/3 H
522
+
523
+ k
524
+ (3t)1/3
525
+
526
+ .
527
+ (31)
528
+ Substituting this scaling ansatz in equation (27), we get, to leading order in 1/t, the following ordinary differential
529
+ equation for H(z), for any z ≥ 0 (for z < 0, this function is symmetric, hence we consider only z ≥ 0):
530
+ H′′(z) +
531
+
532
+ z2 − 2
533
+ z
534
+
535
+ H′(z) +
536
+
537
+ 2z + 2
538
+ z2
539
+
540
+ H(z) = 0 .
541
+ (32)
542
+ The scaling function H(z) should satisfy the absorbing boundary condition H(0) = 0. One more condition can be
543
+ derived by substituting the scaling ansatz (31) in equation (28), and taking the limit y = k0/(3t)1/3 → 0. Using the
544
+ small y behaviour of f(y) in equation (25), we obtain the following condition:
545
+ � ∞
546
+ 0
547
+ H(z) dz =
548
+ 32/3
549
+ Γ(1/3) .
550
+ (33)
551
+ One can easily check that the normalised solution of (32) is simply
552
+ H(z) =
553
+ 32/3
554
+ Γ(1/3) z2e−z3/3 .
555
+ (34)
556
+ H(z) is plotted in Fig. (4). We note that the trough around z = 0 is quadratic in z for this calculation in the presence
557
+ of a sink, in contrast to the linear |z| dependence for the trough in the position distribution for the calculation without
558
+ a sink (equation (16)). The quadratic behaviour of H(z) near the origin also contrasts with the analogous result for
559
+ the simple random walk case where linear behaviour is obtained as z → 0. This limit z → 0 gives information on the
560
+ long time behaviour; from (31),(34) we obtain
561
+ Ps(k, t|k0) ≃
562
+ k0k2
563
+ 32/3Γ(1/3)
564
+ 1
565
+ t4/3 .
566
+ (35)
567
+ The t−4/3 long-time behaviour of the survival probability in equation (35) contrasts with the corresponding t−3/2
568
+ behaviour for a simple random walk.
569
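As a quick numerical sanity check (an illustrative sketch, not part of the original paper), one can verify with standard-library Python that the scaling function (34) satisfies both the ODE (32) and the normalisation (33); the step sizes and tolerances below are arbitrary choices:

```python
import math

# Verify that H(z) = 3^(2/3)/Gamma(1/3) * z^2 * exp(-z^3/3) (equation (34))
# satisfies the normalisation (33) and the ODE (32).
A = 3 ** (2 / 3) / math.gamma(1 / 3)

def H(z):
    return A * z ** 2 * math.exp(-z ** 3 / 3)

# Normalisation (33): the integral of z^2 exp(-z^3/3) over (0, inf) is
# exactly 1, so the integral of H should equal A = 3^(2/3)/Gamma(1/3).
dz = 1e-4
integral = sum(H(i * dz) for i in range(1, 100_000)) * dz  # up to z = 10

# ODE (32) residual at generic interior points, via central differences.
def residual(z, h=1e-4):
    d1 = (H(z + h) - H(z - h)) / (2 * h)
    d2 = (H(z + h) - 2 * H(z) + H(z - h)) / h ** 2
    return d2 + (z ** 2 - 2 / z) * d1 + (2 * z + 2 / z ** 2) * H(z)
```

Both the normalisation and the ODE residuals come out consistent to the accuracy of the finite-difference scheme.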
VII. DISTRIBUTION OF THE MAXIMUM OF THE RANDOM WALK
We now remove the sink at the origin and instead consider a walker that starts at the origin (k0 = 0) and moves freely. We study the statistics of its maximum displacement M(t) on the positive side up to time t. This corresponds to the deepest trap visited to the right of the origin up to time t. Then the cumulative distribution Prob.[M(t) ≤ L] is just the probability that the walker, starting at the origin, does not visit the site L up to time t. Let S(k0, t) denote the probability that, starting from k0 at t = 0, the walker does not visit L up to t. We then have

\mathrm{Prob.}\,[M(t) \le L] = S(0, t) \, .    (36)

To compute S(0, t), we will first solve S(k0, t) for a general starting point k0 and then set k0 = 0. The survival probability S(k0, t) again evolves according to the backward master equation

S(k_0, t+1) = \frac{1}{|k_0|+2}\, S(k_0+1, t) + \frac{1}{|k_0|+2}\, S(k_0-1, t) + \left( 1 - \frac{2}{|k_0|+2} \right) S(k_0, t) \, ,    (37)

with boundary condition

S(L, t) = 0 \, ,    (38)

i.e. we impose a sink at site k = L. The initial condition (starting from k0 < L) is

S(k_0, 0) = 1 \, .    (39)
Following the approach of section V, we expand in k0 to obtain the backward Fokker-Planck equation:

\partial_t S(k_0, t) = \frac{1}{|k_0|}\, \frac{\partial^2}{\partial k_0^2}\, S(k_0, t) \, ,    (40)

which is valid for k0 ≤ L, with an absorbing boundary condition S(k0 = L, t) = 0 at the sink k = L and the initial condition S(k0, 0) = 1 for all k0 < L.
To solve equation (40), it is convenient to consider the Laplace transform

\tilde{S}(k_0, s) = \int_0^{\infty} S(k_0, t)\, e^{-s t}\, dt \, .    (41)

This satisfies

\frac{\partial^2}{\partial k_0^2}\, \tilde{S}(k_0, s) = s\, |k_0|\, \tilde{S}(k_0, s) - |k_0| \, ,    (42)

where we used the initial condition S(k0, 0) = 1. Due to the presence of the absolute value |k0| in the differential equation (42), we need to solve for 0 ≤ k0 ≤ L and k0 ≤ 0 separately, and then match the solution and its first derivative at k0 = 0.
The general solution of (42) for 0 ≤ k0 ≤ L and k0 ≤ 0 reads

\tilde{S}(k_0, s) = \frac{1}{s} + a_1\, \mathrm{Ai}(s^{1/3} k_0) + b_1\, \mathrm{Bi}(s^{1/3} k_0) \quad \text{for} \quad 0 \le k_0 \le L \, ,    (43)
\tilde{S}(k_0, s) = \frac{1}{s} + a_2\, \mathrm{Ai}(-s^{1/3} k_0) \quad \text{for} \quad k_0 \le 0 \, ,    (44)

where Ai(x) and Bi(x) are the two linearly independent solutions of the Airy differential equation U''(x) − x U(x) = 0. Since Bi(−x) diverges as x → −∞, we discarded this in the solution for k0 ≤ 0 in equation (44). The three constants (independent of k0) a1, a2, b1 are fixed by the continuity of \tilde{S}(k_0, s), the continuity of \partial_{k_0}\tilde{S}(k_0, s) at k0 = 0 and the absorbing boundary condition \tilde{S}(k_0 = L, s) = 0, which yield three linear equations. These three constants can then be straightforwardly determined explicitly (we do not give the details here). If the walker starts at k0 = 0 (for simplicity), from equation (44), we just need the constant a2(s) since

\tilde{S}(0, s) = \frac{1}{s} + a_2(s)\, \mathrm{Ai}(0) \, .    (45)

It turns out that the expression of a2(s) is rather simple:

a_2(s) = \frac{1}{2\pi\, \mathrm{Ai}(0)\, \mathrm{Ai}'(0)\, s\, \mathrm{Bi}(s^{1/3} L)} = -\frac{\sqrt{3}}{s\, \mathrm{Bi}(s^{1/3} L)} \, ,    (46)

where we used Ai(0) = 3^{−2/3}/Γ(2/3) and Ai'(0) = −3^{−1/3}/Γ(1/3). Plugging in equation (45) then gives the exact Laplace transform, valid for all s:

\tilde{S}(0, s) = \frac{1}{s}\left[ 1 - \frac{1}{3^{1/6}\,\Gamma(2/3)}\, \frac{1}{\mathrm{Bi}(s^{1/3} L)} \right] \, .    (47)
668
+ Taking the Laplace transform of equation (36), and plugging in the result (47), we obtain the exact Laplace
669
+ transform of the cumulative distribution of the maximum:
670
+ � ∞
671
+ 0
672
+ Prob. [M(t) ≤ L] e−s t dt = 1
673
+ s
674
+
675
+ 1 −
676
+ 1
677
+ 31/6 Γ(2/3)
678
+ 1
679
+ Bi(s1/3 L)
680
+
681
+ .
682
+ (48)
683
This result can be further simplified by noting that Prob.[M(t) ≥ L] = 1 − Prob.[M(t) ≤ L]. Consequently,

\int_0^{\infty} \mathrm{Prob.}\,[M(t) \ge L]\, e^{-s t}\, dt = \frac{1}{3^{1/6}\,\Gamma(2/3)}\, \frac{1}{s\, \mathrm{Bi}(s^{1/3} L)} \, .    (49)

Formally inverting this Laplace transform using the Bromwich contour and rescaling s L^3 = λ, one sees immediately that for all t and L, the cumulative distribution takes the scaling form

\mathrm{Prob.}\,[M(t) \ge L] = Y\!\left( \frac{t}{L^3} \right) \, ,    (50)
FIG. 5. Distribution of the maximum of the random walk. The two full curves denote the lower end tail of g(z), equation (59), and the higher end tail of g(z), equation (58). The symbols are obtained from Monte Carlo simulation data for the random walk. Starting from k0 = 0 at t = 0, the random walk was evolved up to t = 20000. The symbols show the scaled histogram obtained from n = 105000 runs of the random walk simulation.
where the scaling function Y(y) has the exact Laplace transform

\int_0^{\infty} e^{-\lambda y}\, Y(y)\, dy = \frac{3^{1/3}\,\Gamma(1/3)}{2\pi}\, \frac{1}{\lambda\, \mathrm{Bi}(\lambda^{1/3})} \, .    (51)

While it is difficult to invert the Laplace transform exactly, it is straightforward to extract its asymptotic behaviours, as shown below.
The large y behaviour of Y(y) is controlled by the small λ expansion of (51)

\int_0^{\infty} e^{-\lambda y}\, Y(y)\, dy \simeq \frac{1}{\lambda} - \frac{3^{1/3}\,\Gamma(2/3)}{\Gamma(1/3)}\, \frac{1}{\lambda^{2/3}} + \frac{3^{2/3}\,\Gamma^2(2/3)}{\Gamma^2(1/3)}\, \frac{1}{\lambda^{1/3}} + \ldots \, ,    (52)

which yields the large y asymptotic expansion

Y(y) \sim 1 - \frac{3^{1/3}}{\Gamma(1/3)}\, \frac{1}{y^{1/3}} + \frac{3^{2/3}\,\Gamma^2(2/3)}{\Gamma^3(1/3)}\, \frac{1}{y^{2/3}} + \ldots \, .    (53)

The small y behaviour of Y(y) can be obtained from the large λ asymptotic behaviour of (51)

\int_0^{\infty} e^{-\lambda y}\, Y(y)\, dy \sim \frac{\pi^{1/2}}{3^{1/6}\,\Gamma(2/3)\,\lambda^{11/12}}\, e^{-\frac{2}{3}\lambda^{1/2}} \, ,    (54)

which can be inverted to give the small y behaviour

Y(y) \simeq \frac{3^{2/3}}{\Gamma(2/3)}\, y^{1/3}\, e^{-1/(9y)} \, .    (55)
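The term-by-term passage from (52) to (53) rests on the elementary Laplace pair \int_0^\infty e^{-\lambda y} y^{-1/3}\, dy = \Gamma(2/3)\,\lambda^{-2/3}. A quick midpoint-rule check of this identity (the value of λ and the step size are illustrative choices, not from the paper):

```python
import math

# Check: int_0^inf exp(-lam*y) * y^(-1/3) dy = Gamma(2/3) * lam^(-2/3),
# the Laplace pair used to invert (52) term by term into (53).
lam = 2.0
dy = 2e-5
# Midpoint rule handles the integrable y^(-1/3) singularity at the origin.
num = sum(math.exp(-lam * (i + 0.5) * dy) * ((i + 0.5) * dy) ** (-1 / 3)
          for i in range(1_500_000)) * dy  # integrates up to y = 30
exact = math.gamma(2 / 3) * lam ** (-2 / 3)
```

The analogous pair with y^{−2/3} produces the Γ(1/3) λ^{−1/3} term in (52).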
Using equation (50), we can now express the probability density Prob.[M(t) = L] of the maximum of the random walk in a scaling form:

\mathrm{Prob.}\,[M(t) = L] = -\frac{d}{dL}\, \mathrm{Prob.}\,[M(t) \ge L] = \frac{1}{(3t)^{1/3}}\, g\!\left( \frac{L}{(3t)^{1/3}} \right) \, ,    (56)

where the scaling function g(z) is simply related to the scaling function Y(y) and we deduce that

g(z) = z^{-4}\, Y'(y)\big|_{y = 1/(3z^3)} \, .    (57)

Using the asymptotic behaviour of Y(y), we can then obtain the asymptotic tails of g(z) as

g(z) \sim \frac{3^{1/3}}{\Gamma(2/3)}\, z\, e^{-z^3/3} \quad \text{for} \quad z \to \infty \, ,    (58)
g(z) \sim \frac{3^{2/3}}{\Gamma(1/3)} \left[ 1 - \frac{2 \cdot 3^{2/3}\,\Gamma^2(2/3)}{\Gamma^2(1/3)}\, z + \ldots \right] \quad \text{for} \quad z \ll 1 \, .    (59)
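A small Monte Carlo sketch of the walk confirms the (3t)^{1/3} scale of the maximum; the parameters below are illustrative and deliberately smaller than the t = 20000, 105000-run simulation behind Fig. 5. From site k the walker hops right or left with probability 1/(|k|+2) each and otherwise stays put, and we record the maximum site visited to the right:

```python
import random

# Monte Carlo sketch of the sluggish walk and its running maximum M(t).
random.seed(0)
t_max, runs = 2000, 400
scaled = []
for _ in range(runs):
    k, m = 0, 0
    for _ in range(t_max):
        p = 1.0 / (abs(k) + 2)   # hopping probability to each neighbour
        u = random.random()
        if u < p:
            k += 1
        elif u < 2 * p:
            k -= 1
        if k > m:
            m = k
    scaled.append(m / (3 * t_max) ** (1 / 3))

# If M(t) ~ (3t)^(1/3), the scaled maximum should be O(1).
mean_scaled = sum(scaled) / runs
```

Histogramming `scaled` reproduces the shape bracketed by the two tails (58) and (59).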
In Fig. (5) the tails of g(z), equations (58) and (59), are compared with the numerical simulations. It is useful to compare equation (58) with equation (16). We see that for large z, the scaling function of the position distribution (16) and that of the maximum (58) have the same asymptotic tails up to an overall factor 1/2. This is similar to what occurs for a simple random walk, although in that case the tails are Gaussian. The small z behaviour (59) for the scaling function is a constant with a linear correction. The constant is consistent with the large time limit of the survival probability (26). The linear correction contrasts with the case of a simple random walk where the correction to the constant term is quadratic in the scaling variable.
VIII. GENERATING FUNCTION APPROACH
In sections IV - VII, we adopted a scaling approach to obtain long-time asymptotic results for the sluggish random walk problem. We now illustrate how a generating function approach may be employed to find the exact solution for all times. We will see that the long time limit of the solution obtained using the generating function approach recovers the results of the scaling approach. For the sake of brevity, we restrict ourselves to the computation of the survival probability.
Consider again Q(k0, t), the survival probability for a walker starting at k0 in the presence of a sink at the origin k = 0. Q(k0, t) satisfies the backward master equation (17). We define a generating function with parameter λ:

G(k_0) = \sum_{t=0}^{\infty} \lambda^t\, Q(k_0, t) \, .    (60)
Substituting (60) into (17) and imposing the initial condition Q(k0, 0) = 1, we obtain

\left( k_0\, \frac{1-\lambda}{\lambda} + \frac{2}{\lambda} \right) G(k_0) - G(k_0+1) - G(k_0-1) = \frac{k_0 + 2}{\lambda} \, .    (61)

Equation (61), in which k0 takes integer values, can be solved using Bessel functions. However it is easier to take a continuum limit and expand to second order in k0, to obtain

\frac{\partial^2 G(k_0)}{\partial k_0^2} - \left[ \frac{(k_0+2)(1-\lambda)}{\lambda} \right] G(k_0) = -\frac{k_0+2}{\lambda} \, .    (62)
The homogeneous version of (62) (i.e. equating the lhs to zero) has Airy functions as solutions:

G_{\mathrm{hom}}(k_0) = B_0\, \mathrm{Ai}(C(k_0+2)) + B_1\, \mathrm{Bi}(C(k_0+2)) \, ,    (63)

where

C = \left( \frac{1-\lambda}{\lambda} \right)^{1/3} \, ,    (64)
and the constants B0 and B1 are to be fixed by the boundary conditions. We can set B1 = 0 and discard the Bi solution, as it diverges as k0 → ∞.
Then for G(k0) a particular solution to (62) is 1/(1 − λ) and the general solution to (62) is

G(k_0) = B_0\, \mathrm{Ai}((k_0+2)C) + \frac{1}{1-\lambda} \, .    (65)

The boundary condition is G(0) = 0, which fixes the constant B0, and we obtain the solution to (62) as

G(k_0) = \frac{1}{1-\lambda} \left[ 1 - \frac{\mathrm{Ai}((k_0+2)C)}{\mathrm{Ai}(2C)} \right] \, .    (66)
We are interested in the long time asymptotic behaviour, which we can extract from the λ → 1 limit of (66). Since C → 0 as λ → 1, we require the small argument expansion of the Airy function:

\mathrm{Ai}(x) \simeq \frac{1}{3^{2/3}\,\Gamma(2/3)} - \frac{x}{3^{1/3}\,\Gamma(1/3)} \, .    (67)

We then find, in the limit λ → 1,

G(k_0) \simeq \frac{3^{1/3}\,\Gamma(2/3)}{\Gamma(1/3)}\, \frac{k_0}{(1-\lambda)^{2/3}} \, .    (68)

Thus the leading singularity is at λ* = 1 and is of the form (λ* − λ)^{−2/3}. Invoking the usual Tauberian theorem [33], this singularity gives the following large t asymptotic behaviour:

Q_t(k_0) \sim \frac{3^{1/3}}{\Gamma(1/3)}\, k_0\, t^{-1/3} \, .    (69)

This matches perfectly with the small z asymptotic of the scaling behaviour in (24) upon using the small z expansion of f(z) in equation (25).
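The t^{−1/3} decay in (69) is easy to confirm directly: iterating the exact backward master equation with the sink Q(0, t) = 0 and comparing Q(1, t) at two times should give a ratio close to 2^{−1/3} ≈ 0.794. The sketch below is illustrative (lattice size, times, and tolerances are arbitrary choices):

```python
# Iterate the backward master equation for the survival probability
# Q(k0, t) with a sink at the origin and check the t^(-1/3) decay (69).
N = 200                    # lattice truncation; the front only reaches k ~ 30
Q = [0.0] + [1.0] * N      # Q[0] = 0 is the sink, Q[k] = 1 initially

snapshots = {}
for t in range(1, 8001):
    new = [0.0] * (N + 1)
    for k in range(1, N):
        p = 1.0 / (k + 2)
        new[k] = p * Q[k + 1] + p * Q[k - 1] + (1.0 - 2.0 * p) * Q[k]
    new[N] = Q[N]          # far boundary, never reached on this timescale
    Q = new
    if t in (4000, 8000):
        snapshots[t] = Q[1]

# Q(1, t) ~ t^(-1/3) implies Q(1, 8000)/Q(1, 4000) ~ 2^(-1/3) = 0.794...
ratio = snapshots[8000] / snapshots[4000]
```

Testing the ratio of two times, rather than the amplitude itself, keeps the check insensitive to slowly decaying finite-t corrections.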
IX. GENERALISATION TO THE CASE WHERE α ≠ 1
Up to now, we have considered only the case where the probability of hopping to the right or left is proportional to 1/(|k| + 2), i.e. the exponent α = 1 in the general expression for the hopping probability, A(|k| + 2)^{−α}. We now generalise to the case where α ≠ 1, i.e. the hopping probability is proportional to 1/(|k| + 2)^α. In this case equation (3) generalises for k ≥ 0 to

\partial_t P(k, t) \approx \frac{1}{k^{\alpha}} \left[ \frac{\partial^2}{\partial k^2} P(k, t) - \frac{2\alpha}{k}\, \partial_k P(k, t) + \frac{\alpha(\alpha+1)}{k^2}\, P(k, t) \right] \, .    (70)
Equation (70) can be put into the standard form (6), where now D(k) = 1/k^α and U(k) = 1/k^α. One can again solve (70) by the scaling approach discussed earlier. For general positive α it is easy to show that the scaling variable becomes k/t^ν where ν = (2 + α)^{−1}. Therefore the solution of (70) for P(k, t) has a scaling form

P(k, t) = t^{-\frac{1}{\alpha+2}}\, G\!\left( k\, t^{-\frac{1}{\alpha+2}} \right) \, ,    (71)

where the scaling function G(z) is symmetric and, for positive z, satisfies the nontrivial differential equation

G''(z) + \left( \frac{z^{\alpha+1}}{\alpha+2} - \frac{2\alpha}{z} \right) G'(z) + \left( \frac{z^{\alpha}}{\alpha+2} + \frac{\alpha(\alpha+1)}{z^2} \right) G(z) = 0 \, ,    (72)
with boundary condition G(z) → 0 as z → ∞. Remarkably, this equation admits the simple solution, satisfying the boundary condition,

G(z) = A\, z^{\alpha}\, \exp\!\left( -\frac{z^{\alpha+2}}{(\alpha+2)^2} \right) \, ,    (73)

where the normalisation constant A is given by

A^{-1} = 2\, (\alpha+2)^{\frac{\alpha}{\alpha+2}}\, \Gamma\!\left( \frac{\alpha+1}{\alpha+2} \right) \, .    (74)
Using the symmetry G(z) = G(−z), the full solution for all z can be written as

G(z) = A\, |z|^{\alpha}\, \exp\!\left( -\frac{|z|^{\alpha+2}}{(\alpha+2)^2} \right) \, .    (75)

When α = 0 we recover the standard Gaussian result for a simple random walk, while for α = 1 we recover the result (16) upon rescaling z → 3^{1/3} z. We note that for any α > 0 there is a trough, i.e. a cusp singularity, at z = 0. The trough at z = 0 disappears only for the case of simple diffusion (α = 0).
Similar scaling analyses can be performed for the survival probability as well as the distribution of the maximum site visited to the right. We do not repeat the analysis, but just note that the scaling implies that the asymptotic decay of the survival probability is Q(t) ∼ t^{−1/(α+2)} and the maximum scales as M(t) ∼ t^{1/(α+2)}.
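The normalisation constant (74) can be checked numerically for several values of α (an illustrative sketch; step size and tolerance are arbitrary choices): the closed form should match the integral of 2 z^α exp(−z^{α+2}/(α+2)²) over the positive half-line.

```python
import math

# Check the normalisation (74) of the general-alpha scaling function (75):
# A^{-1} = 2 (alpha+2)^(alpha/(alpha+2)) Gamma((alpha+1)/(alpha+2)).
for alpha in (0.5, 1.0, 2.0):
    closed = 2 * (alpha + 2) ** (alpha / (alpha + 2)) \
               * math.gamma((alpha + 1) / (alpha + 2))
    dz = 1e-3
    numeric = 2 * sum((i * dz) ** alpha
                      * math.exp(-((i * dz) ** (alpha + 2)) / (alpha + 2) ** 2)
                      for i in range(1, 20_000)) * dz  # up to z = 20
    assert abs(numeric - closed) / closed < 1e-2
```

The α = 0 limit reduces, as noted above, to the Gaussian normalisation of a simple random walk.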
X. CONCLUSION
In this paper we have studied a random walk with space-dependent transition probabilities. Our study was motivated by trap models of slow dynamics, but in contrast to most such models, our trap depths are not random but instead increase logarithmically with distance k from the origin. The dynamics of a particle moving on the lattice of traps follows an inhomogeneous random walk which has symmetric transition probabilities that decrease with k as 1/k. Thus the motion of a walker slows down as it goes further and further away from the origin, a phenomenon that we term 'sluggish dynamics'. The sluggish dynamics causes the typical distance explored up to time t to grow subdiffusively as t^{1/3}, in contrast to the standard t^{1/2} law for a simple random walk.
We used a scaling approach, in which the scaling variable is k/t^{1/3}, to compute long-time asymptotic results for various properties of this inhomogeneous random walk: the position distribution, the survival probability in the presence of a sink at the origin, the joint survival and position distribution, and the distribution of the maximum distance to the right. Interestingly, the position distribution has a trough (a cusp singularity) at the origin and is bimodal, with two peaks located at |k| = (3t)^{1/3}. This contrasts with the usual Gaussian distribution for diffusion (which has a single maximum at k = 0). The bimodal distribution and the t^{1/3} scaling reflect the sluggish nature of the dynamics. The survival probability shows an asymptotic decay ∼ t^{−1/3} at large time, which contrasts with the t^{−1/2} decay for a simple random walk. The fact that the survival probability decays to zero as t → ∞ implies that the walk is recurrent in d = 1, as is the simple random walk. The distribution of the maximum of the walk up to time t has a nontrivial scaling function.
We further showed how a generating function approach can be used to find exact solutions for all times. Using this approach to compute the survival probability in the presence of a sink at the origin, we recover our scaling result in the long-time limit. Application of the same generating function approach to other observables should be a straightforward extension.
Finally, we generalised the model to cases where the transition probability decays as 1/|k|^α with positive α. Except for α = 0 (simple random walk), the position distribution always shows a trough at the origin (k = 0), where it exhibits a singularity, behaving as |k|^α. Remarkably, the scaling function for the position distribution takes on a simple form (equation (75)), with the trough at the origin carrying the singularity |z|^α for all α > 0.
It is worthwhile comparing the behaviour of our sluggish random walk model with that of the Gillis model outlined in the introduction. In the continuum limit the Gillis model becomes diffusion in a logarithmic potential [19, 24] and the corresponding Fokker-Planck equation reads

\partial_t P(k, t) = \frac{\partial}{\partial k} \left[ \frac{\partial}{\partial k} P(k, t) + \left( \frac{\partial U(k)}{\partial k} \right) P(k, t) \right] \, ,    (76)

where the potential U(k) = 2ϵ ln |k|. The relevant case for us is ϵ < 0 whereby the potential is repulsive and the particle is pushed away from the origin. In this Gillis case, the solution for the time-dependent position distribution has scaling form [19, 24]

P(k, t) \to \frac{1}{t^{1/2}}\, G_{\mathrm{Gill}}\!\left( \frac{k}{t^{1/2}} \right) \, ,    (77)

where the scaling function, G_{Gill}(z), is given by

G_{\mathrm{Gill}}(z) = \frac{2^{\epsilon - 1/2}}{\Gamma(1/2 - \epsilon)}\, |z|^{-2\epsilon}\, e^{-|z|^2/2} \, .    (78)
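As a quick consistency sketch (not part of the paper), the prefactor in (78) can be checked against the normalisation of the Gaussian-weighted power law, here for the repulsive case ϵ = −1/2 chosen for illustration:

```python
import math

# Check that the Gillis scaling function (78) integrates to 1 for eps = -1/2:
# G_Gill(z) = 2^(eps-1/2)/Gamma(1/2-eps) * |z|^(-2*eps) * exp(-z^2/2).
eps = -0.5
amp = 2 ** (eps - 0.5) / math.gamma(0.5 - eps)
dz = 1e-4
total = 2 * amp * sum((i * dz) ** (-2 * eps) * math.exp(-((i * dz) ** 2) / 2)
                      for i in range(1, 100_000)) * dz  # up to z = 10
```

For ϵ = −1/2 the integrand is |z| e^{−z²/2}/2, whose total mass over the whole line is exactly 1.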
This is to be compared with the scaling function G(z) (16) for the sluggish random walk model (where the scaling variable is z = k/(3t)^{1/3}). As with (16), the scaling function (78) is bimodal, with peaks at z = ±(−2ϵ)^{1/2}, and has a trough at the origin. However, the model exhibits diffusive scaling and is thus not sluggish. The difference between the sluggish random walk and diffusion in a logarithmic potential is evident when one compares the Fokker-Planck equations (6) and (76). The key difference is the space-dependent diffusion constant D(k) = 1/k appearing in (6), along with the potential U(k) = 1/k. It is these features that lead to a change of the scaling variable to z = k/(3t)^{1/3} and consequent sluggish behaviour.
The sluggish random walk model and its analysis are straightforward to generalise to higher dimensions and other observables. For example, it would be interesting to study the return probabilities and recurrence/transience transition in a higher dimension for general α. It would also be of interest to study the time for the walker to traverse from one maximum of the position distribution to the other. More generally, our study has shown that inhomogeneous space-dependent random walks can exhibit surprising properties and it remains to explore the full range of such behaviour.
AZ acknowledges support of the INSPIRE fellowship from DST India and the Physics Computing Facility lab at UCSD. RJA was supported by the European Research Council under consolidator grant 682237 EVOSTRUC and by the Excellence Cluster Balance of the Microverse (EXC 2051 - Project-ID 390713860) funded by the Deutsche Forschungsgemeinschaft (DFG). For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission. MRE thanks LPTMS for the award of a CNRS Visiting Professorship, during which this work was written up.
[1] Wolynes P G, Lubchenko V (Editors) 2012 Structural Glasses and Supercooled Liquids: Theory, Experiment, and Applications. John Wiley & Sons.
[2] Berthier L, Biroli G, Bouchaud J-P, Cipelletti L, van Saarloos W (Editors) 2011 Dynamical Heterogeneities in Glasses, Colloids, and Granular Media (Vol. 150). OUP Oxford.
[3] Bouchaud J-P 1992 Weak ergodicity breaking and aging in disordered systems. J. Phys. I France 2 1705.
[4] Bouchaud J-P, Dean D S 1995 Aging on Parisi's tree. J. Phys. I France 5 265.
[5] Monthus C and Bouchaud J-P 1996 Models of traps and glass phenomenology. J. Phys. A: Math. Gen. 29 3847.
[6] Bertin E M and Bouchaud J-P 2003 Linear and nonlinear response in the aging regime of the one-dimensional trap model. Phys. Rev. E 67 065105.
[7] Sollich P 2003 Fluctuation-dissipation relations in trap models. J. Phys. A: Math. Gen. 36 10807.
[8] Bouchaud J-P and Georges A 1990 Anomalous diffusion in disordered media: statistical mechanisms, models and physical applications. Physics Reports 195 127.
[9] Metzler R and Klafter J 2000 The random walk's guide to anomalous diffusion: a fractional dynamics approach. Physics Reports 339 1.
[10] Bel G and Barkai E 2005 Weak ergodicity breaking in the continuous-time random walk. Phys. Rev. Lett. 94 240602.
[11] Hughes B D 1995 Random Walks and Random Environments: Random Walks (Vol. 1). Oxford University Press.
[12] Menshikov M, Popov S, Wade A 2016 Non-homogeneous Random Walks: Lyapunov Function Methods for Near-critical Stochastic Systems (Vol. 209). Cambridge University Press.
[13] Redner S 2001 A Guide to First-Passage Processes. Cambridge University Press.
[14] Bray A J, Majumdar S N, Schehr G 2013 Persistence and first-passage properties in nonequilibrium systems. Advances in Physics 62 225.
[15] Metzler R, Redner S, Oshanin G (Editors) 2014 First-Passage Phenomena and Their Applications (Vol. 35). World Scientific.
[16] Gumbel E J 1958 Statistics of Extremes. Columbia University Press.
[17] Majumdar S N, Pal A, Schehr G 2020 Extreme value statistics of correlated random variables: a pedagogical review. Physics Reports 840 1.
[18] Gillis J 1956 Centrally biased discrete random walk. Q. J. Math. 7 144.
[19] Onofri M, Pozzoli G, Radice M, Artuso R 2020 Exploring the Gillis model: a discrete approach to diffusion in logarithmic potentials. J. Stat. Mech. P113201.
[20] Pozzoli G, Radice M, Onofri M, Artuso R 2020 A continuous-time random walk extension of the Gillis model. Entropy 22 1431.
[21] Radice M, Onofri M, Artuso R, Pozzoli G 2020 Statistics of occupation times and connection to local properties of nonhomogeneous random walks. Phys. Rev. E 101 042103.
[22] Artuso R, Onofri M, Pozzoli G, Radice M 2022 Extreme value statistics of positive recurrent centrally biased random walks. J. Stat. Mech. P103209.
[23] Bray A J 2000 Random walks in logarithmic and power-law potentials, nonuniversal persistence, and vortex dynamics in the two-dimensional XY model. Phys. Rev. E 62 103.
[24] Dechant A, Lutz E, Barkai E and Kessler D A 2011 Solution of the Fokker-Planck equation with a logarithmic potential. J. Stat. Phys. 145 1524.
[25] Hirschberg O, Mukamel D and Schütz G M 2011 Approach to equilibrium of diffusion in a logarithmic potential. Phys. Rev. E 84 041111.
[26] Levine E, Mukamel D and Schütz G M 2005 Long-range attraction between probe particles mediated by a driven fluid. Europhys. Lett. 70 565.
[27] Ray S and Reuveni S 2020 Diffusion with resetting in a logarithmic potential. J. Chem. Phys. 152 234110.
[28] Castin Y, Dalibard J and Cohen-Tannoudji C 1991 The limits of Sisyphus cooling. Light Induced Kinetic Effects on Atoms, Ions and Molecules ed L Moi, S Gozzini, C Gabbanini, E Arimondo and F Strumia (Pisa: ETS Editrice).
[29] Marksteiner S, Ellinger K and Zoller P 1996 Anomalous diffusion and Lévy walks in optical lattices. Phys. Rev. A 53 3409.
[30] Lutz E 2004 Power-law tail distributions and nonergodicity. Phys. Rev. Lett. 93 190602.
[31] Bouchet F and Dauxois T 2005 Prediction of anomalous diffusion and algebraic relaxations for long-range interacting systems, using classical statistical mechanics. Phys. Rev. E 72 045103(R).
[32] Campa A, Dauxois T and Ruffo S 2009 Statistical mechanics and dynamics of solvable models with long-range interactions. Phys. Rep. 480 57.
[33] Wilf H S 2005 generatingfunctionology. CRC Press.
-NFPT4oBgHgl3EQfZTSq/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
.gitattributes CHANGED
@@ -184,3 +184,4 @@ tdAzT4oBgHgl3EQf6v4U/content/2301.01878v1.pdf filter=lfs diff=lfs merge=lfs -tex
184
  V9AzT4oBgHgl3EQfJ_vP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
185
  x9FJT4oBgHgl3EQfhSy6/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
186
  XNAyT4oBgHgl3EQf9PpX/content/2301.00870v1.pdf filter=lfs diff=lfs merge=lfs -text
 
 
+ o9FMT4oBgHgl3EQf7jFB/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
29AzT4oBgHgl3EQffPxu/content/tmp_files/2301.01449v1.pdf.txt ADDED
@@ -0,0 +1,916 @@
Building Coverage Estimation with Low-resolution Remote Sensing Imagery

Enci Liu, Chenlin Meng, Matthew Kolodner, Eun Jee Sung, Sihang Chen, Marshall Burke, David Lobell, Stefano Ermon
Stanford University
{chenlin, jesslec}@cs.stanford.edu, {mkolod, ejsung, schen22, mburke, dlobell}@stanford.edu, [email protected]
Abstract

Building coverage statistics provide crucial insights into the urbanization, infrastructure, and poverty level of a region, facilitating efforts towards alleviating poverty, building sustainable cities, and allocating infrastructure investments and public service provision. Global mapping of buildings has been made more efficient with the incorporation of deep learning models into the pipeline. However, these models typically rely on high-resolution satellite imagery, which is expensive to collect and infrequently updated. As a result, building coverage data are not updated in a timely manner, especially in developing regions where the built environment is changing quickly. In this paper, we propose a method for estimating building coverage using only publicly available low-resolution satellite imagery that is more frequently updated. We show that having a multi-node quantile regression layer greatly improves the model's spatial and temporal generalization. Our model achieves a coefficient of determination (R²) as high as 0.968 on predicting building coverage in regions of different levels of development around the world. We demonstrate that the proposed model accurately predicts the building coverage from raw input images and generalizes well to unseen countries and continents, suggesting the possibility of estimating global building coverage using only low-resolution remote sensing data.
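The multi-node quantile regression layer mentioned above is built on the standard pinball loss, one output node per quantile level. A minimal dependency-free sketch (the quantile levels and values here are illustrative, not the paper's exact configuration):

```python
# Pinball (quantile) loss behind a multi-node quantile regression head.
def pinball_loss(y_true, y_pred, q):
    """Loss for one quantile node at level q in (0, 1)."""
    diff = y_true - y_pred
    return max(q * diff, (q - 1) * diff)

def multi_node_quantile_loss(y_true, preds, quantiles=(0.1, 0.5, 0.9)):
    # One output node per quantile level; the total training loss sums
    # the per-node pinball losses.
    return sum(pinball_loss(y_true, p, q) for p, q in zip(preds, quantiles))

# Example: true building coverage 10.0, three quantile-node predictions.
loss = multi_node_quantile_loss(10.0, [8.0, 10.0, 12.0])  # 0.2 + 0.0 + 0.2
```

The asymmetric penalty pushes each node towards its own quantile of the target distribution, which is the mechanism credited here with better spatial and temporal generalization.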
+ Introduction
31
+ The quantity and location of buildings provide important in-
32
+ sight into the human activities and urban development of
33
a region. Not only are building statistics themselves key socioeconomic indicators, they also help predict other key sustainable development indices, including poverty (Ayush et al. 2021; Uzkent, Yeh, and Ermon 2020; Yeh et al. 2020), population density (Huang et al. 2021), and climate outcomes (Chini et al. 2018). Moreover, building coverage statistics help policymakers and NGOs make informed decisions regarding the provision of public services, the targeting of humanitarian aid, and priorities for large-scale infrastructure investments.

The development of deep learning detection (Redmon and Farhadi 2018) and segmentation models (Ronneberger, Fischer, and Brox 2015; Sun et al. 2019) has allowed for more efficient global mapping of buildings. As a result, there has been an increasing number of global settlement map datasets in the past decade (Esch et al. 2017; Marconcini et al. 2020; Sirko et al. 2021), allowing researchers to develop insights into the socioeconomic development of different regions.

Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Nevertheless, detection- or segmentation-based methods typically rely on a large amount of high-resolution satellite imagery and the corresponding pixel- or instance-level labels for training, which are often prohibitively expensive for researchers and policymakers. Moreover, high-resolution imagery is updated less frequently than low-resolution imagery, and running detection or segmentation models over high-resolution images covering a large area requires a large amount of compute. For these reasons, building data gathered in this way is often not updated in a timely manner. For example, the Microsoft Global Building Footprints dataset1 is collected from satellite images taken between 2014 and 2021. During that 7-year period, however, population continued to grow and new buildings were constructed, especially in fast-developing regions: from 2015 to 2020, the population increased by 44.1% for Malappuram and 34.2% for Abuja.2 Moreover, fine-grained detection and segmentation models usually generalize poorly to unseen geographies, timestamps, and image sources because the appearance of buildings varies widely in satellite images (Yuan and Cheriyadat 2014).
Compared with its high-resolution counterpart, low-resolution satellite imagery (e.g., Sentinel-1 and Sentinel-2) is publicly available and updated every month, making it desirable for studying the economic and urban development of a region. However, prior works have not fully utilized low-resolution imagery. In this paper, we propose a cheaper and more generalizable way to update building statistics using only low-resolution satellite imagery from Sentinel-1 and Sentinel-2. Instead of detecting or segmenting each building in satellite images, the proposed model directly predicts the building coverage from the raw input pixels, using input imagery from a public source that is updated nearly weekly. Specifically, we found that incorporating a multi-node quantile regression loss helps improve the generalization of the model. The proposed method achieves a coefficient of determination (R2) as high as 0.968 in regions from different continents and at different levels of development. We also conduct ablation studies and show that the incorporation of additional multi-spectral bands available from the low-resolution satellite imagery and the multi-node quantile regression design both help improve the model performance.

1https://github.com/microsoft/GlobalMLBuildingFootprints
2https://worldpopulationreview.com/

arXiv:2301.01449v1 [cs.CV] 4 Jan 2023
Method

In this paper, we propose a deep-learning-based regression model that accurately predicts building coverage in low-resolution satellite imagery from Sentinel-1 and Sentinel-2. Unlike detection- or segmentation-based methods, our model does not require high-resolution training data or hand-crafted thresholds, and it generalizes well to unseen regions. The proposed method accurately predicts the building coverage in the raw input low-resolution satellite image with the help of quantile regression.

Problem Definition

Given a geographical region, we want to estimate the building coverage within it. Due to the lack of direct statistics of building coverage in square meters or kilometers, we instead predict the number of building pixels y ∈ R in the satellite image X representing the target region, assuming that the number of building pixels is proportional to the actual building coverage (Figure 1(a)). We want to build a model that predicts y from the raw image input X.
Multi-node Quantile Regression Model

We observe that the distribution of building pixel counts across Africa and South America is heavy-tailed, with more than 75% of the samples having less than 20% of all pixels being buildings. Regular linear regression trained with a root-mean-square error objective estimates the conditional mean and assumes normality of the data distribution, failing to model a non-normal, asymmetric distribution accurately. Moreover, linear regression is not robust to outlier values, which characterize the building pixel count distribution for our task.

To address these problems, we adopt quantile regression (Koenker and Bassett Jr 1978), which estimates a conditional quantile (e.g., the median) of the response variable. Quantile regression allows us to incorporate uncertainty in prediction and captures the relationship between the input and different quantiles of the data. As we show in the Results section, multi-node quantile regression indeed empirically performed better than regular linear regression for our task.

Specifically, we modify the ResNet18 (He et al. 2016) architecture to have K output channels (i.e., nodes), each corresponding to a different quantile (see Figure 1). As the median of a distribution gives the minimum absolute error from the ground truth (Hanley et al. 2001), at inference time we collect the model predictions from only the 0.5 quantile node, which is expected to predict the conditional median of the response variable.
Multi-node Quantile Loss

For each node that represents a quantile q ∈ (0, 1), we compute an asymmetric quantile loss, or pinball loss. Depending on the quantile q, over- and under-estimation are penalized unevenly. Specifically, the node-wise pinball loss for a given prediction ŷ and ground truth label y is computed as follows:

    L_pinball(q, y, ŷ) = q · |ŷ − y|          if ŷ ≥ y
                         (1 − q) · |ŷ − y|    otherwise

When q = 0.5, the pinball loss equals the absolute error scaled by 0.5.

To compute the final loss, we take the mean of the pinball losses among the K output nodes as follows:

    L_quantile(y, ŷ) = (1/K) Σ_{n=1}^{K} L_pinball(q_n, y, ŷ_n)

where the subscript n indicates the index of the quantile in the list of quantiles predicted by the model, and ŷ_n is the prediction of the corresponding output node. Notice that when K = 1 and q = 0.5, the quantile loss reduces to a scaled L1 loss.
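The two loss formulas above can be written out in plain Python (a minimal sketch using the paper's convention, in which overestimates are weighted by q and underestimates by 1 − q; the function names are hypothetical):

```python
def pinball_loss(q, y, y_hat):
    """Node-wise pinball loss for quantile q, ground truth y,
    and prediction y_hat."""
    if y_hat >= y:
        return q * abs(y_hat - y)
    return (1.0 - q) * abs(y_hat - y)

def quantile_loss(y, y_hats, quantiles=(0.1, 0.5, 0.9)):
    """Final loss: mean of the node-wise pinball losses over the
    K output nodes, one per quantile."""
    return sum(pinball_loss(q, y, yh)
               for q, yh in zip(quantiles, y_hats)) / len(quantiles)
```

With K = 1 and q = 0.5 this reduces to half the L1 loss, matching the remark above.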
171
+ Experiment Setup
172
+ Before introducing the experiment setups, we define tile and
173
+ patch in the context of this paper. We define a patch to be the
174
+ small rasters of 50×50 pixels that we crop larger rasters into.
175
+ A tile is a larger satellite imagery that could be cropped into
176
+ multiple smaller patches (see Figure 2). We train and eval-
177
+ uate the multi-node quantile regression model on patches of
178
+ 50 × 50 pixels and add post-processing steps to collect tile-
179
+ level results.
180
Training Data

As the majority of the African continent is covered by forest or desert and does not contain any buildings, we sample locations based on population density so that the training data contains sufficient tiles with buildings for the model to learn from. Our training set contains 15,000 input-label pairs. Sentinel-1 and Sentinel-2 images are used as the input data, and the Open Buildings dataset is used to derive the building coverage label, as we detail next.

Sentinel-1 (S1) satellites collect radar imagery for land and ocean monitoring. We download the S1 satellite imagery collected in 2020 from Google Earth Engine. The images have 10m GSD and are composites that take the median value over the target period of time. We include band 1, the VV overall mean, as one of the input channels. The S1 tiles are cropped into 50 × 50-pixel patches. As areas with high built-up density are shown to produce stronger signals in S1 band 1 (Koppel et al. 2017), we expect that adding this channel to the input will improve the model's performance.

Sentinel-2 (S2) satellites are equipped with the mission of land monitoring. We download the 2020 composites of S2 imagery from Google Earth Engine. We include the RGB channels (i.e., bands 4, 3, 2) and the near-infrared (NIR) channel (i.e., band 8) from S2 as input. Compared with other channels, NIR is useful for distinguishing vegetation from buildings (Luo et al. 2019; Pessoa et al. 2019; Schlosser et al. 2020). All four bands are available at 10m GSD. The S2 tiles are cropped into 50 × 50-pixel patches.
[Figure 1 graphic: (a) an example input X with y = 874 building pixels, shown as a base map (left) and the binary mask (right) used to derive the label y; (b) model diagram: input X → Concat → ResNet18 → K = 3 quantile nodes (0.1, 0.5, 0.9) → ŷ and L_quantile; A = 34.96%.]

Figure 1: (a) Given an input image X, we want to predict the number of building pixels y within it. We assume that y approximates the actual building coverage. (b) Architecture of the multi-node quantile regression model. The input to the model includes five channels: Sentinel-1 band 1 (overall mean) and Sentinel-2 bands 4, 3, 2, 8 (R, G, B, NIR). The model has K output nodes representing different quantiles.
[Figure 2 graphic: a tile is cropped into 50 × 50-pixel patches, whose predictions are summed.]

Figure 2: A tile can be cropped into multiple 50 × 50-pixel patches. The tile-level results are computed by taking the summation of all the patch-level predictions.
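The tile-to-patch pipeline in Figure 2 can be sketched as follows (a minimal illustration with hypothetical helper names, assuming tiles are numpy arrays of shape (H, W, C)):

```python
import numpy as np

def crop_into_patches(tile, patch_size=50):
    """Split an (H, W, C) tile into non-overlapping
    patch_size x patch_size crops, row-major order."""
    rows = tile.shape[0] // patch_size
    cols = tile.shape[1] // patch_size
    return [tile[i * patch_size:(i + 1) * patch_size,
                 j * patch_size:(j + 1) * patch_size]
            for i in range(rows) for j in range(cols)]

def tile_level_count(patch_predictions):
    """Tile-level building pixel count: the sum of patch-level predictions."""
    return sum(patch_predictions)
```

For example, a 200 × 200-pixel tile yields 16 patches, matching the setup described in the appendix.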
Open Buildings (Sirko et al. 2021) contains 516M building footprints across 43 African countries, covering 64% of the continent. The building footprints were detected by a state-of-the-art segmentation model from high-resolution imagery at different timestamps. We downsample the original high-resolution mask (0.5m GSD) to 10m GSD to match the resolution of the input data. Then, we convert the continuous-valued rasters into binary masks by thresholding at 0. The building pixel count labels are derived for the 50 × 50-pixel patches.
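The label-derivation step above amounts to the following (a sketch with a hypothetical helper name, assuming the downsampled confidence raster is a numpy array in which any non-zero value indicates building presence):

```python
import numpy as np

def building_pixel_count(confidence_patch):
    """Derive the label y for a 50 x 50 patch: threshold the
    continuous-valued raster at 0 and count building pixels."""
    binary_mask = confidence_patch > 0
    return int(binary_mask.sum())
```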
Experiment Settings

To evaluate the generalization performance of the model, we consider three experiment settings that capture likely cases of real-world application.

Holistic. In the holistic setting, we train and test on data points sampled across the African continent based on population density.

Intra-country. In the intra-country setting, we train and test the model on samples from the same country.

Exclusive. In the exclusive setting, we train the model on all African countries except for the one country on which we test our model.
Baselines

To evaluate the performance of the proposed method, we use existing settlement map products from different years as baselines to compare against. These off-the-shelf products provide researchers and policymakers with general information about urban shapes and boundaries, but are less useful for providing up-to-date building coverage statistics, which change more frequently than the shape of an urban area. We believe these existing settlement map products serve as good baselines and describe them in this section.

Global Urban Footprint (GUF). GUF was collected from 2011 to 2012. It contains mappings of human settlements in the form of binary masks.

Global Human Settlement Layer (GHSL). GHSL was collected from Sentinel-1 images in 2018. The building map is available as binary masks.

World Settlement Footprint (WSF). WSF (Marconcini et al. 2020) was collected in 2015. It contains binary masks of global human settlements.
Evaluation Settings

We evaluate the model performance at both patch-level and tile-level and describe the evaluation metrics in this section.

Patch-level Evaluation. As the model is trained on patches, we want to evaluate the model's performance at the same scale. We use two evaluation metrics for patch-level evaluation: mean absolute error (MAE) and Pearson's r2 between the predicted and the ground truth labels.
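These two metrics can be sketched in plain Python (a minimal illustration with hypothetical function names; Pearson's r2 here denotes the squared Pearson correlation coefficient):

```python
def mae(y_true, y_pred):
    """Mean absolute error between ground truth and predicted counts."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def pearson_r2(y_true, y_pred):
    """Squared Pearson correlation between predictions and labels."""
    n = len(y_true)
    mean_t = sum(y_true) / n
    mean_p = sum(y_pred) / n
    cov = sum((t - mean_t) * (p - mean_p) for t, p in zip(y_true, y_pred))
    var_t = sum((t - mean_t) ** 2 for t in y_true)
    var_p = sum((p - mean_p) ** 2 for p in y_pred)
    return cov * cov / (var_t * var_p)
```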
298
+ Tile-level Evaluation
299
+ In real-world applications, building
300
+ coverage statistics are needed over a large geography. To re-
301
+ flect this use case, we also evaluate the model performance at
302
+
303
+ 1661 (1687)
304
+ 82 (83)
305
+ 93 (87)
306
+ 665 (645)
307
+ 698 (704)
308
+ 987 (945)
309
+ Figure 3: Examples of model predictions in Brazil, which was unseen during training. The first row is the RGB input image; the
310
+ second row is the binary masks (where the bright yellow pixels are building pixels) from which we derive the label. The model
311
+ prediction is in black; the ground truth labels are highlighted in green in the parentheses. Our method accurately estimated the
312
+ results.
313
+ Expt./Eval. settings*
314
+ Open Buildings
315
+ SpaceNet7
316
+ Holistic
317
+
318
+ Intra-country
319
+
320
+ Exclusive
321
+
322
+ Patch-level
323
+
324
+
325
+ Tile-level
326
+
327
+ Table 1: Checkmarks indicate that the corresponding dataset
328
+ is used as validation data under the experiment (Expt.) or
329
+ evaluation (Eval.) settings. *Different experiment settings
330
+ require retraining the model, while different evaluation set-
331
+ tings do not.
332
+ tile-level using absolute error in building coverage. To com-
333
+ pare the statistics of building coverage across baselines with
334
+ different GSDs, we compute the percentage of building pix-
335
+ els within a tile as a proxy for building coverage. For a tile
336
+ R cropped into N patches at inference time, let yi be the
337
+ number of building pixels in patch i, the building coverage
338
+ percentage is computed as:
339
+ Cbuilding(R) =
340
+ �N
341
+ i=1 yi
342
+ Htile × Wtile
343
+ × 100
344
+ where Htile and Wtile denote the height and width (in pixel)
345
+ of the tile. The absolute error between a method and the
346
+ ground truth is computed as the absolute difference between
347
+ the two building coverage percentages.
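The coverage computation above can be written out as follows (a sketch with a hypothetical function name; e.g., a 4km × 4km SpaceNet7 tile at 10m GSD has H_tile = W_tile = 400):

```python
def building_coverage_pct(patch_counts, tile_height, tile_width):
    """Tile-level building coverage percentage: summed patch-level
    building pixel counts y_i over the total number of tile pixels."""
    return sum(patch_counts) / (tile_height * tile_width) * 100
```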
348
+ Evaluation Data
349
+ We evaluate our model on Open Buildings and SpaceNet 7
350
+ Challenge datasets and provide the information in Table 1.
351
+ Open Buildings
352
+ The Open Buildings (Sirko et al. 2021)
353
+ dataset provides the training labels. We evaluate the model
354
+ performance on a hold-out test subset of the Open Buildings
355
+ dataset at patch-level only. We do not use Open Buildings for
356
+ tile-level evaluation because it contains data from different
357
+ timestamps.
358
+ SpaceNet 7 Challenge dataset (SpaceNet7)
359
+ SpaceNet73
360
+ was published in 2020 as the data for the SpaceNet 7 Multi-
361
+ Temporal Urban Development Challenge. The dataset pro-
362
+ vides 4km × 4km tiles and the building polygons in each
363
+ tile. We downsample the raster to 10m GSD to match the
364
+ resolution of the input data. We evaluate the model perfor-
365
+ mance on SpaceNet7 at both patch-level and tile-level.
366
+ We use the SpaceNet7 tiles for validation because the la-
367
+ bels were collected the same year as the Sentinel-1/-2 in-
368
+ put data. Furthermore, SpaceNet7 includes tiles from re-
369
+ gions outside of Africa, allowing us to evaluate the model’s
370
+ performance on unseen countries. Specifically, we evaluate
371
+ our model on six SpaceNet7 tiles from different regions:
372
+ Uganda, Zambia, Ghana, Peru, Brazil, and Mexico. Note
373
+ that we do not use SpaceNet7 as the training labels be-
374
+ cause the data is limited in quantity. These labels are human-
375
+ generated and likely of higher quality compared to those
376
+ from Open Buildings, which are generated by a model.
377
Results

In this section, we evaluate the performance of the proposed method. We first show that our model accurately predicts building coverage in Africa and generalizes to South American regions unseen during training. We also conduct ablation studies demonstrating that the major design choices, namely the non-RGB bands and the multi-node quantile regression, are necessary for boosting model performance and generalization.

3SpaceNet on Amazon Web Services (AWS). "Datasets." The SpaceNet Catalog. Last modified October 1st, 2018. Accessed on November 20th, 2021. https://spacenet.ai/datasets/
                        Africa                                      South America
                    Uganda        Zambia         Ghana          Brazil        Peru          Mexico
Method              R2↑   Tile↓   R2↑    Tile↓   R2↑    Tile↓   R2↑    Tile↓  R2↑    Tile↓  R2↑    Tile↓
GUF (2012)          0.092  7.23  -0.715   0.96   0.466   1.59     -      -      -      -      -      -
WSF (2015)          0.286  0.21  -5.444  38.68  -0.290   3.28   0.579   9.21  0.562   6.06  -0.099 18.52
GHSL (2018)         0.057  6.83   0.023   2.46   0.771   0.75   0.863   0.97  0.516  10.06   0.187  8.12
w/o multi-node QR  -15.51 33.15 -10.41   54.27 -15.64   31.68  -0.069  18.30 -2.807  38.28  -2.480 36.24
w/o S1 band 1       0.809  1.95   0.779   3.74   0.252   3.37   0.864   3.91  0.906   2.34   0.422 10.96
w/o S2 band 8       0.772  2.33   0.580   7.14   0.804   0.54   0.751   6.19  0.836   3.13   0.337 13.36
Ours                0.868  1.18   0.866   3.21   0.835   2.31   0.968   0.83  0.798   6.63   0.707  0.36

Table 2: Results on SpaceNet7. The bottom four rows are the proposed method with the corresponding components removed and the complete version. In the table, R2 is the patch-level Pearson's r2 and Tile is the tile-level absolute error between SpaceNet7 and the corresponding method. The model was trained with 15,000 samples in Africa for 1200 epochs and tested on the corresponding SpaceNet7 tiles.
Expt. setting    Train      Test       MAE ↓    R2 ↑
Holistic         Africa     Africa     75.43    0.888
Intra-country    Rwanda     Rwanda     27.60    0.938
Exclusive        Africa*    Rwanda     42.04    0.844
Exclusive        Africa*    Uganda    207.74    0.568
Exclusive        Africa*    Kenya     110.67    0.915

Table 3: Patch-level results for different experiment settings on Open Buildings. All models in the table are trained with 15,000 samples from the corresponding train regions for 1200 epochs and tested on 1,000 samples from the corresponding test regions. *Under the Exclusive setting, the corresponding test region is removed from the training set so that the model is tested on unseen regions.
Accurate Prediction in Africa

To demonstrate the effectiveness of our method, we evaluate it on both SpaceNet7 (see Table 2) and Open Buildings (see Table 3) in Africa. We define a tile as multiple patches and compute the tile-level results by taking the summation of the patches within it (see Section 4.1).

As shown in Table 2, our model achieves an R2 as high as 0.968 at patch-level, outperforming the baselines on most of the African regions. Some examples of the proposed model's patch-level predictions are provided in Figure 3. Furthermore, the proposed method yields fairly accurate building coverage estimates at tile-level, achieving an error rate as low as 0.54% on Ghana. Additionally, we observe that baselines like WSF and GUF give good estimates at only one level but not the other, indicating that their patch-level errors cancel out at tile-level. In contrast, the proposed method yields consistently accurate estimates at both tile- and patch-level. We emphasize that good performance at both scales is ideal, since it brings us closer to finer-grained, pixel-level predictions (semantic segmentation).
Generalization to Unseen Regions

Prior methods for generating building coverage statistics generalize poorly under domain shift. To evaluate the generalizability of the proposed method, we train and test the multi-node quantile regression model under the three experiment settings defined earlier: Holistic, Intra-country, and Exclusive. We provide the patch-level results on the Open Buildings dataset in Table 3.

We see that among the three experiment settings, Intra-country gives the highest Pearson's r2 of 0.971 because the model is tested on in-domain data and the region is small. We also observe that the proposed multi-node quantile regression model generalizes well to regions not seen during training. Specifically, under the Exclusive setting, the proposed method achieves an r2 as high as 0.962 even with the test regions removed from the training set (see Table 2).

Furthermore, the proposed model generalizes to regions outside of the African continent. We evaluate our method on SpaceNet7 tiles from South America and provide the results in Table 2. We observe that the proposed model achieves comparable or superior performance when evaluated on Brazil, Peru, and Mexico compared with the baselines. This generalization to a different continent indicates that our model could potentially be applied globally for collecting building coverage statistics, while using only publicly available low-resolution satellite imagery.
Ablation Studies

In this part, we carry out ablation studies on 1) the incorporation of S1 band 1 and S2 band 8 as input and 2) the multi-node quantile regression, and provide the results in Table 2.

Multi-node Quantile Regression. To see the effect of having multiple output nodes for different quantiles, we compare the performance of the multi-node quantile regression model with a single-node model trained on the L1 objective. We provide the ablation study results on SpaceNet7 in Table 2 (see row "w/o multi-node QR"). We observe that the proposed multi-node model outperforms the single-node model by a large margin on all test regions. In addition, as shown in the scatter plots of ground truth vs. predicted building pixel counts (Figure 4), the model tends to overestimate the building coverage without the multi-node quantile regression. We speculate that having multiple output nodes improves performance because the pinball losses from the 0.1 and 0.9 quantile nodes help bound the predictions.
Multi-Spectral Bands. We conduct ablation studies on the two non-RGB multi-spectral bands, i.e., S1 band 1 and S2 band 8, and provide the ablation study results evaluated on SpaceNet7 in Table 2. We observe that at both patch- and tile-level, incorporating S1 band 1 and S2 band 8 boosts the performance. From the scatter plots in Figure 4, we see that removing either S1 band 1 or S2 band 8 makes the model underestimate at patch-level. This suggests that the non-RGB bands provide additional information that helps correct the model's tendency to under- or over-estimate.

[Figure 4 panels, left to right: w/o S2 band 8 (R2 = 0.751), w/o S1 band 1 (R2 = 0.864), w/o Multi-node QR (R2 = -0.069), Ours (R2 = 0.968); x-axis: ground truth building pixel count, y-axis: predicted building pixel count.]

Figure 4: Scatter plots of the patch-level predicted number of building pixels against the ground truth from the ablation studies. All models are trained on Africa and tested on the Brazil tile. Removing the multi-node quantile regression or any of the non-RGB bands makes the model prone to overestimation or underestimation.
Discussion and Social Impact

The United Nations' Sustainable Development Goals (SDGs) present an urgent call for action in all countries, and for collaboration between different sectors of policy-making, toward more sustainable development (Nations 2016). However, relevant data for informing decision makers and organizations are often lacking or infrequently collected, especially in fast-developing countries. Building coverage is an important socioeconomic indicator and also helps predict other indicators. In this paper, we develop a framework for estimating building coverage using only free low-resolution satellite imagery from Sentinel-1 and Sentinel-2. The proposed multi-node quantile regression model yields fairly accurate estimates and generalizes well to unseen countries and continents.

This paper offers a cost-efficient and generalizable way to collect global building coverage statistics, accelerating the progress towards multiple SDGs. For example, building coverage helps predict the level of economic development, which informs policymakers' decisions for alleviating poverty (SDG 1, No Poverty), allocating infrastructure resources (SDG 9, Industry, Innovation and Infrastructure), and building more sustainable cities (SDG 11, Sustainable Cities and Communities). In addition, building coverage provides important information about the interaction between humans and the environment, including the monitoring of agriculture to reduce hunger (SDG 2, Zero Hunger) and climate measurement (SDG 13, Climate Action).

Furthermore, the proposed method could potentially be applied to track changes in building coverage for different regions of the world, assisting existing efforts. For example, the United Nations' World Urbanization Prospects report provides estimates and projections of urban and rural data, including population and area, over time (Nations 2018). The proposed method could be applied to derive building coverage statistics as soon as satellite images are updated (the low-resolution satellite imagery is updated on a weekly basis). As building coverage is highly correlated with, or can be used to derive, other values like building density, urban area, and population, our method could potentially aid the efforts to track urban development.

This paper demonstrates the viability of using free low-resolution satellite imagery for estimating building coverage statistics over a large geography. Future research could explore how the model can be applied to classify urban and rural areas, and to estimate population and poverty levels.
References

Ayush, K.; Uzkent, B.; Tanmay, K.; Burke, M.; Lobell, D.; and Ermon, S. 2021. Efficient Poverty Mapping from High Resolution Remote Sensing Images. In AAAI.

Chini, M.; Pelich, R.; Hostache, R.; Matgen, P.; and López-Martínez, C. 2018. Towards a 20 m global building map from Sentinel-1 SAR data. Remote Sensing, 10(11): 1833.

Esch, T.; Heldens, W.; Hirner, A.; Keil, M.; Marconcini, M.; Roth, A.; Zeidler, J.; Dech, S.; and Strano, E. 2017. Breaking new ground in mapping human settlements from space: The Global Urban Footprint. ISPRS Journal of Photogrammetry and Remote Sensing, 134: 30–42.

Hanley, J. A.; Joseph, L.; Platt, R. W.; Chung, M. K.; and Belisle, P. 2001. Visualizing the median as the minimum-deviation location. The American Statistician, 55(2): 150–152.

He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778.

Huang, X.; Wang, C.; Li, Z.; and Ning, H. 2021. A 100 m population grid in the CONUS by disaggregating census data with open-source Microsoft building footprints. Big Earth Data, 5(1): 112–133.

Koenker, R.; and Bassett Jr, G. 1978. Regression quantiles. Econometrica: Journal of the Econometric Society, 33–50.

Koppel, K.; Zalite, K.; Voormansik, K.; and Jagdhuber, T. 2017. Sensitivity of Sentinel-1 backscatter to characteristics of buildings. International Journal of Remote Sensing, 38(22): 6298–6318.

Luo, N.; Wan, T.; Hao, H.; and Lu, Q. 2019. Fusing high-spatial-resolution remotely sensed imagery and OpenStreetMap data for land cover classification over urban areas. Remote Sensing, 11(1): 88.

Marconcini, M.; Metz-Marconcini, A.; Üreyen, S.; Palacios-Lopez, D.; Hanke, W.; Bachofer, F.; Zeidler, J.; Esch, T.; Gorelick, N.; Kakarla, A.; et al. 2020. Outlining where humans live, the World Settlement Footprint 2015. Scientific Data, 7(1): 1–14.

Nations, U. 2016. Transforming our world: The 2030 agenda for sustainable development.

Nations, U. 2018. 2018 Revision of World Urbanization Prospects.

Pessoa, G. G.; Amorim, A.; Galo, M.; and Galo, M. d. L. B. T. 2019. Photogrammetric Point Cloud Classification Based on Geometric and Radiometric Data Integration. Boletim de Ciências Geodésicas, 25.

Redmon, J.; and Farhadi, A. 2018. YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767.

Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 234–241. Springer.

Schlosser, A. D.; Szabó, G.; Bertalan, L.; Varga, Z.; Enyedi, P.; and Szabó, S. 2020. Building extraction using orthophotos and dense point cloud derived from visual band aerial imagery based on machine learning and segmentation. Remote Sensing, 12(15): 2397.

Sirko, W.; Kashubin, S.; Ritter, M.; Annkah, A.; Bouchareb, Y. S. E.; Dauphin, Y.; Keysers, D.; Neumann, M.; Cisse, M.; and Quinn, J. 2021. Continental-Scale Building Detection from High Resolution Satellite Imagery. arXiv preprint arXiv:2107.12283.

Sun, K.; Xiao, B.; Liu, D.; and Wang, J. 2019. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5693–5703.

Uzkent, B.; Yeh, C.; and Ermon, S. 2020. Efficient object detection in large images using deep reinforcement learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 1824–1833.

Yeh, C.; Perez, A.; Driscoll, A.; Azzari, G.; Tang, Z.; Lobell, D.; Ermon, S.; and Burke, M. 2020. Using publicly available satellite imagery and deep learning to understand economic well-being in Africa. Nature Communications, 11.

Yuan, J.; and Cheriyadat, A. M. 2014. Learning to count buildings in diverse aerial scenes. In Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, 271–280.
Appendix A: Datasets

Sentinel-1 provides free global Synthetic Aperture Radar (SAR) imagery with 9 bands at a ground sampling distance (GSD) of 10 m. The revisit time of the Sentinel-1 satellite is 12 days, meaning that the images are updated frequently. We include band 1, the VV overall mean, as one of the input channels. Each Sentinel-1 tile that we download is 2 km-by-2 km and has the shape 200×200 pixels. Each tile is then cropped into 16 smaller patches of shape 50×50 pixels when fed into our model.
Sentinel-2 provides multi-spectral images from the visible to the shortwave infrared (SWIR) spectral range with a revisit time of 10 days. The Sentinel-2 satellite imagery contains 13 multi-spectral bands with GSDs ranging from 10 to 60 m. Besides the RGB channels, we also include the near-infrared (NIR) channel (i.e., band 8) as input. Like Sentinel-1, all Sentinel-2 imagery is downloaded as 2 km-by-2 km tiles and cropped into 16 patches of size 50×50 pixels.
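The tiling described above can be sketched in a few lines. This is our own minimal illustration, not the authors' code; a real pipeline would operate on rasters rather than nested lists.

```python
# Minimal sketch: splitting a 200x200 tile into sixteen non-overlapping
# 50x50 patches, as described for the Sentinel-1/-2 inputs.

def crop_tile(tile, patch_size=50):
    """Split a square tile (list of rows) into non-overlapping square patches."""
    n = len(tile)
    assert n % patch_size == 0, "tile side must be a multiple of the patch size"
    patches = []
    for top in range(0, n, patch_size):
        for left in range(0, n, patch_size):
            patch = [row[left:left + patch_size] for row in tile[top:top + patch_size]]
            patches.append(patch)
    return patches

# A dummy 200x200 single-band tile (all zeros) stands in for a real raster.
tile = [[0.0] * 200 for _ in range(200)]
patches = crop_tile(tile)
print(len(patches), len(patches[0]), len(patches[0][0]))  # 16 50 50
```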
Open Buildings contains 516M building footprints across 43 African countries that cover 64% of the continent. The building footprints were detected using state-of-the-art segmentation models. The Open Buildings data is originally in the format of polygons labeled with three confidence intervals of building presence: 0.6 to 0.65, 0.65 to 0.7, and greater than 0.7. For our purpose, we download the Open Buildings data as high-resolution rasters with 0.5 m GSD and two bands. Band 1 gives the model confidence that a building is located in the region, with the confidence interval preserved from the original polygon data. A band 1 value of zero indicates no building presence. Band 2 is a reclassification of the confidence scores into four buckets.
We use band 1 to derive the binary label of building and non-building. Specifically, we first downsample the rasters to 10 m GSD to match the resolution of the Sentinel-1 and Sentinel-2 input images. Then, we convert the continuous-valued band 1 into a binary mask by treating all pixels with a non-zero value as a building pixel, i.e., a pixel that contains buildings. Like Sentinel-1 and Sentinel-2, the downsampled binary mask is cropped into 50×50-pixel patches. For each smaller patch, we use the number of building pixels as the label for training and testing.
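The label derivation reduces to a threshold-and-count. The sketch below is our own illustration (not the authors' code), using a toy 4×4 patch in place of a 50×50 band-1 raster patch.

```python
# Turning the downsampled Open Buildings band 1 into a binary mask and
# counting building pixels, which serve as the per-patch regression label.

def building_pixel_count(band1_patch):
    """Count pixels with non-zero confidence, i.e. pixels that contain buildings."""
    return sum(1 for row in band1_patch for v in row if v != 0.0)

# Toy stand-in for a band-1 patch; values are model confidences.
patch = [
    [0.00, 0.62, 0.71, 0.00],
    [0.00, 0.68, 0.00, 0.00],
    [0.00, 0.00, 0.00, 0.00],
    [0.90, 0.00, 0.00, 0.00],
]
print(building_pixel_count(patch))  # 4
```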
Appendix B: Baselines

Figure 5 shows the visualization of the different benchmarks described below and the datasets we used in experiments.

Global Urban Footprint (GUF) is a worldwide mapping of human settlement patterns in the form of binary masks available at 12 m GSD. The building footprints are derived from satellite images of TerraSAR-X and TanDEM-X from 2011 to 2012.

Global Human Settlement Layer (GHSL) was collected from backscattered information of Sentinel-1 images. The building map is available as binary masks at 20 m GSD.

World Settlement Footprint (WSF) is a binary mask of global human settlements available at 10 m GSD. The map was derived from Sentinel-1 and Landsat-8 satellite imagery in the year 2015.

Figure 5: Binary settlement maps in Zambia (1 = red building pixel; 0 = white non-building pixel) from different sources: base map, Open Buildings (2021), WSF (2015), GHSL (2018), GUF (2012), and SpaceNet (2020). The years of collection for the settlement maps are provided in the parentheses.

Figure 6: (a) The geo-locations of our training samples, all of which are from Africa. (b) Per-km2 cost and resolution of different satellite imagery: high-resolution satellite imagery is expensive, while many lower-resolution sources are publicly available.
Appendix C: Experiments

Model Implementation

We modify the ResNet18 architecture to incorporate multi-node quantile regression. Specifically, we replace the final fully-connected (FC) layer in ResNet18 with two FC layers, each followed by a ReLU activation. We set the number of output channels (i.e., nodes) to K, the number of quantiles the model predicts. For all models in this paper, we use K = 3 and predict the quantiles {0.1, 0.5, 0.9}. Each channel corresponds to a quantile, and the corresponding pinball loss is computed using the output of that particular channel. To prevent the model from overfitting, we add dropout layers after each ResNet block, which we found empirically helps the model generalize to unseen regions.
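The pinball (quantile) loss used per output channel has a standard closed form, L_tau(y, q) = max(tau(y - q), (tau - 1)(y - q)): under-prediction is penalized with weight tau, over-prediction with 1 - tau. The sketch below is our own illustration, not the authors' training code; the prediction values are hypothetical.

```python
# Pinball (quantile) loss for one target y and one predicted quantile q at level tau.

def pinball_loss(y, q, tau):
    diff = y - q
    return max(tau * diff, (tau - 1) * diff)

# Three output channels, one per predicted quantile, as in the paper.
taus = (0.1, 0.5, 0.9)
y_true = 120.0                    # e.g. number of building pixels in a patch
y_pred = (60.0, 110.0, 180.0)    # hypothetical per-channel predictions
losses = [pinball_loss(y_true, q, tau) for q, tau in zip(y_pred, taus)]
print(losses)  # approximately [6.0, 5.0, 6.0]
```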
Region  | Population (%) | Building (%)
--------|----------------|-------------
Dhaka   | 19.23          | 29.68
Kampala | 28.18          | 7.28
Athens  | -0.19          | 1.55
Boston  | 4.45           | -9.42

Table 4: Temporal change experiment results on four cities with different levels of development. The second and third columns are the percentage changes in population and building coverage from 2016 to 2021. All results are collected from the model trained with 15,000 samples in Africa for 1200 epochs.
Training Details

The training samples' geo-locations are shown in Figure 6(a). For all models in the experiment section, we train them on 15,000 samples for 1200 epochs with a learning rate of 0.002. The models have fully converged at the point where we end training.
Temporal Experiments

Evaluation. To evaluate our model's ability to track temporal changes in building coverage, we run the model on satellite images from 2016 and 2021.4 Specifically, we chose four cities with various levels of development from four different continents (Dhaka, Kampala, Athens, and Boston) and downloaded all images covering the cities. Then, we compute the building coverage growth rate over the 5-year interval for the chosen cities using the model trained on 15,000 African images for 1200 epochs.

One challenge of temporal evaluation is the lack of building coverage ground truth data. To get a sense of how well the proposed method tracks temporal changes in building coverage, we use population change as a proxy for building coverage change. Specifically, we assume that population and building coverage grow together, as more people require more buildings. However, we only use the population growth rate as a rough reference for the relative level of development of different cities.

Results. Tracking changes in building coverage across time allows us to understand the urban development of a region, especially in cities or urban areas. We show that the proposed model can be used to track building coverage changes in regions of different levels of development and provide the temporal experiment results in Figure 4.

4 Sentinel-2 mission started in 2014.
arXiv:2301.00994v1 [quant-ph] 3 Jan 2023

Pinhole quantum ghost imaging

Andres Vega,1,a) Sina Saravi,1 Thomas Pertsch,1,2 and Frank Setzpfandt1
1) Institute of Applied Physics, Abbe Center of Photonics, Friedrich Schiller University Jena, Albert-Einstein-Str. 15, 07745 Jena, Germany
2) Fraunhofer Institute for Applied Optics and Precision Engineering IOF, Albert-Einstein-Str. 7, 07745 Jena, Germany
(Dated: 4 January 2023)

We propose a quantum ghost imaging scheme based on biphotons that, by using a collimated pump beam of the right size for biphoton generation, obviates the need for lenses to achieve imaging. The scheme is found to be analogous to the classical pinhole camera, where we show that the equivalent to the classical pinhole size depends mainly on the width of the pump beam, but also on the thickness of the nonlinear crystal and the wavelengths of the biphoton.
Quantum ghost diffraction1 and imaging2 rely on the spatial correlations of a biphoton, which can be created by the nonlinear process of spontaneous parametric down conversion (SPDC)3,4, where a pump (P) photon impinging on a crystal with second-order nonlinearity χ(2) is split into a pair of photons called signal (S) and idler (I). After creation, the two photons are separated into two different paths and only one of them interacts with the object, e.g. the signal photon. Then, the signal photon is measured by a detector with no spatial resolution, whereas another detector with spatial resolution measures the idler photon that never interacted with the object. Neither of the detectors alone can recover a diffraction pattern or image of the object. Remarkably, these can be retrieved by correlating the two measurements5,6. This measurement technique has two main advantages. First, very low numbers of photons can be used due to the inherently better signal-to-noise ratio of quantum ghost imaging compared to imaging with classical light7,8. Additionally, ghost imaging with two-color biphotons can overcome limitations due to inaccessible wavelength ranges for illumination and detection9–11.

To form a ghost image, usually lenses are placed in the path of the signal and/or idler after the crystal2,7,8 or in the pump beam before the crystal12. The lenses introduce a parabolic phase-front in either of the beam paths, which results in the formation of the image in the coincidence measurement. However, quantum ghost imaging can also be realized without lenses by adding the parabolic phase-front through engineering the nonlinearity profile of the nonlinear crystal, e.g. by using a nonlinear photonic crystal13. Furthermore, as ghost imaging can also be realized with classical light using thermal light sources14–19, their inherent property of acting like a phase-conjugated mirror16 in a ghost imaging scheme can also be used for lensless ghost imaging20,21.

In classical optics, lensless imaging can also be realized using the principle of pinhole imaging22,23. In a pinhole camera, the object is located on one side of an opaque screen with a small pinhole, whereas the detector is on the other side. Without the need of lenses, the detector captures a shadow of the object, which can be optimized by adapting the pinhole size22. Throughout this manuscript, we will refer to this shadow as an image, although strictly no imaging is taking place. For applications where high spatial resolution is not needed, this type of lensless imaging has several advantages over imaging with lenses, among which are a larger depth of field, a wide angular field of view23, and its applicability in wavelength ranges for which high-quality lenses are less available24,25.

a) Electronic mail: [email protected]

In this work, we want to show that the advantages of pinhole imaging can also be harnessed in quantum ghost imaging. For ghost imaging with thermal light, a pinhole-based scheme has already been proposed, where the optimal lensless imaging condition depends on the size of the thermal source26. We extend this approach of lensless imaging to the quantum regime with entangled photons, based on the setup sketched in Fig. 1. Here, we assume that the nonlinear crystal generating photon pairs is illuminated by a collimated pump beam and, contrary to ghost imaging with thermal light, we use a bucket detector instead of a point detector behind the object. We will show that for specific pump beam diameters, pinhole quantum ghost imaging can be realized, and we investigate its properties and optimum regime of operation. To this end, we start by discussing the biphoton joint spatial probability (JSP) and the quantum ghost pattern (G) together with a numerical example. Later, we derive a simplified analytical model for our imaging scheme that explains the observations of the numerical example and allows the connection to the classical pinhole camera. Using this model, we will furthermore discuss the spatial resolution of pinhole quantum ghost imaging.
Throughout this work, z is the propagation direction and we restrict our analysis, without loss of generality, to one transverse dimension x in position space, whose conjugate in momentum space is the transverse component of the wave-vector kx. We also assume an infinitely extended nonlinear crystal in the transverse direction, which ensures transverse phase matching kxP = kxS + kxI.

FIG. 1. Sketch of the considered setup: a nonlinear crystal pumped by a collimated beam, the object followed by a bucket detector in the signal arm, a spatially resolving detector in the idler arm, and a coincidence circuit correlating the two measurements.

We assume an undepleted monochromatic classical pump beam of the form EP(x, z, t) = ∫ dkxP φP(kxP) exp[i(kxP x + kzP z − ωP t)], where φP(kxP) is the pump spatial spectrum, the longitudinal component of the wave-vector is kz = [(ωn/c)² − kx²]^(1/2) with ω/c = 2π/λ, λ being the wavelength in free space and ω the corresponding frequency. Additionally, we ignore the effects of the boundaries between the crystal and its surrounding free space, which would cause refracted and reflected waves, and assume a simplified case with the crystal's refractive index n = 1. We investigate signal and idler photons at fixed frequencies ωI and ωS, such that ωP = ωS + ωI. This can be achieved experimentally by placing narrow bandpass filters centered around these frequencies in their beam paths. Under these conditions, the biphoton quantum state after the filters will have the form27 |Ψ⟩ ∝ ∫∫ dkxS dkxI ψSPDC(kxS, kxI) |kxS, ωS⟩ |kxI, ωI⟩, with ψSPDC(kxS, kxI) = φP(kxS + kxI) sinc(Δkz lz/2). Here, Δkz = kzP − kzS − kzI, lz is the thickness of the crystal, and |kx, ω⟩ is the single-photon state defined by the transverse component of the wave-vector kx and frequency ω.
The JSP of the biphoton state at the two detectors in Fig. 1 is28

JSP(xS, xI) ∝ |F⁻¹{hI(kxI) h2S(kxS) [to(kxS) ∗ h1S(kxS) ψSPDC(kxS, kxI)]}|²,   (1)

where F⁻¹ is a two-dimensional inverse Fourier transform, (kxS, kxI) → (xS, xI), and the h are free-space transfer functions with hI(kxI) = exp(i kzI zI), h1S(kxS) = exp(i kzS d), and h2S(kxS) = exp[i kzS (zS − d)]. Finally, ∗ denotes the convolution only in kxS, as the object with transmission To(xS), whose spectrum is to(kxS), is in the signal arm. The ghost pattern G(xI) for a bucket detector that collects all signal photons is derived from the JSP as G(xI) ∝ ∫ dxS JSP(xS, xI). We assume a pump beam with Gaussian spatial spectrum, whose waist is located at the center of the nonlinear crystal at the plane z = 0, where φP ∝ exp[−σP²(kxS + kxI)²/2] has a flat wave front and a width σP in position space (see zoomed out region in Fig. 1).
As shown in pinhole ghost imaging with a thermal source26, the size of the photon source has a similar role as the pinhole size in classical optics, determining the optimal regime of imaging. This suggests that in the quantum regime, the same role can exist for the size of the biphoton source, which depends on the width of the pump beam. To examine this premise, we numerically calculate quantum ghost imaging in a setup with a nonlinear crystal with thickness lz = 3 mm, a pump with wavelength of λP = 350 nm, degenerate down-converted photons with λS = λI = 700 nm, and detectors located at zS = 1.2 m and zI = 1.5 m from the crystal. As object, we consider a double-slit with 940 µm slit separation, with unity transmission in each slit of 50 µm width and no transmission elsewhere. In Fig. 2(a-d) we present the ensuing normalized JSPs calculated using Eq. (1) for different sizes of the pump beam. The double-slit always results in two spots, whose separation is approximately five times larger than the slit separation. Depending on the width of the pump beam, they change their widths and begin to overlap and interfere. The interference is minimal in Fig. 2(c) with pump width σP = 167 µm. In Fig. 2(e), we show the corresponding ghost patterns, where the cases of (a-d) are marked. We see that for a specific range of pump widths, two separate maxima are visible, corresponding to an image of the double-slit. We note that the well-known quantum ghost diffraction pattern1 of the object could be recovered using a large pump width, as in Fig. 2(d), and replacing the bucket with a point detector, which would measure only a horizontal cut through the JSP.

FIG. 2. (a-d) JSP(xS, xI) and corresponding (e) quantum ghost pattern G(xI) of a double-slit with 940 µm slit separation and 50 µm slit width, located at d = 30 cm, produced by a pump width σP of (a) 58 µm, (b) 102 µm, (c) 167 µm, and (d) 800 µm.

This numerical example portrays the core idea of this work. Other schemes12,13 have shown that a pump wave with a curved wave front, obtained by means of a lens placed before the nonlinear crystal or using a photonic crystal, can also be used for quantum ghost imaging without lenses in the paths of the biphoton. Fig. 2(c) now shows that lensless quantum ghost imaging can also be achieved by simply using a collimated pump beam with an optimal width. This is easily seen, as the Rayleigh length29 of the pump, 2πσP²/λP = 50 cm, is much larger than the crystal thickness, lz = 3 mm.
To find the optimal conditions for this imaging scheme, we derive a simplified analytical model. Here, we consider only one of the slits and assume it has infinitesimal width and is located at xS = a, which means that To(xS) = δ(xS − a). This object is put into Eq. (1) and, in paraxial approximation, an analytical solution can be calculated (see supplementary material), where we approximate the sinc function appearing due to phasematching in the nonlinear crystal by a Gaussian30. We find a Gaussian ghost intensity pattern G(xI) ∝ exp[−(xI − x0)²/(2σG²)] with a width σG and a maximum located at x0, given by

σG = {2 Re[α1⁻¹]}^(−1/2),   (2)

x0 = a Re[α1⁻¹ α2] {Re[α1⁻¹]}⁻¹,   (3)

where

α1 = σP² + γ (lz/π)(λI − λP) + i λI zI/(2π) − [σP² − γ lz λP/π] α2,   (4)

α2 = [σP² − γ lz λP/π] [σP² + γ (lz/π)(λS − λP) + i d λS/(2π)]⁻¹,   (5)

with γ = 0.455/4, a constant that comes from the sinc-to-Gaussian approximation. Due to the bucket detector that collects all signal photons behind the object, the equations do not depend on zS, the distance of the object to the bucket detector; however, the model does depend on the location of the resolving detector zI. These distances remain at zS = 1.2 m and zI = 1.5 m throughout the manuscript. Fig. 3 shows the width σG of the ghost pattern and the normalized position x0/a with respect to the width of the pump σP and the distance of the object to the crystal d. We observe in Fig. 3 that, for each object position d, the width of the ghost pattern σG has a minimum at a certain pump width σP. This confirms the observations of the numerical example in Fig. 2 that uses d = 30 cm, where the optimal case for imaging, marked with a dot in Fig. 2(c), leads to the narrowest ghost pattern for each of the slits. In the rest of the cases, the pattern of each slit is too wide, resulting in considerable overlap between them and hence the loss of visibility of the ghost pattern. If a second infinitesimal slit were at xS = −a, the distance between the two maxima in the ghost patterns would be 2x0; therefore, x0/a represents the magnification. Fig. 3(b) verifies that for the numerical example in Fig. 2 the magnification is approximately equal to five. It also tells us that the magnification is always negative, implying that the ghost image is inverted.
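As a numerical check, Eqs. (2)-(5) can be evaluated directly at the parameters of Fig. 2(c). The sketch below is our own illustration, not code from the paper; it should reproduce a ghost-pattern width on the millimetre scale and the roughly fivefold, inverted magnification discussed above.

```python
# Evaluating Eqs. (2)-(5) at the Fig. 2(c) parameters (SI units).
import math

def alphas(sigma_P, lam_P, lam_S, lam_I, l_z, d, z_I, gamma=0.455 / 4):
    """Return alpha_1, alpha_2 of Eqs. (4) and (5)."""
    a2 = (sigma_P**2 - gamma * l_z * lam_P / math.pi) / (
        sigma_P**2 + gamma * (l_z / math.pi) * (lam_S - lam_P)
        + 1j * d * lam_S / (2 * math.pi))
    a1 = (sigma_P**2 + gamma * (l_z / math.pi) * (lam_I - lam_P)
          + 1j * lam_I * z_I / (2 * math.pi)
          - (sigma_P**2 - gamma * l_z * lam_P / math.pi) * a2)
    return a1, a2

# 167 um pump, 350 nm pump wavelength, degenerate 700 nm photons,
# 3 mm crystal, object at 30 cm, resolving detector at 1.5 m.
a1, a2 = alphas(167e-6, 350e-9, 700e-9, 700e-9, 3e-3, d=0.30, z_I=1.5)
sigma_G = (2 * (1 / a1).real) ** -0.5                 # Eq. (2), ghost-pattern width
magnification = ((1 / a1) * a2).real / (1 / a1).real  # Eq. (3) divided by a
print(sigma_G, magnification)  # width ~1 mm, magnification ~ -5
```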
Fig. 3(a) shows that the minimum width of the ghost pattern becomes smaller as the object is placed farther from the crystal. This value, however, does not reach zero; the behavior of σG converges to approximately the orange curve in the limit where the object is very distant from the crystal, d → ∞. Here, Eq. (2) can be reduced to σG² = σ0² + σ0⁻² [zI λI/(4π)]². The width of the ghost pattern of an infinitesimal slit object with the spatially resolving detector placed right after the crystal, σ0 = σG(zI = 0), is

σ0 = [σP²/2 + γ (λI/λS) λP lz/(2π)]^(1/2),   (6)

an expression that depends only on the parameters of the biphoton source. For a fixed value of zI λI, the ghost pattern width σG has a minimum value, namely σG_min = √2 σ0, when σ0² = zI λI/(4π). Remarkably, this result is equivalent to the optimal pinhole size σpinhole in a classical pinhole camera that creates the smallest point image of a slit upon spatially incoherent illumination22, σpinhole² ∝ λz. Hence, σ0 can be considered the pinhole size of quantum ghost imaging. It does not only depend on the width of the pump but also on the thickness of the crystal and the biphoton wavelengths. However, the deviation of the equivalent pinhole size σ0 from the pump width σP due to the biphoton wavelength is small, as for a pump with negligible diffraction inside the crystal σP² ≫ λP lz/(2π).
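The equivalent pinhole size of Eq. (6) and the optimal condition σ0² = zI λI/(4π) are straightforward to evaluate; the sketch below is our own illustration using the example parameters of this work.

```python
# Equivalent pinhole size sigma_0 of the biphoton source, Eq. (6),
# and the optimal sigma_0 for a given detector distance z_I.
import math

def pinhole_size(sigma_P, lam_P, lam_S, lam_I, l_z, gamma=0.455 / 4):
    return math.sqrt(sigma_P**2 / 2
                     + gamma * (lam_I / lam_S) * lam_P * l_z / (2 * math.pi))

# 167 um pump, 3 mm crystal, degenerate 700 nm photons.
s0 = pinhole_size(167e-6, 350e-9, 700e-9, 700e-9, 3e-3)
# The crystal term is tiny here, so sigma_0 stays close to sigma_P / sqrt(2).
s0_opt = math.sqrt(1.5 * 700e-9 / (4 * math.pi))  # optimal sigma_0 for z_I = 1.5 m
print(s0, s0_opt)
```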
Next, we analyze the magnification to complement the analogy. The magnification of the classical pinhole camera is given by geometric optics as −z/d, with z the distance of the detector to the pinhole and d the distance of the object to the pinhole. A similar relation can be found from the analytical model of the proposed ghost imaging scheme using Eq. (3), under the conditions that σP² ≫ (γ lz/π)(max(λS, λI) − λP) and {d λS/(2π), zI λI/(2π)} ≫ σP². The first condition ensures that signal, idler, and pump waves have negligible diffraction inside the nonlinear crystal. In this case, σP defines the size of the generated signal and idler beams inside the crystal, and hence their Rayleigh lengths are also much larger than the crystal thickness lz. The second condition states that both the object and the resolving detector are in the far field of the biphoton source. Under these conditions, following the steps detailed in the supplementary material, we find the magnification to be x0/a ≈ −(zI/d)(λI/λS). This is similar to the geometrical magnification of the classical pinhole camera, but including again the biphoton wavelengths. From this expression, the magnification of the numerical example in Fig. 2 can be quickly found: −(1.5 m/30 cm)(700 nm/700 nm) = −5; see the supplementary material for an example with non-degenerate wavelengths. Noteworthy, our imaging scheme is not limited to the far-field domain. We only take the approximations to find the simple expression for the magnification to build the analogy with the classical pinhole camera.

FIG. 3. (a) Width σG and (b) magnification x0/a of the ghost pattern of an infinitesimal slit at xS = a with respect to the pump width σP at various distances d of the slit to the crystal (d = 3 cm, 5 cm, 10 cm, 30 cm, and d → ∞). The circle marks the parameters of the numerical example in Fig. 2(c).
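The far-field magnification formula is a one-liner; this small sketch (our own illustration) evaluates it for the numerical example of Fig. 2.

```python
# Far-field magnification x0/a = -(z_I/d) * (lam_I/lam_S).
def far_field_magnification(z_I, d, lam_I, lam_S):
    return -(z_I / d) * (lam_I / lam_S)

m = far_field_magnification(z_I=1.5, d=0.30, lam_I=700e-9, lam_S=700e-9)
print(m)  # -5.0
```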
We further harness the analytical model of Eqs. (2) and (3) to derive the transverse resolution of pinhole quantum imaging. In the following, we use Rayleigh's resolution criterion, defined as the minimum distance between two point-like objects at which they can be distinguished from one another29. A smaller ghost pattern width σG of a point-like object results in a better distinction between neighboring objects, as was demonstrated in Fig. 2. At the same time, a larger magnification |x0/a| would result in a better distinction between two neighboring objects, as their images are further apart. This suggests that, to optimally tell two objects apart, not only the ghost pattern width has to be taken into account but also the magnification. To test this hypothesis, we begin by taking the derived Gaussian ghost pattern of an infinitesimal slit A at xS = a. For the sake of simplicity we normalize its maximum to one, resulting in the ghost pattern GA = exp[−(xI − x0)²/(2σG²)], where σG is given by Eq. (2) and x0 by Eq. (3). If we include another similar slit B, but at xS = −a, the resulting ghost pattern is symmetric with respect to xI = 0 and is given approximately by G ≈ GA + GB, assuming negligible interference. The two slits can be distinguished in this pattern when its visibility is above a certain threshold, here heuristically chosen to be 0.4. That means the intensity at xI = 0 between the two maxima should be smaller than 0.4 of the maximal intensity at xI = ±x0, which gives the threshold condition G(xI = 0)th ≈ GA(0) + GB(0) = 2GA(0) = 0.4. Using this, the expression for GA, and Eqs. (2) and (3), we find the transverse spatial resolution R, corresponding to the minimum resolvable distance 2a between two identical infinitesimal slits, to be

R = 2 {−ln[G(0)th/2] Re[α1⁻¹]}^(1/2) |Re[α1⁻¹ α2]|⁻¹.   (7)

FIG. 4. (a) Resolution R with respect to the pump width σP; the green line connects the minima of curves over a wider range of object distances d. (c) Number of spatial modes N with respect to the crystal thickness lz at various d. (b), (d) Numerically computed JSP (top) and G (bottom) of (b) a double-slit and (d) a complex object (magnified transmission in yellow). Circles in (a) and (c) mark the parameters of (b) and (d).
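Eq. (7) can likewise be evaluated numerically; the self-contained sketch below (our own illustration, not the paper's code) computes R for the example parameters with the heuristic threshold G(0)th = 0.4, yielding a sub-millimetre value.

```python
# Transverse resolution R of Eq. (7) for the example parameters.
import math

def resolution(sigma_P, lam_P, lam_S, lam_I, l_z, d, z_I,
               threshold=0.4, gamma=0.455 / 4):
    # alpha_2 and alpha_1 of Eqs. (5) and (4).
    a2 = (sigma_P**2 - gamma * l_z * lam_P / math.pi) / (
        sigma_P**2 + gamma * (l_z / math.pi) * (lam_S - lam_P)
        + 1j * d * lam_S / (2 * math.pi))
    a1 = (sigma_P**2 + gamma * (l_z / math.pi) * (lam_I - lam_P)
          + 1j * lam_I * z_I / (2 * math.pi)
          - (sigma_P**2 - gamma * l_z * lam_P / math.pi) * a2)
    return (2 * math.sqrt(-math.log(threshold / 2) * (1 / a1).real)
            / abs(((1 / a1) * a2).real))

R = resolution(167e-6, 350e-9, 700e-9, 700e-9, 3e-3, d=0.30, z_I=1.5)
print(R)  # sub-millimetre for these parameters
```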
Fig. 4(a) displays its dependence on the pump width σP at different object distances d. The thick green line connects the minima of several curves over a wider range of d to portray the tendency. One could naively expect from the width of the ghost pattern σG in Fig. 3(a) that the resolution R could improve as the object is farther from the crystal, since the minimum width becomes smaller. However, Fig. 4(a) tells the opposite, as R is enhanced by the large magnification at small object distances d. This is confirmed numerically by considering again the case depicted in Fig. 2(b), which uses a pump width of 102 µm and d = 30 cm, and changing the position of the object to d = 10 cm. The resulting JSP and ghost pattern G are shown in Fig. 4(b). Compared to Fig. 2(b), the visibility is increased due to the larger magnification. This originates from the fact that, for a fixed object distance d, the minimum ghost pattern width σG does not coincide with the largest magnification x0/a. Hence, the optimal pump width σP that results in the best resolution is not necessarily the one at which σG is minimized. This is only true for larger values of d, where x0/a is almost independent of σP, as in the numerical example of Fig. 2.
Noteworthy, the green tendency line in Fig. 4(a) hints that the closer the object is to the crystal and the smaller the pump width, the better the resolution R. However, the smaller the pump width, the broader its spatial spectrum becomes, in which case the used paraxial approximation does not hold. A non-paraxial formulation is a matter of future research. Additionally, R will depend linearly on the object position d for large values of d. This is because, in that case, the ghost pattern width σG is nearly independent of d, see Fig. 3(a), and the magnification is |x0/a| ∝ 1/d, as already explained.
In addition to the resolution, it is also of great interest to describe the extent of the signal illumination onto the object, which limits the object size that can be imaged. This extent is finite and can be found from a projection of the JSP at the object position onto the signal axis (see the analytical expression in the supplementary material). This illumination size σS increases with the position of the object d and decreases with the crystal thickness lz. Importantly, the ratio between the illumination size and the resolution gives the maximum number of identical infinitesimal slits that can be resolved inside the illuminated region of the object, N ≡ σS/R, i.e., it describes the number of independent spatial modes in the object illumination31. Its dependence on the crystal thickness lz is displayed in Fig. 4(c) for various d, each at a pump width σP that minimizes the resolution, as described by the green line in Fig. 4(a). To increase the number of spatial modes N, a thinner crystal can be used, similar to other quantum imaging schemes2,32, as it allows a larger range of transverse wave-vectors33. Also, the object could be put farther away from the crystal. Here, the increase of N with d has a limit due to the linear dependence of both σS and R on d for very distant objects. Finally, we found that photon pairs with non-degenerate wavelengths show a minor improvement in the resolution and number of modes; see the supplementary material for some examples.

Lastly, we sum up the main features of the proposed setup with a ghost image of a more complicated object with various shapes and transmissions, see Fig. 4(d). This object is placed at d = 1 m and we use a pump width σP = 258 µm that optimizes the ghost image resolution to R = 1.5 mm, allows N = 10 spatial modes, and has a magnification x0/a = −1.2. Moreover, unlike the setup with a pseudo-thermal source26, we see in the JSP of this example that an integrating bucket detector is necessary to show the whole illuminated section of the object in the ghost pattern; a point detector would not be able to detect signals from all objects as it would just "take a
568
+ horizontal thin slice” of the JSP.
569
+ To conclude, we proposed a quantum ghost imaging
570
+ scheme without lenses in the biphoton arms by means of a
571
+ collimated pump beam with an optimal size. This imaging
572
+ scheme is best suited for applications where lenses for the
573
+ biphoton wavelengths are less available and a high transverse
574
+ resolution is not required. We demonstrated that the proposed
575
+ scheme is analogous to the classical pinhole camera where the
576
+ biphoton source plays the role of the pinhole and derived its
577
+ spatial resolution and number of spatial modes.
578
+ See the supplementary material for further details of the an-
579
+ alytical model and examples with non-degenerate photons.
580
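The mode-count relation N ≡ σS/R can be checked against the numbers of the final example; note that σS is not quoted explicitly in this excerpt, so the value below is inferred from N = 10 and R = 1.5 mm rather than taken from the text.

```python
# Numerical check of N = sigma_S / R for the Fig. 4(d) example. R is taken
# from the text; sigma_S is inferred from N = 10 and is not a quoted value.
R = 1.5e-3        # ghost-image resolution, in metres
sigma_S = 15e-3   # illumination size on the object, in metres (inferred)
N = sigma_S / R   # number of independent spatial modes
print(round(N))   # 10
```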
ACKNOWLEDGMENTS

We thank E. Santos and V. Gili for their insightful comments. This work was supported by the Thuringian Ministry for Economy, Science, and Digital Society; the European Social Funds and the European Funds for Regional Development (2017 FGR 0067, 2017 IZN 0012); the German Federal Ministry of Education and Research (FKZ 13N14877, FKZ 03ZZ0434) and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, project ID 407070005).

DATA AVAILABILITY

The data that support the findings of this study are available within the article and its supplementary material.

This article may be downloaded for personal use only. Any other use requires prior permission of the author and AIP Publishing. This article appeared in A. Vega et al., Appl. Phys. Lett. 117, 094003 (2020) and may be found at https://doi.org/10.1063/5.0012477
[1] D. V. Strekalov, A. V. Sergienko, D. N. Klyshko, and Y. H. Shih, Phys. Rev. Lett. 74, 3600 (1995).
[2] T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, Phys. Rev. A 52, R3429 (1995).
[3] D. C. Burnham and D. L. Weinberg, Phys. Rev. Lett. 25, 84 (1970).
[4] C. K. Hong and L. Mandel, Phys. Rev. A 31, 2409 (1985).
[5] R. S. Bennink, S. J. Bentley, R. W. Boyd, and J. C. Howell, Phys. Rev. Lett. 92, 033601 (2004).
[6] F. Ferri, D. Magatti, A. Gatti, M. Bache, E. Brambilla, and L. A. Lugiato, Phys. Rev. Lett. 94, 183602 (2005).
[7] G. Brida, M. Genovese, and I. Ruo Berchera, Nature Photonics 4, 227 (2010).
[8] P. A. Morris, R. S. Aspden, J. E. C. Bell, R. W. Boyd, and M. J. Padgett, Nature Communications 6, 5913 (2015).
[9] K. W. C. Chan, M. N. O'Sullivan, and R. W. Boyd, Phys. Rev. A 79, 033808 (2009).
[10] S. Karmakar and Y. Shih, Phys. Rev. A 81, 033845 (2010).
[11] R. S. Aspden, N. R. Gemmell, P. A. Morris, D. S. Tasca, L. Mertens, M. G. Tanner, R. A. Kirkwood, A. Ruggeri, A. Tosi, R. W. Boyd, G. S. Buller, R. H. Hadfield, and M. J. Padgett, Optica 2, 1049 (2015).
[12] T. B. Pittman, D. V. Strekalov, D. N. Klyshko, M. H. Rubin, A. V. Sergienko, and Y. H. Shih, Phys. Rev. A 53, 2804 (1996).
[13] P. Xu, H. Y. Leng, Z. H. Zhu, Y. F. Bai, H. Jin, Y. X. Gong, X. Q. Yu, Z. D. Xie, S. Y. Mu, and S. N. Zhu, Phys. Rev. A 86, 013805 (2012).
[14] A. F. Abouraddy, B. E. A. Saleh, A. V. Sergienko, and M. C. Teich, Phys. Rev. Lett. 87, 123602 (2001).
[15] Y. Cai and S.-Y. Zhu, Phys. Rev. E 71, 056607 (2005).
[16] D.-Z. Cao, J. Xiong, and K. Wang, Phys. Rev. A 71, 013801 (2005).
[17] R. S. Bennink, S. J. Bentley, and R. W. Boyd, Phys. Rev. Lett. 89, 113601 (2002).
[18] A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, Phys. Rev. A 70, 013802 (2004).
[19] A. Valencia, G. Scarcelli, M. D'Angelo, and Y. Shih, Phys. Rev. Lett. 94, 063601 (2005).
[20] G. Scarcelli, V. Berardi, and Y. Shih, Applied Physics Letters 88, 061106 (2006).
[21] X.-H. Chen, Q. Liu, K.-H. Luo, and L.-A. Wu, Opt. Lett. 34, 695 (2009).
[22] M. Young, Appl. Opt. 10, 2763 (1971).
[23] M. Young, The Physics Teacher 27, 648 (1989).
[24] C. Thomas, G. Rehm, I. Martin, and R. Bartolini, Phys. Rev. ST Accel. Beams 13, 022805 (2010).
[25] H. O. Anger, Nature 170, 200 (1952).
[26] W. Gong, P. Zhang, X. Shen, and S. Han, Applied Physics Letters 95, 071110 (2009).
[27] An outline of the derivation of the biphoton quantum state can be found in the supplementary material. For a more detailed treatment see e.g., J. Schneeloch and J. C. Howell, Journal of Optics 18, 053501 (2016), and S. Walborn, C. Monken, S. Pádua, and P. S. Ribeiro, Physics Reports 495, 87 (2010).
[28] A. F. Abouraddy, B. E. A. Saleh, A. V. Sergienko, and M. C. Teich, J. Opt. Soc. Am. B 19, 1174 (2002).
[29] M. Born, E. Wolf, A. B. Bhatia, P. C. Clemmow, D. Gabor, A. R. Stokes, A. M. Taylor, P. A. Wayman, and W. L. Wilcock, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th ed. (Cambridge University Press, 1999).
[30] K. W. Chan, J. P. Torres, and J. H. Eberly, Phys. Rev. A 75, 050101 (2007).
[31] I. Kviatkovsky, H. M. Chrzanowski, E. G. Avery, H. Bartolomaeus, and S. Ramelow, "Microscopy with undetected photons in the mid-infrared," (2020), arXiv:2002.05960 [physics.optics].
[32] G. B. Lemos, V. Borish, G. D. Cole, S. Ramelow, R. Lapkiewicz, and A. Zeilinger, Nature 512, 409 (2014).
[33] C. Okoth, A. Cavanna, T. Santiago-Cruz, and M. V. Chekhova, Phys. Rev. Lett. 123, 263602 (2019).
arXiv:2301.00994v1 [quant-ph] 3 Jan 2023
(Dated: 4 January 2023)
59AzT4oBgHgl3EQfEfpg/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
5NFAT4oBgHgl3EQfmh0R/content/tmp_files/2301.08623v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
5NFAT4oBgHgl3EQfmh0R/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
AdE1T4oBgHgl3EQfDQNp/content/tmp_files/2301.02874v1.pdf.txt ADDED
@@ -0,0 +1,1770 @@
 
GAN-Based Content Generation of Maps for Strategy Games

Vasco Nunes
Instituto Superior Técnico, University of Lisbon, Lisbon, Portugal

João Dias
Faculty of Science and Technology, University of Algarve and CCMAR and INESC-ID, Faro, Portugal

Pedro A. Santos
Instituto Superior Técnico / INESC-ID, University of Lisbon, Lisbon, Portugal

Keywords—Heightmap; Procedural Content Generation; Generative Adversarial Network

Abstract—Maps are a very important component of strategy games, and creating them by hand is a time-consuming task. Maps generated by traditional PCG techniques such as Perlin noise or tile-based PCG techniques look unnatural and unappealing, thus not providing the best user experience for the players. However, it is possible to have a generator that creates realistic and natural-looking images of maps, given that it is trained to do so. We propose a model for the generation of maps based on Generative Adversarial Networks (GAN). In our implementation we tested different variants of GAN-based networks on a dataset of heightmaps. We conducted an extensive empirical evaluation to determine the advantages and properties of each approach. The results obtained are promising, showing that it is indeed possible to generate realistic-looking maps using this type of approach.
I. INTRODUCTION

In maps for strategy games, the map's visual characteristics play a very important role in the player's experience. When we talk about visual characteristics, we usually refer to the map's outline and level of detail. Complex elements like peninsulas, mountain ranges or islands provide more tactical information, improving the player's decisions in a strategic way. Therefore, these details improve the way the information contained in the map is assessed, so that the player can make decisions strategically. A game which uses the same kind of map numerous times with no variety can cause players to become bored after replaying the game a few times.

One of the ways to generate maps for strategy games is by using Procedural Content Generation (PCG)¹ techniques. The most common traditional approach for initial heightmap generation is Perlin noise [1], followed by complementary techniques such as hydraulic erosion [2]. Unfortunately, the maps generated look a bit unnatural and unappealing. Moreover, most of the methods based on PCG suffer from some kind of uncontrollability. The ideal scenario would be to have a generator that learned to create realistic images of maps with the complex elements (peninsulas or mountain ranges) appearing next to each other².

Recent advances in Deep Neural Networks, and in particular GANs (Generative Adversarial Networks), highlight the potential of a new approach for the automatic generation of maps. GAN is a framework proposed by Ian J. Goodfellow et al. [3] that is trained to generate data (images in most cases) with the same characteristics as a given training set. For example, a GAN trained on images of human faces would be able to generate realistic samples that look authentic to human observers.

The framework consists of two networks competing against each other, hence the term adversarial: a generative network, Generator G, which creates fake data from a random distribution pz (usually normal or uniform), and a discriminative network, Discriminator D, which, given some data x, estimates whether x came from the real data distribution pdata or from the generator's distribution pg.

A real-world analogy would be the job of an art counterfeiter and a cop. The cop (D) learns to detect false paintings while the counterfeiter (G) improves at producing perfectly fake paintings indistinguishable from real ones.

The generation of images using GANs has achieved great success in recent years. Recent applications of this network include the creation of realistic faces [4], pose-guided person image generation [5] and transforming images from one domain to another [6].

Taking this into account, the research problem addressed in this work is to explore several GAN techniques in order to generate realistic and appealing maps for strategy games. By "realistic", we mean that maps should be perceived in a similar way to natural land formations. By "appealing", we mean that maps should have the suitable characteristics and elements discussed above, which allow players to have a more challenging and interesting experience.

Given this research problem, we believe that, with a proper and balanced heightmap dataset of natural landscapes, this type of technique can be successfully used to generate maps that resemble the original dataset and therefore have the type of characteristics existing in natural landscapes, such as peninsulas and mountain ranges. Starting from a baseline GAN architecture, we will slowly improve its structure in order to achieve our goal, taking into account proper evaluation of the models.

¹ Creation of game content algorithmically with limited or indirect user input.
² From an interview conducted for this research with Andy Gainey, gameplay programmer at Paradox Development Studio.

Published in the Proceedings of GAME ON 2022; Cite as: Nunes, V., Dias, J., Santos, Pedro A.: GAN-Based Content Generation of Maps for Strategy Games. Proceedings of GAME-ON'2022, pg 20-31, ISBN 978-9-492859-22-8

arXiv:2301.02874v1 [cs.LG] 7 Jan 2023

Fig. 1. Generative Adversarial Network Model architecture.
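The adversarial game between G and D can be sketched end-to-end on a toy 1-D problem. This is an illustrative sketch only, not code from the paper: G is an affine map, D a logistic regressor, and the non-saturating generator update and all hyperparameters are our own choices.

```python
import numpy as np

# Toy 1-D GAN sketch: G(z) = g_w * z + g_b with z ~ N(0,1); real data is
# N(4, 1); D(x) = sigmoid(d_w * x + d_b). Illustrative hyperparameters.
rng = np.random.default_rng(0)
g_w, g_b = 1.0, 0.0   # generator parameters
d_w, d_b = 0.0, 0.0   # discriminator parameters
lr = 0.01

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

for _ in range(2000):
    z = rng.standard_normal(64)
    x_real = 4.0 + rng.standard_normal(64)
    x_fake = g_w * z + g_b

    # D ascends  E[log D(x_real)] + E[log(1 - D(x_fake))]
    p_real = sigmoid(d_w * x_real + d_b)
    p_fake = sigmoid(d_w * x_fake + d_b)
    d_w += lr * np.mean((1 - p_real) * x_real - p_fake * x_fake)
    d_b += lr * np.mean((1 - p_real) - p_fake)

    # G ascends the non-saturating objective E[log D(G(z))]
    p_fake = sigmoid(d_w * x_fake + d_b)
    grad_x = (1 - p_fake) * d_w        # d/dx log D(x) at the fake samples
    g_w += lr * np.mean(grad_x * z)
    g_b += lr * np.mean(grad_x)

print(round(g_b, 1))  # the fake mean g_b drifts toward the real mean of 4
```

Even in this two-parameter setting, the competition visible in the update rules is the essence of the framework: D pushes the scores of real and fake samples apart, and G follows D's gradient to move its samples where D assigns them high probability of being real.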
II. RELATED WORK

In this section we describe the different types of GANs that appeared in recent works and that we will use.

A. DCGAN

In the work of Alec Radford et al. [7] the authors managed to consolidate the combination of the GAN and Convolutional Neural Network (CNN) frameworks, after several unsuccessful attempts to do so in the preceding years.

CNNs are a subset of neural networks most commonly applied to image and video recognition or computer vision problems. They were largely inspired by the visual cortex, small regions of cells sensitive to specific regions of the visual field [8]. These networks are mostly composed of convolutional layers, which are responsible for applying the convolution³ operation to the input using a filter, or kernel, and sending the result, also known as a feature map, to the next layer. If a kernel is designed to detect a specific type of feature in the input, then filtering it across the whole image allows the kernel to detect that feature anywhere in the image, independently of the feature's location. Owing to this translation-invariance characteristic and a shared-weight architecture, CNNs are also known as Shift Invariant Artificial Neural Networks [9].

Their methodology consisted in adopting and modifying three demonstrated changes to CNN architectures:
1) Replacing the pooling layers of the baseline CNN architecture with convolutional layers of stride 2, so that G and D learn their own spatial upsampling and downsampling, respectively;
2) Eliminating fully connected layers in favor of convolutional layers. To make the most use of these convolutional layers, they added a layer at the beginning of G that takes the input vector z and reshapes it into a 4-dimensional tensor, and a layer at the end of D that flattens the image to a single-value output;
3) Applying Batch Normalization to layers in order to stabilize learning by normalizing the input of a layer to have zero mean and unit variance for each minibatch.

³ In image processing, refers to the process of adding each pixel of the image to its local neighbors, weighted by a kernel.
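The convolution of footnote 3 and the strided downsampling of point 1 can be made concrete with a minimal sketch; the 3×3 edge kernel and the 6×6 input are arbitrary illustrations, not values from the paper.

```python
import numpy as np

# Minimal 2-D convolution sketch (cross-correlation form): slide a kernel
# over the image with stride 2, producing a downsampled feature map, as in
# a strided convolution layer.
def conv2d(image, kernel, stride=2):
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride:i * stride + kh,
                          j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)  # weighted local neighborhood
    return out

image = np.arange(36, dtype=float).reshape(6, 6)
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # horizontal-gradient detector
fmap = conv2d(image, edge_kernel)
print(fmap.shape)  # (2, 2): the stride-2 sweep downsamples the 6x6 input
```

Because the same kernel weights are applied at every position, the response is the same wherever the feature occurs, which is exactly the translation invariance described above.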
B. Progressively Growing GANs

The generation of high-resolution images is a difficult task, since it is easier to discriminate between fake and real images. In the work of Karras et al. [4], the authors proposed a training methodology that consists of starting with low-resolution images and then progressively increasing the resolution by adding layers to both the discriminator and the generator (which are mirror images of each other and always grow in synchrony). In other words, the authors slice a bigger, complex problem into smaller ones and slowly increase the complexity to prevent the training from becoming unstable. The incremental addition of layers allows the models to first learn coarse-level detail and later learn ever finer detail, on both the generator's and the discriminator's side.

The insertion of layers cannot be done directly, due to sudden shocks to the already well-trained layers. Instead, the new layer is phased in. This operation consists of using a skip connection⁴ to connect the new block to the input of D or the output of G through a weighted sum with the existing input or output layer, which represents the influence of the new block. It is controlled by a parameter α that starts at a very small value and increases linearly to 1 over the course of training. In other words, we can think of this operation as the layers slowly being inserted during the training phase.

In terms of results, this network was capable of generating high-resolution images of 1024 × 1024, creating a high-resolution version of the CELEBA dataset [10].
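The α-weighted fade-in described above amounts to a simple blend of the old and new pathways; the tensor shapes and values below are illustrative only.

```python
import numpy as np

# Fade-in sketch: the freshly inserted high-resolution block is blended
# with the (upsampled) old pathway by a weight alpha that ramps linearly
# from ~0 to 1 over training.
def fade_in(old_output, new_output, alpha):
    return (1.0 - alpha) * old_output + alpha * new_output

old = np.full((4, 4), 2.0)   # output of the old, already-trained pathway
new = np.full((4, 4), 6.0)   # output of the newly inserted block
blended = fade_in(old, new, 0.25)
print(blended[0, 0])  # 3.0: the new block contributes only 25% so far
```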
C. Wasserstein GAN

Martin Arjovsky et al. [11] propose a GAN with an alternative way of training, so that the generator model better approximates the data distribution of a given dataset. Moreover, they present an alternative loss function in which D always gives enough information for G to improve itself, even if D has reached its optimality.

The discriminator D is replaced by a critic C that, instead of classifying an image as fake or real (in the interval [0,1]), scores the fakeness or realness of an image (in the interval ]−∞,+∞[). This score is also known as the Wasserstein estimate. The critic seeks to estimate the Wasserstein distance between the dataset sample distribution and the generated image distribution, which corresponds to the distance between the average critic score on real images and the average critic score on fake images. Thus the networks' objective functions can be summarized as follows:
• C's objective function is the difference between the average critic score on fake images and the average critic score on real images;
• G's objective function is the average critic score on fake images.
Both networks try to maximize their objective functions. Therefore, for G, a larger score for the fake images results in a higher output for G, encouraging C to output higher scores for the fake images. For C, a larger score for real images results in a lower value for the model, penalizing it, hence the encouragement for the critic to assign lower scores to the real images.

⁴ Skip connections are connections of outputs from early layers to later layers through addition or concatenation.
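The two objectives reduce to simple averages of critic scores. The sketch below writes them in the common "loss to minimize" form (which flips the signs of the maximized objectives above); the score values are made up, and the Lipschitz constraint (weight clipping) is omitted.

```python
import numpy as np

# WGAN objectives as losses to minimize; scores are unbounded critic
# outputs, not probabilities, so no cross-entropy is involved.
def critic_loss(score_real, score_fake):
    # the critic minimizes E[C(fake)] - E[C(real)],
    # i.e. the negative of the Wasserstein estimate
    return np.mean(score_fake) - np.mean(score_real)

def generator_loss(score_fake):
    # the generator minimizes -E[C(fake)], pushing its samples' scores up
    return -np.mean(score_fake)

score_real = np.array([3.0, 2.5, 4.0])    # critic scores on real images
score_fake = np.array([-1.0, 0.5, -2.0])  # critic scores on generated images
print(round(critic_loss(score_real, score_fake), 6))  # -4.0
print(round(generator_loss(score_fake), 6))
```

Because the losses are plain means rather than saturating log-probabilities, the critic keeps providing a useful gradient to the generator even when it separates the two distributions well.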
D. VAE + GAN

The idea of combining the power of Variational Autoencoders (VAE) and GANs was the basis of the work of Larsen et al. [12]. The motivation behind their work was to leverage learned representations to better measure similarities in the data distribution.

Autoencoders are another subset of neural networks used to generate images with good results. They learn a representation of given data [13]. They are commonly used for face recognition or for acquiring semantic meanings of words. The general idea behind this neural network is that it learns to copy the input to the output through a latent space that compresses the input, maintaining only the most relevant and important information. The decoder then decompresses the information retained in the latent space (also known as the code), leading to a very similar copy of the input.

VAEs are a subset of autoencoders specialized in content generation. They inherit the architecture of traditional autoencoders, but instead of mapping the input to a fixed latent space, the VAE maps the input onto a latent distribution, which allows us to take random samples from the latent space. These samples are then decoded using the decoder segment to generate outputs very similar to the inputs used to train the encoder. Instead of having a fixed vector as the latent space, the vector is replaced by two separate vectors that represent the mean and the standard deviation of the distribution, respectively.
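Sampling from the latent distribution is usually done with the reparameterization trick, sketched below; the mean and log-standard-deviation values are illustrative stand-ins for encoder outputs.

```python
import numpy as np

# Reparameterization sketch: the encoder outputs a mean vector and a log
# standard deviation vector, and a latent sample is drawn as
# z = mu + sigma * eps with eps ~ N(0, I).
rng = np.random.default_rng(42)
mu = np.array([0.5, -1.0])         # encoder mean vector (illustrative)
log_sigma = np.array([0.0, -0.5])  # encoder log-std vector (illustrative)
eps = rng.standard_normal(2)
z = mu + np.exp(log_sigma) * eps   # differentiable w.r.t. mu and log_sigma
print(z.shape)  # (2,)
```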
Typically, VAEs use element-wise similarity in their reconstruction error; however, Larsen et al. [12] propose using the GAN discriminator to measure sample similarity. In other words, they use the GAN's discriminator as a way to measure the difference between the VAE's output and the original image.

The authors' proposed architecture comprises a single model that simultaneously learns to encode, generate and compare samples from the dataset. The GAN's generator coincides with the VAE's decoder. The authors define the loss of the model as:

L = L_VAE + L_GAN    (1)

where L_GAN is the binary cross-entropy loss and L_VAE is defined as:

L_VAE = L_prior + L_llike^{Dis_l}    (2)

where L_prior is a prior regularization term, the Kullback–Leibler divergence, and L_llike^{Dis_l} is the expected log likelihood (reconstruction error) expressed in the GAN discriminator:

L_llike^{Dis_l} = −E_{z∼Enc(x)}[log p(Dis_l(x) | z)]    (3)

with Dis_l denoting a hidden representation of the l-th layer of the discriminator.
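The terms of Eqs. (1)–(3) can be sketched numerically. Here L_prior is the closed-form KL divergence of the encoder's Gaussian from N(0, I), and the expected log likelihood is taken, up to constants, as a squared error between discriminator features Dis_l(x) and Dis_l(x̃), which corresponds to a Gaussian observation model on that feature layer; all numeric values are made up.

```python
import numpy as np

# Sketch of the VAE/GAN loss terms of Eqs. (2)-(3).
def kl_prior(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ) summed over latent dimensions
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def llike_disl(feat_real, feat_recon):
    # -log p(Dis_l(x) | z) for a Gaussian observation model, up to constants
    return 0.5 * np.sum((feat_real - feat_recon) ** 2)

mu = np.array([0.5, -0.2])              # encoder mean (illustrative)
logvar = np.array([0.0, 0.0])           # unit variance
feat_real = np.array([1.0, 2.0, 3.0])   # Dis_l(x), made-up features
feat_recon = np.array([1.5, 2.0, 2.0])  # Dis_l(x_tilde), made-up features

l_vae = kl_prior(mu, logvar) + llike_disl(feat_real, feat_recon)
print(round(l_vae, 3))  # 0.77
```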
III. DATASET CREATION

In order to apply the GAN-based techniques, it is necessary to have a set of examples of what the system is supposed to generate. Therefore, we began by building a proper dataset of natural landscapes with the characteristics of the existing land formations of the planet Earth, such as peninsulas and mountain ranges.

Data gathering

When looking for a proper dataset, we had the idea that no data could resemble the characteristics discussed in Section I more than data from the real world, in which the terrain already has the desired complex elements and characteristics. Taking this idea into account, we used a public dataset⁵ with nearly global coverage of the planet Earth, generated by a satellite radar topography mission. The dataset consists of several Digital Elevation Model (DEM) files created by a Ground Data Processing System supercomputer with a 3 arc-second sample spacing. We grouped the different DEM files together and exported them into a Tagged Image File Format using a program called Global Mapper, resulting in a heightmap image with 43200 × 18000 resolution, as shown in Fig. 2.

Fig. 2. Heightmap of the planet Earth after joining the DEM files together

⁵ https://www2.jpl.nasa.gov/srtm/cbanddataproducts.html
Preprocessing

As one can see in Fig. 2, most regions of the Earth are a bit too dark, making the terrain not very noticeable. This is not ideal for the techniques to be used, since the altitude data should be more evenly distributed. In other words, we needed a higher range of pixel values, so that their differences would be better distributed between 0 and 255 instead of following the original linear mapping between altitudes and pixel values. So, using the same program, we changed the image's brightness level so that lower-altitude regions could have higher greyscale pixel values.

We proceeded to crop the high-resolution image into 1024 × 1024 images with a sliding window of 512 pixels, which gave 2822 images in total.
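The sliding-window crop can be sketched directly; with a 1024-pixel tile and 512-pixel stride the count formula reproduces the 2822 crops reported above for the 43200 × 18000 map (the small demo array is just a lightweight stand-in).

```python
import numpy as np

# Sliding-window cropping sketch: cut tile x tile patches with a fixed
# stride (512 px here, i.e. 50% overlap between neighboring tiles).
def crop_tiles(heightmap, tile=1024, stride=512):
    h, w = heightmap.shape
    return [heightmap[y:y + tile, x:x + tile]
            for y in range(0, h - tile + 1, stride)
            for x in range(0, w - tile + 1, stride)]

def tile_count(h, w, tile=1024, stride=512):
    return ((h - tile) // stride + 1) * ((w - tile) // stride + 1)

print(tile_count(18000, 43200))  # 2822, matching the count reported above

tiles = crop_tiles(np.zeros((1536, 1536), dtype=np.uint8))
print(len(tiles))  # 4 overlapping 1024x1024 tiles from the small test array
```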
Dataset Augmentation / Removal of unwanted images

Due to the lack of enough images in the dataset for a problem with a moderate level of complexity such as the one approached here, we had to perform dataset augmentation in order to increase the number of images. The procedure is described below:
1) To the 43200 × 18000 image, we applied different sets of image processing techniques: rotation between 0 and 180 degrees, and horizontal/vertical flipping. The rest of the image would then be filled with the pixel value 255.
2) As done in the preprocessing phase, the 43200 × 18000 image was cropped into 1024 × 1024 images with a sliding window of 512.
3) Resulting images that had little to no continental land, or that were cut due to the image processing techniques, were removed. In other words, images in which 95% of the pixel values were ≤ 25, or which contained the pixel value 255, were removed.

This procedure was repeated 15 times, ending up with 12640 images of 1024 × 1024 resolution, which we found more than reasonable for the problem. Unfortunately, working with 1024 × 1024 images would result in our models having a high number of parameters, thus requiring more computational resources, such as memory, which we did not have at our disposal. Therefore, we had to downsize the dataset to a 128 × 128 resolution. The image rescaling was done using a nearest-neighbor interpolation filter.
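The removal rule of step 3 can be sketched as a small predicate; the three sample tiles are synthetic stand-ins for ocean, land and fill-contaminated crops.

```python
import numpy as np

# Filtering rule from step 3: drop a tile if >= 95% of its pixels are <= 25
# (almost all ocean) or if it contains the fill value 255 introduced by the
# rotations/flips.
def keep_tile(tile):
    mostly_ocean = np.mean(tile <= 25) >= 0.95
    has_fill = np.any(tile == 255)
    return not (mostly_ocean or has_fill)

ocean = np.full((128, 128), 10, dtype=np.uint8)  # flat, near-sea-level tile
land = np.full((128, 128), 120, dtype=np.uint8)  # tile with continental land
cut = land.copy()
cut[0, 0] = 255                                  # touched by the fill value
print([keep_tile(t) for t in (ocean, land, cut)])  # [False, True, False]
```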
IV. GANS' ARCHITECTURE AND TRAINING

Instead of just focusing on one type of GAN, we decided to explore several models in order to decide which one would be the most appropriate for the problem, comparing them in terms of quality of the generated images, training efficiency, and ease of training and convergence. Tables detailing each model's architecture are included in the Appendix.

A. DCGAN

The Deep Convolutional Generative Adversarial Network (DCGAN) architecture was the first to be tested, serving as an initial baseline model and a foundation for the other models. We followed the majority of the architectural guidelines explained in [7], such as:
• Using strided convolutions in D and fractional-strided (transposed) convolutions in G, allowing the networks to learn their own spatial downsampling and upsampling;
• Batch Normalization layers in both networks, which help the networks in their learning process;
• Removal of fully connected layers, replacing them with convolutional layers, except at the beginning of G and at the end of D;
• Leaky ReLU activation function for all layers of D, since it is more effective for models generating images with higher resolution.

However, instead of applying the Rectified Linear Unit (ReLU) activation function to all layers of G, we used the Leaky ReLU, since the latter activation function has been shown to be more effective than ReLU [14].
+ Fig. 3 and Fig. 4 depict the models of the DCGAN's generator and
+ discriminator, respectively.
+ Fig. 3. DCGAN's Generator Model
+ Fig. 4. DCGAN's Discriminator Model
364
+ All of the convolutional and transposed convolution layers had a kernel
+ size of 5, SAME padding and a stride of 2, except for the first
+ convolution layer of D, which had a stride of 1. Those layers' weights
+ were initialized using a zero-mean normal distribution with standard
+ deviation 0.02. The values used for the Batch Normalization and Leaky
+ ReLU layers and the kernel size on both networks were the same as the
+ ones used in [7].
+ [Figs. 3 and 4 content: feature-map sizes from the 100-dimensional input
+ up to 128 × 128 in G, and from 128 × 128 down to a single output in D;
+ legend: convolution layers followed by Batch Normalization, LeakyReLU
+ and Dropout layers.]
415
+ It is well known that GANs suffer from the vanishing gradient problem,
+ where an optimal D (one that is really good at classifying the images as
+ real or fake) doesn't provide enough information for G to improve itself
+ [11], [15]. Regarding the DCGAN's training phase, in order to combat the
+ GAN's vanishing gradient problem, we decided to add three hindering
+ methods. These hindering methods have the purpose of preventing D from
+ rapidly reaching a perfect-discriminator scenario [16], [17], [15]:
+ • Adding a unique noise vector to each sample of the minibatch of m real
+ samples from pdata and of the minibatch of m fake samples;
+ • Applying one-sided label smoothing with β = 0.2;
+ • Adding a Dropout layer after each Leaky ReLU layer of D. For these
+ layers we chose to set 50% of the input's values to 0.
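The first two hindering methods can be sketched as a preprocessing step on each discriminator minibatch. This is a NumPy stand-in for what the paper does inside the Keras training loop; the function name, shapes, and the Gaussian form of the noise are our own assumptions (the text only specifies a per-sample noise vector and β = 0.2):

```python
import numpy as np

def hinder_discriminator_batch(real, fake, noise_factor, beta=0.2, rng=None):
    """Add a unique noise vector to every real and fake sample, and apply
    one-sided label smoothing: real targets become 1 - beta, fakes stay 0."""
    rng = np.random.default_rng() if rng is None else rng
    real_noisy = real + rng.normal(0.0, noise_factor, size=real.shape)
    fake_noisy = fake + rng.normal(0.0, noise_factor, size=fake.shape)
    y_real = np.full(len(real), 1.0 - beta)  # smoothed only on the real side
    y_fake = np.zeros(len(fake))
    return real_noisy, fake_noisy, y_real, y_fake
```

With β = 0.2 the discriminator's real-image targets become 0.8 instead of 1.0, which keeps it from becoming over-confident.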
432
+ We came up with three expressions for the noise factor. The first one
+ illustrates a noise factor that increases over the number of epochs:
+ noise factor = (epoch × 0.5) / epochs   (4)
+ In the second and third ones, the noise factor increases until half the
+ number of epochs and then decreases:
+ noise factor = (ep × 0.5) / half_ep, if ep ≤ half_ep;
+                0.5 − (ep × 0.5) / epochs, otherwise   (5)
+ noise factor = (e × 0.5) / h_ep, if e ≤ h_ep;
+                0.5 − ((e − h_ep) × 0.5) / h_ep, otherwise   (6)
+ In (5) the noise drops to half after half of the epochs have passed
+ (e.g. 0.3 - 0.4 - 0.5 - 0.25 - ...), while in (6) the noise ascends and
+ descends at the same rate (e.g. 0.3 - 0.4 - 0.5 - 0.4 - 0.3 - ...). We
+ wanted to test what would happen if we hindered D until half of the
+ training phase and then eased off, either by suddenly decreasing the
+ noise factor to half at the middle of the training phase or by having a
+ similar rate for increasing and decreasing the noise factor.
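The three schedules can be written directly as functions of the current epoch. Equations (4) and (6) translate verbatim; for (5) the extracted text is ambiguous about the denominator of the second branch, so we assume it is the total epoch count, which reproduces the 0.5 → 0.25 drop described in the examples:

```python
def noise_increasing(epoch, epochs):
    # Eq. (4): grows linearly from 0 to 0.5 over the whole run.
    return epoch * 0.5 / epochs

def noise_drop_at_half(epoch, epochs):
    # Eq. (5): rises to 0.5 by mid-training, then drops to half and decays.
    # Second-branch denominator assumed to be the total number of epochs.
    half = epochs / 2
    if epoch <= half:
        return epoch * 0.5 / half
    return 0.5 - epoch * 0.5 / epochs

def noise_symmetric(epoch, epochs):
    # Eq. (6): rises to 0.5 by mid-training, then descends at the same rate.
    half = epochs / 2
    if epoch <= half:
        return epoch * 0.5 / half
    return 0.5 - (epoch - half) * 0.5 / half
```

For a 1000-epoch run, all three schedules peak at 0.5; (5) falls to 0.2 by epoch 600, while (6) is back at 0.4, mirroring its ascent.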
465
+ B. WGAN
+ The second tested architecture was the Wasserstein Generative
+ Adversarial Network (WGAN) discussed in Section II-C. With this model we
+ were expecting to address some of the problems that came with the DCGAN
+ model, such as the failure to converge and the vanishing gradient
+ problem.
+ The structure of the generator is exactly the same in the DCGAN and
+ WGAN models. The same goes for the DCGAN's discriminator and the WGAN's
+ critic, except that we use a linear activation function in the last
+ layer of the critic instead of the sigmoid, since the score for the
+ realness or fakeness of an image doesn't have a limit value. There are
+ some things to be taken into consideration about the training phase:
+ • As discussed in Section II-C, during each epoch, C is trained
+ n_critic times more than G;
+ • It was not possible to implement the Wasserstein distance with the
+ formulation presented in [11] using Keras' API methods, because Keras
+ doesn't allow a sum of losses from independent batches. We used the fact
+ that maximizing the Wasserstein distance is equivalent to increasing the
+ distance between the Wasserstein score for positive examples and the
+ score for negative examples. One simple way of achieving this is by
+ returning positive estimates for real examples and negative estimates
+ for fake examples6;
+ • When updating the weights, we clip them to stay in an interval
+ between a constant −c and c. This clipping is necessary to ensure the
+ critic's approximation to the Wasserstein distance, as explained in [11].
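The score-based loss and the weight clipping can be sketched as follows. These are NumPy stand-ins for the Keras versions, under one common label convention (real = −1, fake = +1, so that minimizing the mean product widens the score gap); `c` is the clipping constant from the text:

```python
import numpy as np

def wasserstein_loss(y_true, y_pred):
    """Keras-style loss: with real samples labeled -1 and fake samples +1,
    minimizing the mean product pushes real scores up and fake scores down,
    approximating maximization of the Wasserstein distance."""
    return np.mean(y_true * y_pred)

def clip_weights(weights, c):
    """Constrain every parameter tensor to [-c, c] after each update,
    as required for the critic's Wasserstein approximation [11]."""
    return [np.clip(w, -c, c) for w in weights]
```

After every critic update, each weight tensor is passed through `clip_weights`; the experiments in Section V-B use c = 0.1.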
494
+ C. ProgGAN
+ For a third architecture we decided to combine the principles of the
+ WGAN model with the idea of generating high-resolution images discussed
+ in Section II-B, to create a GAN based on the Progressive Growing
+ Generative Adversarial Network (ProgGAN). We wanted to verify whether
+ this technique would yield interesting results in terms of training-time
+ optimization. We also wanted to see if it would generate images of
+ higher resolution while maintaining good quality.
+ Fig. 5. ProgGAN's Generator Model
+ Fig. 5 and 6 depict the models of the ProgGAN's generator and critic,
+ respectively.
+ The upsampling layers use an upscaling factor of 2 for both dimensions
+ and nearest-neighbor interpolation. The downsampling layers perform an
+ average pooling operation, using a downscaling factor of 2 for both
+ dimensions and a stride of 2. During training, the phasing in from
+ giving full weight to the 64 × 64 model (at the beginning of the
+ training phase) to giving full weight to the 128 × 128 model (at the end
+ of the training phase) is done through a weighted sum layer controlling
+ how much to weight the input from the 64 × 64
+ 6https://machinelearningmastery.com/how-to-implement-wasserstein-loss-for-generative-adversarial-networks/
517
+ [Fig. 5 content: the 100-dimensional input is reshaped and fed through
+ the 64 × 64 network, upsampled, and passed to the 128 × 128 network;
+ legend: transposed convolution layers followed by Batch Normalization
+ and LeakyReLU layers.]
+ Fig. 6. ProgGAN's Critic Model
543
+ and the 128 × 128 models. It uses a parameter α that grows linearly over
+ the training phase.
+ Each of the models was trained separately, in this order: 64 × 64 model
+ → growth model → 128 × 128 model. We treated each of the models as being
+ a WGAN.
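The fade-in described above amounts to a weighted sum of the two branches' outputs, with α growing linearly from 0 to 1. A NumPy stand-in for the weighted-sum layer (names and the linear schedule helper are illustrative):

```python
import numpy as np

def fade_in(low_res_out, high_res_out, step, total_steps):
    """Blend the upsampled 64x64 branch with the 128x128 branch.

    alpha goes 0 -> 1 linearly over the growth phase, shifting full
    weight from the old low-resolution branch to the new one.
    """
    alpha = min(step / total_steps, 1.0)
    return (1.0 - alpha) * low_res_out + alpha * high_res_out
```

At step 0 the output is entirely the 64 × 64 branch; at the end of the growth phase it is entirely the 128 × 128 branch.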
548
+ D. VAE + WGAN
+ As a final contribution, we wanted to combine the VAE and WGAN. The
+ idea was to train a VAE to encode and decode the images of our dataset
+ of heightmaps. We hoped that using a generator that had already been
+ trained to create features from our dataset, as in the VAE, would
+ accelerate the WGAN's training, increasing its efficiency in training
+ time. Fig. 7 depicts the model of the encoder.
+ Fig. 7. VAE + WGAN's Encoder Model
+ The decoder and the discriminator followed the same structure as the
+ WGAN's generator and critic. The only difference is that the last layer
+ of the decoder uses a sigmoid activation function instead of the
+ hyperbolic tangent used in the WGAN's generator.
+ In terms of training, we trained the VAE and WGAN separately. We
+ started with the VAE, training the model for a fixed number of epochs.
+ Then we took the decoder model of the VAE and used it as our WGAN's
+ generator. However, instead of normalizing our dataset between −1 and 1
+ for both networks, we normalized it between 0 and 1, since the last
+ activation function of the decoder is a sigmoid, whose output ranges
+ between those values. For some of the experiments, instead of generating
+ the z vectors from N(µ, σ²), we generated them using the vectors µ and
+ σ learned by the VAE. With this latter idea, we wanted to test whether
+ the decoder would be able to generate images with closer resemblance to
+ the Heightmap dataset, since the vectors µ and σ contain features of
+ that dataset.
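Drawing z from the learned µ and σ follows the usual VAE reparameterization, z = µ + σ · ε with ε ~ N(0, I). A NumPy sketch (vector names mirror the text; the rest is illustrative):

```python
import numpy as np

def sample_z(mu, sigma, rng=None):
    """Draw z = mu + sigma * eps with eps ~ N(0, I), so the generator
    receives latent codes that carry features of the heightmap dataset."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps
```

Setting µ = 0 and σ = 1 recovers the plain standard-normal sampling used in experiments E8 and E9.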
576
+ V. RESULTS
+ All experiments were done on a desktop workstation with x86_64
+ architecture, an AMD Ryzen 5 2600X six-core CPU and one GeForce RTX
+ 2080 Ti graphics card with 11 GB of RAM. For the implementation and
+ training of the networks we used Keras7, a deep learning API running on
+ top of Tensorflow8. It provides abstractions and building blocks for
+ developing and solving machine learning solutions.
+ A. DCGAN
+ In the four experiments we performed, the networks were compiled using
+ the Adam optimizer with learning rate 0.0002 and β19 = 0.5. Both values
+ were chosen as suggested in [7]. The networks were trained for 1000
+ epochs. Each experiment is briefly described below:
+ • E1: None of the discriminator hindering methods were used;
+ • E2: All the discriminator hindering methods were implemented;
+ Equation (4) for the noise vector;
+ • E3: All the discriminator hindering methods were implemented;
+ Equation (5) for the noise vector;
+ • E4: All the discriminator hindering methods were implemented;
+ Equation (6) for the noise vector.
598
+ In Fig. 9 one can observe the losses of D and G throughout the epochs
+ for E1, where the GAN's vanishing gradient problem appears. As we can
+ observe, D's loss rapidly converges to 0, while G's loss keeps growing
+ unsteadily. Early in the training phase, D reaches its optimal state,
+ perfectly discriminating between the real and fake images. Thus, G is
+ unable to improve, leading to a growing loss.
+ For E2, E3 and E4, despite the fact that the hindering methods
+ mentioned above did in fact help combat the GAN's vanishing gradient
+ problem, the results obtained have poor quality. In E1, we can see that
+ the model entered a mode collapse, since the generated images have
+ similar shapes, with some of them even having a blurry artifact. In E2,
+ however, the quality of the images decayed, with the resulting images
+ presenting artifacts in the form of a squared pattern. Experiments E3
+ and E4 yielded better results in terms of quality. Despite these
+ results, all the images represented in Fig. 10 were classified as fake
+ by the experiments' respective discriminators.
618
+ 7https://keras.io/, the version used was 2.3.1
+ 8https://www.tensorflow.org/, the version used was 1.14.0
+ 9Exponential decay rate for the running average of the gradient
621
+ [Figs. 6 and 7 content: the critic's 128 × 128 network is downsampled
+ into the 64 × 64 network and flattened to a single output; the encoder
+ downsamples 128 × 128 inputs through convolution layers, each followed
+ by Batch Normalization and a ReLU layer, then flattens into dense
+ layers producing the latent vector.]
+ Fig. 8. VAE + WGAN architecture. Train the VAE (1) to learn the
+ representation of the Heightmap dataset. Then use the sample vector z
+ created from the distribution's mean and standard deviation to train the
+ WGAN (2). The VAE's decoder will be the generator of the WGAN.
669
+ Fig. 9. D and G Binary Cross Entropy loss of Experiment 1
+ Fig. 10. Some of the images generated by DCGAN's generator after
+ training for 1000 epochs, in the several experiments.
672
+ B. WGAN
+ The network was compiled using the RMSProp optimizer with a learning
+ rate of 0.0005, as in [11]. We first ran some tests to see which value
+ we should use as the constraint for clipping the weight values, from
+ this list: 0.01, 0.02, 0.05, 0.1, 0.15, 0.2. We ended up choosing 0.1,
+ since it gave the best results in terms of the Wasserstein estimate.
+ We ran a single but extensive experiment, E5, where we trained the
+ network for 5000 epochs with a training time of 35 hours and 27 minutes.
+ Fig. 11 depicts the Wasserstein estimates of E5. The blue and yellow
+ lines, which represent the critic's Wasserstein estimates for real and
+ fake images, respectively, are moving away from each other. We can also
+ verify this in the other graph, where the red line, which represents the
+ difference between those two estimates, keeps growing throughout the
+ training epochs.
+ Fig. 11. Left: Wasserstein estimate for C real and fake images and G
+ generated images during the training phase; Right: Difference between
+ the C Wasserstein estimate of the real images and the fake images
+ Fig. 12 depicts some of the images generated during the last epochs.
+ The generator was able to reproduce realistic images with relatively
+ good quality and level of detail. Some complex structures are also
+ present, such as peninsulas, mountain ranges and groups of islands.
+ Fig. 13 depicts a 3D representation of a heightmap generated in E5,
+ where we can see a peninsula and some small islands next to it.
+ Given the results we achieved with the previous experiment, we decided
+ to run another experiment in order to check what type of results another
+ WGAN, with a more complex structure
703
782
+ Fig. 12. Images generated by WGAN's generator during the last 5
+ training epochs
+ Fig. 13. 3D representation of a heightmap generated in E5
785
+ and a higher number of neurons than the previous one, could generate.
+ So, in experiment E6 we trained the 128 × 128 model presented in
+ Subsection IV-C for 5000 epochs, which took 217 hours of training time.
+ Fig. 14 shows some of the images generated through some training epochs
+ of E6. Using a WGAN with a more complex structure proved not to be so
+ efficient, given that the images from experiments E5 and E6 have
+ similar quality.
+ Fig. 14. Images generated by the 128 × 128 model's generator during the
+ training of E6
+ C. ProgGAN
+ Taking into account the network structure and training algorithm
+ previously discussed, we made one experiment with the ProgGAN model. To
+ be able to verify the efficiency in training time, we had to run an
+ experiment E7 with a similar number of epochs to E6. Therefore, we
+ trained the model for 4950 epochs (1650 epochs for each of the
+ networks).
+ Table I depicts the training time for E6 and E7. As we can see, the
+ ProgGAN model took 57 fewer hours to train than the WGAN model.
805
+ TABLE I
+ TRAINING TIME FOR EACH EXPERIMENT
+
+              E6       E7
+ 64 × 64      -        25h 3min
+ growth       -        74h 9min
+ 128 × 128    217h     71h 10min
+ TOTAL        217h     170h 23min
819
+ Figs. 15, 16 and 17 depict the Wasserstein estimates of both generators
+ and critics for the 64 × 64, growth and 128 × 128 models, respectively.
+ Fig. 15. Left: Wasserstein estimate for C real and fake images and G
+ generated images during the training phase of the 64 × 64 model; Right:
+ Difference between the C Wasserstein estimate of the real images and
+ the fake images
+ Fig. 16. Left: Wasserstein estimate for C real and fake images and G
+ generated images during the training phase of the growth model; Right:
+ Difference between the C Wasserstein estimate of the real images and
+ the fake images
+ Despite the figures showing an erratic and unstable movement of the
+ Wasserstein estimate during the training phase, we can see that in all
+ of the models the estimates for the real and fake images were moving
+ away from each other, hence the rising line of the difference of the
+ Wasserstein estimate on the critic.
+ Fig. 18 shows some of the images generated through some training epochs
+ of E7. We can see that the quality of the images did not deteriorate
+ from the 64 × 64 model to the growth model and to the 128 × 128 model.
+ We can even say that
842
916
+ Fig. 17. Left: Wasserstein estimate for C real and fake images and G
+ generated images during the training phase of the 128 × 128 model;
+ Right: Difference between the C Wasserstein estimate of the real images
+ and the fake images
921
+ the quality improved, since there is more variety of complex structures
+ present at the end of the training phase, such as bays, peninsulas,
+ mountain ranges near the shore, and islands.
+ Fig. 18. Images generated by the ProgGAN's generator during the
+ training epochs for the several models' networks in E7
+ D. VAE + WGAN
+ We did several experiments with the VAE + WGAN architecture, to analyze
+ factors such as:
+ • The number of epochs needed to train the VAE in order for it to
+ recreate the dataset images with a reasonable level of detail;
+ • The generation of the random z vector to be given as input to the
+ WGAN's generator;
+ • A comparison between training a WGAN whose decoder had already
+ learned features from the Heightmap dataset and a WGAN without a
+ pre-trained decoder.
+ Taking these factors into account, the experiments are described as
+ follows:
+ • E8: VAE trained for 1000 epochs. Decoder with the weights already
+ trained used as the WGAN's generator. WGAN trained for 1000 more epochs,
+ using a normal distribution with µ = 0 and σ = 1 for generating z;
+ • E9: VAE trained for 150 epochs. Decoder with the weights already
+ trained used as the WGAN's generator. WGAN trained for 1000 more epochs,
+ using a normal distribution with µ = 0 and σ = 1 for generating z;
+ • E10: Decoder, with the weights already trained in E9, used as the
+ WGAN's generator. WGAN trained for 1000 more epochs, using the vectors
+ µ and σ learned by the VAE for generating z;
+ • E11: WGAN trained for 1150 epochs, to make a comparison between the
+ models.
+ The training time for each experiment is denoted in Table II. As we can
+ see, the difference between E9 or E10 (training the VAE for 150 epochs
+ and the WGAN for 1000 epochs) and E11 (training the WGAN for 1150
+ epochs) isn't that big, with the latter experiment saving at most 20
+ minutes. Fig. 23 depicts images generated by the WGAN after training,
+ for the several experiments.
960
+ TABLE II
+ TRAINING TIME FOR EACH EXPERIMENT
+
+ Experiment   VAE        WGAN       Total
+ E8           8h 42min   7h 28min   16h 8min
+ E9           1h 18min   7h 30min   8h 48min
+ E10          1h 18min   7h 20min   8h 38min
+ E11          -          8h 24min   8h 24min
981
+ Regarding E8, the loss and estimates are shown in Fig. 19. The VAE's
+ training went as expected, with its loss rapidly decreasing in the
+ initial epochs but stagnating around 7250 towards the end of the
+ training. The WGAN's training also went smoothly, despite some moments
+ between epochs 800 and 1000 where all the estimates suddenly got close
+ to each other.
+ Fig. 19. E8: Top: VAE's loss during the training phase; Left:
+ Wasserstein estimate for C real and fake images and the decoder's
+ generated images during the training phase; Right: Difference between
+ the C Wasserstein estimate of the real images and the fake images
+ Since the stagnation of the VAE's loss happened early in its training
+ phase, we thought we could train it for a lower number of epochs and
+ still be able to generate images with good quality. Taking that into
+ account, in E9, as shown in Fig. 20, we trained the VAE for only 150
+ epochs. In this experiment, the WGAN training went really smoothly, as
+ can be seen from the Wasserstein estimate graph and the images
+ generated. However, in this experiment, there were images,
1001
1088
+ generated throughout the training process, that presented some strange
+ artifacts, as seen in Fig. 24. Even after the training was complete,
+ some images with those kinds of artifacts were generated.
1092
+ Fig. 20. E9: Top: VAE's loss during the training phase; Left:
+ Wasserstein estimate for C real and fake images and the decoder's
+ generated images during the training phase; Right: Difference between
+ the C Wasserstein estimate of the real images and the fake images
+ In E10 we took the weights of the decoder trained in the VAE of the
+ previous experiment and trained the WGAN for 1000 epochs again, this
+ time using the vectors µ and σ learned by the VAE to generate our
+ vectors z, since they contain features of the Heightmap dataset. Figs.
+ 21 and 23 depict the results of E10. The graph of the Wasserstein
+ estimate is not what we expected it to be. From epoch 400 onwards, the
+ images generated by the decoder managed to fool the critic into
+ thinking they were becoming more real throughout the remaining epochs,
+ given that the Wasserstein estimate of the fake images went up after
+ that epoch.
+ Fig. 21. E10: Left: Wasserstein estimate for C real and fake images and
+ the decoder's generated images during the training phase; Right:
+ Difference between the C Wasserstein estimate of the real images and
+ the fake images
+ As the last experiment, we trained just the WGAN for 1150 epochs, in
+ order to make a proper comparison with the other experiments. Figs. 22
+ and 23 depict the results.
1115
+ Discussion
+ The results obtained with the DCGAN were unsatisfying, as we were
+ already expecting, due to several problems with this architecture
+ already mentioned in the literature. The WGAN
+ Fig. 22. E11: Left: Wasserstein estimate for C real and fake images and
+ the decoder's generated images during the training phase; Right:
+ Difference between the C Wasserstein estimate of the real images and
+ the fake images
+ Fig. 23. Images generated by the WGAN after training, in the several
+ experiments
+ Fig. 24. Images with some strange artifacts generated by the WGAN
+ during the epochs for E10
+ proved to be a good option, since throughout the whole training phase
+ the generator always had enough information provided by the critic to
+ improve itself, thus ending up generating heightmaps with good quality.
+ With the ProgGAN we saw that we could achieve the same results as with
+ the WGAN in less training time. Curiously, the Wasserstein estimates
+ did not follow the expected pattern. With the VAE + WGAN model,
+ although it seemed to be a promising model on paper, we didn't quite
+ get the results we expected.
+ VI. CONCLUSIONS
+ In this work, our purpose was to explore GANs' capabilities to generate
+ images with high resolution and quality, to try to create maps that
+ look realistic and appealing for players, in
1139
1263
+ a visual and even strategic way. That could not be done with the
+ traditional approaches of PCG. Instead of just focusing on one model of
+ the current state of the art of GANs, we decided to explore several and
+ test whether they could indeed be used to generate such maps. We ended
+ up exploring four models: DCGAN, WGAN, ProgGAN and VAE + WGAN, each
+ with its upsides and downsides. The DCGAN and VAE + WGAN models
+ provided results of poor quality, while the WGAN and ProgGAN models
+ provided the best results, with the latter being more efficient in
+ terms of training time. Despite the poor results of the VAE + WGAN
+ model, we think it should be studied further, because its concept is
+ promising.
+ Given the server memory limitations, we were only able to work with
+ images of 128 × 128 resolution, while the ideal scenario would be to
+ work with a higher resolution such as 1024 × 1024. Further individual
+ studies of each of these models, or the ability to expand these
+ networks to images of higher resolution given the proper hardware, are
+ some examples of future work that could be explored and studied
+ regarding this topic.
1284
+ ACKNOWLEDGMENTS
+ This work was supported by national funds through FCT, Fundação para a
+ Ciência e a Tecnologia, under projects UIDB/04326/2020,
+ UIDB/50021/2020, UIDP/04326/2020 and LA/P/0101/2020.
1289
+ REFERENCES
+ [1] Ken Perlin. "An Image Synthesizer". In: New York, NY, USA:
+ Association for Computing Machinery, 1985, pp. 287–296.
+ [2] Xing Mei, Philippe Decaudin, and Bao Gang Hu. "Fast hydraulic
+ erosion simulation and visualization on GPU". In: Proceedings - Pacific
+ Conference on Computer Graphics and Applications. 2007.
+ [3] Ian Goodfellow et al. "Generative adversarial networks". In: arXiv
+ (2014).
+ [4] Tero Karras et al. "Progressive growing of GANs for improved
+ quality, stability, and variation". In: 6th International Conference on
+ Learning Representations, ICLR 2018 - Conference Track Proceedings.
+ 2018.
+ [5] Mehdi Mirza and Simon Osindero. "Conditional Generative Adversarial
+ Nets". In: arXiv (2014).
+ [6] Jun-Yan Zhu et al. "Unpaired Image-to-Image Translation Using
+ Cycle-Consistent Adversarial Networks". In: 2017, pp. 2242–2251.
+ [7] Alec Radford, Luke Metz, and Soumith Chintala. "Unsupervised
+ representation learning with deep convolutional generative adversarial
+ networks". In: arXiv (2016).
+ [8] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning.
+ MIT Press, 2016.
+ [9] Wei Zhang et al. "Parallel distributed processing model with local
+ space-invariant interconnections and its optical architecture". In:
+ Applied Optics (1990).
+ [10] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. "Deep
+ Learning Face Attributes in the Wild". In: Proceedings of International
+ Conference on Computer Vision (ICCV). 2015.
+ [11] Martin Arjovsky, Soumith Chintala, and Léon Bottou. "Wasserstein
+ GAN". In: arXiv (2017).
+ [12] Anders Boesen Lindbo Larsen et al. "Autoencoding beyond pixels
+ using a learned similarity metric". In: arXiv (2016).
+ [13] Yoshua Bengio. Learning deep architectures for AI. Hanover, MA,
+ USA: Now Publishers Inc., 2009.
+ [14] Andrew Gordon Wilson, Been Kim, and William Herlands. "Proceedings
+ of NIPS 2016 Workshop on Interpretable Machine Learning for Complex
+ Systems". In: arXiv (2016).
+ [15] Martin Arjovsky and Léon Bottou. "Towards Principled Methods for
+ Training Generative Adversarial Networks". In: arXiv (2017).
+ [16] Tim Salimans et al. "Improved techniques for training GANs". In:
+ arXiv (2016).
+ [17] Phillip Isola et al. "Image-to-Image Translation with Conditional
+ Adversarial Networks". In: arXiv (2018).
1356
+ APPENDIX A
+ In this appendix we show the detailed architecture of the implemented models and the hyper-parameters used, to allow for reproducibility of the results. For the DCGAN network, we used the Adam optimizer with learning rate 0.0002 and β1 = 0.5. For the WGAN and ProgGAN networks, we used the RMSProp optimizer with learning rate 0.0005. For the VAE + WGAN model we first trained the VAE and then the whole model, using the RMSProp optimizer with learning rates 0.0003 and 0.0005, respectively.
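The optimizer settings above can be illustrated with a minimal sketch on a toy 1-D objective, f(w) = (w − 3)². The learning rates and β1 come from the text; β2, ρ, and ε are assumed framework defaults that the paper does not state.

```python
import math

def adam(grad, w0, lr=0.0002, beta1=0.5, beta2=0.999, eps=1e-8, steps=30000):
    # Adam with the reported lr and beta1; beta2/eps are assumed defaults.
    w, m, v = w0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = beta1 * m + (1 - beta1) * g          # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w

def rmsprop(grad, w0, lr=0.0005, rho=0.9, eps=1e-8, steps=30000):
    # RMSProp with the reported lr; rho/eps are assumed defaults.
    w, s = w0, 0.0
    for _ in range(steps):
        g = grad(w)
        s = rho * s + (1 - rho) * g * g          # running mean of squared grads
        w -= lr * g / (math.sqrt(s) + eps)
    return w

grad = lambda w: 2.0 * (w - 3.0)                 # gradient of (w - 3)^2
w_adam = adam(grad, 0.0)
w_rms = rmsprop(grad, 0.0)
```

Both runs drive w toward the minimum at 3; the effective step size is roughly the learning rate, which is why thousands of steps are needed at these small rates.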
+ TABLE III
+ GENERATOR STRUCTURE OF THE DCGAN MODEL USED
+ Layer name | Act.      | Input shape     | Output shape
+ Dense      | -         | 100 × 1 × 1     | 65536 × 1 × 1
+ Reshape    | -         | 65536 × 1 × 1   | 1024 × 8 × 8
+ Deconv     | -         | 1024 × 8 × 8    | 256 × 16 × 16
+ BatchNorm  | -         | 256 × 16 × 16   | 256 × 16 × 16
+ LeakyReLU  | LeakyReLU | 256 × 16 × 16   | 256 × 16 × 16
+ Dropout    | -         | 256 × 16 × 16   | 256 × 16 × 16
+ Deconv     | -         | 256 × 16 × 16   | 128 × 32 × 32
+ BatchNorm  | -         | 128 × 32 × 32   | 128 × 32 × 32
+ LeakyReLU  | LeakyReLU | 128 × 32 × 32   | 128 × 32 × 32
+ Dropout    | -         | 128 × 32 × 32   | 128 × 32 × 32
+ Deconv     | -         | 128 × 32 × 32   | 64 × 64 × 64
+ BatchNorm  | -         | 64 × 64 × 64    | 64 × 64 × 64
+ LeakyReLU  | LeakyReLU | 64 × 64 × 64    | 64 × 64 × 64
+ Dropout    | -         | 64 × 64 × 64    | 64 × 64 × 64
+ Deconv     | -         | 64 × 64 × 64    | 32 × 128 × 128
+ BatchNorm  | -         | 32 × 128 × 128  | 32 × 128 × 128
+ LeakyReLU  | LeakyReLU | 32 × 128 × 128  | 32 × 128 × 128
+ Dropout    | -         | 32 × 128 × 128  | 32 × 128 × 128
+ Deconv     | tanh      | 32 × 128 × 128  | 1 × 128 × 128
+ TABLE IV
+ DISCRIMINATOR STRUCTURE OF THE DCGAN MODEL USED
+ Layer name | Act.      | Input shape     | Output shape
+ Conv       | -         | 1 × 128 × 128   | 1 × 128 × 128
+ LeakyReLU  | LeakyReLU | 1 × 128 × 128   | 1 × 128 × 128
+ Conv       | -         | 1 × 128 × 128   | 32 × 64 × 64
+ BatchNorm  | -         | 32 × 64 × 64    | 32 × 64 × 64
+ LeakyReLU  | LeakyReLU | 32 × 64 × 64    | 32 × 64 × 64
+ Conv       | -         | 32 × 64 × 64    | 64 × 32 × 32
+ BatchNorm  | -         | 64 × 32 × 32    | 64 × 32 × 32
+ LeakyReLU  | LeakyReLU | 64 × 32 × 32    | 64 × 32 × 32
+ Conv       | -         | 64 × 32 × 32    | 128 × 16 × 16
+ BatchNorm  | -         | 128 × 16 × 16   | 128 × 16 × 16
+ LeakyReLU  | LeakyReLU | 128 × 16 × 16   | 128 × 16 × 16
+ Conv       | -         | 128 × 16 × 16   | 256 × 8 × 8
+ BatchNorm  | -         | 256 × 8 × 8     | 256 × 8 × 8
+ LeakyReLU  | LeakyReLU | 256 × 8 × 8     | 256 × 8 × 8
+ Flatten    | -         | 256 × 8 × 8     | 16384 × 1 × 1
+ Dense      | sigmoid   | 16384 × 1 × 1   | 1 × 1 × 1
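The spatial dimensions in Tables III and IV follow the usual stride-2 (de)convolution arithmetic: each stride-2 deconv doubles H and W, and the final deconv only changes the channel count. The stride and padding values below are our assumptions, chosen to reproduce the tabulated shapes.

```python
def deconv(shape, c_out, stride=2):
    # Shape arithmetic for a "same"-padded transposed convolution:
    # spatial size is multiplied by the stride; channels become c_out.
    c, h, w = shape
    return (c_out, h * stride, w * stride)

# Generator path from Table III: 1024x8x8 -> ... -> 1x128x128
shape = (1024, 8, 8)
for c_out in (256, 128, 64, 32):
    shape = deconv(shape, c_out)        # stride-2 upsampling deconvs
shape = deconv(shape, 1, stride=1)      # final tanh deconv keeps 128x128
```

Walking the discriminator of Table IV in reverse (stride-2 convs halving H and W) recovers the mirrored progression 128 → 64 → 32 → 16 → 8.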
+ TABLE V
+ BLOCKS OF LAYERS USED IN THE PROGGAN NETWORK'S STRUCTURES
+ Block    | Layer name | Act.      | Output shape
+ DECONV 1 | Deconv     | -         | 128 × 64 × 64
+          | BatchNorm  | -         | 128 × 64 × 64
+          | LeakyReLU  | LeakyReLU | 128 × 64 × 64
+          | Deconv     | -         | 128 × 64 × 64
+          | BatchNorm  | -         | 128 × 64 × 64
+          | LeakyReLU  | LeakyReLU | 128 × 64 × 64
+ DECONV 2 | UpSampling | -         | 128 × 128 × 128
+          | Deconv     | -         | 64 × 128 × 128
+          | BatchNorm  | -         | 64 × 128 × 128
+          | LeakyReLU  | LeakyReLU | 64 × 128 × 128
+          | Deconv     | -         | 64 × 128 × 128
+          | BatchNorm  | -         | 64 × 128 × 128
+          | LeakyReLU  | LeakyReLU | 64 × 128 × 128
+ CONV 1   | Conv       | -         | 64 × 128 × 128
+          | BatchNorm  | -         | 64 × 128 × 128
+          | LeakyReLU  | LeakyReLU | 64 × 128 × 128
+          | Conv       | -         | 128 × 128 × 128
+          | BatchNorm  | -         | 128 × 128 × 128
+          | LeakyReLU  | LeakyReLU | 128 × 128 × 128
+          | Downsample | -         | 128 × 64 × 64
+ CONV 2   | Conv       | -         | 128 × 64 × 64
+          | BatchNorm  | -         | 128 × 64 × 64
+          | LeakyReLU  | LeakyReLU | 128 × 64 × 64
+          | Conv       | -         | 128 × 64 × 64
+          | BatchNorm  | -         | 128 × 64 × 64
+          | LeakyReLU  | LeakyReLU | 128 × 64 × 64
+ TABLE VI
+ GENERATOR STRUCTURE OF THE 64 × 64 NETWORK
+ Layer name | Act. | Input shape     | Output shape
+ Dense      | -    | 100 × 1 × 1     | 262144 × 1 × 1
+ Reshape    | -    | 262144 × 1 × 1  | 64 × 64 × 64
+ DECONV 1 block (Table V)
+ Deconv     | Tanh | 128 × 64 × 64   | 1 × 64 × 64
+ TABLE VII
+ CRITIC STRUCTURE OF THE 64 × 64 NETWORK
+ Layer name | Act.      | Input shape     | Output shape
+ Conv       | -         | 1 × 64 × 64     | 128 × 64 × 64
+ LeakyReLU  | LeakyReLU | 128 × 64 × 64   | 128 × 64 × 64
+ CONV 2 block (Table V)
+ Flatten    | -         | 128 × 64 × 64   | 524288 × 1 × 1
+ Dense      | linear    | 524288 × 1 × 1  | 1 × 1 × 1
+ TABLE VIII
+ GENERATOR STRUCTURE OF THE 128 × 128 NETWORK
+ Layer name | Act. | Input shape     | Output shape
+ Dense      | -    | 100 × 1 × 1     | 262144 × 1 × 1
+ Reshape    | -    | 262144 × 1 × 1  | 64 × 64 × 64
+ DECONV 1 block (Table V)
+ DECONV 2 block (Table V)
+ Deconv     | Tanh | 64 × 128 × 128  | 1 × 128 × 128
+ TABLE IX
+ CRITIC STRUCTURE OF THE 128 × 128 NETWORK
+ Layer name | Act.      | Input shape     | Output shape
+ Conv       | -         | 1 × 128 × 128   | 64 × 128 × 128
+ LeakyReLU  | LeakyReLU | 64 × 128 × 128  | 64 × 128 × 128
+ CONV 1 block (Table V)
+ CONV 2 block (Table V)
+ Flatten    | -         | 128 × 64 × 64   | 524288 × 1 × 1
+ Dense      | linear    | 524288 × 1 × 1  | 1 × 1 × 1
+ TABLE X
+ ENCODER STRUCTURE OF THE VAE + WGAN MODEL USED
+ Layer name | Act. | Input shape     | Output shape
+ Conv       | -    | 1 × 128 × 128   | 16 × 64 × 64
+ BatchNorm  | -    | 16 × 64 × 64    | 16 × 64 × 64
+ ReLU       | ReLU | 16 × 64 × 64    | 16 × 64 × 64
+ Conv       | -    | 16 × 64 × 64    | 32 × 32 × 32
+ BatchNorm  | -    | 32 × 32 × 32    | 32 × 32 × 32
+ ReLU       | ReLU | 32 × 32 × 32    | 32 × 32 × 32
+ Conv       | -    | 32 × 32 × 32    | 64 × 16 × 16
+ BatchNorm  | -    | 64 × 16 × 16    | 64 × 16 × 16
+ ReLU       | ReLU | 64 × 16 × 16    | 64 × 16 × 16
+ Conv       | -    | 64 × 16 × 16    | 128 × 8 × 8
+ BatchNorm  | -    | 128 × 8 × 8     | 128 × 8 × 8
+ ReLU       | ReLU | 128 × 8 × 8     | 128 × 8 × 8
+ Flatten    | -    | 128 × 8 × 8     | 8192 × 1 × 1
+ Dense      | -    | 8192 × 1 × 1    | 1024 × 1 × 1
+ BatchNorm  | -    | 1024 × 1 × 1    | 1024 × 1 × 1
+ ReLU       | ReLU | 1024 × 1 × 1    | 1024 × 1 × 1
+ µ (Dense)  | -    | 1024 × 1 × 1    | 512 × 1 × 1
+ σ (Dense)  | -    | 1024 × 1 × 1    | 512 × 1 × 1
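The encoder in Table X ends in two dense heads, µ and σ, each producing a 512-dimensional vector. In a VAE the latent code is typically obtained with the standard reparameterization trick, z = µ + σ ⊙ ε with ε ~ N(0, I); the sketch below is our illustration of that step, and whether the σ head outputs σ directly or log σ² is a framework detail not stated in the text.

```python
import random

def reparameterize(mu, sigma, rng):
    # z = mu + sigma * eps, eps ~ N(0, I); one Gaussian draw per dimension.
    return [m + s * rng.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]

rng = random.Random(0)
mu = [0.5] * 512        # 512-dimensional latent, as in Table X
sigma = [0.1] * 512
z = reparameterize(mu, sigma, rng)
```

Sampling through µ and σ (rather than taking µ alone) is what lets the decoder see a smooth, noise-perturbed neighborhood of each code during training.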
AdE1T4oBgHgl3EQfDQNp/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
BNAyT4oBgHgl3EQfd_jC/content/tmp_files/2301.00314v1.pdf.txt ADDED
@@ -0,0 +1,2042 @@
+ Edited: In the Proc. of the 26th International Conference on Pattern Recognition (ICPR’22) Montreal, Canada, Aug. 21-25, 2022.
+ Causal Deep Learning
+ M. Alex O. Vasilescu
+ IPAM, University of California, Los Angeles
+ Tensor Vision Technologies, Los Angeles
7
+ Abstract—We derive a set of causal deep neural networks whose
8
+ architectures are a consequence of tensor (multilinear) factor
9
+ analysis. Forward causal questions are addressed with a neural
10
+ network architecture composed of causal capsules and a tensor
11
+ transformer. The former estimate a set of latent variables that rep-
12
+ resent the causal factors, and the latter governs their interaction.
13
+ Causal capsules and tensor transformers may be implemented
14
+ using shallow autoencoders, but for a scalable architecture we
15
+ employ block algebra and derive a deep neural network composed
16
+ of a hierarchy of autoencoders. An interleaved kernel hierarchy
17
+ pre-processes the data resulting in a hierarchy of kernel ten-
18
+ sor factor models. Inverse causal questions are addressed with
19
+ a neural network that implements multilinear projection and
20
+ estimates the causes of effects. As an alternative to aggressive
21
+ bottleneck dimension reduction or regularized regression that may
22
+ camouflage an inherently underdetermined inverse problem, we
23
+ prescribe modeling different aspects of the mechanism of data
24
+ formation with piecewise tensor models whose multilinear projec-
25
+ tions are well-defined and produce multiple candidate solutions.
26
+ Our forward and inverse neural network architectures are suitable
27
+ for asynchronous parallel computation.
28
+ Index Terms—causality, factor analysis, tensor decomposition
29
+ transformer, Hebb learning, neural networks
30
+ I. INTRODUCTION
31
+ Building upon prior representation learning efforts aimed at
32
+ disentangling the causal factors of data variation [28][8][92]
33
+ [72][71]1, we derive a set of causal deep neural networks
34
+ that are a consequence of tensor (multilinear) factor analysis.
35
+ Tensor factor analysis is a transparent framework for modeling
36
+ a hypothesized multi-causal mechanisms of data formation,
37
+ computing invariant causal representations, and estimating
38
+ the effects of interventions [103][100][108][105]. The validity
39
+ and strength of causal explanations depend on causal model
40
+ specifications in conjunction with experimental designs for
41
+ acquiring suitable training data [81].
42
+ Unlike conventional statistics and machine learning which
43
+ model observed data distributions and make predictions about
44
+ one variable co-observed with another, or perform time series
45
+ forecasting, causal inference is a hypothesis-driven process,
46
+ as opposed to a data-driven process, that models the mecha-
47
+ nism of data formation and estimates the effects of interven-
48
+ tions [78][46][89][103][99]. Inverse causal “inference” estimates
49
+ the causes of effects given an estimated forward model and
50
+ constraints on the solution set [32][100][109].
51
+ 1Representation learning has been performed with deep neural net-
52
+ works [8][63][84] composed of architectural modules, such as Restricted
53
+ Boltzmann Machines (RBMs) [37][72], spike-and-slab RBMs [28][71], autoen-
54
+ coders [61][9][70], or encoder-decoders [79][15][50], and have been trained
55
+ in a supervised [64][22][71], unsupervised [37][8][28][61][79], and semi-
56
+ supervised manner [73]. Deep neural networks have been employed in life-
57
+ critical application areas, such as medical diagnosis [54][68][94], and face
58
+ recognition [91][42][90][20].
59
+ Fig. 1: A forward causal neural network is composed of
60
+ a set of causal capsules and a tensor transformer. Causal
61
+ capsules estimate the causal factor representations Um, and a
62
+ tensor transformer T governs their interaction. Causal capsules
63
+ and tensor transformers can be implemented with a set of
64
+ shallow autoencoders and tensor autoencoders, respectively. For
65
+ a scalable architecture, each autoencoder is replaced with a
66
+ deep neural network composed of a part based hierarchy of
67
+ autoencoders (Fig. 3f). The above neural network depicts the M-
68
+ mode SVD [106][105], a parallel multilinear rank decomposition.
69
+ (In practice, images are vectorized and centered.)
70
+ Causal Inference Versus Regression
71
+ Neural networks and tensor factorization methods may be causal
72
+ in nature and perform causal inference, or simply perform
73
+ regression from which no causal conclusions are drawn. For
74
+ causal inference, hypothesis-driven experimental design for
75
+ generating training data [81], and model specifications (Fig. 2)
76
+ trump algorithmic design and analysis.2
77
+ 2Tensor causal factor analysis have been employed in the analysis and
78
+ recognition of facial identities [108][103], facial expressions[44], human motion
79
+ signatures [97][24][41], and 3D sound [33]. It has been employed in the
80
+ transfer of facial expressions [111], the rendering of textures suitable for
81
+ arbitrary geometries, views and illuminations [107], etc.Tensor factor analysis
82
+ has also been employed in psychometrics [95][34][17][10][60], econometrics
83
+ [52],[69], chemometrics
84
+ [13], and other fields. Tensor regression has been
85
+ employed to estimate missing data [21] and to perform dimensionality reduction
86
+ [115][112][59] [14][49][40][7] by taking advantage of the row, column and
87
+ fiber redundancies. Recently, tensor regression has been employed in machine
88
+ learning to reduce neural network parameters. Network parameters are organized
89
+ into “data tensors”, and dimensionally reduced [62][74][56][55][76].
90
+ arXiv:2301.00314v1 [cs.LG] 1 Jan 2023
91
+
92
+ Causal Capsules:
93
+ Interventions
94
+ I, Illums.
95
+ Mode-m Matrix C omputation, Um
96
+ ^ I Views
97
+ Ip People
98
+
99
+ Interventions
100
+ p
101
+ Pixels
102
+ m = D ×U+··*m- Ut-*+U++·** U
103
+ matrixize
104
+ X,[v]
105
+ Xi,[L]
106
+ X
107
+ invariant
108
+ Xi
109
+ p,[P]
110
+ invariant
111
+ IP
112
+ code
113
+ xi
114
+ code
115
+ 2(电(
116
+ x2
117
+ X2
118
+
119
+ ll;
120
+ [P]
121
+ V
122
+ IP
123
+ / [P]
124
+ 1X
125
+ Nx
126
+ XNpIx
127
+ AT
128
+ il
129
+ Training Initialization
130
+ Tensor Transformer:
131
+ vec (Ri ini)
132
+ Core Tensor Computation, T
133
+ V3
134
+ V4
135
+ d
136
+ 0
137
+ 23
138
+ P2
139
+ Rt
140
+ I, Illums.
141
+ 1.14P2
142
+ di
143
+ Y111
144
+ d.
145
+ Iy Views
146
+ d.
147
+ 1
148
+ ipivit
149
+ d.
150
+ Y211
151
+ d2
152
+ tensorize
153
+ 1 p
154
+ People
155
+ ripivit
156
+ matrixize
157
+ 13
158
+ 3'4P
159
+ [x
160
+ 14
161
+ N,P
162
+ '2P
163
+ '3P
164
+ 4P
165
+ 1s
166
+ 15vP
167
+ 5'2P
168
+ T
169
+ Ix Pixels
170
+ YiplviL
171
+ d.x
172
+ R.A. Causal Neural Networks
173
+ Causal neural networks are composed of causal capsules and
174
+ tensor transformers (Fig. 1). Causal capsules estimate the latent
175
+ variables that represent the causal factors of data formation.
176
+ A tensor transformer governs the interaction of the latent
177
+ variables. Causal capsules may be implemented as shallow
178
+ Hebb autoencoders, which perform principal component analysis
179
+ (PCA) [87][83][82][1][75] (see supplemental Sec VI-A) when
180
+ the neurons are linear with non-deterministic activation [4][48].
181
+ The tensor transformer may be implemented as a tensor
182
+ autoencoder, a shallow autoencoder whose code is the tensor
183
+ product of the latent variables.
184
+ Causal deep neural networks are composed of stacking Hebb and
185
+ tensor autoencoders. Each causal capsule or tensor transformer
186
+ in a shallow causal neural network is replaced by mathematically
187
+ equivalent deep architectures composed either of a part-based
188
+ hierarchy of Hebb autoencoders or of a part-based hierarchy
189
+ of tensor autoencoders. An interleaved hierarchy of kernel
190
+ functions [86] serves as a pre-processor that warps the data
191
+ manifold for optimal tensor factor analysis.3 The resulting deep
192
+ causal neural network models the mechanism of data formation
193
+ with a hierarchy of tensor factor models [103][102][99, Sec
194
+ 4.4] (see Supplemental Section VI-C).
195
+ Inverse causal neural networks implement the multilinear
196
+ projection algorithm to estimate the causes of effects [109][100].
197
+ A neural network that addresses an underdetermined inverse
198
+ problem is characterized by a wide hidden layer. Dimensionality
199
+ reduction removes noise and nuisance variables [38], [93], and
200
+ has the added benefit of reducing the widths of hidden layers.
201
+ However, aggressive bottleneck dimensionality reduction may
202
+ camouflage an inherently ill-posed problem. Alternatively or in
203
+ addition to dimensionality reduction and regularized regression,
204
+ we prescribe modeling different aspects of the data formation
205
+ process with piecewise tensor (multilinear) models (mixture of
206
+ experts) whose projections are well-defined [104]. Candidate
207
+ solutions are gated to yield a unique solution.
208
+ II. FORWARD CAUSAL QUESTION: “WHAT IF?”
209
+ Forward causal inference is a hypothesis-driven process that
210
+ addresses the “what if” question. Causal hypotheses drive both
211
+ the experimental design for generating training data and the
212
+ causal model specification.
213
+ Training Data: For modeling the unit level effects of causes,
214
+ the training data is generated by combinatorially varying
215
+ each causal factor while holding the other factors fixed. The
216
+ best causal evidence comes from randomized experimental
217
+ studies. When physical, or statistical experiments for generating
218
+ training data are unethical or infeasible, experiments may be
219
+ approximated with carefully designed observational studies [81],
220
+ such as natural experiments [2][16][47]. The certainty of causal
221
+ conclusions are dependent on the type of evidence employed.4
222
+ Models: Within the tensor mathematical framework (Supple-
223
+ mental Section VI-B) a “data tensor,” D ∈ CI0×I1···×Im···×IM,
224
+ 3There have been a number of related transformer architectures engineered
225
+ and empirically tested with success [29], [113], [66].
226
+ 4Datasheets for datasets, as proposed by Gebru et al. [31], may help facilitate
227
+ the approximation of experimental studies.
228
+ (a)
229
+ (b)
230
+ Fig. 2: Same data, same algorithm, but two different model
231
+ specifications (problem setups) result in two semantically
232
+ different decompositions. (a) Causal Inference: The M-mode
233
+ SVD (Algorithm 1) factorizes a “data tensor” of vectorized
234
+ observations into a set of latent variables that represent the
235
+ causal factors. (b) Regression: The M-mode SVD factorizes a
236
+ “data tensor” composed of images as a “data matrix” into the
237
+ image column and row space as well as the normalized PCA
238
+ coefficients. (Images represent their vectorized versions except
239
+ in Fig. 2b.)
240
+ Algorithm 1 M-mode SVD (parallel computation)[106], [105]
241
+ Input D ∈ CI0×···×IM, dimensions R0, R1 . . . Rm . . . RM
242
+ 1. Initialize Um := I or random matrix, 0 ≤ m ≤ M
243
+ 2. Iterate until convergence
244
+ For m := 0, . . . , M,
245
+ • X := D ×0 UT
246
+ 0 × · · · ×m-1 UT
247
+ m-1 ×m+1 UT
248
+ m+1 · · · × UT
249
+ M
250
+ • Set Um to the ˜Rm leading left-singular vectors of the
251
+ SVD of X[m] or SVD of [X[m]XT
252
+ [m]]. a, b
253
+ 3. Set Z := D ×0 UT
254
+ 0 · · · ×m UT
255
+ m · · · ×M UT
256
+ M := X × UT
257
+ M
258
+ c
259
+ Output mode matrices U0, U1, . . . , UM and core tensor Z.
260
+ aThe computation of Um in the SVD X[m] = UmΣVmT can be performed
261
+ efficiently, depending on which dimension of X[m] is smaller, by decomposing
262
+ either X[m]X[m]T = UmΣ2UmT (note that VmT = Σ+UmTX[m]) or
263
+ by decomposing X[m]TX[m] = VmΣ2VmT and then computing Um =
264
+ X[m]VmΣ+.
265
+ b For a neural network implementation, the SVD of X[m] is replaced with
266
+ a Hebb autoencoder that sequentially computes the orthonormal columns of
267
+ Um/Vn by performing gradient descent or stochastic gradient descent [12][80].
268
+ In Fig. 1, the autoencoders learn the columns in Vm. Matrix Vm,r contains the
269
+ first r columns; vm,r is column r.
270
+ Hebb Autoencoder
271
+
272
+ ��
273
+
274
+ For r := 1 . . . Rm.
275
+ Iterate until convergence
276
+ ∆vm,r(t+1)=η
277
+
278
+ X[m] − Vm,r(t)VT
279
+ m,r(t)X[m]
280
+
281
+ XT
282
+ [m]vm,r(t)
283
+
284
+ ��
285
+
286
+ code
287
+ ˆvm,r(t+1)=
288
+
289
+ vm,r(t) + ∆vm,r(t+1)
290
+
291
+ ∥vm,r(t) + ∆vm,r(t+1)∥
292
+ cThe columns in Z[0] may be computed by initializing the code of an
293
+ autoencoder to (UM · · · ⊗ Um · · · ⊗ U0), where ⊗ is the Kronecker product.
294
+ In Fig. 1, the columns of the extended core T are computed by initializing the
295
+ code of the autoencoder with (UM · · · ⊗ Um · · · ⊗ U1).
296
+
297
+ people variance
298
+ person signature, p
299
+ X
300
+ ★peoplevariance
301
+ Up
302
+ Views
303
+ lew
304
+ People
305
+ variance
306
+ M-mode SVD
307
+ viewvariance
308
+ Pixels
309
+ illumination variance
310
+ illuminations
311
+ Uv
312
+ x
313
+ UL
314
+ illumination
315
+ view
316
+ representation.IT
317
+ illuminationvariance
318
+ representation,vT Images
319
+ Ri
320
+ Normalized PCA
321
+ Images
322
+ CoefficientMatrix
323
+ U
324
+ R
325
+ IT
326
+ M-mode SVD
327
+ Rxc
328
+ Pixels
329
+ Z
330
+ U
331
+ Pixels
332
+ xr
333
+ Rxc
334
+ Column
335
+ Row
336
+ Space
337
+ Space
338
+ Ixr
339
+ XC
340
+ IxcCausal Deep Learning
341
+ ICPR 2022, Montreal, Canada
342
+ Algorithm 2 Kernel Tensor Factor Analysis [99, Sec 4.4][108]
343
+ Kernel Multilinear Independent Component Analysis (K-MICA) and Kernel Principal Component Analysis (K-MPCA).
344
+ Input the data tensor D ∈ CI0×···×IM , where mode m = 0 is the measurement mode, and the desired ranks are ˜R1, . . . , ˜RM.
345
+ Initialize Cm = I or random matrix, ∀ 0 ≤ m ≤ M
346
+ Iterate until convergence.
347
+ 1) For m := 1, . . . , M
348
+ a) Set Xm := D ×1 C+
349
+ 1 · · · ×m−1 C+
350
+ m−1 ×m+1 C+
351
+ m+1 · · · ×M C+
352
+ M.
353
+ b) Compute the elements of the mode-m covariance matrix, for j, k := 1, . . . , Im:
354
+ [X[m]X[m]
355
+ T]jk:=
356
+ I1
357
+
358
+ i1=1
359
+ ...
360
+ Im−1
361
+
362
+ im-1=1
363
+ Im+1
364
+
365
+ im+1=1
366
+ ...
367
+ IM
368
+
369
+ iM=1
370
+ K(xi1...im-1 j im+1...iM, xi1...im-1 k im+1...iM).
371
+ (1)
372
+ c) a
373
+
374
+
375
+
376
+
377
+
378
+
379
+
380
+
381
+
382
+
383
+
384
+
385
+
386
+
387
+
388
+ For K-MPCA:
389
+ Set Cm := U to the left matrix of the SVD, of [X[m]X[m]
390
+ T] = UmΣ2UT
391
+ m from (1)
392
+ Truncate to ˜Rm columns Um ∈ CIm× ˜
393
+ Rm.
394
+ For K-MICA:
395
+ Set Cm := UmW−1
396
+ m . The additional invertible matrix Wm may be computed based on negentropy,
397
+ mutual information, or higher-order cumulants [108].
398
+ The initial SVD of [X[m]X[m]] T (1) truncates the subspace to ˜Rm.
399
+ 2) Set B := XM ×M C+
400
+ M. For K-MPCA, C+
401
+ M = CT
402
+ M.
403
+ Output the converged extended core tensor T ∈ CI0× ˜
404
+ R1×···× ˜
405
+ RM and causal factor mode matrices C1, . . . , CM.
406
+ aEvery SVD step may be computed by gradient descent and replaced with a Hebb autoencoder-decoder. See Algorithm 1 footnotes a and b. See Fig. 3 for a
407
+ scalable neural network implementation.
408
+ Linear kernel:
409
+ K(u, v) = uTv = u · v
410
+ Polynomial kernel of degree d:
411
+ K(u, v) = (uTv)d
412
+ Polynomial kernel up to degree d:
413
+ K(u, v) = (uTv + 1)d
414
+ Sigmoidal kernel:
415
+ K(u, v) = tanh(α uTv + β)
416
+ Gaussian (radial basis function (RBF)) kernel:
417
+ K(u, v) = exp
418
+
419
+ − ∥u−v∥2
420
+ 2σ2
421
+
422
+ TABLE I: Common kernel functions. Kernel
423
+ functions are symmetric, positive semi-definite
424
+ functions corresponding to symmetric, positive
425
+ semi-definite Gram matrices. The linear kernel
426
+ does not modify or warp the feature space.
427
+ contains a collection of vectorized5 and centered observations,
428
+ di1...im...iM ∈ CI0 that are the result of M causal factors. Causal
429
+ factor m (1 ≤ m ≤ M) takes one of Im values that are
430
+ indexed by im, 1 ≤ im ≤ Im. An observation and a data tensor
431
+ are modeled by a multilinear equation with multimode latent
432
+ variables:
433
+ D = T ×1 U1 · · · × Um ×M UM + E,
434
+ (2)
435
+ di1,...,iM = T ×1 (ˆu
436
+ T
437
+ i1 + ϵ
438
+ T
439
+ i1) · · · ×M (ˆu
440
+ T
441
+ iM + ϵ
442
+ T
443
+ iM) + ξi1,...,iM,
444
+ where T is the extended core that contains the basis vectors and
445
+ governs the interaction between the latent variables ˆuT
446
+ im (row
447
+ i of Um) that represent the causal factors of data formation,
448
+ ϵim ∈ N(0, Σm) are disturbances with Gaussian distribution,
449
+ and ξi1,...,iM is a Gaussian measurement error.
450
+ Minimizing the cost function
451
+ L=∥D − T ×1 U1... ×m Um... ×M UM∥ +
452
+ M
453
+
454
+ m=1
455
+ λm∥UmU
456
+ T
457
+ m − I∥
458
+ is equivalent to maximum likelihood estimation [27] of the
459
+ causal factor parameters, assuming the data was generated by
460
+ the model with additive Gaussian noise. The optimal mode
461
+ matrices Um are computed by employing a set of M alternating
462
+ least squares optimizations.
463
+ Lm
464
+ =
465
+ ∥Xm − T ×m Um∥ + λm∥U
466
+ T
467
+ m × Um − I∥, where
468
+ 5 It is preferable to vectorize an image and treat it as a single observation
469
+ rather than as a collection of independent column/row observations. Most
470
+ assertions found in highly cited publications in favor of treating an image as a
471
+ “data matrix” or “tensor” do not stand up to analytical scrutiny [99, App. A].
472
+ Xm:=D ×1 ... ×m-1 U
473
+ T
474
+ m-1 ×m+1 U
475
+ T
476
+ m+1... ×M U
477
+ T
478
+ M - parallel
479
+ (3)
480
+ = Xm(t−1) ×n U
481
+ T
482
+ n(t)Un(t−1), ∀n ̸= m, - asynchronous (4)
483
+ = (Xm-1 ×m-1 U
484
+ T
485
+ m-1) ×m Um = T ×m Um
486
+ - sequential
487
+ (5)
488
+ The M-mode SVD [105] (Algorithm 1) minimizes M alternat-
489
+ ing least squares in closed form by employing M different
490
+ SVDs. It is suitable for parallel computation, but can be
491
+ performed asynchronously or sequentially by employing (4)
492
+ and (5), respectively.6 The core tensor T is computed by
493
+ multiplying the data tensor with the inverse mode matrices,
494
+ T = D ×1 UT
495
+ 1 · · · ×m UT
496
+ m · · · ×M UT
497
+ M, or more efficiently as
498
+ T = Xm × UT
499
+ m.
500
+ A. Kernel Tensor Factor Analysis:
501
+ When data D are a combination of non-linear independent causal
502
+ factors C,
503
+ D = B ×1 φ(C1) · · · × φ(Cm) · · · ×M φ(CM) + E
504
+ (6)
505
+ Cm = UmW−1+ Em,
506
+ kernel
507
+ multilinear
508
+ independent
509
+ component
510
+ analysis
511
+ (K-
512
+ MICA) [99, Ch 4.4] models the mechanism of data formation. K-
513
+ MICA employs the “kernel trick” [85][110] as a pre-processing
514
+ step which makes the data suitable for multilinear independent
515
+ component analysis [108] (Algorithm 2), where the additional
516
+ rotation matrix Wm may be computed based on negentropy,
517
+ mutual information, or higher-order cumulants. K-MICA is
518
+ 6A sequential computation of the M-mode SVD is known in the literature
519
+ as a tensor ring [116]. A single-time iteration is known as a tensor train [77].
520
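The closed-form computation sketched above can be illustrated in NumPy. This is a minimal sketch under our own naming conventions (`unfold`, `fold`, `mode_m_product`, `m_mode_svd` are not from the paper); one SVD is taken per mode, and the core tensor is obtained by multiplying the data tensor with the transposed mode matrices:

```python
import numpy as np

def unfold(A, m):
    # Mode-m matrixizing: mode-m fibers become the columns of A_[m].
    return np.moveaxis(A, m, 0).reshape(A.shape[m], -1, order='F')

def fold(M, m, shape):
    # Inverse of unfold: tensorize a mode-m matrixized array back to `shape`.
    rest = [shape[i] for i in range(len(shape)) if i != m]
    return np.moveaxis(M.reshape([shape[m]] + rest, order='F'), 0, m)

def mode_m_product(A, B, m):
    # C = A ×m B, computed as C_[m] = B · A_[m].
    shape = list(A.shape)
    shape[m] = B.shape[0]
    return fold(B @ unfold(A, m), m, shape)

def m_mode_svd(D):
    # One SVD per mode (parallelizable); Um is the left orthonormal factor.
    U = [np.linalg.svd(unfold(D, m), full_matrices=False)[0]
         for m in range(D.ndim)]
    # Core tensor: T = D ×0 U0^T ×1 U1^T ... ×M UM^T.
    T = D
    for m, Um in enumerate(U):
        T = mode_m_product(T, Um.T, m)
    return U, T
```

Multiplying the core back by every mode matrix reconstructs the data tensor exactly, which is a quick sanity check on the decomposition.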
Fig. 3: Deep neural network. Subfigures (a-e) depict (7–10) [99, pg. 38-40]. (a) The mode matrix computation Um is a constrained
cluster-based PCA that is rewritten in terms of block SVDs. Matrixizing may be viewed as a concatenation of “cluster” data.
The matrix W transforms the basis matrix U0^(n) such that the causal factor representation Um is the same regardless of cluster
membership. (b) Mode matrix Um computation using a single autoencoder-decoder. (c) Mode matrix computation as a hierarchy
of autoencoder-decoders. (d) Mode matrix computation written as a deep learning model. (e) Concurrent autoencoders; i.e.,
constrained cluster-based autoencoders. (f) Forward causal model with a set of capsules implemented by deep neural networks.
For parallel computation, we break the chain links and shuttle causal information between capsules. (g) Each capsule in (f) may
be replaced with a part-based deep neural network by permuting the rows in D[m]^T and segmenting them based on adaptive
subdivision [96]. The capsules are efficiently trained with a part-based hierarchy of autoencoders (Fig. 4).
a tensor generalization of the kernel PCA [86] and kernel
ICA [3][114].
To accomplish this analysis, recall that the computation
of the covariance matrix D[m] D[m]^T involves inner products
d^T_{i1...im−1 j im+1...iM} d_{i1...im−1 k im+1...iM} between pairs of data points
in the data tensor D associated with causal factor mode m, for
m = 1, . . . , M (Step 2.2 in Algorithm 1). We replace the inner
products with a generalized distance measure between images,
K(d_{i1...im−1 j im+1...iM}, d_{i1...im−1 k im+1...iM}), where K(·, ·) is a
suitable kernel function (Table I) that corresponds to an inner
product in some expanded feature space. This generalization
naturally leads us to a Kernel Multilinear PCA (K-MPCA)
algorithm, where the covariance computation is replaced by
[D[m] D[m]^T]_{jk} := Σ_{i1=1}^{I1} · · · Σ_{im−1=1}^{Im−1} Σ_{im+1=1}^{Im+1} · · · Σ_{iM=1}^{IM} K(d_{i1...im−1 j im+1...iM}, d_{i1...im−1 k im+1...iM}).
When a causal factor is a combination of multiple independent
sources that are causal in nature, we employ a rotation matrix
W to identify them. The rotation matrix is computed by
Causal Deep Learning
ICPR 2022, Montreal, Canada
employing either mutual information, negentropy, or higher-order
cumulants [23][6][45][5]. A Kernel Multilinear ICA (K-MICA)
algorithm is a kernel generalization of the multilinear
independent component analysis (MICA) algorithm [108].
Algorithm 2 simultaneously specifies both the K-MPCA and
K-MICA algorithms. A scalable tensor factor analysis represents
an observation as a hierarchy of parts and wholes [103][102].
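The kernelized covariance above can be sketched in NumPy. This is a minimal sketch under our own names (`kernel_mode_gram`, `rbf` are not from the paper); mode 0 is the measurement mode, and the RBF kernel stands in for any Table I kernel:

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    # One possible kernel K(·,·); any kernel inducing an inner product works.
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

def kernel_mode_gram(D, m, kernel):
    # Kernelized analogue of D_[m] D_[m]^T for a causal mode m >= 1:
    # entry (j, k) sums K over all pairs of observations that differ
    # only in their mode-m index (j versus k).
    Im, I0 = D.shape[m], D.shape[0]
    # Bring mode m to the front and the measurement mode to the back:
    # X[j, c] is the observation with mode-m index j in "cluster" c.
    X = np.moveaxis(D, (m, 0), (0, -1)).reshape(Im, -1, I0)
    G = np.zeros((Im, Im))
    for j in range(Im):
        for k in range(Im):
            G[j, k] = sum(kernel(X[j, c], X[k, c]) for c in range(X.shape[1]))
    return G
```

With a linear kernel, this reduces to the ordinary mode-m covariance D[m] D[m]^T, which is a quick sanity check.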
III. NEURAL NETWORK ARCHITECTURE
Causal neural networks (Fig. 1) parallel the functionality and
composition of tensor factor analysis models. Causal neural
networks are composed of a set of causal capsules and a tensor
transformer. The capsules compute the latent variables, Um, that
represent the causal factors. The tensor transformer, T, encodes
the interaction between the causal factors.
Tensor factor analysis models are transformed into causal
neural networks by using Hebb autoencoders and tensor
autoencoders as building blocks. The M-mode SVD (Algorithm 1)
is transformed into a causal neural network (Fig. 1) by
replacing every SVD step with gradient descent optimization,
which is outsourced to a Hebb autoencoder (Supplemental
Section VI-A). For efficiency, we employ stochastic gradient
descent [12][80]. The extended core tensor T[0] is computed by
defining and employing a tensor autoencoder, an autoencoder
whose code is initialized to the tensor product of the causal
factor representations, D[0] = T[0] (UiM ⊗ · · · ⊗ Uim ⊗ · · · ⊗ Ui1)^T.
To address a set of arbitrarily non-linear causal factors, each
autoencoder employs kernel activation functions (Table I).
A. Causal Deep Networks and Scalable Tensor Factor Analysis:
For a scalable architecture, we leverage the properties of
block algebra. Shallow autoencoders are replaced with either a
mathematically equivalent deep neural network that requires
end-to-end training, a part-based hierarchy of autoencoders [103],
or a set of concurrent autoencoders (Fig. 3).
For example, the orthonormal subspace of a data matrix D ∈
C^{I0×I1} that has I0 measurements and I1 observations may be
computed by recursively subdividing the data and analyzing the
data blocks,
D = [DA; DB] = [UA SA VA^T; UB SB VB^T]
  = [UA 0; 0 UB] [SA VA^T; SB VB^T]
  = [UA 0; 0 UB] W Σ V^T    (taking the SVD of [SA VA^T; SB VB^T])
  = [UA WA; UB WB] Σ V^T = U Σ V^T,
where W is a rotation matrix that transforms the basis matrices,
UA and UB, spanning the data blocks, DA and DB, such that their
observations have the same representations. This approach can
be applied bottom up to a recursively partitioned data matrix. The
above equalities provide mathematical justification for greedy
layer training of deep neural networks [9] (Fig. 3a-e).
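The block equalities above can be exercised numerically. The sketch below (the function name `merge_block_svds` is ours) recovers the SVD of a vertically stacked matrix from the per-block SVDs by taking the SVD W Σ V^T of the stacked small matrices Si Vi^T and rotating each block basis by the corresponding rows of W:

```python
import numpy as np

def merge_block_svds(blocks):
    # blocks: list of SVD factors (Ui, Si, Vti) of vertically stacked blocks.
    # Stack the small matrices Si Vti and take their SVD W Σ V^T.
    SVt = np.vstack([np.diag(S) @ Vt for (_, S, Vt) in blocks])
    W, Sig, Vt = np.linalg.svd(SVt, full_matrices=False)
    # U = diag(U1, U2, ...) · W, computed block by block.
    rows, r0 = [], 0
    for (Ui, S, _) in blocks:
        r = len(S)
        rows.append(Ui @ W[r0:r0 + r, :])
        r0 += r
    return np.vstack(rows), Sig, Vt
```

Merging the two block SVDs reproduces the SVD of the full matrix, with orthonormal columns in U, as the derivation claims.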
Computing the mode matrices Um of a tensor model may be
viewed as equivalent to computing a set of mutually constrained,
cluster-based PCAs [99, pg. 38-40] (Fig. 3a). When dealing with
data that can be separated into clusters, the standard machine
learning approach is to compute a separate PCA. When data
from different clusters are generated by the same underlying
process (e.g., facial images of the same people under different
Fig. 4: (a) Causal capsules may be implemented with a part-based
hierarchy of autoencoders. The dataset is permuted by P, then
filtered and segmented by Hm, which is mode dependent.
Subdivision into parts can be determined with adaptive
subdivision [96]. (b) Implementing the capsules with a part-based
hierarchy of autoencoders is equivalent to performing
Incremental M-mode Block SVD [103, Sec IV][98].
viewing conditions), the underlying data can be concatenated
in the measurement mode and the common causal factor can
be modeled by one PCA.
Thus, we define a constrained, cluster-based PCA as the
computation of a set of PCA basis vectors that are rotated such
that the latent representation is constrained to be invariant
to cluster membership.
In the context of our multifactor data analysis, we define a cluster
as a set of observations for which all factors are fixed but one.
For every tensor mode, there are Nm = I1 I2 . . . Im−1 Im+1 . . . IM
possible clusters and the data in each cluster varies with the same
causal mode. The constrained, cluster-based PCA concatenates
the clusters in the measurement mode and analyzes the data
with a linear model, such as PCA or ICA [5], [23], [26].
To see this, let Di1...im−1 im+1...iM ∈ C^{I0×1×···×1×Im×1×···×1}
denote a subtensor of D that is obtained by fixing all causal
factor modes but mode m and mode 0 (the measurement
mode). Matrixizing this subtensor in the measurement mode,
we obtain Di1...im−1 im+1...iM[0] ∈ C^{I0×Im}. This data matrix
comprises a cluster of data obtained by varying causal factor
m, to which one can traditionally apply PCA. Since there are
Nm = I1 I2 . . . Im−1 Im+1 . . . IM possible clusters that share the
same underlying space associated with factor m, the data can
be concatenated and PCA performed in order to extract the
same representation for factor m regardless of the cluster. Now,
consider the MPCA computation of mode matrix Um (Fig. 3a),
which can be written in terms of matrixized subtensors as
Dm = [ D1...1 1...1[m]^T ; . . . ; DI1...1 1...1[m]^T ; . . . ; DI1...Im−1 Im+1...IM[m]^T ]^T = Um Σm Vm^T.    (7)
This is equivalent to computing a set of Nm =
I1 I2 . . . Im−1 Im+1 . . . IM cluster-based PCAs concurrently by
combining them into a single statistical model and representing
the underlying causal factor m common to the clusters. Thus,
rather than computing a separate linear PCA model for each
Fig. 5: An inverse causal neural network is an inverted forward
model where the operations are performed in reverse order [100][109].
cluster, MPCA concatenates the clusters into a single statistical
model and computes a representation (coefficient vector) for
mode m that is invariant relative to the other causal factor modes
1, . . . , (m − 1), (m + 1), . . . , M. Thus, MPCA is a multilinear,
constrained, cluster-based PCA. To clarify the relationship, let
us number each of the matrices Di1...im−1 im+1...iM[m] = Dm^(n)
with a parenthetical superscript 1 ≤ n = 1 + Σ_{k=1, k̸=m}^{M} (ik − 1) Π_{l=1, l̸=m}^{k−1} Il ≤ Nm.
Let each of the cluster SVDs be Dm^(n) = Um^(n) Σm^(n) Vm^(n)T, and
D[m] = [ Um^(1) Σm^(1) . . . Um^(Nm) Σm^(Nm) ] diag([ Vm^(1) . . . Vm^(Nm) ])^T
(taking the SVD [ Um^(1) Σm^(1) . . . Um^(Nm) Σm^(Nm) ] = Um Σm Wm^T of the concatenated cluster bases)
     = Um Σm Wm^T diag([ Vm^(1) . . . Vm^(Nm) ])^T,    (8)
     = Um Σm [ Vm^(1) Wm^(1) . . . Vm^(Nm) Wm^(Nm) ]^T,    (9)
     = Um Σm Vm^T,    (10)
where diag(·) denotes a block-diagonal matrix whose blocks are
the elements of its vector argument. The mode matrix Vm^(nm)
is the measurement matrix U0^(nm) (Ux^(nm) when the
measurements are image pixels) that contains the eigenvectors
spanning the observed data in cluster nm, 1 ≤ nm ≤ Nm. MPCA
can be thought of as computing a rotation matrix, Wm, that contains
a set of blocks Wm^(n) along the diagonal that transform the PCA
cluster eigenvectors Vm^(nm) such that the mode matrix Um is
the same regardless of cluster membership (8–10) (Fig. 3). The
constrained “cluster”-based PCAs may also be implemented
with a set of concurrent “cluster”-based PCAs.
Causal factors of object wholes may be computed efficiently
from their parts by applying a permutation matrix P and
creating part-based data clusters with a segmentation filter Hm,
where D ×m^T Hm P ⇔ Hm P D[m]^T, while leaving the prior analysis
intact (Fig. 3g). A deep neural network can be efficiently
trained with a hierarchy of part-based autoencoders (Fig. 4). A
computation that employs a part-based hierarchy of autoencoders
parallels the Incremental M-mode Block SVD [103, Sec. IV][102][98]
(Supplemental VI-C).
A data tensor is recursively subdivided into data blocks,
analyzed in a bottom-up fashion, and the results merged as
one moves through the hierarchy. The computational cost is
the cost of training one autoencoder, O(T), times O(log Nm),
the total number of autoencoders trained for each factor matrix,
for a total of O(T log Nm). If the causal neural network is trained
sequentially, the training cost for a single iteration is O(M T log N̄),
where N̄ is the average number of clusters across the M modes.
IV. INVERSE CAUSAL QUESTION: “WHY?”
Inverse causal inference addresses the “why” question and
estimates the causes of effects given an estimated forward causal
model and a set of constraints that reduce the solution set and
render the problem well-posed [32][100][109].
Multilinear tensor factor analysis constrains causal factor
representations to be unit vectors. Multilinear projection
[109][100] relies on this constraint and performs multiple
regularized regressions. One or more unlabeled test observations
that are not part of the training data set are simultaneously
projected into the causal factor spaces,
T^{+x} ×x^T dtest = R ≈ r1 ◦ . . . ◦ rm ◦ . . . ◦ rM,    ∥rm∥ = 1,
where the rank-one factorization of R is computed with the
M-mode SVD or a CP decomposition.
An autoencoder-decoder neural network architecture that
implements a multilinear projection (Fig. 5) is an inverted
(upside down) forward neural network architecture that reverses
the operation order of the forward model.
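The rank-one factorization of the response tensor R into unit-norm causal representations can be sketched with alternating power iterations, a minimal stand-in for the CP/M-mode SVD step referenced above (the function name `rank1_factors` and the iteration scheme are ours):

```python
import numpy as np

def rank1_factors(R, iters=50):
    # Factor R ≈ s · r1 ∘ ... ∘ rM with ∥rm∥ = 1: repeatedly contract R with
    # every factor except rm, then renormalize the result to update rm.
    rs = [np.ones(I) / np.sqrt(I) for I in R.shape]
    for _ in range(iters):
        for m in range(R.ndim):
            v = R
            for n in range(R.ndim):
                if n == m:
                    continue
                # Contract mode n; the kept mode m sits at axis 0 or 1.
                v = np.tensordot(rs[n], v, axes=(0, 0 if n < m else 1))
            rs[m] = v / np.linalg.norm(v)
    # Scale s: full contraction of R with all unit factors.
    s = R
    for r in rs:
        s = np.tensordot(r, s, axes=(0, 0))
    return rs, float(s)
```

On an exactly rank-one tensor, the recovered unit vectors match the generating ones up to sign, which is the ambiguity inherent to the factorization.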
Neural architectures addressing underdetermined inverse problems
are characterized by hidden layers that are wider than the
input layer; i.e., the dimensionality of vec(R) is larger than
the number of measurements in d. Dimensionality reduction
reduces noise and the width of the hidden layers [38], [93].
Adding sparsity, non-negativity constraints, etc., can further
reduce the solution set. Alternatively or in addition, one can
determine a set of candidate solutions by modeling different
aspects of the mechanism of data formation as piecewise tensor
(multilinear) factor models, such that each of their inverses
is well-posed. A single multilinear projection [109][100] is
replaced with multiple multilinear projections that are well-posed.
Vasilescu and Terzopoulos [104][99, Ch.7] rewrote the
forward multilinear model in terms of multiple piecewise linear
models that were employed to perform multiple well-posed
linear projections and produced multiple candidate solutions.
V. CONCLUSION
We derive a set of causal deep neural networks that are a
consequence of tensor factor analysis.7 Causal deep neural
networks encode hypothesized mechanisms of data formation
as a part-based hierarchy of kernel tensor models, where
“A causes B” means “the effect of A is B”, a measurable
and experimentally repeatable quantity [39]. The causal deep
architectures are composed of causal capsules and tensor
transformers. The former estimate the causal factor representations,
whose interactions are governed by the latter. Inverse causal
questions estimate the causes of effects and are answered by the
multilinear projection. For an underdetermined inverse problem,
as an alternative to aggressive “bottleneck” dimensionality reduction,
the mechanism of data formation is modeled as piecewise tensor
(multilinear) models, and inverse causal inference performs
multiple well-posed multilinear projections that result in multiple
candidate solutions, which are gated to yield a unique solution.
7 “Every theoretical physicist that is any good knows six or seven different
theoretical representations for exactly the same physics. He knows that they
are all equivalent, but he keeps them all in his head hoping that they will give
him different ideas.” - Richard Feynman [30].
[Fig. 5 flowchart: compute the representation vec(R); tensorize vec(R) → R; factorize R into latent variables with CP or the M-mode SVD.]
+ R
1242
+ [PCausal Deep Learning
1243
+ ICPR 2022, Montreal, Canada
1244
VI. CAUSAL DEEP LEARNING
(SUPPLEMENTAL DOCUMENT)
We denote scalars by lower case italic letters (a, b, . . .), vectors
by bold lower case letters (a, b, . . .), matrices by bold uppercase
letters (A, B, . . .), and higher-order tensors by bold uppercase
calligraphic letters (A, B, . . .). Index upper bounds are denoted
by italic uppercase letters (i.e., 1 ≤ a ≤ A or 1 ≤ i ≤ I). The
zero matrix is denoted by 0, and the identity matrix is denoted
by I. The TensorFaces paper [105] is a gentle introduction to
tensor factor analysis, [58] is a great survey of tensor methods,
and references [99], [25], [13] provide an in-depth treatment of
tensor factor analysis.
A. PCA computation with a Hebb autoencoder
A Hebb autoencoder-decoder minimizes the least squares
function,
l = Σ_{i=1}^{I} ∥di − B ci∥ + λ ∥B^T B − I∥,    (11)
and learns a set of weights, b_{i0,r}, that are identical to the
elements of the PCA basis matrix [19, p. 58], B ∈ C^{I0×R},
when employing non-deterministic linear neurons. The weights
are computed sequentially by training on a set of observations
di ∈ C^{I0} with I0 measurements (Fig. 6). The autoencoder is
implemented with a cascade of Hebb neurons [36].
The contribution of each neuron, c1, . . . , cr, is sequentially
computed, subtracted from a centered training data set, and
the difference is driven through the next Hebb neuron, cr+1 [87],
[83], [82], [1], [75].
The weights of a Hebb neuron, cr, are updated by
Δbr(t + 1) = η ( d − Σ_{ir=1}^{r} b_{ir}(t) c_{ir}(t) ) cr(t)    (12)
           = η ( d − Σ_{ir=1}^{r} b_{ir}(t) b_{ir}^T(t) d ) d^T br(t),
br(t + 1) = (br(t) + Δbr(t + 1)) / ∥br(t) + Δbr(t + 1)∥,
where d ∈ C^{I0} is a vectorized centered observation with I0
measurements, 0 ≤ η ≤ 2/∥B∥2 (with ∥B∥2 = σmax,B) is the learning
rate, br are the autoencoder weights of the r-th neuron, cr is the activation,
Fig. 6: Autoencoder-decoder architecture and Principal Component
Analysis. (All images have been vectorized, but they are
displayed as a grid of numbers.)
Fig. 7: Matrixizing a 3rd-order tensor, A.
and t is the time iteration. Back-propagation [64], [65] performs
PCA gradient descent [19, p. 58][51]. An autoencoder may be
trained and the weights updated with a data batch, D,
Δbr(t + 1) = (D − Br(t) Br^T(t) D) D^T br(t)
           = ( D D^T − Σ_{ir=1}^{r} b_{ir}(t) b_{ir}^T(t) D D^T ) br(t).
Computational speed-ups are achieved with stochastic gradient
descent [12][80].
+ descent [12][80].
1334
+ B. Relevant Tensor Algebra
1335
+ Briefly, the natural generalization of matrices (i.e., linear
1336
+ operators defined over a vector space), tensors define multilinear
1337
+ operators over a set of vector spaces. A “data tensor” denotes
1338
+ an M-way data array.
1339
+ Definition 1 (Tensor): Tensors are multilinear mappings over a
1340
+ set of vector spaces, CIm, 1 ≤ m ≤ M, to a range vector space
1341
+ CI0:
1342
+ A :
1343
+
1344
+ CI1 × CI2 × · · · × CIM�
1345
+ �→ CI0.
1346
+ (13)
1347
+ The order of tensor A ∈ CI0×I1×···×IM is M + 1. An element
1348
+ of A is denoted as Ai0i1...im...iM or ai0i1...im...iM, where 1 ≤
1349
+ im ≤ Im.
1350
+ The mode-m vectors of an M-order tensor A ∈ CI0×I1×···×IM
1351
+ are the Im-dimensional vectors obtained from A by varying index
1352
+ im while keeping the other indices fixed. In tensor terminology,
1353
+ column vectors are the mode-0 vectors and row vectors as mode-
1354
+ 1 vectors. The mode-m vectors of a tensor are also known as
1355
+ fibers. The mode-m vectors are the column vectors of matrix
1356
+ A[m] that results from matrixizing (a.k.a. flattening) the tensor
1357
+ A.
1358
+ Definition 2 (Mode-m Matrixizing): The mode-m matrixizing
1359
+ of tensor A ∈ CI0×I1×...IM is defined as the matrix A[m] ∈
1360
+
1361
+ loAlgorithm 3 M-mode SVD algorithm.[105]
1393
+ Input the data tensor D ∈ CI0×···×IM.
1394
+ 1) For m := 0, . . . , M,
1395
+ Let Um be the left orthonormal matrix of [UmSmVT
1396
+ m] :=
1397
+ svd(D[m])a
1398
+ 2) Set Z := D ×0 U0
1399
+ T ×1 U1
1400
+ T · · · ×m Um
1401
+ T... ×M UM
1402
+ T.
1403
+ Output mode matrices U0, U1, ..., UM, and the core tensor Z.
1404
+ aThe computation of Um in the SVD D[m] = UmΣVmT can be performed
1405
+ efficiently, depending on which dimension of D[m] is smaller, by decomposing
1406
+ either D[m]D[m]T = UmΣ2UmT (note that VmT = Σ+UmTD[m]) or
1407
+ by decomposing D[m]TD[m] = VmΣ2VmT and then computing Um =
1408
+ D[m]VmΣ+.
1409
C^{Im×(I0...Im−1Im+1...IM)}. As the parenthetical ordering indicates,
the mode-m column vectors are arranged by sweeping all the
other mode indices through their ranges, with smaller mode
indices varying more rapidly than larger ones; thus,
[A[m]]jk = a_{i0...im...iM},  where  j = im  and  k = 1 + Σ_{n=0, n̸=m}^{M} (in − 1) Π_{l=0, l̸=m}^{n−1} Il.    (14)
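The index map of (14) corresponds to flattening the remaining modes in column-major (Fortran) order, since smaller mode indices vary most rapidly. A small sketch (helper name ours; indices zero-based, so the "+1" offsets of (14) drop out):

```python
import numpy as np

def unfold(A, m):
    # Mode-m matrixizing per (14): smaller mode indices vary more rapidly,
    # i.e., the remaining modes are flattened in Fortran (column-major) order.
    return np.moveaxis(A, m, 0).reshape(A.shape[m], -1, order='F')

# Check [A_[m]]_{jk} = a_{i0 i1 i2} on A ∈ C^{I0×I1×I2} with m = 1, I0 = 2:
A = np.arange(2 * 3 * 4).reshape(2, 3, 4)
A1 = unfold(A, 1)
i0, i1, i2 = 1, 2, 3
j = i1
k = i0 + i2 * 2          # zero-based k = i0 + i2·I0, skipping mode m = 1
assert A1[j, k] == A[i0, i1, i2]
```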
A generalization of the product of two matrices is the product
of a tensor and a matrix [25], [18].
Definition 3 (Mode-m Product, ×m): The mode-m product
of a tensor A ∈ C^{I1×I2×···×Im×···×IM} and a matrix B ∈
C^{Jm×Im}, denoted by A ×m B, is a tensor of dimensionality
C^{I1×···×Im−1×Jm×Im+1×···×IM} whose entries are computed by
[A ×m B]_{i1...im−1 jm im+1...iM} = Σ_{im} a_{i1...im−1 im im+1...iM} b_{jm im}.
Equivalently, C = A ×m B may be computed by matrixizing,
multiplying, and tensorizing: C[m] = B A[m].
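The matrixize-multiply-tensorize identity C[m] = B A[m] can be checked directly; a minimal sketch (helper names ours), validated against an elementwise einsum computation of the defining sum:

```python
import numpy as np

def unfold(A, m):
    # Mode-m matrixizing (Fortran order over the remaining modes).
    return np.moveaxis(A, m, 0).reshape(A.shape[m], -1, order='F')

def fold(M, m, shape):
    # Inverse of unfold: tensorize a mode-m matrixized array back to `shape`.
    rest = [shape[i] for i in range(len(shape)) if i != m]
    return np.moveaxis(M.reshape([shape[m]] + rest, order='F'), 0, m)

def mode_m_product(A, B, m):
    # C = A ×m B: matrixize A, left-multiply by B, tensorize the result.
    shape = list(A.shape)
    shape[m] = B.shape[0]
    return fold(B @ unfold(A, m), m, shape)
```

For A of size 2×3×4 and B of size 5×3, A ×1 B has size 2×5×4 and agrees entrywise with the defining summation.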
The M-mode SVD (Algorithm 3), proposed by Vasilescu and
Terzopoulos [105], is a “generalization” of the conventional
matrix (i.e., 2-mode) SVD, which may be written in tensor
notation as
D = U0 S U1^T ⇔ D = S ×0 U0 ×1 U1.
The M-mode SVD orthogonalizes the M spaces and decomposes
a tensor as the mode-m product, denoted ×m, of M
orthonormal mode matrices and a core tensor Z,
D = Z ×0 U0 · · · ×m Um · · · ×M UM,    (15)
D[m] = Um Z[m] (UM ⊗ · · · ⊗ Um+1 ⊗ Um−1 ⊗ · · · ⊗ U0)^T,    (16)
vec(D) = (UM ⊗ · · · ⊗ U1 ⊗ U0) vec(Z).    (17)
The latter two equations express the decomposition in matrix
form and in terms of vec operators.
C. Compositional Hierarchical Block TensorFaces
Training Data: In our experiments, we employed gray-level
facial training images rendered from 3D scans of 100 subjects.
The scans were recorded using a Cyberware™ 3030PS laser
scanner and are part of the 3D morphable faces database
created at the University of Freiburg [11]. Each subject was
combinatorially imaged in Maya from 15 different viewpoints
(θ = −60◦ to +60◦ in 10◦ steps on the horizontal plane,
φ = 0◦) with 15 different illuminations (θ = −35◦ to +35◦
in 5◦ increments on a plane inclined at φ = 45◦).
Data Preprocessing: Facial images were warped to an average
face template by a piecewise affine transformation given a set
of facial landmarks obtained by employing the Dlib software [57],
[53], [88], [67], [35]. Illumination was normalized with an
adaptive contrast histogram equalization algorithm; rather
than performing contrast correction on the entire image, subtiles
of the image were contrast normalized, and tiling artifacts
were eliminated through interpolation. Histogram clipping was
employed to avoid over-saturated regions.
+ employed to avoid over-saturated regions.
1482
+ Experiments:We ran five experiments with five facial part-based
1483
+ hierarchies from which a person representation was computed,
1484
+ Fig. 8. Each image, d ∈ RI0×1, was convolved with a Gaussian
1485
+ and a Laplacian filter bank {Hs∥s = 1...S} that contained five
1486
+ filters, S = 5. The filtered images, d ×0 Hs, resulted in five
1487
+ facial part hierarchies composed of (i) independent pixel parts
1488
+ (ii) parts segmented from different layers of a Gaussian pyramid
1489
+ that were equally or (iii) unequally weighed, (iv) parts were
1490
+ segmented from a Laplacian pyramid that were equally or (v)
1491
+ unequally weighed.
1492
+ The composite person signature was computed for every test
1493
+ image by employing the multilinear projection algorithm [101],
1494
+ [109], and signatures were compared with a nearest neighbor
1495
+ classifier.
1496
+ To validate the effectiveness of our system on real-world images,
1497
+ we report results on “LFW” dataset (LFW) [43]. This dataset
1498
+ contains 13,233 facial images of 5,749 people. The photos are
1499
+ unconstrained (i.e., “in the wild”), and include variation due
1500
+ to pose, illumination, expression, and occlusion. The dataset
1501
+ consists of 10 train/test splits of the data. We report the mean
1502
+ accuracy and standard deviation across all splits in Table 9.
1503
+ Fig. 8(b-c) depicts the experimental ROC curves. We follow
1504
+ the supervised “Unrestricted, labeled outside data” framework.
1505
+ Results: While we cannot celebrate closing the gap on human
1506
+ performance, our results are promising. DeepFace, a CNN model,
1507
+ improved the prior art verification rates on LFW from 70% to
1508
+ 97.35%, by training on 4.4M images of 200 × 200 pixels from
1509
+ 4, 030 people, the same order of magnitude as the number of
1510
+ people in the LFW database.
1511
+ We trained on less than one percent (1%) of the 4.4M total
1512
+ images used to train DeepFace. Images were rendered from
1513
+ 3D scans of 100 subjects with an the intraocular distance of
1514
+ approximately 20 pixels and with a facial region captured
1515
+ by 10, 414 pixels (image size ≈ 100 × 100 pixels). We have
1516
+ currently achieved verification rates just shy of 80% on LFW.
1517
+ Summary: Compositional Hierarchical Block TensorFaces
1518
+ models cause-and-effect as a hierarchical block tensor inter-
1519
+ action between intrinsic and extrinsic causal factors of data
1520
+ formation [103][98].
1521
+ A data tensor expressed as a part-based a hierarchy is a unified
1522
+ tensor model of wholes and parts. The resulting causal factor
1523
+ representations are interpretable, hierarchical, and statistically
1524
+ invariant to all other causal factors. While we have not closed
1525
+ (c)
1529
+ Fig. 8: Compositional Hierarchical Block TensorFaces learns a hierarchy of features, and represents each person as a part-based compositional representation. The figure depicts the training data factorization, D = T_H ×_L U_L ×_V U_V ×_P U_P, where an observation is represented as d(p, v, l) = T_H ×_L l^T ×_V v^T ×_P p^T and T_H spans the hierarchical causal factor variance. (b) ROC curves for the University of Freiburg 3D Morphable Faces dataset. (c) ROC curves for the LFW dataset. The average accuracies are listed next to each method, along with the area under the curve (AUC). Parts refers to using Compositional Hierarchical Block TensorFaces models to separately analyze facial parts. Gaussian/Laplacian refers to applying Compositional Hierarchical Block TensorFaces on a Gaussian/Laplacian data pyramid.
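The mode-n products in the factorization above (×_L, ×_V, ×_P) can be sketched in NumPy. This is a generic illustration with toy factor sizes; `mode_n_product` and the dimensions are illustrative, not the paper's implementation:

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode.

    Equivalent to unfolding the tensor along `mode`, left-multiplying
    by `matrix`, and folding back (the x_n operator in the notation
    D = T_H x_L U_L x_V U_V x_P U_P).
    """
    # Move the target mode to the front and flatten the remaining modes.
    t = np.moveaxis(tensor, mode, 0)
    unfolded = t.reshape(t.shape[0], -1)
    # Matrix multiply, then fold back into tensor form.
    folded = (matrix @ unfolded).reshape((matrix.shape[0],) + t.shape[1:])
    return np.moveaxis(folded, 0, mode)

# Toy example: a core tensor with illumination, view, and people modes.
rng = np.random.default_rng(0)
core = rng.standard_normal((4, 5, 6))   # T_H (toy sizes)
U_L = rng.standard_normal((3, 4))       # illumination factor
U_V = rng.standard_normal((3, 5))       # viewpoint factor
U_P = rng.standard_normal((2, 6))       # people factor

D = mode_n_product(mode_n_product(mode_n_product(core, U_L, 0), U_V, 1), U_P, 2)
print(D.shape)  # (3, 3, 2)
```

The chained call mirrors the order of the ×_L, ×_V, ×_P products in the caption; because mode-n products along distinct modes commute, any ordering yields the same tensor.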
+ Training Dataset | Test Dataset            | PCA          | TensorFaces  | Compositional Hierarchical Block TensorFaces
+                  |                         |              |              | Pixels       | Gaussian Pyramid | Weighted Gaussian Pyramid | Laplacian Pyramid | Weighted Laplacian Pyramid
+ Freiburg         | Freiburg                | 65.23%       | 71.64%       | 90.50%       | 88.17%           | 94.17%                    | 90.96%            | 93.98%
+ Freiburg         | LFW (grey level images) | 69.23% ±1.51 | 66.25% ±1.60 | 72.72% ±2.14 | 76.72% ±1.65     | 77.85% ±1.83              | 77.58% ±1.45      | 78.93% ±1.77
+ Fig. 9: Empirical results reported for Freiburg and Labeled Faces in the Wild (LFW) using PCA, TensorFaces, and Compositional Hierarchical Block TensorFaces representations. Pixels denotes independent facial part analysis; Gaussian/Laplacian use a multi-resolution pyramid to analyze facial features at different scales. Weighted denotes a weighted composite signature.
+ Freiburg Experiment:
+ Train on Freiburg: 6 views (±60°, ±30°, ±5°); 6 illuminations (±60°, ±30°, ±5°); 45 people.
+ Test on Freiburg: 9 views (±50°, ±40°, ±20°, ±10°, 0°); 9 illuminations (±50°, ±40°, ±20°, ±10°, 0°); 45 different people.
+ Labeled Faces in the Wild (LFW) Experiment:
+ Models were trained on approximately half of one percent (≈0.5%) of the 4.4M images used to train DeepFace.
+ Train on Freiburg: 15 views (±60°, ±50°, ±40°, ±30°, ±20°, ±10°, ±5°, 0°); 15 illuminations (±60°, ±50°, ±40°, ±30°, ±20°, ±10°, ±5°, 0°); 100 people.
+ Test on LFW: We report the mean accuracy and standard deviation across standard literature partitions [43], following the Unrestricted, labeled outside data supervised protocol.
+ [Fig. 8(a) diagram residue: hierarchical factorization of people, viewpoint, and illumination variance weights into person, viewpoint, and illumination representations (p^T, v^T, l^T), from the full face at layer 6 (lowest resolution), through parts at layer 5, down to subparts at layer 1 (highest resolution).]
+ [Fig. 8(b) ROC legend (Freiburg): Parts+Gaussian+Weighted 94.17% (AUC=0.9801); Parts+Laplacian+Weighted 93.98% (AUC=0.9799); Parts+Laplacian 90.96% (AUC=0.9654); Parts 90.50% (AUC=0.9577); Parts+Gaussian 88.17% (AUC=0.9484); TensorFaces 71.64% (AUC=0.7920); PCA 65.23% (AUC=0.7140). Axes: false positive rate vs. true positive rate (recall).]
+ [Fig. 8(c) ROC legend (LFW): Parts+Laplacian+Weighted 78.93% (AUC=0.8669); Parts+Gaussian+Weighted 77.85% (AUC=0.8600); Parts+Laplacian 77.58% (AUC=0.8575); Parts+Gaussian 76.72% (AUC=0.8502); Parts 72.72% (AUC=0.8138); PCA 69.23% (AUC=0.7735); TensorFaces 66.25% (AUC=0.7257). Axes: false positive rate vs. true positive rate (recall).]
+ verification results on two test data sets, the Freiburg and the Labeled Faces in the Wild datasets, by training on a very small set of synthetic images. We have currently achieved verification rates just shy of eighty percent on LFW by employing synthetic images from 100 people, 15 viewpoints, and 15 illuminations, for a total that constitutes less than one percent (1%) of the total images employed by DeepFace. CNN verification rates improved the 70% prior art to 97.35% only when they employed 4.4M images from 4,030 people, the same order of magnitude as the number of people in the LFW database.
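The AUC values quoted alongside the ROC curves can be reproduced from sampled (false positive rate, true positive rate) pairs by trapezoidal integration. A minimal sketch with made-up curve points, not the paper's measurements:

```python
import numpy as np

def auc_trapezoid(fpr, tpr):
    """Area under an ROC curve from sampled (fpr, tpr) points."""
    fpr, tpr = np.asarray(fpr, float), np.asarray(tpr, float)
    order = np.argsort(fpr)          # integrate left to right in fpr
    x, y = fpr[order], tpr[order]
    # Trapezoid rule: sum of segment widths times average heights.
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0))

# Toy curve; a chance-level classifier (the diagonal) gives AUC = 0.5.
fpr = [0.0, 0.1, 0.3, 0.6, 1.0]
tpr = [0.0, 0.6, 0.8, 0.95, 1.0]
print(auc_trapezoid(fpr, tpr))
```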
+ REFERENCES
+ [1] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9(1):147–169, 1985.
+ [2] J. D. Angrist. Lifetime earnings and the Vietnam era draft lottery: Evidence from Social Security administrative records. The American Economic Review, pages 313–336, 1990.
+ [3] F. R. Bach and M. I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3(Jul):1–48, 2002.
+ [4] P. Baldi and K. Hornik. Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2(1):53–58, 1989.
+ [5] M. Bartlett, J. Movellan, and T. Sejnowski. Face recognition by independent component analysis. IEEE Transactions on Neural Networks, 13(6):1450–64, 2002.
+ [6] A. J. Bell and T. J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, (6):1004–1034, 1995.
+ [7] J. Benesty, C. Paleologu, L. Dogariu, and S. Ciochină. Identification of linear and bilinear systems: A unified study. Electronics, 10(15), 2021.
+ [8] Y. Bengio and A. Courville. Handbook on Neural Information Processing, chapter 1: Deep Learning of Representations, pages 1–28. Springer Berlin Heidelberg, Berlin, Heidelberg, 2013.
+ [9] Y. Bengio, P. Lamblin, D. Popovici, H. Larochelle, and U. Montreal. Greedy layer-wise training of deep networks. In Advances in Neural Information Processing Systems, volume 19, 2007.
+ [10] P. Bentler and S. Lee. A statistical development of three-mode factor analysis. British J. of Math. and Stat. Psych., 32(1):87–104, 1979.
+ [11] V. Blanz and T. A. Vetter. Morphable model for the synthesis of 3D faces. In Proc. ACM SIGGRAPH 99 Conf., pages 187–194, 1999.
+ [12] L. Bottou et al. Online learning and stochastic approximations. On-line Learning in Neural Networks, 17(9):142, 1998.
+ [13] R. Bro. PARAFAC: Tutorial and applications. In Chemom. Intell. Lab. Syst., Special Issue 2nd Internet Conf. in Chemometrics (INCINC'96), volume 38, pages 149–171, 1997.
+ [14] A. Bulat, J. Kossaifi, G. Tzimiropoulos, and M. Pantic. Incremental multi-domain learning with network latent tensor factorization. In The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), pages 10470–10477. AAAI Press, 2020.
+ [15] C. Cadieu and B. Olshausen. Learning transformational invariants from natural movies. In Proc. 19th Inter. Conf. on Neural Information Processing Systems, NIPS'09, pages 209–216, 2009.
+ [16] D. Card and A. B. Krueger. Minimum wages and employment: A case study of the fast food industry in New Jersey and Pennsylvania, 1993.
+ [17] J. D. Carroll and J. J. Chang. Analysis of individual differences in multidimensional scaling via an N-way generalization of 'Eckart–Young' decomposition. Psychometrika, 35:283–319, 1970.
+ [18] J. D. Carroll, S. Pruzansky, and J. B. Kruskal. CANDELINC: A general approach to multidimensional analysis of many-way arrays with linear constraints on parameters. Psychometrika, 45:3–24, 1980.
+ [19] C. Chatfield and A. Collins. Introduction to Multivariate Analysis, 1983.
+ [20] J. C. Chen, R. Ranjan, A. Kumar, C. H. Chen, V. M. Patel, and R. Chellappa. An end-to-end system for unconstrained face verification with deep convolutional neural networks. In IEEE International Conf. on Computer Vision Workshop (ICCVW), pages 360–368, Dec 2015.
+ [21] W. Chu and Z. Ghahramani. Probabilistic models for incomplete multi-dimensional arrays. In Proceedings of Machine Learning Research, volume 5, pages 89–96, Clearwater Beach, Florida, USA, Apr 2009. PMLR.
+ [22] D. C. Cireşan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber. Flexible, high performance convolutional neural networks for image classification. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, IJCAI'11, pages 1237–1242. AAAI Press, 2011.
+ [23] P. Comon. Independent component analysis, a new concept? Signal Processing, 36:287–314, 1994.
+ [24] J. Davis and H. Gao. Recognizing human action efforts: An adaptive three-mode PCA framework. In Proc. IEEE Inter. Conf. on Computer Vision (ICCV), pages 1463–69, Nice, France, Oct 2003.
+ [25] L. de Lathauwer. Signal Processing Based on Multilinear Algebra. PhD dissertation, Katholieke Univ. Leuven, Belgium, 1997.
+ [26] L. De Lathauwer, P. Comon, B. De Moor, and J. Vandewalle. Higher-order power method: Application in independent component analysis. In Proc. of the International Symposium on Nonlinear Theory and its Applications (NOLTA'95), pages 91–96, 1995.
+ [27] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1–22, 1977.
+ [28] G. Desjardins, A. Courville, and Y. Bengio. Disentangling factors of variation via generative entangling. arXiv:1210.5474, 2012.
+ [29] H. Fan, B. Xiong, K. Mangalam, Y. Li, Z. Yan, J. Malik, and C. Feichtenhofer. Multiscale vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6824–6835, 2021.
+ [30] R. P. Feynman. Knowing versus Understanding. 1961.
+ [31] T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. D. III, and K. Crawford. Datasheets for datasets. Communications of the ACM, 64(12):86–92, 2021.
+ [32] A. Gelman and G. Imbens. Why ask why? Forward causal inference and reverse causal questions. Tech. report, Nat. Bureau of Econ. Research, 2013.
+ [33] G. Grindlay and M. A. O. Vasilescu. A multilinear (tensor) framework for HRTF analysis and synthesis. In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), volume 1, pages I-161–164, 2007.
+ [34] R. Harshman. Foundations of the PARAFAC procedure: Model and conditions for an explanatory factor analysis. Tech. Report, Working Papers in Phonetics 16, UCLA, CA, Dec 1970.
+ [35] A. Hatamizadeh, D. Terzopoulos, and A. Myronenko. End-to-end boundary aware networks for medical image segmentation. In Inter. Workshop on Machine Learning in Medical Imaging, pages 187–194. Springer, 2019.
+ [36] D. O. Hebb. The Organization of Behavior: A Neuropsychological Theory. John Wiley and Sons, Inc., New York, 1949.
+ [37] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Comput., 18(7):1527–54, Jul 2006.
+ [38] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
+ [39] P. W. Holland. Statistics and causal inference: Rejoinder. J. of the American Statistical Association, 81(396):968–970, 1986.
+ [40] R. C. Hoover, K. Caudle, and K. Braman. A new approach to multilinear dynamical systems and control, 2021.
+ [41] E. Hsu, K. Pulli, and J. Popovic. Style translation for human motion. ACM Transactions on Graphics, 24(3):1082–89, 2005.
+ [42] G. B. Huang. Learning hierarchical representations for face verification with convolutional deep belief networks. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages 2518–25, Jun 2012.
+ [43] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, Oct 2007.
+ [44] H. Wang and N. Ahuja. Facial expression decomposition. In Proc. 9th IEEE Inter. Conf. on Computer Vision (ICCV), pages 958–65, v. 2, 2003.
+ [45] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. Wiley, New York, 2001.
+ [46] G. Imbens and D. Rubin. Causal Inference for Statistics, Social and Biomedical Sciences: An Introduction. Cambridge Univ. Press, 2015.
+ [47] G. W. Imbens and J. D. Angrist. Identification and estimation of local average treatment effects. Econometrica, 62(2):467–475, 1994.
+ [48] L. Ingber. Simulated annealing: Practice versus theory. Mathematical and Computer Modelling, 18(11):29–57, 1993.
+ [49] M. A. Iwen, D. Needell, E. Rebrova, and A. Zare. Lower memory oblivious (tensor) subspace embeddings with fewer random bits: Mode-wise methods for least squares. SIAM Journal on Matrix Analysis and Applications, 42(1):376–416, 2021.
+ [50] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In Proc. Inter. Conf. on Computer Vision (ICCV 2009), pages 2146–53. IEEE, 2009.
+ [51] I. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1986.
+ [52] A. Kapteyn, H. Neudecker, and T. Wansbeek. An approach to n-mode component analysis. Psychometrika, 51(2):269–275, Jun 1986.
+ [53] V. Kazemi and J. Sullivan. One millisecond face alignment with an ensemble of regression trees. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, CVPR '14, pages 1867–74, Washington, DC, USA, 2014. IEEE Computer Society.
+ [54] D. S. Kermany, M. Goldbaum, W. Cai, C. C. Valentim, H. Liang, S. L. Baxter, A. McKeown, G. Yang, X. Wu, F. Yan, J. Dong, M. K. Prasadha, J. Pei, M. Y. Ting, J. Zhu, C. Li, S. Hewett, J. Dong, I. Ziyar, A. Shi, R. Zhang, L. Zheng, R. Hou, W. Shi, X. Fu, Y. Duan, V. A. Huu, C. Wen, E. D. Zhang, C. L. Zhang, O. Li, X. Wang, M. A. Singer, X. Sun, J. Xu, A. Tafreshi, M. A. Lewis, H. Xia, and K. Zhang. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell, 172(5):1122–1131.e9, 2018.
+ [55] V. Khrulkov. Geometrical Methods in Machine Learning and Tensor Analysis. PhD dissertation, Skolkovo Institute, 2020.
+ [56] Y. Kim, E. Park, S. Yoo, T. Choi, L. Yang, and D. Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. CoRR, abs/1511.06530, 2015.
+ [57] D. E. King. Dlib-ml: A machine learning toolkit. Journal of Machine Learning Research, 10:1755–1758, 2009.
+ [58] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
+ [59] J. Kossaifi, Z. C. Lipton, A. Kolbeinsson, A. Khanna, T. Furlanello, and A. Anandkumar. Tensor regression networks. Journal of Machine Learning Research, 21(123):1–21, 2020.
+ [60] P. M. Kroonenberg and J. de Leeuw. Principal component analysis of three-mode data by means of alternating least squares algorithms. Psychometrika, 45:69–97, 1980.
+ [61] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In Proceedings of the 24th International Conference on Machine Learning, pages 473–480, New York, NY, USA, 2007.
+ [62] V. Lebedev, Y. Ganin, M. Rakhuba, I. V. Oseledets, and V. S. Lempitsky. Speeding-up convolutional neural networks using fine-tuned CP-decomposition. CoRR, abs/1412.6553, 2014.
+ [63] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
+ [64] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, Nov 1998.
+ [65] Y. A. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller. Efficient BackProp, pages 9–48. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.
+ [66] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012–10022, 2021.
+ [67] I. Macedo, E. V. Brazil, and L. Velho. Expression transfer between photographs through multilinear AAMs. Pages 239–246, Oct 2006.
+ [68] A. Madani, M. Moradi, A. Karargyris, and T. Syeda-Mahmood. Semi-supervised learning with generative adversarial networks for chest X-ray classification with ability of data domain adaptation. In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pages 1038–1042, 2018.
+ [69] J. Magnus and H. Neudecker. Matrix Differential Calculus with Applications in Statistics and Econometrics. John Wiley & Sons, 1988.
+ [70] J. Masci, U. Meier, D. Cireşan, and J. Schmidhuber. Stacked convolutional auto-encoders for hierarchical feature extraction. In T. Honkela, W. Duch, M. Girolami, and S. Kaski, editors, Artificial Neural Networks and Machine Learning (ICANN 2011), pages 52–59, Berlin, Heidelberg, 2011. Springer Berlin Heidelberg.
+ [71] M. F. Mathieu, J. J. Zhao, J. Zhao, A. Ramesh, P. Sprechmann, and Y. LeCun. Disentangling factors of variation in deep representation using adversarial training. Advances in Neural Information Processing Systems, 29, 2016.
+ [72] R. Memisevic and G. E. Hinton. Learning to represent spatial transformations with factored higher-order Boltzmann machines. Neural Computation, 22(6):1473–1492, Jun 2010.
+ [73] V. Nair and G. E. Hinton. 3D object recognition with deep belief nets. In Proc. 22nd Inter. Conf. on Neural Information Processing Systems, NIPS'09, pages 1339–47, USA, 2009. Curran Associates Inc.
+ [74] A. Novikov, D. Podoprikhin, A. Osokin, and D. P. Vetrov. Tensorizing neural networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 442–450. Curran Associates, Inc., 2015.
+ [75] E. Oja. A simplified neuron model as a principal component analyzer. Journal of Mathematical Biology, 15:267–273, 1982.
+ [76] C. C. Onu, J. E. Miller, and D. Precup. A fully tensorized recurrent neural network. CoRR, abs/2010.04196, 2020.
+ [77] I. V. Oseledets. Tensor-train decomposition. SIAM J. on Scientific Computing, 33(5):2295–2317, 2011.
+ [78] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge Univ. Press, 2000.
+ [79] M. Ranzato, C. Poultney, S. Chopra, and Y. LeCun. Efficient learning of sparse representations with an energy-based model. In Proc. 19th Inter. Conf. on Neural Information Processing Systems, NIPS'06, pages 1137–44, Cambridge, MA, USA, 2006. MIT Press.
+ [80] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
+ [81] D. B. Rubin. For objective causal inference, design trumps experimental analysis. The Annals of Applied Statistics, 2(3):808–840, 2008.
+ [82] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. 1986.
+ [83] T. Sanger. Optimal unsupervised learning in a single layer linear feedforward neural network. 12:459–473, 1989.
+ [84] J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.
+ [85] B. Scholkopf, A. J. Smola, and K. R. Muller. Kernel principal component analysis. Lecture Notes in Computer Science, 1327:583–588, 1997.
+ [86] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299–1319, 1998.
+ [87] T. Sejnowski, S. Chattarji, and P. Stanton. Induction of Synaptic Plasticity by Hebbian Covariance in the Hippocampus, pages 105–124. Addison-Wesley, 1989.
+ [88] W. Si, K. Yamaguchi, and M. A. O. Vasilescu. Face tracking with multilinear (tensor) active appearance models. Jun 2013.
+ [89] P. Spirtes, C. N. Glymour, R. Scheines, and D. Heckerman. Causation, Prediction, and Search. MIT Press, 2000.
+ [90] Y. Sun, X. Wang, and X. Tang. Hybrid deep learning for face verification. In Proc. IEEE International Conf. on Computer Vision (ICCV), pages 1489–96, Dec 2013.
+ [91] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. DeepFace: Closing the gap to human-level performance in face verification. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pages 1701–08, 2014.
+ [92] Y. Tang, R. Salakhutdinov, and G. Hinton. Tensor analyzers. In Proceedings of Machine Learning Research, volume 28, pages 163–171, Atlanta, Georgia, USA, Jun 2013.
+ [93] N. Tishby and N. Zaslavsky. Deep learning and the information bottleneck principle. In 2015 IEEE Information Theory Workshop (ITW), pages 1–5, 2015.
+ [94] E. J. Topol. High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1):44–56, 2019.
+ [95] L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31:279–311, 1966.
+ [96] M. Vasilescu and D. Terzopoulos. Adaptive meshes and shells: Irregular triangulation, discontinuities, and hierarchical subdivision. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'92), pages 829–832, Champaign, IL, Jun 1992.
+ [97] M. A. O. Vasilescu. Human motion signatures: Analysis, synthesis, recognition. In Proc. Int. Conf. on Pattern Recognition, volume 3, pages 456–460, Quebec City, Aug 2002.
+ [98] M. A. O. Vasilescu. Incremental multilinear SVD. In Proc. Conf. on ThRee-way methods In Chemistry And Psychology (TRICAP 06), 2006.
+ [99] M. A. O. Vasilescu. A Multilinear (Tensor) Algebraic Framework for Computer Graphics, Computer Vision, and Machine Learning. PhD dissertation, University of Toronto, 2009.
+ [100] M. A. O. Vasilescu. Multilinear projection for face recognition via canonical decomposition. In Proc. IEEE Inter. Conf. on Automatic Face Gesture Recognition (FG 2011), pages 476–483, Mar 2011.
+ [101] M. A. O. Vasilescu. Multilinear projection for face recognition via canonical decomposition. In Proc. IEEE Inter. Conf. on Automatic Face Gesture Recognition (FG 2011), pages 476–483, Mar 2011.
+ [102] M. A. O. Vasilescu and E. Kim. Compositional hierarchical tensor factorization: Representing hierarchical intrinsic and extrinsic causal factors. In The 25th ACM SIGKDD Conf. on Knowledge Discovery and Data Mining (KDD 2019): Tensor Methods for Emerging Data Science Challenges Workshop, Aug 2019.
+ [103] M. A. O. Vasilescu, E. Kim, and X. S. Zeng. CausalX: Causal eXplanations and block multilinear factor analysis. In 2020 25th International Conference on Pattern Recognition (ICPR 2020), pages 10736–10743, Jan 2021.
+ [104] M. A. O. Vasilescu and D. Terzopoulos. Multilinear analysis for facial image recognition. In Proc. Int. Conf. on Pattern Recognition, volume 2, pages 511–514, Quebec City, Aug 2002.
+ [105] M. A. O. Vasilescu and D. Terzopoulos. Multilinear analysis of image ensembles: TensorFaces. In Proc. European Conf. on Computer Vision (ECCV 2002), pages 447–460, Copenhagen, Denmark, May 2002.
+ [106] M. A. O. Vasilescu and D. Terzopoulos. Multilinear subspace analysis of image ensembles. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, volume II, pages 93–99, Madison, WI, 2003.
+ [107] M. A. O. Vasilescu and D. Terzopoulos. TensorTextures: Multilinear image-based rendering. ACM Transactions on Graphics, 23(3):336–342, Aug 2004. Proc. ACM SIGGRAPH 2004 Conf., Los Angeles, CA.
+ [108] M. A. O. Vasilescu and D. Terzopoulos. Multilinear independent components analysis. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pages 547–553, v. I, San Diego, CA, 2005.
+ [109] M. A. O. Vasilescu and D. Terzopoulos. Multilinear projection for appearance-based recognition in the tensor framework. In Proc. 11th IEEE Inter. Conf. on Computer Vision (ICCV'07), pages 1–8, 2007.
+ [110] J.-P. Vert, K. Tsuda, and B. Schölkopf. A primer on kernel methods. Kernel Methods in Computational Biology, 47:35–70, 2004.
+ [111] D. Vlasic, M. Brand, H. Pfister, and J. Popovic. Face transfer with multilinear models. ACM Transactions on Graphics (TOG), 24(3):426–433, Jul 2005.
+ [112] H. Wang and N. Ahuja. A tensor approximation approach to dimensionality reduction. Inter. J. of Computer Vision, 6(3):217–29, Mar 2008.
+ [113] W. Wang, E. Xie, X. Li, D.-P. Fan, K. Song, D. Liang, T. Lu, P. Luo, and L. Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 568–578, 2021.
+ [114] J. Yang, X. Gao, D. Zhang, and J. Yang. Kernel ICA: An alternative formulation and its application to face recognition. Pattern Recognition, 38(10):1784–87, 2005.
+ [115] J. Ye. Generalized low rank approximations of matrices. Machine Learning, 61(1):167–191, 2005.
+ [116] Q. Zhao, G. Zhou, S. Xie, L. Zhang, and A. Cichocki. Tensor ring decomposition. arXiv preprint arXiv:1606.05535, 2016.
BNAyT4oBgHgl3EQfd_jC/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
C9E4T4oBgHgl3EQf5w7z/content/tmp_files/2301.05327v1.pdf.txt ADDED
@@ -0,0 +1,637 @@
+ Blind Judgement: Agent-Based Supreme Court Modelling With GPT
+ Sil Hamilton
+ McGill University
[email protected]
+ Abstract
+ We present a novel Transformer-based multi-agent system for simulating the judicial rulings of the 2010-2016 Supreme Court of the United States. We train nine separate models with the respective authored opinions of each Supreme Court justice active ca. 2015 and test the resulting system on 96 real-world cases. We find our system predicts the decisions of the real-world Supreme Court with better-than-random accuracy. We further find a correlation between model accuracy with respect to individual justices and their alignment with legal conservatism and liberalism. Our methods and results hold significance for researchers interested in using language models to simulate politically-charged discourse between multiple agents.
+ Introduction
+ Recent and ongoing political turmoil in the United States has magnified the actions of the federal Supreme Court in the public eye. The Court has taken to overturning judicial precedent in recent years, with the number of such decisions in the last six years reaching over twice the number of overturns between 2010 and 2015.¹ The weakening rule of stare decisis has encouraged judicial researchers to develop holistic models of Supreme Court behaviour to better predict and account for future trends (Blake 2019; Allcorn and Stein 2021).
+ Accurate models of Supreme Court behaviour are rare despite this focus. The best performing models only reach accuracy levels of ≈70% on out-of-distribution cases (Katz, Bommarito, and Blackman 2017). Models achieving even this middling accuracy are complex in their architecture, generally consisting of a mix of SVM and logistic regression models. This complexity is necessitated by the variables involved.
+ Confounding variables discussed in the literature include little-agreed-upon theories regarding the legal doctrines practiced by individual justices (Jr, Curry, and Marshall 2011) and their rarely-documented social realities (Kromphardt 2017; Peterson, Giallouri, and Menounou 2021). While the in-court behaviour of the justices is well documented, exogenous factors have an equal impact on
+ Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+ ¹ 2010-2015: 8 overturns; 2016-2022: 22+ overturns.
+ Figure 1 correlation matrix (rows and columns ordered BREYER, GINSBURG, KAGAN, SOTOMAYOR, ALITO, KENNEDY, ROBERTS, SCALIA, THOMAS):
+ BREYER    |  1.0000  0.4515  0.4521  0.3441 -0.1069  0.0599 -0.1066 -0.2252 -0.2390
+ GINSBURG  |  0.4515  1.0000  0.5825  0.5426 -0.2388 -0.0527 -0.2074 -0.0908 -0.2774
+ KAGAN     |  0.4521  0.5825  1.0000  0.4556 -0.2212  0.0529 -0.1857 -0.0253 -0.2274
+ SOTOMAYOR |  0.3441  0.5426  0.4556  1.0000 -0.2198 -0.0256 -0.1201 -0.1524 -0.2240
+ ALITO     | -0.1069 -0.2388 -0.2212 -0.2198  1.0000  0.1095  0.4693  0.2998  0.4692
+ KENNEDY   |  0.0599 -0.0527  0.0529 -0.0256  0.1095  1.0000  0.0361  0.0688 -0.0166
+ ROBERTS   | -0.1066 -0.2074 -0.1857 -0.1201  0.4693  0.0361  1.0000  0.4630  0.3800
+ SCALIA    | -0.2252 -0.0908 -0.0253 -0.1524  0.2998  0.0688  0.4630  1.0000  0.4462
+ THOMAS    | -0.2390 -0.2774 -0.2274 -0.2240  0.4692 -0.0166  0.3800  0.4462  1.0000
76
+ Figure 1: Correlation matrix of justices voting on 290 cases
77
+ between 2010 and 2016. Note the clustering of justices nom-
78
+ inated by Democrat and Republican presidents.
79
+ case decision-making. A model capable of both cognitive and social reasoning
+ would therefore benefit justice behaviour modelling. To this end, we
+ investigate whether recent advances in social simulation with language models
+ can promote simple and effective models of Supreme Court behaviour.
+ Background
+ In this section we describe the rationale behind our project.
+ Judicial Modelling
+ Three major theories of judicial behaviour generally inform the design of
+ Supreme Court models: the legal theory, the attitudinal theory, and the
+ strategic theory (Jr, Curry, and Marshall 2011). The legal theory suggests
+ justices are bound by constitutional precedent. The attitudinal theory
+ instead argues justices account for policy preference first, precedent
+ second. Between the two lies the strategic theory, which says justices vote
+ according to a mix of precedent and preference.
+ arXiv:2301.05327v1 [cs.CL] 12 Jan 2023
+
+ As we show in Figure 1, decision correlations between justices active between
+ 2010 and 2016 (hereafter referred to as the Roberts IV court) indicate the
+ strategic theory is most accurate to reality. While justices will invoke
+ precedent when writing their rationales, evidence suggests justices remain
+ somewhat beholden to the political alignment of their nominator. We note,
+ however, that the correlations are only medium in their strength. This
+ indicates accurate models should account for precedent, but not exclusively.
+ Integrating one of these three theories into a Supreme Court model requires
+ choosing how best to cast the influence of precedent and preference as
+ variables. Given this conversion can itself introduce significant drawbacks
+ via unforeseen factors, we instead choose a simulative tool which allows us
+ to make fewer assumptions as to the most correct theory of judicial
+ behaviour: language models.
+ Simulation
116
+ Large Language Models (LLMs) are adept at simulating
117
+ complex social phenomena. Recent research has demon-
118
+ strated their ability to predict populated social media plat-
119
+ forms (Park et al. 2022), the distribution of votes for presi-
120
+ dential candidates in the 2012-2020 American elections (Ar-
121
+ gyle et al. 2022), and the general sentiment of news arti-
122
+ cles reporting COVID-19 in the early stages of the pan-
123
+ demic (Hamilton and Piper 2022). These developments
124
+ show model bias is valuable for those in the social sciences
125
+ given bias is derived from the underlying distributions of
126
+ their training material.
127
+ Prior simulation research benefits from new techniques for eliciting
+ cognitive activity from LLMs nominally designed for next-token prediction.
+ These include chain-of-thought reasoning (Wei et al. 2022),
+ discretely-structured prompts (Liu et al. 2021), and fine-tuning (Drori et
+ al. 2022). These techniques have the model draw on internal biases to make
+ predictions, allowing researchers to embed fewer assumptions into their
+ simulative models. LLMs are alluring given judicial modelling necessitates
+ a system capable of both social and cognitive reasoning.
+ Agent-Based Modelling
+ The process by which the Court arrives at a decision is nominally rational
+ (Jr, Curry, and Marshall 2011). While predilections are known to influence
+ vote outcomes, justices are expected to justify dissenting decisions in
+ written documents called opinions. Opinions are typically one to five pages
+ in length, within which the justice (or their aide) lays out their argument
+ in a manner similar to an essay. For our simulation task, we assume justices
+ record their rationale honestly and so treat the opinions as our primary
+ target of prediction, meaning any model we train will be predicting opinions.
+ Because opinions are long documents (i.e. longer than the 1024-token context
+ window GPT-2 is trained for), having one model produce multiple opinions in
+ the same run is untenable. We turn to agent-based modelling for a solution.
+ Whether consolidating multiple generative LLMs into a single architecture is
+ beneficial has been heretofore understudied, with the only significant prior
+ experiment with LLMs showing promise (Betz 2022). However, given the success
+ of the Mixture of Experts (MoE) method in machine translation (NLLB Team
+ 2022), we argue further exploration of similar techniques for simulation
+ tasks is warranted. While social simulation experiments suggest a single
+ language model is capable of producing a wide range of opinions, training
+ multiple models separately prevents cross-contamination when studying
+ multiple data sources.
+
+ Figure 2: Flow of our multi-agent system. [Diagram: a case syllabus is
+ passed to Judge 1 through Judge n; each produces an opinion and a vote,
+ which are aggregated into a majority opinion.]
+ Method
+ We present the general design of our architecture in three parts: data
+ collection, system architecture, and measures.
+ Dataset
+ We source data from two datasets for this experiment. The first corpus is
+ the Supreme Court Database (SCDB) released by researchers at Washington
+ University in St. Louis, which provides variables for 9,095 cases decided
+ between 1946 and 2021 (Spaeth et al. 2014).
+ We supplement the SCDB with all written opinions from all slips provided on
+ the Supreme Court website.2 Extracting the opinions from the PDF documents
+ with an optical character recognition (OCR) utility leaves us with 145MiB of
+ text written between 2003 and 2022. We then associate each opinion with the
+ justice and case from which it originated.
+ System Architecture
+ We choose to simulate the Roberts IV court (2010-2016) given this period
+ outlasts all other Supreme Court iterations in recent history.3
+ 2 Found at https://www.supremecourt.gov/opinions/slipopinion
+ 3 See http://scdb.wustl.edu/documentation.php?var=naturalCourt
+
+ Design
+ Our multi-agent system is composed of nine full-sized GPT-2 models (Radford
+ et al. 2019). We present the system architecture in Figure 2. At a high
+ level, our system receives the topic of a case being brought before the
+ court and passes it along to nine justice models. The system then receives
+ back nine opinions and corresponding decisions of whether to approve the
+ appellant. The system totals the results and returns the majority vote.4
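The final tallying step reduces to a simple majority count over the nine returned decisions. The fragment below is our own illustrative sketch of that step (the label strings `'approve'`/`'deny'` are assumptions), not the authors' released code:

```python
from collections import Counter

def majority_vote(votes):
    # votes: one 'approve' / 'deny' decision per justice model.
    # With nine agents and two labels, a strict majority always
    # exists unless a model returns an invalid label.
    counts = Counter(votes)
    decision, _ = counts.most_common(1)[0]
    return decision

print(majority_vote(["approve"] * 5 + ["deny"] * 4))  # approve
```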
+ Prompt
+ We train each justice model with a discrete prompt structured like a Python
+ dictionary:
+ {
+     'issue': 'Lorem ipsum...',
+     'topic': 'Lorem ipsum...',
+     'opinion': 'Lorem ipsum...',
+     'decision': 'Lorem ipsum...'
+ }
+ The issue value corresponds to the issueArea variable provided by SCDB.5 The
+ topic value is a short description of what the appellant is bringing before
+ the court. We extract this information from the syllabus of each opinion
+ slip and summarize it with GPT-3 Davinci (Brown et al. 2020). The opinion
+ value is the corresponding rationale the justice produces when formulating
+ their decision value, here a categorical variable signalling (dis)approval.
+ We provide an example in the appendix.
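A prompt of this dictionary style can be serialized in a few lines of Python. The helper below is a hypothetical sketch: the field names follow the paper, but the exact serialization (JSON quoting, indentation) and the function name are our assumptions:

```python
import json

def build_prompt(issue, topic, opinion="", decision=""):
    # Field names mirror the paper's prompt structure; at inference
    # time 'opinion' and 'decision' are left empty for the model to
    # complete.
    record = {
        "issue": issue,
        "topic": topic,
        "opinion": opinion,
        "decision": decision,
    }
    return json.dumps(record, indent=4)

prompt = build_prompt(
    "First Amendment",
    "A state law restricting video game sales is challenged.",
)
print(prompt)
```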
+ Training
+ All models are trained for a total of 30 epochs at a learning rate of 2e-4
+ with the Adam optimizer (Kingma and Ba 2014). This training process is
+ conducted in two steps:
+ 1. Construct ≤ 1000-token prompts of the above style for all cases in which
+ the Roberts IV court came to a unanimous decision, and fine-tune GPT-2 on
+ them. This model serves as the base for all further trained models.
+ 2. Collect all prompts generated in step 1 for each of the opinions
+ (2003-2016) written by each justice active during Roberts IV. We thereby
+ collect nine training sets and further train the model generated in step 1
+ with each separately.
+ Average model loss after both steps is 1.5, indicating there remains
+ significant room for improvement.
+ Measures
+ We assess the performance of our multi-agent system on 96 test cases
+ withheld from the training set with two measures: accuracy and a novel
+ measure for judicial ideological alignment.
+ Accuracy
+ We measure accuracy with a receiver operating characteristic (ROC) curve
+ together with Cohen's κ to account for a slight distribution bias in our
+ test set.
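For reference, Cohen's κ for two label sequences follows directly from the observed agreement and the agreement expected by chance. This is our own minimal implementation, not the paper's evaluation code:

```python
def cohens_kappa(y_true, y_pred):
    # kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    # agreement rate and p_e the agreement expected by chance from
    # the marginal label frequencies.
    n = len(y_true)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    labels = set(y_true) | set(y_pred)
    p_e = sum((y_true.count(l) / n) * (y_pred.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

k = cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0])
```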
+ Alignment
+ Justices are understood as being more or less in favour of overturning
+ precedent. We capture this alignment by taking the Pearson coefficient (r)
+ between model accuracy and the frequency with which the respective justice
+ voted against precedent-altering decisions between 2003 and 2016. Our
+ measure is intended to capture where a justice is aligned between
+ conservative (e.g. textualism, formalism, originalism) or liberal (e.g.
+ legal realism) frameworks of judicial decision making (Post and Siegel
+ 2006).
+ 4 We provide our code at [withheld from review copy]
+ 5 See http://scdb.wustl.edu/documentation.php?var=issueArea
+
+ Justice               Accuracy   κ
+ Samuel Alito          65%        0.30
+ Ruth Bader Ginsburg   62%        0.21
+ Clarence Thomas       59%        0.18
+ Stephen Breyer        58%        0.16
+ John Roberts          57%        0.13
+ Elena Kagan           56%        0.12
+ Anthony Kennedy       54%        0.09
+ Sonia Sotomayor       51%        0.00
+ Antonin Scalia        50%       -0.03
+
+ Table 1: Model accuracy by justice. Note the wide variation in accuracy
+ between justices.
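The alignment measure reduces to a standard Pearson correlation between two nine-element vectors (per-justice model accuracy and per-justice anti-overturn frequency). A plain implementation, assuming the inputs are simple Python lists of equal length:

```python
def pearson_r(xs, ys):
    # Pearson correlation: covariance of the two series normalized
    # by the product of their standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Perfectly linear toy inputs correlate at r = 1.
r = pearson_r([0.65, 0.62, 0.59], [0.9, 0.8, 0.7])
```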
+ Results
+ All results are reported with a minimum confidence rate of 80% and are
+ controlled for training material size and topic. Generations are run with a
+ temperature of 0.5 and a maximum length of 1000 tokens.
+ Accuracy
+ Our system achieves an aggregate accuracy of 60% (κ ≈ 0.18) on 96 test
+ cases. While less predictive than the state of the art, our model
+ nonetheless achieves better-than-random performance despite having been
+ trained solely on opinions. We find a wide variation in the accuracy of
+ each simulated justice when examining system performance more closely. As
+ shown in Table 1, model accuracy varies between 65% and 50% despite having
+ controlled for training data volume and case outcome.
+ Alignment
+ We measure a moderate correlation (r ≈ 0.56) between simulated justice
+ accuracy and the frequency with which each respective justice did not agree
+ with the Court overruling or re-interpreting precedent. This result suggests
+ our system achieves better accuracy with justices who are less likely to
+ overturn precedent. We discuss the implications of this result below.
+ Validation
+ We train a single-agent model to ensure having many agents provides
+ non-negligible benefits. We fine-tune this single agent with the majority
+ opinions of all cases decided on by the Roberts IV court. Testing this
+ single agent on the test set results in an overall accuracy of 54%
+ (κ = 0.08). The predicted decisions differ from the original test set with
+ a Cohen's d of ≈ -0.86, versus d ≈ 0.19 for our multi-agent model,
+ increasing the population overlap from 68.5% to 92.4%.
+
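Cohen's d here is the standardized difference between the predicted-decision distribution and the test-set distribution. The pooled-variance formula can be sketched as follows (our illustration, with made-up inputs):

```python
def cohens_d(a, b):
    # Standardized mean difference using a pooled sample variance
    # (Bessel-corrected), the usual two-sample form of Cohen's d.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (ma - mb) / pooled

d = cohens_d([1, 2, 3], [2, 3, 4])  # -1.0
```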
+ We implement software controls to ensure program output validity, given that
+ training to a low loss does not guarantee the model produces both the
+ opinion and decision variables. We therefore rerun each case until all
+ models have returned a valid result. Once having processed all 96 cases, we
+ sample agent-produced opinions belonging to half of them to ensure
+ coherency.
+ Discussion
+ In this section we discuss two major consequences of our research.
+ Precedent Hallucination
+ GPT-2 is not an expert on legal precedent, nor should one expect it to be
+ when the only formal source of legal information ingested by the model
+ during pre-training was some seven thousand pages from FindLaw, a website
+ principally known for tort law (Clark 2022).6 This becomes evident when
+ surveying model output. While the model will occasionally reference real
+ laws, these citations prove to be happenstance, as GPT-2 will confuse
+ details and thus render the references meaningless.
+ That the model generates its own precedent when arguing over a case is an
+ example of hallucination, a well-known property of language models
+ (Rohrbach et al. 2018). Because causal language models are only tasked with
+ predicting the next most likely token given some prior sequence, they are
+ given no incentive to withhold factually incorrect statements—the model
+ will say whatever is necessary to return the number of tokens requested in
+ a cogent manner.
+ Our justice models will hallucinate precedent when producing opinions. They
+ produce this pretend precedent implicitly by citing it throughout the
+ argumentation process. That our models achieve greater-than-random decision
+ accuracy in voting outcomes despite not producing legally valid arguments
+ suggests Supreme Court decisions may not always rest on legally coherent
+ rationales.
+ Alignment Correlation
+ The correlation between model accuracy and judicial alignment indicates
+ conservative justices are more predictable given their general
+ unwillingness to overturn precedent. Considering the model hallucinates
+ precedent, this correlation suggests conservative justices are conservative
+ for ideological rather than rational reasons.
+ We find this result surprising given conservative justices often make it a
+ point to rationalize their unwillingness to overturn precedent with legal
+ justifications. Common formalist theories of this sort include both
+ originalism and textualism, doctrines practiced by conservative members of
+ the current court (Esbeck 2011). Our results suggest these decision-making
+ patterns are less grounded in rational logic than anticipated, given they
+ are partially captured in a model not familiar with common law.
+ Conclusion
+ The aim of our project was to produce a multi-agent system capable of
+ predicting Supreme Court decision-making with little to no prior
+ theory-based assumptions of judicial behaviour. Given our resulting model
+ achieves better-than-random accuracy despite having been trained only on
+ opinion matter, we argue our process serves as an example for researchers
+ seeking to develop simulative experiments with language models.
+ 6 https://www.findlaw.com/
+ Limitations
+ As should be expected of any project promoting the creative output of AI,
+ we make note of the biased material used in the production of large
+ language models like GPT-2. While we contend this culturally-derived bias
+ is beneficial for researchers using foundation models in the social
+ sciences, we nonetheless ensure our model does not cause unwanted harm. As
+ such, we clearly mark all samples as having been generated and refrain from
+ releasing large collections of generated material to the public.
+ Next Steps
+ We propose the following next steps after having demonstrated the basic
+ viability of our architecture.
+ Larger Model
+ Can we improve system accuracy with larger language models? Recent research
+ suggests language models develop emergent cognitive features when scaled
+ above 6.7 billion parameters, narrowing future possible candidates to the
+ likes of GPT-NeoX-20B and T5X (Dettmers et al. 2022; Black et al. 2022;
+ Roberts et al. 2022).
+ Larger Training Corpus
+ Another avenue for increasing system accuracy involves fine-tuning GPT-2
+ with the whole corpus of American law as captured by proceedings and
+ opinions written in lower courts. The principle of stare decisis means the
+ practice of common law is a social venture, suggesting language models
+ would do well in predicting precedent-dependent cases if prepared.
+ Improved Prompting
+ Research indicates language models can avoid the long tail of token
+ probabilities by repetitively querying the model (Portelli et al. 2022; Kim
+ et al. 2020). Integrating repetitive prompting strategies into the
+ opinion-generating schema is a promising avenue for improvement. Another
+ avenue would be to assess how reinforcement learning from human feedback
+ (RLHF) models like InstructGPT simulate court proceedings (Ouyang et al.
+ 2022).
+ Investigating Future Cases
+ How would the Roberts IV court fare with cases brought before the court
+ after 2016? Would their court overturn precedent at the rate the post-2016
+ Supreme Court has? Questions of this caliber would be made approachable
+ with a more accurate Roberts IV system.
+ Acknowledgements
+ I thank Prof. Andrew Piper of McGill University and Prof. Kristen Thomasen
+ of UBC for their invaluable advice during the research process. I
+ furthermore thank the reviewers and workshop committee members for their
+ recommendations.
+
+ References
+ Allcorn, S.; and Stein, H. F. 2021. Unpacking the Supreme Court: The Age of
+ Trump, Law, and Psychohistory. Journal of Psychohistory, 49(1).
+ Argyle, L. P.; Busby, E. C.; Fulda, N.; Gubler, J.; Rytting, C.; and
+ Wingate, D. 2022. Out of One, Many: Using Language Models to Simulate Human
+ Samples. arXiv preprint arXiv:2209.06899.
+ Betz, G. 2022. Natural-Language Multi-Agent Simulations of Argumentative
+ Opinion Dynamics. Journal of Artificial Societies and Social Simulation,
+ 25(1): 2. ArXiv:2104.06737 [cs].
+ Black, S.; Biderman, S.; Hallahan, E.; Anthony, Q.; Gao, L.; Golding, L.;
+ He, H.; Leahy, C.; McDonell, K.; Phang, J.; and Pieler, M. 2022.
+ GPT-NeoX-20B: An Open-Source Autoregressive Language Model.
+ Blake, W. D. 2019. 'Don't Confuse Me with the Facts': The Use and Misuse of
+ Social Science on the United States Supreme Court. Md. L. Rev., 79: 216.
+ Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.;
+ Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language
+ models are few-shot learners. Advances in Neural Information Processing
+ Systems, 33: 1877–1901.
+ Clark, J. 2022. GPT-2 Domains. Original-date: 2019-02-11T04:21:59Z.
+ Dettmers, T.; Lewis, M.; Belkada, Y.; and Zettlemoyer, L. 2022. LLM.int8():
+ 8-bit Matrix Multiplication for Transformers at Scale.
+ Drori, I.; Zhang, S.; Shuttleworth, R.; Tang, L.; Lu, A.; Ke, E.; Liu, K.;
+ Chen, L.; Tran, S.; Cheng, N.; et al. 2022. A neural network solves,
+ explains, and generates university math problems by program synthesis and
+ few-shot learning at human level. Proceedings of the National Academy of
+ Sciences, 119(32): e2123433119.
+ Esbeck, C. H. 2011. Uses and Abuses of Textualism and Originalism in
+ Establishment Clause Interpretation. Utah L. Rev., 489.
+ Hamilton, S.; and Piper, A. 2022. The COVID That Wasn't: Counterfactual
+ Journalism Using GPT. In Proceedings of the 6th Joint SIGHUM Workshop on
+ Computational Linguistics for Cultural Heritage, Social Sciences,
+ Humanities and Literature, 83–93. Gyeongju, Republic of Korea:
+ International Conference on Computational Linguistics.
+ Jr, R. L. P.; Curry, B. W.; and Marshall, B. W. 2011. Decision Making by
+ the Modern Supreme Court. Cambridge University Press. ISBN
+ 978-1-139-49879-1.
+ Katz, D. M.; Bommarito, M. J., II; and Blackman, J. 2017. A general
+ approach for predicting the behavior of the Supreme Court of the United
+ States. PLOS ONE, 12(4): 1–18.
+ Kim, L.-S.; Kim, S.-s.; Jang, H.-S.; Park, S.-W.; and Kang, I.-H. 2020.
+ Long-tail Query Expansion using Extractive and Generative Methods. In
+ Annual Conference on Human and Language Technology, 267–273.
+ Kingma, D. P.; and Ba, J. 2014. Adam: A Method for Stochastic Optimization.
+ Kromphardt, C. D. 2017. Evaluating the effect of law clerk gender on voting
+ at the United States Supreme Court. Justice System Journal, 38(2): 183–201.
+ Liu, P.; Yuan, W.; Fu, J.; Jiang, Z.; Hayashi, H.; and Neubig, G. 2021.
+ Pre-train, prompt, and predict: A systematic survey of prompting methods in
+ natural language processing. arXiv preprint arXiv:2107.13586.
+ NLLB Team. 2022. No Language Left Behind: Scaling Human-Centered Machine
+ Translation.
+ Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C. L.; Mishkin, P.;
+ and Zhang, C. 2022. Training language models to follow instructions with
+ human feedback.
+ Park, J. S.; Popowski, L.; Cai, C. J.; Morris, M. R.; Liang, P.; and
+ Bernstein, M. S. 2022. Social Simulacra: Creating Populated Prototypes for
+ Social Computing Systems.
+ Peterson, J. C.; Giallouri, T.; and Menounou, E. 2021. The Personal
+ Finances of United States Supreme Court Justices and Decision-making in
+ Economic Litigation. The Journal of Legal Studies, 50(2): 379–405.
+ Portelli, B.; Scaboro, S.; Santus, E.; Sedghamiz, H.; Chersoni, E.; and
+ Serra, G. 2022. Generalizing over Long Tail Concepts for Medical Term
+ Normalization. arXiv preprint arXiv:2210.11947.
+ Post, R.; and Siegel, R. 2006. Originalism as a Political Practice: The
+ Right's Living Constitution. Fordham L. Rev., 75: 545.
+ Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I.; et al.
+ 2019. Language models are unsupervised multitask learners. OpenAI blog,
+ 1(8): 9.
+ Roberts, A.; Chung, H. W.; Levskaya, A.; Mishra, G.; Bradbury, J.; Andor,
+ D.; Narang, S.; Lester, B.; Gaffney, C.; Mohiuddin, A.; Hawthorne, C.; and
+ Lewkowycz, A. 2022. Scaling Up Models and Data with t5x and seqio. arXiv
+ preprint arXiv:2203.17189.
+ Rohrbach, A.; Hendricks, L. A.; Burns, K.; Darrell, T.; and Saenko, K.
+ 2018. Object Hallucination in Image Captioning. In Proceedings of the 2018
+ Conference on Empirical Methods in Natural Language Processing, 4035–4045.
+ Brussels, Belgium: Association for Computational Linguistics.
+ Spaeth, H.; Epstein, L.; Ruger, T.; Whittington, K.; Segal, J.; and Martin,
+ A. D. 2014. 2020 Supreme Court Database. URL: http://scdb.wustl.edu.
+ Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Chi, E.; Le, Q.; and Zhou, D.
+ 2022. Chain of thought prompting elicits reasoning in large language
+ models. arXiv preprint arXiv:2201.11903.
+ Appendix
+ We demonstrate our system with a real-world case fed to the Ruth Bader
+ Ginsburg model. We first provide our prompt followed by the model's
+ response. Note the model correctly predicts Ginsburg's real-world decision
+ (to deny the appellant) despite the "California Childhood Video Game Act"
+ not existing.
+
+ Prompt
+ Issue: 'First Amendment',
+ Topic: 'The California law restricting the sale or rental of violent video
+ games to minors was challenged by the video-game and software industries.
+ The Federal District Court concluded that the Act violated the First
+ Amendment and permanently enjoined its enforcement. The Ninth Circuit
+ affirmed.',
+ Appellant is seeking a: 'certiorari',
+ Model Response
+ Opinion: 'The California Childhood Video Game Act is on its face both
+ overbroad and vague. It prohibits the sale or rental of "any device or
+ medium designed for or capable of reproducing visual images of
+ decapitation, blood, or dismemberment of a human body," in the presence of
+ a child under the age of 13. It also prohibits the importation into the
+ State of California of "any video game system, computer game software, or
+ computer program that contains a killing or violent content." The Act
+ specifically identifies video games that have been determined by the State
+ to be "violent," and any person who "possesses, or has under his control,"
+ one of those video games is subject to the Act's restrictions. The Act
+ falls outside the First Amendment because it interferes with the creative
+ expression and acquisitive impulse that go to make up the quintessential
+ activity of a child's life.
+ The majority points out that the Act does not regulate the mere possession
+ of these games, but seeks to regulate the medium in which they are played.
+ It reasons that the State may regulate what the child sees and does not see
+ by preventing the child from becoming engrossed in the game world and
+ thereby "sustaining or supporting the action or activity." But the majority
+ does not ask whether the regulation of what the child sees and does not see
+ is content-based. It neither considers nor concludes whether the regulation
+ is narrowly tailored to the interests it seeks to protect. And it does not
+ consider the adverse impact that its rule may have on legitimate play. The
+ majority says that the regulation is content-neutral because it does not
+ regulate "the medium in which the [killing or violent content] is
+ embodied," or the "context in which the [killing or violent content] is
+ presented." But to the extent that the regulation of medium and context are
+ related, the majority's rule will be unworkable. The child who might
+ otherwise play The Last of Us might otherwise play The Path, instead,
+ because the former game has fewer graphic content and the latter is rated M
+ for Mature. And the child who might otherwise enjoy the violent video game
+ The Walking Dead might otherwise have played might instead have played The
+ Secret of Monkey Island, which has no violence or blood and is rated A. The
+ child who might otherwise have played Grand Theft Auto might instead have
+ played The Legend of Zelda, which has no violence or blood and is rated E.
+ And so on. The variety of content that the child may choose from makes it
+ virtually impossible to determine with any precision what the regulation of
+ what the child sees and does not see will look like. The only thing that
+ the majority seems to be able to say with any precision is that the
+ regulation will likely contain some content-based exceptions. But as the
+ San Francisco ordinance demonstrates, those exceptions are exceedingly
+ rare. And when they do appear, they almost always involve content the State
+ has determined is not suitable for children.
+ I fear that the majority's rule will produce a generation of
+ Americans—white, middle-class, suburban—who have no interest in, and no
+ capacity to, engage in the kinds of play the First Amendment protects. And
+ I fear that this Court, which prides itself on its willingness to protect
+ the most vulnerable members of our society from harm, will be complicit in
+ that generation's continued marginalization. I respectfully dissent.',
+ Decision: 'deny'
+
C9E4T4oBgHgl3EQf5w7z/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
E9AyT4oBgHgl3EQfrPkW/content/tmp_files/2301.00555v1.pdf.txt ADDED
@@ -0,0 +1,1504 @@
+ Task-specific Scene Structure Representations
+ Jisu Shin*, Seunghyun Shin* and Hae-Gon Jeon†
+ AI Graduate School, GIST, South Korea
+ {jsshin98, seunghyuns98}@gm.gist.ac.kr, [email protected]
+ Abstract
+ Understanding the informative structures of scenes is essential for low-level vision tasks. Unfortunately, it is difficult to obtain a concrete visual definition of the informative structures because the influence of visual features is task-specific. In this paper, we propose a single general neural network architecture for extracting task-specific structure guidance for scenes. To do this, we first analyze traditional spectral clustering methods, which compute a set of eigenvectors to model a segmented graph forming small compact structures on image domains. We then unfold the traditional graph-partitioning problem into a learnable network, named Scene Structure Guidance Network (SSGNet), to represent the task-specific informative structures. SSGNet yields a set of coefficients of eigenvectors that produces explicit feature representations of image structures. In addition, our SSGNet is lightweight (∼55K parameters), and can be used as a plug-and-play module for off-the-shelf architectures. We optimize SSGNet without any supervision by proposing two novel training losses that enforce task-specific scene structure generation during training. Our main contribution is to show that such a simple network can achieve state-of-the-art results for several low-level vision applications, including joint upsampling and image denoising. We also demonstrate that our SSGNet generalizes well on unseen datasets, compared to existing methods which use structural embedding frameworks. Our source codes are available at https://github.com/jsshin98/SSGNet.
+ 1 Introduction
+ Methods for estimating scene structures have attracted wide research attention for the past several decades. As an example, texture representations based on image edges have been extensively studied, with impressive performance on low-level vision tasks, i.e. image denoising (Tomasi and Manduchi 1998), deblurring (Krishnan and Fergus 2009; Levin et al. 2007), super-resolution (Tai et al. 2010) and inpainting (Nazeri et al. 2019; Yang, Qi, and Shi 2020; Guo, Yang, and Huang 2021). Another aspect of scene structures involves inferring robust object boundaries to quantify uncertainty and refine initial predictions in visual perception tasks, including joint filtering (He, Sun, and Tang 2012; Guo et al. 2018; Li et al. 2016) and depth completion (Eldesokey et al. 2020). Clearly, the goodness of scene structures depends on the target applications, and is defined by either training data or objective functions.
+ *These authors contributed equally.
+ †Corresponding author
+ Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+ Figure 1: Our SSGNet is a lightweight architecture and can be applied as a plug-and-play module to improve the performance of baseline networks for low-level vision tasks. (Panels: an LR depth map and a noisy image are processed by baseline networks with SSGNet scene structures, producing HR depth maps (RMSE: 25.72cm / 25.15cm) and denoised images (PSNR: 34.00dB / 30.51dB).)
+ More recent approaches to extracting informative scene structures have focused on capturing task-specific features from various learning frameworks. One interesting work for joint filtering (de Lutio et al. 2022) builds graph nodes on learned features from a guidance image to encode semantic information, and represents scene structures by segmenting the graph edges based on objective functions. However, such methods have heavy computational burdens and are not implemented as end-to-end architectures. To formulate an end-to-end architecture, edge priors directly obtained from conventional edge detection (Irwin et al. 1968; Canny 1986) are used as a guide. Typically, image edges (or gradients) represent high-frequency features and can be forced to generate fine details in the prediction results (Fang, Li, and Zeng 2020). Nevertheless, the question of how to effectively exploit structure guidance information remains unanswered. Tremendous efforts have been made, yet each generates only a single-purpose scene structure with its own architecture.
+ In this paper, we propose a Scene Structure Guidance Network (SSGNet), a single general neural network architecture for extracting task-specific structural features of scenes. Our SSGNet is lightweight in both size and computation, and is a plug-and-play module that can be applied to any baseline low-level vision architecture. SSGNet computes a set of parameterized eigenvector maps, whose combination is selectively determined in favor of the target domain.
+ arXiv:2301.00555v1 [cs.CV] 2 Jan 2023
+ To achieve this, we introduce two effective losses: (1) an eigen loss, motivated by the traditional graph-partitioning problem (Shi and Malik 2000), which forms a basis set of scene structures based on weight graphs on an image grid; and (2) a spatial loss, which enforces the sparsity of each eigenvector for diverse representations of scene structures. We note that, without any supervision, our SSGNet successfully learns to generate task-specific and informative structural information, as shown in Fig.1. To demonstrate the wide applicability of our SSGNet, we conduct extensive experiments on several low-level vision applications, including joint upsampling and image denoising, and achieve state-of-the-art results, even in cross-dataset generalization.
+ 2 Related Work
+ Our work is closely related to scene structure embedding for low-level vision tasks.
+ 2.1 Low-level vision tasks
+ The goal of low-level vision tasks such as denoising, super-resolution, deblurring and inpainting is to recover a sharp latent image from an input image that has been degraded by the inherent limitations of the acquisition system (i.e. sensor size, depth of field or light efficiency). In the past decade, there have been significant improvements in low-level vision tasks, and deep learning-based techniques in particular have proven to be powerful.
+ With the help of inductive bias (Cohen and Shashua 2017), convolutional neural networks (CNNs) with a pixel-wise photo-consistency loss (Li et al. 2016; Zhang and Sabuncu 2018; Zhong et al. 2021) are adopted. To mitigate the issue of inter-pixel consistency in CNNs, methods based on generative adversarial networks (GANs) (Goodfellow et al. 2014; Zhu et al. 2017; Karras, Laine, and Aila 2019; Liu et al. 2021a; Wang et al. 2021) have been proposed to produce visually pleasing results with perceptual losses (Johnson, Alahi, and Fei-Fei 2016; Fuoli, Van Gool, and Timofte 2021; Suvorov et al. 2022) based on high-level semantic features. More recently, vision transformers (ViT) (Dosovitskiy et al. 2021; Liu et al. 2021b; Caron et al. 2021; Chen et al. 2021) have been used to capture both local and global image information by leveraging their ability to model long-range context.
+ Such approaches have shown good progress with structural details. For regularization, adding robust penalties to objective functions (Tibshirani 1996; Xu et al. 2010; Loshchilov and Hutter 2019; de Lutio et al. 2022) suppresses high-frequency components, and hence the results usually provide a smooth, plausible reconstruction. However, these constraints often suffer from severe overfitting to noisy labels and are sensitive to hyperparameters, which leads to a lack of model generality.
+ 2.2 Structural information
+ Extensive studies on low-level vision have verified the feasibility and necessity of image priors, including image edges and gradients. One representative line of work involves joint image filters, which leverage a guidance image as a prior and transfer its structural details to a target image for edge-preserving smoothing (Tomasi and Manduchi 1998; He, Sun, and Tang 2012; Zhang et al. 2014).
+ Such structure information can be defined in practice depending on the task. Both super-resolution (Pickup, Roberts, and Zisserman 2003; Sun, Xu, and Shum 2008; Xie, Feris, and Sun 2015; Fang, Li, and Zeng 2020) and image denoising (Liu et al. 2020), which utilize patch similarity, generate gradient maps to reconstruct high-frequency details or suppress image noise. Works in (Gu et al. 2017; Jin et al. 2020) infer object boundaries to refine initial predictions in visual perception tasks, including depth estimation/completion. Also, image inpainting (Nazeri et al. 2019; Yang, Qi, and Shi 2020; Guo, Yang, and Huang 2021; Cao and Fu 2021), which fills in missing parts of corrupted scenes, adopts edge maps from traditional methods like the Canny edge detector (Canny 1986) to hallucinate scene structures.
+ Despite promising results from state-of-the-art methods that learn meaningful details for each task, they require a high modeling capacity with numerous parameters and ground-truth structure maps for training. In contrast, our SSGNet, a very small network generating scene structures without any supervision, benefits various low-level vision tasks simply by being embedded as an additional module.
+ 3 Methodology
+ Motivated by spectral graph theory (Shi and Malik 2000; Levin, Rav-Acha, and Lischinski 2008; Levin, Lischinski, and Weiss 2007), a set of basis vectors represents scene configurations as a linear combination of the basis. Such a parameterization provides a restrictive solution space that accommodates semantic entities like textures and object boundaries. Following the works in (Tang and Tan 2019; Bloesch et al. 2018), we begin with an introduction to spectral methods, and then parameterize scene structures which can be used as guidance for various vision tasks.
+ 3.1 Motivation
+ Let us set a weighted undirected graph G = (V, E) in an arbitrary feature space with a set of nodes V and a set of edges E, whose weights can be represented as an N × N non-negative adjacency matrix W = {w(i, j) : (i, j) ∈ E}, where i, j denote graph nodes. The Laplacian matrix L of this graph is then obtained by L = D − W, where D is a diagonal matrix with the row-wise sum of W on its diagonal. Since the Laplacian matrix is positive semidefinite, for every N-dimensional vector y from the matrix Y, which consists of a set of vectors, it holds that
+ yᵀLy = Σ_{(i,j)∈E} w(i, j){y(i) − y(j)}² ≥ 0.  (1)
+ To minimize Eq.(1), the indicator vector y should take similar values for nodes i and j; when the adjacency value w(i, j) is high, the two nodes are more tightly coupled.
+ Figure 2: An overview of SSGNet. LN, GeLU, and LeakyReLU denote layer normalization, GeLU activation, and LeakyReLU activation, respectively. The eigenvectors are integrated via the attention layer, and then embedded into any baseline network. (Blocks: 3×3 Conv.-LN-GeLU, 3×3 Conv.-LN-Softmax, and 3×3 Conv.-LeakyReLU with a skip connection; tensors: image I ∈ ℝ^{H×W×3}, affinity matrix W ∈ ℝ^{(HW)×(HW)}, eigenvectors Y ∈ ℝ^{H×W×3}.)
+ Spectral graph theory (Fiedler 1973; Shi and Malik 2000) proves that the eigenvectors of the graph Laplacian yield minimum-energy graph partitions, and each smallest eigenvector, like the indicator vector y, partitions the graph into soft segments based on its adjacency matrix.
+ In the image domain, a reference pixel and its similarity to neighboring pixels can be interpreted as a node and edges in a graph (Boykov, Veksler, and Zabih 2001), respectively. In general, affinity is defined by appearance similarities (i.e. the absolute intensity differences). With this motivation, images can be decomposed into soft image clusters from a precomputed affinity matrix. In addition, scene configurations in images can be described as a set of eigenvectors whose smallest eigenvalues indicate connected components of the affinity matrix.
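The soft-partitioning behavior described above can be illustrated with a toy example (our construction, not part of the paper's pipeline): for a graph of two tightly connected clusters joined by one weak edge, the second-smallest eigenvector of the Laplacian separates the clusters by sign.

```python
import numpy as np

# Toy graph: two tightly connected clusters (nodes 0-2 and 3-5)
# joined by one weak edge, mimicking two image regions.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0        # strong intra-cluster affinity
W[2, 3] = W[3, 2] = 0.05           # weak inter-cluster affinity

D = np.diag(W.sum(axis=1))         # degree matrix
L = D - W                          # graph Laplacian, L = D - W

# eigh returns eigenvalues in ascending order for symmetric matrices.
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]            # second-smallest ("Fiedler") eigenvector
labels = fiedler > 0               # its sign soft-partitions the graph
```

The smallest eigenvalue is numerically zero with a constant eigenvector; the next one splits the two clusters, which is the behavior SSGNet unfolds into a learnable form.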
+ 3.2 Scene Structure Guidance Network
+ In this work, our goal is to train the proposed network, SSGNet, without any supervision, because it is infeasible to define a unique objective function for task-specific structure guidance. To accomplish this, we devise a learnable and parametric way of efficiently representing scene structures. Given a single color image I ∈ ℝ^{h×w×3}, our SSGNet σ yields a set of eigenvectors Y ∈ ℝ^{h×w×n}, where n denotes the number of eigenvectors and is empirically set to 3:
+ Y = σ(I).  (2)
+ As illustrated in Fig.2, SSGNet takes a simple encoder-decoder architecture (∼55K parameters), consisting of two 3×3 convolutional layers and three 3×3 deconvolutional layers with layer normalization (Ba, Kiros, and Hinton 2016) and GeLU activations (Hendrycks and Gimpel 2016) after each layer except for the last softmax layer. The output of our SSGNet is associated with learnable weights that are finetuned in accordance with an objective function of the target application.
+ To optimize SSGNet in an unsupervised manner, we define a loss function Lssg, which is a linear combination of two loss terms, as follows:
+ Eigen Loss The main objective of SSGNet is to obtain a set of smallest eigenvectors Y of the graph Laplacian L, inspired by spectral graph theory (Fiedler 1973; Shi and Malik 2000; Levin, Rav-Acha, and Lischinski 2008).
+ To generate the graph Laplacian L, we trace back to traditional similarity matrix methods. Since an image is segmented based on a constructed affinity matrix in spectral graph theory, the form of the matrix depends on the pixel-level similarity encoding (Levin, Rav-Acha, and Lischinski 2008; Levin, Lischinski, and Weiss 2007; Chen, Li, and Tang 2013). In this work, we adopt the sparse KNN-matting matrix (Chen, Li, and Tang 2013). To be specific, we first collect the nonlocal neighborhoods j of a pixel i by the k-nearest neighbor algorithm (KNN) (Cover and Hart 1967). Then, we define the feature vector ϕ(i) at a given pixel i as follows:
+ ϕ(i) = (r, g, b, dx, dy)_i,  (3)
+ where (r, g, b) denotes each color channel, and (dx, dy) is a weighted spatial coordinate for the x- and y-axes. We follow the KNN kernel function KNN(i) to construct the sparse affinity matrix W based on the feature vectors ϕ:
+ W(i, j) = 1 − ∥ϕ(i) − ϕ(j)∥ if j ∈ KNN(i), and 0 otherwise,  (4)
+ where j ∈ KNN(i) are the k-nearest neighbors of i based on the distance defined by ϕ. Using the sparse KNN-matting matrix, we can take into account both spatial distance and color information with less computational cost than a traditional similarity matrix. The graph Laplacian L is finally obtained by L = D − W in the same manner as described in Sec.3.1.
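A minimal numpy sketch of Eqs.(3)-(4) follows. It is our simplification, not the paper's implementation: distances are computed densely, affinities are clamped at zero, no symmetrization is applied, and features are assumed pre-scaled so that pairwise distances stay below 1.

```python
import numpy as np

def knn_affinity(phi, k=3):
    """Sparse KNN affinity of Eq.(4):
    W(i, j) = 1 - ||phi(i) - phi(j)|| for the k nearest neighbors j of i,
    and 0 otherwise. phi is (n_pixels, 5) with rows (r, g, b, dx, dy)."""
    n = phi.shape[0]
    # Pairwise feature distances ||phi(i) - phi(j)||.
    dist = np.linalg.norm(phi[:, None, :] - phi[None, :, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]          # exclude i itself
        W[i, nbrs] = np.maximum(1.0 - dist[i, nbrs], 0.0)
    return W

# Toy features, scaled so distances stay below 1 (an assumption here).
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 0.3, size=(8, 5))
W = knn_affinity(phi, k=3)
L = np.diag(W.sum(axis=1)) - W                       # L = D - W
```

Each row of W then holds at most k nonzero affinities, which is what keeps the construction cheaper than a dense similarity matrix.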
+ We can finally obtain a set of eigenvectors Y by minimizing the quadratic form of L, Leigen, as below:
+ Leigen = Σ_k Ykᵀ L Yk.  (5)
+ However, we observe that SSGNet sometimes produces undesirable results during the training phase because of a degenerate case, where the rank of Y may be lower, and an additional loss term is needed to regularize it.
+ Spatial Loss Since our SSGNet uses a softmax function in the last layer to prevent the eigenvectors from converging to zero vectors, we only need to handle the degenerate case where all eigenvectors have the same value. Our spatial loss Lspatial considers the sparsity of each eigenvector to enforce diverse representations of scene structure, defined as below:
+ Lspatial = Σ_k (|Yk|^γ + |1 − Yk|^γ) − 1,  (6)
+ where | · | indicates the absolute value, and the hyperparameter γ is set to 0.9 in our implementation. We can intuitively see that Lspatial reaches its minimum when Yk is either 0 or 1 at each pixel. With Lspatial and the softmax operation together, if a pixel of one eigenvector converges near 1, the same pixel of the other eigenvectors goes to 0. This makes each pixel take different values across the eigenvectors due to the sparsity penalty, which produces diverse feature representations of image structures.
+ In total, the final loss function for SSGNet is defined as:
+ Lssg = Leigen + λ·Lspatial,  (7)
+ where λ is a hyperparameter, empirically set to 40.
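Eqs.(5)-(7) can be sketched as follows. This is a simplified numpy version under our reading that the −1 in Eq.(6) applies per pixel; the actual training operates on network outputs rather than explicit vectors.

```python
import numpy as np

def ssg_loss(Y, L, gamma=0.9, lam=40.0):
    """Sketch of Lssg = Leigen + lam * Lspatial.
    Y: (n_pixels, n_eig) eigenvector maps (softmax over the last axis).
    L: (n_pixels, n_pixels) graph Laplacian."""
    # Eigen loss, Eq.(5): sum_k Y_k^T L Y_k
    l_eigen = sum(Y[:, k] @ L @ Y[:, k] for k in range(Y.shape[1]))
    # Spatial loss, Eq.(6): |Y|^gamma + |1 - Y|^gamma - 1, summed over
    # pixels and eigenvectors; zero exactly when every entry is 0 or 1.
    l_spatial = (np.abs(Y) ** gamma + np.abs(1.0 - Y) ** gamma - 1.0).sum()
    return l_eigen + lam * l_spatial  # Eq.(7)

# Fully connected 4-node toy graph.
W = np.ones((4, 4)) - np.eye(4)
L = np.diag(W.sum(axis=1)) - W
Y_binary = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
Y_uniform = np.full((4, 2), 0.5)   # degenerate case: identical maps
```

On this toy graph, the degenerate uniform output has zero eigen loss but is heavily penalized by the spatial term, while the binary partition pays no spatial penalty.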
+ Our SSGNet is pretrained on a single dataset and can be embedded in various baseline networks after passing through an additional single convolution layer, which acts as an attention module. In favor of the target domain of each task, this layer produces adaptive structural information of input scenes by linearly combining the set of eigenvectors.
+ Figure 3: Visualization of the sets of eigenvectors for λ = 0, 40, and 1000. (Panels: (a) input image; (b) eigenvectors, showing identical eigenvectors at λ = 0 and undesirable seams at λ = 1000.)
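For a 1×1 kernel, this attention layer's linear combination reduces to a per-pixel weighted sum of the n eigenvector maps; the sketch below uses hypothetical random weights in place of the ones learned during finetuning.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, n = 32, 32, 3
Y = rng.uniform(size=(h, w, n))     # eigenvector maps from SSGNet

# A 1x1 convolution over n channels is a per-pixel weighted sum of maps;
# the weight vector is the part finetuned toward the target task.
weights = rng.normal(size=(n,))
guidance = np.tensordot(Y, weights, axes=([2], [0]))   # (h, w) guidance map
```

Changing the weights reweights which eigenvector maps dominate the guidance, which is how the same pretrained maps can serve different tasks.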
+ 3.3 Analysis
+ To the best of our knowledge, SSGNet is the first to unfold the eigen-decomposition problem into a learnable network. To validate its effectiveness, we provide a series of analyses.
+ First, we analyze our loss function in Eq.(7) by tuning the hyperparameter λ used as a balancing term between Lspatial and Leigen. In our experiments, the best performance is obtained with λ = 40. In Fig.3, we show visualization results for three different λ values: λ = 0, 40, and 1000. When λ is set to 0, Lspatial does not impose enough of a sparsity penalty across eigenvectors, which leads to the degenerate case. Conversely, if λ is set to 1000, the image is not well segmented because the overwhelming dominance of Lspatial causes undesirable seams in the image. From this, we can see that the absence of either term leads to an undesirable outcome, which emphasizes the role of each of the two terms in our loss function.
+ Next, we demonstrate that our SSGNet yields task-specific structural guidance features. As highlighted above, SSGNet can be embedded in baseline networks. When the pretrained SSGNet is attached to a baseline network, the network parameters of SSGNet are finetuned to produce guidance features suitable for the task as training proceeds. In Fig.4, we visualize how the eigenvectors from SSGNet change at each iteration during finetuning, for joint depth upsampling (Dong et al. 2022) and single image denoising (Zhang et al. 2022).
+ Joint depth upsampling needs accurate object boundaries as a prior (Li et al. 2014). For obvious reasons, an objective function for joint depth upsampling encourages a greater focus on reconstructing object boundaries. As shown in Fig.4(a), our SSGNet generates attentive features on them during finetuning. In addition, for image denoising, it is essential to preserve fine detailed textures. In Fig.4(b), with the meaningful scene structures from our SSGNet, a plausible result is inferred as well. We claim that it is possible for our SSGNet to capture informative and task-specific structures through gradient updates from backpropagation (LeCun et al. 1989). We describe the experimental details and SSGNet's quantitative benefits on each task in Sec.4.
+ Figure 4: Examples of task-specific scene structures: initial, intermediate and final results from SSGNet for (a) joint depth upsampling (LR depth map → HR depth map) and (b) image denoising (noisy image → denoised image), showing initial, intermediate and finetuned guidance.
+ Figure 5: Illustrations of SSGNet for low-level vision tasks. The yellow colored networks indicate our SSGNet, which outputs informative task-specific structure guidance. (Panels: (a) depth upsampling — SSGNet produces a scene structure from the RGB image, which is fed with the LR depth to the encoder-decoder; (b) image denoising — the scene structure is concatenated with the noisy image before the encoder-decoder.)
+ 3.4 Training Scheme
+ We implement the proposed framework in public PyTorch (Paszke et al. 2019), and utilize the Adam (Kingma and Ba 2014) optimizer with β1 = 0.9 and β2 = 0.999. The learning rate and the batch size for SSGNet are set to 0.0001 and 4, respectively. We train the proposed framework on images with a 256×256 resolution. Since the proposed framework consists of fully convolutional layers, images with higher resolutions than those used in the training phase can be processed at inference. Training SSGNet took about 40 hours on two NVIDIA Tesla V100 GPUs.
+ 4 Experiments
+ We conduct a variety of experiments on low-level vision tasks, including self-supervised joint depth upsampling (Sec.4.1) and unsupervised single image denoising (Sec.4.2), to demonstrate the effectiveness of our SSGNet. Moreover, we provide an extensive ablation study (Sec.4.3) to precisely describe the effects of each component of SSGNet. Note that higher-resolution versions of the experimental results are reported in our supplementary material.
+ Baselines with SSGNet In this section, our goal is to validate the wide applicability of SSGNet. To do this, we incorporate SSGNet into existing CNN architectures for joint depth upsampling and unsupervised image denoising, by simply embedding the scene structures from our network into the models.
+ Dataset | Scale | Metric |   DKN |  FDKN |  FDSR |   P2P |  MMSR |  Ours
+ 2005    |  ×4   | RMSE   | 1.103 | 0.964 | 0.886 | 1.288 | 0.708 | 0.612
+ 2005    |  ×4   | MAE    | 0.275 | 0.222 | 0.211 | 0.273 | 0.239 | 0.188
+ 2005    |  ×8   | RMSE   | 1.182 | 1.629 | 1.043 | 1.177 | 1.043 | 0.830
+ 2005    |  ×8   | MAE    | 0.288 | 0.339 | 0.333 | 0.280 | 0.319 | 0.245
+ 2006    |  ×4   | RMSE   | 1.623 | 1.337 | 1.198 | 2.604 | 0.555 | 0.504
+ 2006    |  ×4   | MAE    | 0.297 | 0.222 | 0.198 | 0.413 | 0.232 | 0.201
+ 2006    |  ×8   | RMSE   | 1.790 | 1.883 | 1.170 | 2.684 | 0.723 | 0.648
+ 2006    |  ×8   | MAE    | 0.307 | 0.305 | 0.267 | 0.300 | 0.261 | 0.225
+ 2014    |  ×4   | RMSE   | 2.878 | 2.593 | 3.217 | 4.019 | 1.953 | 1.819
+ 2014    |  ×4   | MAE    | 0.739 | 0.659 | 0.595 | 0.822 | 0.573 | 0.451
+ 2014    |  ×8   | RMSE   | 3.642 | 3.510 | 3.606 | 3.894 | 2.765 | 2.714
+ 2014    |  ×8   | MAE    | 0.775 | 0.871 | 0.885 | 0.920 | 0.785 | 0.675
+ Table 1: Quantitative results on joint depth upsampling tasks (unit: cm). DKN, FDKN and FDSR are supervised; P2P, MMSR and Ours are self-supervised.
+ Figure 6: Comparison results on joint depth upsampling with a resolution factor of 8 on the Middlebury 2005 dataset. We visualize the predictions and their corresponding error maps of competitive methods and ours. (Panels: (a) predicted depth maps and (b) predicted error maps (scale 0.0-1.0) for DKN, FDKN, FDSR, MMSR (baseline) and Ours, alongside the guidance source, GT, and our scene structure.)
+ Prior to the evaluations, we train our SSGNet on the well-known NYUv2 dataset (Silberman and Fergus 2011), consisting of 1,000 training images and 449 test images. With the pretrained weights of SSGNet, we embed it into the baseline networks and finetune on each task. As mentioned above, we do not need any supervision for training SSGNet. We note that the NYUv2 dataset is not used for evaluations, in order to validate zero-shot generalization across various datasets.
+ 4.1 Joint Depth Upsampling
+ Joint depth upsampling leverages the explicit structural detail of the input image as a guidance and transfers it to the target low-resolution depth map to enhance spatial resolution. With this application, we demonstrate the synergy of the structure details from clean input images and the proposed learnable scene structure from SSGNet.
+ For this experiment, we choose MMSR (Dong et al. 2022) as the baseline depth upsampling network. MMSR introduces a mutual modulation strategy with cross-domain adaptive filtering and adopts a cycle consistency loss to train the model in a fully self-supervised manner. Instead of directly using the input image as the guidance, we employ the structure guidance from the pretrained SSGNet, as in Fig.5(a), and follow the training scheme of MMSR for fair comparisons, such that all the supervised methods are trained on NYUv2.
+ Method | Kodak σ=25 | Kodak σ=50 | BSD300 σ=25 | BSD300 σ=50 | BSD68 σ=25 | BSD68 σ=50
+ BM3D   | 31.88 / 0.869 | 28.64 / 0.772 | 30.47 / 0.863 | 27.14 / 0.745 | 28.55 / 0.782 | 25.59 / 0.670
+ N2V    | 31.63 / 0.869 | 28.57 / 0.776 | 30.72 / 0.874 | 27.60 / 0.775 | 27.64 / 0.781 | 25.46 / 0.681
+ Nr2n   | 31.96 / 0.869 | 28.73 / 0.770 | 29.57 / 0.815 | 26.18 / 0.684 | N/A | N/A
+ DBSN   | 32.07 / 0.875 | 28.81 / 0.783 | 31.12 / 0.881 | 27.87 / 0.782 | 28.81 / 0.818 | 25.95 / 0.703
+ N2N    | 32.39 / 0.886 | 29.23 / 0.803 | 31.39 / 0.889 | 28.17 / 0.799 | 29.15 / 0.831 | 26.23 / 0.725
+ IDR    | 32.36 / 0.884 | 29.27 / 0.803 | 31.48 / 0.890 | 28.25 / 0.802 | 29.20 / 0.835 | 26.25 / 0.726
+ Ours   | 32.39 / 0.885 | 29.34 / 0.806 | 31.52 / 0.891 | 28.33 / 0.805 | 29.25 / 0.835 | 26.36 / 0.731
+ Table 2: Quantitative results on single image denoising (PSNR(dB) / SSIM).
+ Figure 7: Examples of single image denoising. For the noise level σ = 50, we visualize the results from IDR+SSGNet as well as the state-of-the-art methods. (Panels: Kodak image 1 and BSD300 image 2092; columns GT, noisy input, DBSN, N2N, IDR (baseline), and Ours, each annotated with its PSNR.)
+ We also follow the evaluation protocol described in (Dong et al. 2022) to quantitatively measure the root mean square error (RMSE) and the mean absolute error (MAE). To be specific, we use the Middlebury stereo datasets 2005 (Scharstein and Pal 2007), 2006 (Hirschmuller and Scharstein 2007), and 2014 (Scharstein et al. 2014)¹, which provide 40, 72, and 308 image-depth pairs, respectively, and augment them using a public code².
+ We compare with various state-of-the-art models, both supervised, DKN (Kim, Ponce, and Ham 2021), FDKN (Kim, Ponce, and Ham 2021) and FDSR (He et al. 2021), and self-supervised, P2P (Lutio et al. 2019) and MMSR (Dong et al. 2022). As shown in Tab.1, MMSR with our SSGNet embedded achieves the best performance on almost all datasets over the comparison methods. Our SSGNet brings a performance gain over the second best method of about 10.4% and 11.8% with respect to RMSE and MAE, respectively. It is also noticeable that the scene structure contributes to reducing the errors on the star-like object boundary and the inside surface, as visualized in Fig.6. We highlight that this result again demonstrates the strong generalization capability of our SSGNet on unseen data.
+ ¹Since Middlebury 2003 provides neither depth maps nor camera parameters, we could not use it in this evaluation.
+ ²Downloaded from https://rb.gy/bxyqgi
+ Setting | Depth Upsampling RMSE | Depth Upsampling MAE | Denoising PSNR | Denoising SSIM
+ Ours | 0.83 | 0.245 | 29.34 | 0.806
+ λ = 0.001 | 0.84 | 0.246 | 29.17 | 0.801
+ λ = 0.1 | 0.83 | 0.245 | 29.15 | 0.800
+ λ = 1 | 0.82 | 0.244 | 29.13 | 0.800
+ λ = 100 | 0.81 | 0.244 | 29.16 | 0.801
+ λ = 1000 | 0.84 | 0.248 | 29.26 | 0.803
+ # eigenvectors = 2 | 0.84 | 0.272 | 29.15 | 0.800
+ # eigenvectors = 5 | 0.81 | 0.242 | 29.16 | 0.800
+ # eigenvectors = 7 | 0.82 | 0.242 | 29.17 | 0.800
+ # eigenvectors = 10 | 0.82 | 0.244 | 29.27 | 0.804
+ Canny edge, ψ = {0.6, 0.9, 1.4} | 0.90 | 0.298 | 24.86 | 0.510
+ Canny edge, ψ = {1.0, 2.0, 3.0} | 0.90 | 0.282 | 24.37 | 0.489
+ Table 3: Ablation study for the effects of each component of SSGNet. We use Middlebury 2005 at ×8 for joint depth upsampling, and Kodak with a noise level σ = 50 for single image denoising.
914
+ 4.2
915
+ Image Denoising
916
+ We address single image denoising to check whether
917
+ our SSGNet remains effective when the scene structure in the input image is cor-
918
+ rupted by noise. For this experiment, we use IDR (Zhang
919
+ et al. 2022) as a baseline image denoising network. IDR
920
+ suppresses the image noise in a self-supervised manner by
921
+ proposing an iterative data refinement scheme. The key idea of
922
+ IDR is to reduce the data bias between synthetic-real noisy
923
+ images and ideal noisy-clean images. To embed the scene
924
+ structure to IDR, we simply concatenate it from our pre-
925
+ trained SSGNet with the noisy input image in Fig.5(b). As
926
+ the rounds go on iteratively, our SSGNet focuses more on
927
+ texture information of input scenes by ignoring the image
928
+ noise, as already displayed in Fig.4.
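The embedding step itself is a plain channel-wise concatenation. A minimal numpy sketch follows; the 3-channel structure map here is random noise standing in for the pretrained SSGNet output, and all names are hypothetical:

```python
import numpy as np

def embed_structure(noisy, structure):
    """Concatenate a K-channel scene-structure map with the noisy input
    along the channel axis, forming the denoiser's input tensor."""
    assert noisy.shape[1:] == structure.shape[1:], "spatial sizes must match"
    return np.concatenate([noisy, structure], axis=0)  # (C+K, H, W)

rng = np.random.default_rng(0)
noisy_img = rng.normal(size=(3, 32, 32))       # noisy RGB input
structure_map = rng.normal(size=(3, 32, 32))   # e.g. 3 eigenvector channels
denoiser_input = embed_structure(noisy_img, structure_map)
print(denoiser_input.shape)  # (6, 32, 32)
```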
929
+ To validate the applicability to the image denoising task
930
+ as well, we compare our results with various state-of-
931
+ the-art self-supervised models, including BM3D (Mäkinen,
932
+ Azzari, and Foi 2019), N2V (Krull, Buchholz, and Jug
933
+ 2019), Nr2N (Moran et al. 2020), DBSN (Wu et al. 2020),
934
+ N2N (Lehtinen et al. 2018), and IDR (Zhang et al. 2022).
935
+ For the evaluation, we strictly follow the experimental setup
936
+ in (Zhang et al. 2022). We quantitatively measure PSNR and
937
+ SSIM on Kodak (Kodak 1993), BSD300 (Movahedi and El-
938
+ der 2010) and BSD68 (Martin et al. 2001) datasets for the
939
+ zero-shot generalization. The models are trained on Gaus-
940
+ sian noise with the continuous noise level σ = [0, 50] and
941
+ tested on σ = 25 and 50.
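The noise protocol above can be sketched as follows: a per-image σ drawn uniformly from the continuous range [0, 50] at training time, and a fixed σ at test time. The function names and the assumed [0, 255] pixel range are illustrative:

```python
import numpy as np

def add_train_noise(clean, rng, sigma_max=50.0):
    """Corrupt a clean image with Gaussian noise whose level is drawn
    uniformly from the continuous range [0, sigma_max]."""
    sigma = rng.uniform(0.0, sigma_max)
    return clean + rng.normal(0.0, sigma, size=clean.shape), sigma

def add_test_noise(clean, rng, sigma):
    """Fixed-level test-time corruption (sigma = 25 or 50 in the paper)."""
    return clean + rng.normal(0.0, sigma, size=clean.shape)

rng = np.random.default_rng(0)
clean = np.full((16, 16), 128.0)
noisy, sigma = add_train_noise(clean, rng)
print(0.0 <= sigma <= 50.0)  # True
```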
942
+ As shown in Tab.2, IDR with our SSGNet embedded
943
+ achieves the best performance among all the competitive
944
+ methods regardless of the noise levels. We emphasize that
945
+ the performance gain by our SSGNet is about 0.58dB on av-
946
+ erage. Considering that the performance difference between the
947
+ second and the third best methods is about 0.26dB, achieved
948
+ by the paradigm shift from statistical reasoning about im-
949
+ age restoration (Lehtinen et al. 2018) to the iterative refine-
950
+ ment (Zhang et al. 2022), SSGNet makes a meaningful contri-
951
+ bution. Fig.7 shows some example results. With the power-
952
+ ful capability of IDR on the noise suppression, our SSGNet
953
+ preserves the scene texture of the objects well.
954
+ [Figure 8 image: (a) Joint Depth Upsampling — panels Guidance, LR, Ours,
+ λ = {0.001, 0.1, 1, 100, 1000}, EV = {2, 5, 7, 10}, Canny1, Canny2;
+ (b) Image Denoising — GT, noisy input (σ = 50), Ours, and the same λ/EV/Canny
+ settings, each denoising panel annotated with its PSNR.]
999
+ Figure 8: Qualitative comparison for different settings of SS-
1000
+ GNet. EV denotes the number of eigenvectors, and Canny1
1001
+ and Canny2 mean edge threshold settings such as ψ = {0.6,
1002
+ 0.9, 1.4} and ψ = {1.0, 2.0, 3.0}, respectively. For the joint
1003
+ depth upsampling, we display the reconstruction results and
1004
+ the error maps, together.
1005
+ 4.3
1006
+ Ablation Study
1007
+ An extensive ablation study is conducted to examine the ef-
1008
+ fect of each component on SSGNet: the hyper-parameter
1009
+ λ in our loss function and the number of eigenvectors. We
1010
+ additionally test alternative scene structures computed from
1011
+ Canny Edge (Canny 1986) with different thresholds. For this
1012
+ ablation study, we measure RMSE and MAE on the Middle-
1013
+ bury 2005 dataset for the joint depth upsampling (×8), and
1014
+ PSNR and SSIM on the Kodak dataset for the single im-
1015
+ age denoising (σ = 50), whose results and examples are
1016
+ reported in Tab.3 and Fig.8, respectively.
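For reference, three of the four reported metrics can be computed as below (SSIM is omitted for brevity; the 255 peak value assumes 8-bit images):

```python
import numpy as np

def rmse(pred, gt):
    """Root-mean-square error between prediction and ground truth."""
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

def mae(pred, gt):
    """Mean absolute error."""
    return float(np.mean(np.abs(pred - gt)))

def psnr(pred, gt, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((pred - gt) ** 2)
    return float('inf') if mse == 0 else float(10.0 * np.log10(peak ** 2 / mse))

gt = np.zeros((8, 8))
pred = np.full((8, 8), 2.0)
print(rmse(pred, gt), mae(pred, gt))  # 2.0 2.0
```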
1017
+ Choice of Hyper-parameter λ
1018
+ Since our loss function re-
1019
+ quires the selection of a hyper-parameter λ, it is important to
1020
+ study the sensitivity of the performances to the choice of λ.
1021
+ We carry out this experiment for six different values: 0.001,
1022
+ 0.1, 1, 100 and 1000 as well as 40 in our setting.
1023
+ As a result, SSGNet’s performance is insensitive to the
1024
+ choice of λ. In the joint depth upsampling, the performance
1025
+ difference according to λ is very marginal in that RMSE
1026
+ and MAE are at most 0.02cm and 0.001cm off the optimal
1027
+ values. In contrast, the performance gain for the image de-
1028
+ noising when using λ = 40 is relatively large. Compared
1029
+ to λ = 1000 which shows the second best performance, the
1030
+ improvement of 0.08dB in PSNR when using λ = 40
1031
+ brings more benefits for the comparisons with the state-of-
1032
+ the-art methods. In total, we find the optimal trade-off be-
1033
+
1034
+ tween these two tasks.
1035
+ The Number of Eigenvectors
1036
+ The number of eigenvec-
1037
+ tors to represent scene structures is closely related to the
1038
+ number of learnable parameters in SSGNet. It is impor-
1039
+ tant for us to determine the optimal trade-off parameter in
1040
+ consideration of both the minimum number and the perfor-
1041
+ mances on these two tasks.
1042
+ We investigate the performances of SSGNet with two,
1043
+ five, seven and ten as well as three eigenvectors. Interest-
1044
+ ingly, we observe a similar phenomenon to the one above. The
1045
+ performance degradation on the joint depth upsampling is
1046
+ very small (about 0.02cm in RMSE and 0.003 in MAE), and
1047
+ the performance gain by 0.07dB in PSNR over the second
1048
+ best value on the image denoising is achieved. For the same
1049
+ reason, we set the number of eigenvectors to 3.
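As background, the role a few low-order eigenvectors play can be illustrated on a classical spectral graph-partitioning toy example: the sign pattern of the second-smallest Laplacian eigenvector (the Fiedler vector) splits a graph at its weakest link. This is a generic sketch, not the SSGNet formulation itself:

```python
import numpy as np

# Path graph over 4 pixels: strong links within {0,1} and {2,3},
# one weak link (weight 0.1) across the boundary between nodes 1 and 2.
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.1, 0.0],
              [0.0, 0.1, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
D = np.diag(W.sum(axis=1))
L = D - W                      # unnormalized graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)   # ascending eigenvalues
fiedler = eigvecs[:, 1]                # second-smallest eigenvector
labels = fiedler > 0                   # sign split = graph partition
print(labels)
```

The sign split recovers {0, 1} versus {2, 3}, i.e. the cut across the weak link.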
1050
+ Comparison with Hand-crafted Structure Prior
1051
+ Hand-
1052
+ crafted edge detection is widely used for representing scene
1053
+ structures, even in recent models for low-level vision, e.g., in-
1054
+ painting (Guo, Yang, and Huang 2021; Dong, Cao, and
1055
+ Fu 2022) and super-resolution (Nazeri, Thasarathan, and
1056
+ Ebrahimi 2019). We employ one of the representative hand-
1057
+ crafted edge map detection methods, Canny edge.
1058
+ For fair comparison, we generate a set of edge maps with
1059
+ various thresholds for image gradients ψ, and embed it into
1060
+ the baseline networks. We note that the edge maps are used
1061
+ as input of our attention layer for the networks to selec-
1062
+ tively choose informative scene structures during the train-
1063
+ ing phase. Here, we set two types of thresholds ψ to {0.6,
1064
+ 0.9, 1.4} and {1.0, 2.0, 3.0}, which we manually find to be the best
1065
+ settings to configure image textures and object boundaries.
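As a dependency-free illustration of the multi-threshold edge-map input, the sketch below stacks one binary edge map per threshold from a Sobel-style gradient magnitude. This is a simplification of Canny, which additionally applies smoothing, non-maximum suppression, and hysteresis; names and the toy image are illustrative:

```python
import numpy as np

def gradient_magnitude(img):
    """Central-difference gradient magnitude (interior pixels only)."""
    gx = img[1:-1, 2:] - img[1:-1, :-2]
    gy = img[2:, 1:-1] - img[:-2, 1:-1]
    return np.hypot(gx, gy)

def edge_map_set(img, thresholds=(0.6, 0.9, 1.4)):
    """Stack one binary edge map per threshold, mimicking the
    multi-threshold edge maps fed to the attention layer."""
    mag = gradient_magnitude(img)
    return np.stack([(mag > t).astype(np.float32) for t in thresholds])

img = np.zeros((8, 8))
img[:, 4:] = 1.0               # a vertical step edge
maps = edge_map_set(img)
print(maps.shape)  # (3, 6, 6)
```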
1066
+ As shown in Tab.3, the interesting fact is that the perfor-
1067
+ mance drop of the joint depth upsampling is not huge when
1068
+ using the hand-crafted edge maps (within 0.07cm in RMSE
1069
+ and 0.037cm in MAE). On the other hand, there is a large
1070
+ performance gap between ours and the Canny edge maps
1071
+ (about 4dB in PSNR and 0.3 in SSIM).
1072
+ Two possible reasons why the Canny edge fails to gen-
1073
+ erate task-specific scene representations are: (1) The edge
1074
+ maps are not affected by back-propagation in the training phase.
1075
+ (2) Based on the experimental results for the image denois-
1076
+ ing, the Canny edge is sensitive to image noise, which may
1077
+ corrupt the estimated scene structures and eventually not
1078
+ work as a prior. On the other hand, as displayed in Fig.8, our
1079
+ SSGNet returns the sharpest images, enabling the contents
1080
+ to be read. We thus argue that this experiment demonstrates
1081
+ the efficacy of our learnable structure guidance.
1082
+ 5
1083
+ Conclusion
1084
+ In this paper, we present a single general network for repre-
1085
+ senting task-specific scene structures. We cast the problem
1086
+ of the acquisition of informative scene structures as a tradi-
1087
+ tional graph partitioning problem on the image domain, and
1088
+ solve it using a lightweight CNN framework without any
1089
+ supervision, Scene Structure Guidance Network (SSGNet).
1090
+ Our SSGNet computes coefficients of a set of eigenvectors,
1091
+ enabling it to efficiently produce diverse feature representa-
1092
+ tions of a scene with a small number of learnable parameters.
1093
+ With our proposed two loss terms, the eigen loss and the spa-
1094
+ tial loss, SSGNet is first initialized to parameterize the scene
1095
+ Scale    |      ×2       |      ×3       |      ×4
+          |  PSNR   SSIM  |  PSNR   SSIM  |  PSNR   SSIM
+ SeaNet   | 38.08  0.9609 | 34.55  0.9282 | 32.33  0.8981
+ SeaNet+  | 38.15  0.9611 | 34.65  0.9290 | 32.44  0.8981
+ Ours     | 38.18  0.9612 | 34.68  0.9290 | 32.51  0.8983
+
+          |      KITTI      |     NYU
+          |   RMSE    MAE   |  RMSE   MAE
+ pNCNN    | 1013.08  251.53 | 0.058  0.144
+ Ours     | 1009.51  256.06 | 0.056  0.138
1143
+ Table 4: Additional experiments on Set5 (Bevilacqua et al.
1144
+ 2012) for image super-resolution with ×2, ×3 and ×4, and
1145
+ NYUv2 (Silberman et al. 2012) and KITTI (Uhrig et al.
1146
+ 2017) datasets for unguided depth completion. (unit:cm)
1147
+ [Figure 9 image: panels Baseline, Ours, Scene Structure, GT, Sparse Point.]
1153
+ Figure 9: Comparison results on unguided depth completion
1154
+ on the NYUv2 dataset with pNCNN. Even without a guidance
1155
+ image, our SSGNet establishes the scene structure well.
1156
+ structures. The SSGNet is then embedded into the baseline
1157
+ networks and the parameters are fine-tuned to learn task-
1158
+ specific guidance features as the training proceeds. Lastly,
1159
+ we show the promising performance gains for both the joint
1160
+ depth upsampling and image denoising, even with the good
1161
+ cross-dataset generalization capability.
1162
+ Discussion
1163
+ Although our SSGNet achieves the state-of-
1164
+ the-art results for the tasks with the simple embedding ap-
1165
+ proach across the baseline networks, there is still room
1166
+ for improvement.
1167
+ To suggest our future directions, we conduct additional
1168
+ small experiments for image super-resolution and unguided
1169
+ depth completion, a task that predicts a dense depth map
1170
+ from a sparse input depth without any guidance image. In
1171
+ these experiments, the super-resolution model uses only downsam-
1173
+ pled input images to extract scene structures, and the depth
1173
+ completion relies on the pretrained weight of SSGNet to rep-
1174
+ resent scene configurations. We choose SeaNet (Fang, Li,
1175
+ and Zeng 2020), a CNN architecture equipped with a sep-
1176
+ arate scene texture estimation branch, and pNCNN (Eldes-
1177
+ okey et al. 2020), a lightweight probabilistic CNN (∼ 670K)
1178
+ to refine initial dense depth predictions based on a statistical
1179
+ uncertainty measure, for the image super-resolution and the
1180
+ unguided depth completion, respectively.
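For the unguided depth completion experiment, the sparse input can be simulated by random masking of a dense ground-truth depth map. The sketch below is an assumption about how such inputs are typically generated (the 5% density and the zero-as-missing convention are hypothetical, not taken from the paper):

```python
import numpy as np

def sparsify_depth(dense, rng, keep_ratio=0.05):
    """Simulate a sparse depth input: keep ~keep_ratio of the pixels,
    set the rest to 0 (a common 'no measurement' convention)."""
    mask = rng.random(dense.shape) < keep_ratio
    return dense * mask

rng = np.random.default_rng(0)
dense = np.ones((64, 64))
sparse = sparsify_depth(dense, rng)
print(round(float(sparse.mean()), 3))  # roughly keep_ratio
```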
1181
+ Tab.4 reports that we obtain the performance gains with
1182
+ our SSGNet over the baseline models. Particularly, the syn-
1183
+ ergy between our SSGNet and pNCNN is noticeable in
1184
+ Fig.9. Unfortunately, the baseline models with our SSGNet
1185
+ do not reach the quality of huge models such as a ViT-based
1186
+ image super-resolution (Liang et al. 2021) and a GAN-based
1187
+ unguided depth completion (Lu et al. 2020).
1188
+ One future direction is to devise a better incorpo-
1189
+ ration scheme for our SSGNet, since the structures of these large models
1190
+ are too intricate to embed it intuitively. Another is that a joint multi-
1191
+ modality training from heterogeneous data is expected to
1192
+ represent more informative scene structures and to extend
1193
+ the applicability of SSGNet.
1194
+
1195
+ Acknowledgements
1196
+ This research was partially supported by ’Project for Sci-
1197
+ ence and Technology Opens the Future of the Region’ pro-
1198
+ gram through the INNOPOLIS FOUNDATION funded by
1199
+ Ministry of Science and ICT (Project Number: 2022-DD-
1200
+ UP-0312), GIST-MIT Research Collaboration funded by the
1201
+ GIST, the Ministry of Trade, Industry and Energy (MOTIE)
1202
+ and Korea Institute for Advancement of Technology (KIAT)
1203
+ through the International Cooperative R&D program in part
1204
+ (P0019797), the National Research Foundation of Korea
1205
+ (NRF) (No.2020R1C1C1012635) grant funded by the Ko-
1206
+ rea government (MSIT), Vehicles AI Convergence Research
1207
+ & Development Program through the National IT Industry
1208
+ Promotion Agency of Korea (NIPA) funded by the Ministry
1209
+ of Science and ICT (No.S1602-20-1001), and the Institute
1210
+ of Information & communications Technology Planning &
1211
+ Evaluation (IITP) grant funded by the Korea government
1212
+ (MSIT) (No.2019-0-01842, Artificial Intelligence Graduate
1213
+ School Program (GIST), No.2021-0-02068, Artificial Intel-
1214
+ ligence Innovation Hub).
1215
+ References
1216
+ Ba, J. L.; Kiros, J. R.; and Hinton, G. E. 2016. Layer normalization.
1217
+ arXiv preprint arXiv:1607.06450.
1218
+ Bevilacqua, M.; Roumy, A.; Guillemot, C.; and Alberi-Morel,
1219
+ M. L. 2012. Low-complexity single-image super-resolution based
1220
+ on nonnegative neighbor embedding.
1221
+ In Proceedings of British
1222
+ Machine Vision Conference (BMVC).
1223
+ Bloesch, M.; Czarnowski, J.; Clark, R.; Leutenegger, S.; and Davi-
1224
+ son, A. J. 2018.
1225
+ CodeSLAM—learning a compact, optimisable
1226
+ representation for dense visual SLAM. In Proceedings of IEEE
1227
+ Conference on Computer Vision and Pattern Recognition (CVPR).
1228
+ Boykov, Y.; Veksler, O.; and Zabih, R. 2001. Fast approximate
1229
+ energy minimization via graph cuts. IEEE Transactions on Pattern
1230
+ Analysis and Machine Intelligence (TPAMI), 23(11): 1222–1239.
1231
+ Canny, J. 1986.
1232
+ A computational approach to edge detection.
1233
+ IEEE Transactions on Pattern Analysis and Machine Intelligence
1234
+ (TPAMI), 8(6): 679–698.
1235
+ Cao, C.; and Fu, Y. 2021. Learning a sketch tensor space for image
1236
+ inpainting of man-made scenes. In Proceedings of International
1237
+ Conference on Computer Vision (ICCV).
1238
+ Caron, M.; Touvron, H.; Misra, I.; Jégou, H.; Mairal, J.; Bo-
1239
+ janowski, P.; and Joulin, A. 2021.
1240
+ Emerging properties in self-
1241
+ supervised vision transformers. In Proceedings of International
1242
+ Conference on Computer Vision (ICCV).
1243
+ Chen, H.; Wang, Y.; Guo, T.; Xu, C.; Deng, Y.; Liu, Z.; Ma, S.; Xu,
1244
+ C.; Xu, C.; and Gao, W. 2021. Pre-trained image processing trans-
1245
+ former. In Proceedings of IEEE Conference on Computer Vision
1246
+ and Pattern Recognition (CVPR).
1247
+ Chen, Q.; Li, D.; and Tang, C.-K. 2013. KNN matting. IEEE Trans-
1248
+ actions on Pattern Analysis and Machine Intelligence (TPAMI),
1249
+ 35(9): 2175–2188.
1250
+ Cohen, N.; and Shashua, A. 2017.
1251
+ Inductive bias of deep con-
1252
+ volutional networks through pooling geometry. In International
1253
+ Conference on Learning Representations (ICLR).
1254
+ Cover, T.; and Hart, P. 1967. Nearest neighbor pattern classifica-
1255
+ tion. IEEE Transactions on Information Theory, 13(1): 21–27.
1256
+ de Lutio, R.; Becker, A.; D’Aronco, S.; Russo, S.; Wegner, J. D.;
1257
+ and Schindler, K. 2022. Learning Graph Regularisation for Guided
1258
+ Super-Resolution. In Proceedings of IEEE Conference on Com-
1259
+ puter Vision and Pattern Recognition (CVPR).
1260
+ Dong, Q.; Cao, C.; and Fu, Y. 2022. Incremental transformer struc-
1261
+ ture enhanced image inpainting with masking positional encoding.
1262
+ In Proceedings of IEEE Conference on Computer Vision and Pat-
1263
+ tern Recognition (CVPR).
1264
+ Dong, X.; Yokoya, N.; Wang, L.; and Uezato, T. 2022.
1265
+ Learn-
1266
+ ing Mutual Modulation for Self-Supervised Cross-Modal Super-
1267
+ Resolution. In Proceedings of European Conference on Computer
1268
+ Vision (ECCV).
1269
+ Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai,
1270
+ X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.;
1271
+ Gelly, S.; et al. 2021. An image is worth 16x16 words: Transform-
1272
+ ers for image recognition at scale. In International Conference on
1273
+ Learning Representations (ICLR).
1274
+ Eldesokey, A.; Felsberg, M.; Holmquist, K.; and Persson, M. 2020.
1275
+ Uncertainty-aware cnns for depth completion: Uncertainty from
1276
+ beginning to end. In Proceedings of IEEE Conference on Com-
1277
+ puter Vision and Pattern Recognition (CVPR).
1278
+ Fang, F.; Li, J.; and Zeng, T. 2020. Soft-edge assisted network
1279
+ for single image super-resolution. IEEE Transactions on Image
1280
+ Processing (TIP), 29: 4656–4668.
1281
+ Fiedler, M. 1973. Algebraic connectivity of graphs. Czechoslovak
1282
+ mathematical journal, 23(2): 298–305.
1283
+ Fuoli, D.; Van Gool, L.; and Timofte, R. 2021. Fourier space losses
1284
+ for efficient perceptual image super-resolution. In Proceedings of
1285
+ International Conference on Computer Vision (ICCV).
1286
+ Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-
1287
+ Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Gen-
1288
+ erative adversarial nets. In Proceedings of the Neural Information
1289
+ Processing Systems (NeurIPS).
1290
+ Gu, S.; Zuo, W.; Guo, S.; Chen, Y.; Chen, C.; and Zhang, L. 2017.
1291
+ Learning dynamic guidance for depth image enhancement.
1292
+ In
1293
+ Proceedings of IEEE Conference on Computer Vision and Pattern
1294
+ Recognition (CVPR).
1295
+ Guo, X.; Li, Y.; Ma, J.; and Ling, H. 2018. Mutually guided im-
1296
+ age filtering. IEEE Transactions on Pattern Analysis and Machine
1297
+ Intelligence (TPAMI), 42(3): 694–707.
1298
+ Guo, X.; Yang, H.; and Huang, D. 2021. Image Inpainting via Con-
1299
+ ditional Texture and Structure Dual Generation. In Proceedings of
1300
+ International Conference on Computer Vision (ICCV).
1301
+ He, K.; Sun, J.; and Tang, X. 2012.
1302
+ Guided image filtering.
1303
+ IEEE Transactions on Pattern Analysis and Machine Intelligence
1304
+ (TPAMI), 35(6): 1397–1409.
1305
+ He, L.; Zhu, H.; Li, F.; Bai, H.; Cong, R.; Zhang, C.; Lin, C.; Liu,
1306
+ M.; and Zhao, Y. 2021. Towards fast and accurate real-world depth
1307
+ super-resolution: Benchmark dataset and baseline. In Proceedings
1308
+ of IEEE Conference on Computer Vision and Pattern Recognition
1309
+ (CVPR).
1310
+ Hendrycks, D.; and Gimpel, K. 2016. Gaussian error linear units
1311
+ (gelus). arXiv preprint arXiv:1606.08415.
1312
+ Hirschmuller, H.; and Scharstein, D. 2007. Evaluation of cost func-
1313
+ tions for stereo matching. In Proceedings of IEEE Conference on
1314
+ Computer Vision and Pattern Recognition (CVPR).
1315
+ Irwin, F.; et al. 1968. An isotropic 3x3 image gradient operator.
1316
+ Presentation at Stanford AI Project, 2014(02).
1317
+ Jin, L.; Xu, Y.; Zheng, J.; Zhang, J.; Tang, R.; Xu, S.; Yu, J.; and
1318
+ Gao, S. 2020. Geometric structure based and regularized depth
1319
+ estimation from 360 indoor imagery. In Proceedings of IEEE Con-
1320
+ ference on Computer Vision and Pattern Recognition (CVPR).
1321
+
1322
+ Johnson, J.; Alahi, A.; and Fei-Fei, L. 2016. Perceptual losses for
1323
+ real-time style transfer and super-resolution. In Proceedings of Eu-
1324
+ ropean Conference on Computer Vision (ECCV).
1325
+ Karras, T.; Laine, S.; and Aila, T. 2019. A style-based generator
1326
+ architecture for generative adversarial networks. In Proceedings
1327
+ of IEEE Conference on Computer Vision and Pattern Recognition
1328
+ (CVPR).
1329
+ Kim, B.; Ponce, J.; and Ham, B. 2021. Deformable kernel networks
1330
+ for joint image filtering. International Journal on Computer Vision
1331
+ (IJCV), 129(2): 579–600.
1332
+ Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic
1333
+ optimization. In International Conference on Learning Represen-
1334
+ tations (ICLR).
1335
+ Kodak, E. 1993. Kodak lossless true color image suite (PhotoCD
1336
+ PCD0992). Journal of Signal and Information Processing, 6.
1337
+ Krishnan, D.; and Fergus, R. 2009. Fast image deconvolution using
1338
+ hyper-Laplacian priors. In Proceedings of the Neural Information
1339
+ Processing Systems (NeurIPS).
1340
+ Krull, A.; Buchholz, T.-O.; and Jug, F. 2019. Noise2void-learning
1341
+ denoising from single noisy images. In Proceedings of IEEE Con-
1342
+ ference on Computer Vision and Pattern Recognition (CVPR).
1343
+ LeCun, Y.; Boser, B.; Denker, J. S.; Henderson, D.; Howard, R. E.;
1344
+ Hubbard, W.; and Jackel, L. D. 1989. Backpropagation applied to
1345
+ handwritten zip code recognition. Neural computation, 1(4): 541–
1346
+ 551.
1347
+ Lehtinen, J.; Munkberg, J.; Hasselgren, J.; Laine, S.; Karras, T.;
1348
+ Aittala, M.; and Aila, T. 2018.
1349
+ Noise2Noise: Learning image
1350
+ restoration without clean data. Proceedings of the International
1351
+ Conference on Machine Learning (ICML).
1352
+ Levin, A.; Fergus, R.; Durand, F.; and Freeman, W. T. 2007. Image
1353
+ and depth from a conventional camera with a coded aperture. ACM
1354
+ transactions on graphics (TOG), 26(3): 70–es.
1355
+ Levin, A.; Lischinski, D.; and Weiss, Y. 2007. A closed-form solu-
1356
+ tion to natural image matting. IEEE Transactions on Pattern Anal-
1357
+ ysis and Machine Intelligence (TPAMI), 30(2): 228–242.
1358
+ Levin, A.; Rav-Acha, A.; and Lischinski, D. 2008. Spectral mat-
1359
+ ting. IEEE Transactions on Pattern Analysis and Machine Intelli-
1360
+ gence (TPAMI), 30(10): 1699–1712.
1361
+ Li, J.; Lu, Z.; Zeng, G.; Gan, R.; and Zha, H. 2014. Similarity-
1362
+ aware patchwork assembly for depth image super-resolution. In
1363
+ Proceedings of IEEE Conference on Computer Vision and Pattern
1364
+ Recognition (CVPR).
1365
+ Li, Y.; Huang, J.-B.; Ahuja, N.; and Yang, M.-H. 2016. Deep joint
1366
+ image filtering. In Proceedings of European Conference on Com-
1367
+ puter Vision (ECCV).
1368
+ Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; and Timofte,
1369
+ R. 2021. Swinir: Image restoration using swin transformer. In Pro-
1370
+ ceedings of International Conference on Computer Vision (ICCV).
1371
+ Liu, H.; Wan, Z.; Huang, W.; Song, Y.; Han, X.; and Liao, J. 2021a.
1372
+ Pd-gan: Probabilistic diverse gan for image inpainting. In Proceed-
1373
+ ings of IEEE Conference on Computer Vision and Pattern Recog-
1374
+ nition (CVPR).
1375
+ Liu, Y.; Anwar, S.; Zheng, L.; and Tian, Q. 2020. Gradnet image
1376
+ denoising. In Proceedings of IEEE Conference on Computer Vision
1377
+ and Pattern Recognition Workshop (CVPRW).
1378
+ Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and
1379
+ Guo, B. 2021b. Swin transformer: Hierarchical vision transformer
1380
+ using shifted windows. In Proceedings of International Conference
1381
+ on Computer Vision (ICCV).
1382
+ Loshchilov, I.; and Hutter, F. 2019. Decoupled weight decay reg-
1383
+ ularization. In International Conference on Learning Representa-
1384
+ tions (ICLR).
1385
+ Lu, K.; Barnes, N.; Anwar, S.; and Zheng, L. 2020. From depth
1386
+ what can you see? Depth completion via auxiliary image recon-
1387
+ struction. In Proceedings of IEEE Conference on Computer Vision
1388
+ and Pattern Recognition (CVPR).
1389
+ Lutio, R. d.; D’aronco, S.; Wegner, J. D.; and Schindler, K. 2019.
1390
+ Guided super-resolution as pixel-to-pixel transformation. In Pro-
1391
+ ceedings of International Conference on Computer Vision (ICCV).
1392
+ Mäkinen, Y.; Azzari, L.; and Foi, A. 2019. Exact transform-domain
1393
+ noise variance for collaborative filtering of stationary correlated
1394
+ noise. In Proceedings of International Conference on Image Pro-
1395
+ cessing (ICIP).
1396
+ Martin, D.; Fowlkes, C.; Tal, D.; and Malik, J. 2001. A database
1397
+ of human segmented natural images and its application to evalu-
1398
+ ating segmentation algorithms and measuring ecological statistics.
1399
+ In Proceedings of International Conference on Computer Vision
1400
+ (ICCV).
1401
+ Moran, N.; Schmidt, D.; Zhong, Y.; and Coady, P. 2020. Nois-
1402
+ ier2noise: Learning to denoise from unpaired noisy data. In Pro-
1403
+ ceedings of IEEE Conference on Computer Vision and Pattern
1404
+ Recognition (CVPR).
1405
+ Movahedi, V.; and Elder, J. H. 2010. Design and perceptual vali-
1406
+ dation of performance measures for salient object segmentation. In
1407
+ Proceedings of IEEE Conference on Computer Vision and Pattern
1408
+ Recognition Workshop (CVPRW).
1409
+ Nazeri, K.; Ng, E.; Joseph, T.; Qureshi, F.; and Ebrahimi, M. 2019.
1410
+ Edgeconnect: Structure guided image inpainting using edge pre-
1411
+ diction. In Proceedings of International Conference on Computer
1412
+ Vision Workshop (ICCVW).
1413
+ Nazeri, K.; Thasarathan, H.; and Ebrahimi, M. 2019.
1414
+ Edge-
1415
+ informed single image super-resolution. In Proceedings of Inter-
1416
+ national Conference on Computer Vision Workshop (ICCVW).
1417
+ Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan,
1418
+ G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. 2019.
1419
+ Pytorch: An imperative style, high-performance deep learning li-
1420
+ brary. In Proceedings of the Neural Information Processing Sys-
1421
+ tems (NeurIPS).
1422
+ Pickup, L.; Roberts, S. J.; and Zisserman, A. 2003. A sampled tex-
1423
+ ture prior for image super-resolution. In Proceedings of the Neural
1424
+ Information Processing Systems (NeurIPS).
1425
+ Scharstein, D.; Hirschmüller, H.; Kitajima, Y.; Krathwohl, G.;
1427
+ Nešić, N.; Wang, X.; and Westling, P. 2014. High-resolution stereo
1427
+ datasets with subpixel-accurate ground truth. In German confer-
1428
+ ence on pattern recognition (GCPR), 31–42. Springer.
1429
+ Scharstein, D.; and Pal, C. 2007.
1430
+ Learning conditional random
1431
+ fields for stereo. In Proceedings of IEEE Conference on Computer
1432
+ Vision and Pattern Recognition (CVPR).
1433
+ Shi, J.; and Malik, J. 2000. Normalized cuts and image segmenta-
1434
+ tion. IEEE Transactions on Pattern Analysis and Machine Intelli-
1435
+ gence (TPAMI), 22(8): 888–905.
1436
+ Silberman, N.; and Fergus, R. 2011. Indoor scene segmentation
1437
+ using a structured light sensor.
1438
+ In Proceedings of International
1439
+ Conference on Computer Vision Workshop (ICCVW).
1440
+ Silberman, N.; Hoiem, D.; Kohli, P.; and Fergus, R. 2012. Indoor
1441
+ segmentation and support inference from rgbd images. In Proceed-
1442
+ ings of European Conference on Computer Vision (ECCV).
1443
+ Sun, J.; Xu, Z.; and Shum, H.-Y. 2008. Image super-resolution
1444
+ using gradient profile prior. In Proceedings of IEEE Conference on
1445
+ Computer Vision and Pattern Recognition (CVPR).
1446
+
1447
+ Suvorov, R.; Logacheva, E.; Mashikhin, A.; Remizova, A.;
1448
+ Ashukha, A.; Silvestrov, A.; Kong, N.; Goka, H.; Park, K.; and
1449
+ Lempitsky, V. 2022. Resolution-robust large mask inpainting with
1450
+ fourier convolutions. In Proceedings of the IEEE/CVF Winter Con-
1451
+ ference on Applications of Computer Vision (WACV).
1452
+ Tai, Y.-W.; Liu, S.; Brown, M. S.; and Lin, S. 2010. Super resolu-
1453
+ tion using edge prior and single image detail synthesis. In Proceed-
1454
+ ings of IEEE Conference on Computer Vision and Pattern Recog-
1455
+ nition (CVPR).
1456
+ Tang, C.; and Tan, P. 2019. Ba-net: Dense bundle adjustment net-
1457
+ work. In International Conference on Learning Representations
1458
+ (ICLR).
1459
+ Tibshirani, R. 1996. Regression shrinkage and selection via the
1460
+ lasso. Journal of the Royal Statistical Society: Series B (Method-
1461
+ ological), 58(1): 267–288.
1462
+ Tomasi, C.; and Manduchi, R. 1998. Bilateral filtering for gray
1463
+ and color images. In Proceedings of International Conference on
1464
+ Computer Vision (ICCV).
1465
+ Uhrig, J.; Schneider, N.; Schneider, L.; Franke, U.; Brox, T.; and
1466
+ Geiger, A. 2017. Sparsity Invariant CNNs. In International Con-
1467
+ ference on 3D Vision (3DV).
1468
+ Wang, X.; Xie, L.; Dong, C.; and Shan, Y. 2021.
1469
+ Real-esrgan:
1470
+ Training real-world blind super-resolution with pure synthetic data.
1471
+ In Proceedings of International Conference on Computer Vision
1472
+ (ICCV).
1473
+ Wu, X.; Liu, M.; Cao, Y.; Ren, D.; and Zuo, W. 2020. Unpaired
1474
+ learning of deep image denoising.
1475
+ In Proceedings of European
1476
+ Conference on Computer Vision (ECCV).
1477
+ Xie, J.; Feris, R. S.; and Sun, M.-T. 2015. Edge-guided single depth
1478
+ image super resolution. IEEE Transactions on Image Processing
1479
+ (TIP), 25(1): 428–438.
1480
+ Xu, Z.; Zhang, H.; Wang, Y.; Chang, X.; and Liang, Y. 2010. L1/2
1481
+ regularization. Science China Information Sciences, 53(6): 1159–
1482
+ 1169.
1483
+ Yang, J.; Qi, Z.; and Shi, Y. 2020. Learning to incorporate struc-
1484
+ ture knowledge for image inpainting. In Proceedings of the AAAI
1485
+ Conference on Artificial Intelligence (AAAI).
1486
+ Zhang, Q.; Shen, X.; Xu, L.; and Jia, J. 2014. Rolling guidance
1487
+ filter. In Proceedings of European Conference on Computer Vision
1488
+ (ECCV).
1489
+ Zhang, Y.; Li, D.; Law, K. L.; Wang, X.; Qin, H.; and Li, H. 2022.
1490
+ IDR: Self-Supervised Image Denoising via Iterative Data Refine-
1491
+ ment. In Proceedings of IEEE Conference on Computer Vision and
1492
+ Pattern Recognition (CVPR).
1493
+ Zhang, Z.; and Sabuncu, M. 2018. Generalized cross entropy loss
1494
+ for training deep neural networks with noisy labels. In Proceedings
1495
+ of the Neural Information Processing Systems (NeurIPS).
1496
+ Zhong, Y.; Yuan, B.; Wu, H.; Yuan, Z.; Peng, J.; and Wang, Y.-X.
1497
+ 2021. Pixel contrastive-consistent semi-supervised semantic seg-
1498
+ mentation. In Proceedings of International Conference on Com-
1499
+ puter Vision (ICCV).
1500
+ Zhu, J.-Y.; Park, T.; Isola, P.; and Efros, A. A. 2017. Unpaired
1501
+ image-to-image translation using cycle-consistent adversarial net-
1502
+ works. In Proceedings of International Conference on Computer
1503
+ Vision (ICCV).
1504
+
E9AyT4oBgHgl3EQfrPkW/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
FtE1T4oBgHgl3EQfqwWe/content/tmp_files/2301.03347v1.pdf.txt ADDED
@@ -0,0 +1,1020 @@
+ A Novel Waveform Design for OFDM-Based Joint Sensing and Communication System
+ Yi Geng
+ Cictmobile, China
+ Abstract—The dominating waveform in 5G is orthogonal frequency division multiplexing (OFDM). OFDM will remain a promising waveform candidate for joint communication and sensing (JCAS) in 6G, since OFDM can provide excellent data transmission capability and accurate sensing information. This paper proposes a novel OFDM-based diagonal waveform structure and a corresponding signal processing algorithm. This approach allocates the sensing signals along the diagonal of the time-frequency resource block; the sensing signals in this linear structure therefore span both the frequency and time domains. The range and velocity of an object can be estimated simultaneously by applying a 1D discrete Fourier transform (DFT) to the diagonal sensing signals. Compared to the conventional 2D-DFT OFDM radar algorithm, the computational complexity of the proposed algorithm is low. In addition, the sensing overhead can be substantially reduced. The performance of the proposed waveform is evaluated through simulation and analysis of the results.
+ Index Terms—OFDM, 6G, JCAS, radar, waveform, DFT, IDFT
+ I. INTRODUCTION
+ Sixth generation (6G) networks will not only be about communication. Joint communication and sensing (JCAS), which will enable a myriad of new use cases, is a significant area of interest within 6G research. To achieve a favorable trade-off between communication and sensing, the waveforms for 6G need to be designed for simultaneous communication and sensing [1]. Orthogonal frequency division multiplexing (OFDM) is a promising waveform candidate for JCAS systems. Compared to other JCAS waveform candidates, e.g., frequency modulated continuous wave (FMCW), the OFDM waveform naturally supports MIMO processing and has excellent communication performance [2]. In terms of sensing performance, OFDM is well-suited for range and velocity estimation. For example, an OFDM-based periodogram algorithm can obtain a range-velocity estimate by applying a 2D discrete Fourier transform (DFT) to signals in the modulation symbol domain [3] [4]. On the other hand, the OFDM waveform avoids extra hardware complexity and cost compared to dual-waveform systems (e.g., time-multiplexing of OFDM and FMCW) [3].
+ The paper is structured as follows. Section II gives an example of how to design the sensing signal structure according to the requirements of a traffic monitoring scenario. Section III provides an overview of the periodogram algorithm. In Section IV, we propose a novel diagonal waveform structure and a corresponding signal processing algorithm. Section V concludes the paper.
+ TABLE I: Exemplary KPIs for traffic monitoring
+ Requirement                | Symbol | Value
+ Range resolution           | ∆R     | 0.4 m
+ Velocity resolution        | ∆v     | 0.2 m/s
+ Maximum detection range    | Rmax   | 150 m
+ Maximum detection velocity | vmax   | 90 m/s
+ TABLE II: OFDM system parameters under the condition of satisfying the KPIs in Table I
+ System parameter                    | Symbol    | Value
+ Carrier frequency                   | fc        | 28 GHz
+ Bandwidth                           | B         | 400 MHz
+ Subcarrier spacing                  | SCS (∆f)  | 120 kHz
+ Total subcarriers                   | Nc        | 3360
+ OFDM symbol duration                | Tsym      | 8.92 µs
+ OFDM slot duration                  | Ts        | 0.125 ms
+ Time-domain duration                | Tb        | 240 Ts (30 ms)
+ Total symbols in Tb                 | Nsym      | 3360
+ Comb size in frequency domain       | Cf        | 7∆f
+ Comb size in time domain            | Ct        | 7 Tsym
+ Sensing signals in frequency domain | Nf        | 480
+ Sensing signals in time domain      | Nt        | 480
+ Sensing signals along diagonal      | N         | 480
+ II. WAVEFORM DESIGN FOR JCAS SYSTEM
+ The waveform design of an OFDM-based JCAS system depends on the key performance indicators (KPIs) requested by the sensing applications, such as range resolution ∆R, velocity resolution ∆v, maximum detection range Rmax, and maximum detection velocity vmax. For example, an OFDM-based JCAS system at 28 GHz carrier frequency (fc) with 120 kHz subcarrier spacing (SCS), which is deployed for traffic monitoring and communication simultaneously, is designed to meet the KPIs tabulated in Table I. To realize such a system, the OFDM system parameters that will be used in this paper are given in Table II. A waveform structure that meets the KPIs in Table I is shown in Fig. 1(a). To reduce the sensing overhead, the sensing subcarriers are assigned in a comb structure. Nf sensing signals are uniformly distributed at an interval of comb size Cf = 7∆f within the band B. Note that the Nf sensing signals occupy the full bandwidth B; hence no deterioration of range resolution occurs [5]. The range resolution of this waveform, ∆R, is given by
+ ∆R = c / (2B), (1)
+ where c is the speed of light.
+ Fig. 1: A comb structure of sensing signals to meet the KPIs in Table I. (a) A comb structure of sensing signals. (b) Consecutively transmitted blocks of sensing signals.
+ Fig. 2: Tx and Rx scheme of OFDM-based JCAS system.
+ arXiv:2301.03347v1 [cs.IT] 9 Jan 2023
+ The maximum unambiguous detection range of this interleaved scheme, Rmax, is reduced by a factor Nc/Nf compared to the maximum unambiguous range with a classical contiguous subcarrier allocation. Rmax is given by
+ Rmax = c Nf / (2∆f Nc). (2)
+ Similarly, in the time domain, Nt sensing signals are uniformly distributed at an interval of comb size Ct = 7 Tsym within 240 OFDM slots (30 ms). The sensing signals are transmitted at symbol 2 and symbol 9 of each slot. The velocity resolution of this waveform, ∆v, is thus
+ ∆v = c / (2 fc Tb). (3)
+ The maximum unambiguous velocity vmax is given by
+ vmax = c ∆f Nt / (2 fc Nsym). (4)
+ According to (1)-(4), a sensing system with the parameters in Table II is suitable to support the KPIs listed in Table I. One sensing block illustrated in Fig. 1(a) can be used to derive one range-velocity estimate. The sensing blocks can be transmitted consecutively to track the objects with an update rate of 33.3 Hz, as illustrated in Fig. 1(b).
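As a quick sanity check on (1)-(4), the Table II parameters can be plugged in directly. This is a minimal sketch; the speed of light is approximated as 3×10^8 m/s:

```python
# Verify that the Table II parameters satisfy the Table I KPIs via (1)-(4).
c = 3e8          # speed of light (m/s)
B = 400e6        # bandwidth (Hz)
fc = 28e9        # carrier frequency (Hz)
df = 120e3       # subcarrier spacing (Hz)
Nc = 3360        # total subcarriers
Nsym = 3360      # total symbols in Tb
Nf = Nt = 480    # sensing signals in frequency / time domain
Tb = 30e-3       # time-domain duration of one sensing block (s)

dR = c / (2 * B)                      # (1) range resolution
Rmax = c * Nf / (2 * df * Nc)         # (2) maximum unambiguous range
dv = c / (2 * fc * Tb)                # (3) velocity resolution
vmax = c * df * Nt / (2 * fc * Nsym)  # (4) maximum unambiguous velocity

print(dR, Rmax, dv, vmax)  # ≈ 0.375 m, 178.6 m, 0.179 m/s, 91.8 m/s
```

All four values meet the Table I requirements (∆R ≤ 0.4 m, Rmax ≥ 150 m, ∆v ≤ 0.2 m/s, vmax ≥ 90 m/s).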
+ III. OFDM RANGE-DOPPLER PROCESSING IN THE MODULATION SYMBOL DOMAIN
+ To date, various OFDM-based radar algorithms have been developed. In this section, a periodogram algorithm in the "modulation symbol" domain [4] is presented. Fig. 2 illustrates the transmitting and receiving block diagram of an OFDM-based JCAS system using the algorithm of [4] and the waveform shown in Fig. 1. At the Tx, the sensing signals are converted from serial to parallel, and each parallel symbol stream modulates a 120 kHz subcarrier. Over 30 ms, a 480×480 modulation symbol matrix DTx(m, n) is processed by an inverse fast Fourier transform (IFFT). Each row of DTx(m, n) represents a vector carrying Doppler information obtained from a sensing subcarrier; the indices of sensing subcarriers m in the frequency domain range from 0 to Nf − 1. Each column of DTx(m, n) represents a vector carrying range information obtained from a sensing symbol; the indices of sensing symbols n range from 0 to Nt − 1. After IFFT processing, CP insertion, and digital-to-analog conversion, the matrix DTx(m, n) is transmitted over the air.
+ The objects in the detection range affect the propagation of DTx(m, n). Therefore, the received modulation symbol matrix DRx(m, n) is the combination of DTx(m, n) and information about the objects. The user data carried by the sensing signals are eliminated by element-wise division between DRx(m, n) and DTx(m, n), yielding
+ D(m, n) = DRx(m, n) / DTx(m, n) = yR(m) ⊗ yD(n), (5)
+ where
+ yR(m) = exp(−j4π∆f R m / c), m = 0, ..., Nf − 1, (6)
+ yD(n) = exp(j4π Tsym fc v n / c), n = 0, ..., Nt − 1, (7)
+ and where D(m, n) is an Nf × Nt matrix, the operator ⊗ denotes the dyadic product, and yR(m) and yD(n) are an Nf × 1 vector and a 1 × Nt vector, respectively. R is the range between the JCAS antenna and the object, and v is the radial velocity of the object.
214
+ and the object, v is the radial velocity of the object.
215
+ The linear phase shifts of yR(m) and yD(n) carry the range
216
+ and Doppler information of the object. When a D(m, n)
217
+ is extracted from a sensing block, inverse discrete Fourier
218
+ transform (IDFT) and DFT are performed in the frequency
219
+ domain and time domain, respectively, to derive the range R
220
+ as well as the velocity v of the object [6] [7],
221
+ Yr(p) = IDFT(yR(m)) = 1
222
+ Nf
223
+ Nf−1
224
+
225
+ m=0
226
+ yR(m)exp(j2πmp
227
+ Nf
228
+ )
229
+ = 1
230
+ Nf
231
+ Nf−1
232
+
233
+ m=0
234
+ exp(−j4π∆fRm
235
+ c
236
+ )exp(j2πmp
237
+ Nf
238
+ ),
239
+ p = 0, · · · , Nf − 1
240
+ (8)
241
+
242
+ 3359
243
+ .....
244
+ Subcarrier No.
245
+ 21
246
+ 400 MHz
247
+ 14
248
+ 0
249
+ Symbol 2
250
+ Symbol 9
251
+ Symbol 2
252
+ Symbol 9
253
+ Slot 0
254
+ Slot 1
255
+ Slot 239
256
+ Sensing signal symbol
257
+ Tb = 30 ms
258
+ Communication symbolBlock 0
259
+ Block 1
260
+ Block 2
261
+ B = 400 MHz
262
+ t
263
+ Tp = 30 ms
264
+ toTx
265
+ △f
266
+ dTx(m,n)
267
+ Subcarriers
268
+ CP
269
+ S/P
270
+ IFFT
271
+ insertion
272
+ to RF Txi
273
+ = 1/△f
274
+ Sensing symbols
275
+ Rx
276
+ dRx (m,n)
277
+ CP
278
+ from RF Rx
279
+ FFT
280
+ P/S
281
+ removal
282
+ Sensing symbolsFig. 3: Sensing signal processing of a matrix D(m, n) with
283
+ application of 2D-DFT.
284
+ Yv(q) = DFT(yD(n)) =
285
+ Nt−1
286
+
287
+ n=0
288
+ yD(n)exp(−j2πnq
289
+ Nt
290
+ )
291
+ =
292
+ Nt−1
293
+
294
+ n=0
295
+ exp(j4πTsymfcvn
296
+ c
297
+ )exp(−j2πnq
298
+ Nt
299
+ ),
300
+ q = 0, · · · , Nt − 1
301
+ (9)
302
+ By performing IDFT and DFT, the phase shifts of yR(m)
303
+ and yD(n) are transformed from the time-frequency domain to
304
+ spectral peaks in the delay domain and Doppler domain [8].
305
+ Range R can be calculated by the IDFT bin index p in the
306
+ delay domain where a peak occurs. Similarly, velocity v can
307
+ be obtained by the DFT bin index q in the Doppler domain
308
+ where a peak occurs. The range and velocity can be calculated
309
+ by
310
+ R = cppeak
311
+ 2∆fNf
312
+ ,
313
+ (10)
314
+ v =
315
+ cqpeak
316
+ 2fcTsymNt
317
+ ,
318
+ (11)
319
+ where ppeak is the bin index at peak location in the delay
320
+ domain, qpeak is the bin index at peak location in the Doppler
321
+ domain.
322
+ Matrix D(m, n) in (5) can be rewritten as
323
+ D(m, n) =
324
+
325
+
326
+
327
+ D(0, 0)
328
+ . . .
329
+ D(0, Nt − 1)
330
+ ...
331
+ ...
332
+ ...
333
+ D(Nf − 1, 0)
334
+ . . .
335
+ D(Nf − 1, Nt − 1)
336
+
337
+
338
+ � .
339
+ (12)
340
+ 2D-DFT of D(m, n) can be performed to compute the range
341
+ and velocity of the object. It can be implemented in two stages:
342
+ column-by-column 1D-IDFTs of length Nf are proceeded after
343
+ row-by-row 1D-DFTs of length Nt as shown in Fig. 3. Then,
344
+ the 2D range-velocity periodogram can be obtained, giving an
345
+ intuitive indication of the reflecting objects.
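The two-stage processing of Fig. 3 can be sketched as a small simulation of a single point target. This is illustrative only: it takes the per-sample spacings in (6)-(7) literally as ∆f and Tsym, and R and v are chosen (an assumption) so the peaks fall near integer bins:

```python
import numpy as np

c, df, Tsym, fc = 3e8, 120e3, 8.92e-6, 28e9
Nf = Nt = 480
R_true, v_true = 62.5, 10.0            # simulated target

m = np.arange(Nf)[:, None]
n = np.arange(Nt)[None, :]
# D(m, n) = yR(m) ⊗ yD(n) per (5)-(7)
D = np.exp(-1j * 4 * np.pi * df * R_true * m / c) \
  * np.exp(1j * 4 * np.pi * Tsym * fc * v_true * n / c)

# (8)-(9): IDFT along the frequency axis, DFT along the time axis
P = np.fft.fft(np.fft.ifft(D, axis=0), axis=1)
p_peak, q_peak = np.unravel_index(np.argmax(np.abs(P)), P.shape)

# (10)-(11): map the peak bins back to range and velocity
R_hat = c * p_peak / (2 * df * Nf)
v_hat = c * q_peak / (2 * fc * Tsym * Nt)
```

Since only the integer peak-bin index is used, an off-grid target is quantized to the nearest bin, which is why v_hat differs slightly from v_true here.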
+ IV. A NOVEL WAVEFORM DESIGN AND SENSING SIGNAL PROCESSING ALGORITHM
+ A. Challenges
+ Despite its performance and practicality, the 2D-DFT-based periodogram method suffers from several drawbacks. First, the 2D-DFT calculation is computationally expensive. The complexity of an N-point DFT is O(N²). For example, applying a 2D-DFT to one matrix D(m, n) produced by the sensing block illustrated in Fig. 1(a) requires 960·O(N²) complex multiplications (480 row-wise DFTs and 480 column-wise IDFTs) to generate one range-velocity estimate. Since the sensing system is "unintelligent" and cannot predict the positions of objects in the detection range, it must scan the environment in each direction to localize and track the objects using narrow beams, leading to an extremely high computational complexity. Second, the 2D-DFT is calculated in stages, where all the results of the row-by-row 1D-DFTs must be available before the column-by-column 1D-IDFTs can be performed. Therefore, an additional intermediate cache is required, which increases the hardware cost, especially for large DFT sizes. Finally, the sensing signals must be transmitted in both the time and frequency domains in a comb structure, which induces high sensing overhead.
+ Fig. 4: A diagonal structure of sensing signals to meet the KPIs in Table I. (a) A diagonal of sensing signals. (b) Consecutively transmitted diagonals of sensing signals.
+ B. Method
+ To tackle these challenges, a novel OFDM-based sensing waveform structure and a corresponding signal processing algorithm are proposed. As shown in Fig. 4(a), a rectangular time-frequency block with a bandwidth of 400 MHz contains 3360 subcarriers in the frequency domain, and the block spans 30 ms in the time domain. Uniformly distributed sensing signals are allocated along the diagonal of the block. The time domain and the frequency domain contain the same number, N, of sensing signals. A diagonal of sensing signals produces a transmitted modulation symbol sequence dTx(k) of length N, spanning 240 slots in the time domain and 400 MHz in the frequency domain. The normalized modulation symbol vector d(k) can be obtained by element-wise division between the received modulation symbol sequence dRx(k) and dTx(k), yielding
+ d(k) = dRx(k) / dTx(k) = yR(k) ⊗ yD(k), k = 0, ..., N − 1, (13)
+ where
+ yR(k) = exp(−j4π∆f R k / c), k = 0, ..., N − 1, (14)
+ yD(k) = exp(j4π Tsym fc v k / c), k = 0, ..., N − 1. (15)
+ The range R of the object causes the phase shifts of the individual elements of vector yR(k), and the velocity v of the object causes the phase shifts of the individual elements of vector yD(k). Since vector yR(k) and vector yD(k) contain the same number of elements and are equally spaced in both the frequency domain and the time domain, the range and velocity of the object can be obtained simultaneously by performing a DFT of d(k), which yields
+ Yrv(l) = DFT(d(k)) = Σ_{k=0}^{N−1} yR(k) yD(k) exp(−j2πkl/N), l = 0, ..., N − 1. (16)
+ Fig. 5 shows the normalized DFT result of a modulation symbol vector d(k), which is derived from the sensing signals reflected by a moving object with range R = 40 m and velocity v = 5 m/s at time t0. The x-axis represents the DFT bin indices l; the number of bins equals the DFT size N, and the integer values on the x-axis correspond to the spectral frequencies sampled by the DFT. The DFT result shows a dual-peak-like profile. The bin indices at the two peak locations are l1 = 81 and l2 = 134. Two spectral frequencies fH and fL, which carry the range and Doppler information of the object, can be calculated by
+ fH = (l1 + l2) / 2, (17)
+ fL = (l2 − l1) / 2. (18)
+ Fig. 5: The radar image of an object with range of 40 m and velocity of 5 m/s using the proposed algorithm.
+ The DFT of d(k) translates the modulation signals in the time-frequency domain into two spectral peaks centered at fH and spaced at an interval of 2fL in the radar image. One of fH and fL contains the range information, whereas the other contains the velocity information. The range and velocity of the object can be extracted by
+ R = c fH / (2∆f Nc), v = c fL / (2 Tsym fc Nc), (19)
+ or
+ R = c fL / (2∆f Nc), v = c fH / (2 Tsym fc Nc). (20)
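The dual-peak geometry of Fig. 5 matches the product identity cos(a)·cos(b) = ½[cos(a+b) + cos(a−b)]. The sketch below reproduces that geometry under the assumption (not stated in the paper) that the range and Doppler terms enter as real-valued cosines, so the spectrum contains both the sum frequency l2 = fH + fL and the difference frequency l1 = fH − fL; integer frequencies near the Fig. 5 values are chosen for a leakage-free illustration:

```python
import numpy as np

N, Nc = 480, 3360
c, df, Tsym, fc = 3e8, 120e3, 8.92e-6, 28e9

# Normalized spectral frequencies of the range and Doppler terms
# (illustrative integers close to the Fig. 5 scenario).
f_range, f_doppler = 107, 27

k = np.arange(N)
# Real-valued range and Doppler terms: the product contains the
# sum and difference frequencies f_range ± f_doppler.
d = np.cos(2 * np.pi * f_range * k / N) * np.cos(2 * np.pi * f_doppler * k / N)

spec = np.abs(np.fft.fft(d))[: N // 2 + 1]     # keep non-negative frequencies
l1, l2 = sorted(np.argsort(spec)[-2:])         # two strongest bins
fH, fL = (l1 + l2) / 2, (l2 - l1) / 2          # (17), (18)

# (19): one of the two candidate range-velocity estimates
R_hat = c * fH / (2 * df * Nc)
v_hat = c * fL / (2 * Tsym * fc * Nc)
```

The recovered pair lands close to the (R = 40 m, v = 5 m/s) scenario of Fig. 5; the mirror assignment of fH and fL per (20) gives the second, ambiguous candidate.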
+ As a side effect of the proposed algorithm, using (19) and (20) yields two range-velocity estimates from one block of sensing signals: a correct range-velocity estimate and an incorrect one. For example, two range-velocity estimates can be extracted from the bin indices l1 = 81 and l2 = 134 shown in Fig. 5, referred to as estimate A and estimate B below.
+ • Estimate A: R = 40 m and v = 5 m/s
+ • Estimate B: R = 10 m and v = 20 m/s
+ By using the proposed method, an object with R = 40 m and v = 5 m/s produces the same radar image as another object with R = 10 m and v = 20 m/s. The uncertainty is caused by the unlabeled bin indices l1 and l2 at the peak locations: for the spectral frequencies fH and fL, it is uncertain which one indicates the range and which one indicates the velocity. These two estimates cannot be discerned using the information from one sensing block. A multi-temporal data fusion method can be performed to filter out the incorrect estimate. The main idea is as follows: based on the two range-velocity estimates acquired in the recent past, the range-velocity estimate at the present time can be predicted, and the incorrect estimate can be identified by comparing the actual estimates obtained at the present time with the predicted estimates.
+ A traffic monitoring scenario is considered to show how the incorrect estimate is identified. We assume that the vehicles in the scenario comply with the linear motion model, which is popularly used in traffic research [9]. The linear motion model is a motion model in which an object's velocity or acceleration is held constant within a span of time, provided that the time span is sufficiently short. We assume the JCAS system has no prior range-velocity information on the vehicles. The vehicle's maximum acceleration (a) is assumed to be 5.4 m/s², corresponding to accelerating from 0 to 60 mph in 5 seconds (the acceleration of high-performance cars). The consecutive sensing blocks are transmitted at time 0 ms (t0), 30 ms (t1), 60 ms (t2), 90 ms (t3), 120 ms (t4), and so on. For the linear motion model with constant velocity, the range and velocity of the vehicle at time t1-t4 can be predicted based on estimate A and estimate B obtained at time t0. Similarly, the range and velocity of the vehicle at time t1-t4 can be obtained for the linear motion model with a constant acceleration of 5.4 m/s².
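The prediction-and-compare step of the multi-temporal data fusion can be sketched as follows. This is a minimal constant-velocity example; the "observed" trajectory is generated synthetically (an assumption) from estimate A to show how the diverging candidate is filtered out:

```python
# Constant-velocity prediction of the range for the two candidate
# estimates, and selection of the candidate matching the observations.
def predict_range(R0, v0, t):
    """Linear motion model with constant velocity; the range closes at v0."""
    return R0 - v0 * t

candidates = {"A": (40.0, 5.0), "B": (10.0, 20.0)}   # (R0 m, v0 m/s)
times = [0.03, 0.06, 0.09, 0.12]                     # t1..t4 in seconds

# Observed ranges at t1..t4 (here the object really follows estimate A)
observed = [predict_range(40.0, 5.0, t) for t in times]

def trajectory_error(R0, v0):
    # Sum of absolute range prediction errors over t1..t4
    return sum(abs(predict_range(R0, v0, t) - Ro)
               for t, Ro in zip(times, observed))

best = min(candidates, key=lambda name: trajectory_error(*candidates[name]))
print(best)  # candidate B diverges quickly and is filtered out
```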
+ The amplitude of the spectral peak in the radar image is determined by the range R, since R affects the received signal power PR reflected by the object. PR is formulated as
+ PR = PTx GTx GRx σ λ² / ((4π)³ R⁴ fc²), (21)
+ where PTx is the transmitted power, GTx is the Tx antenna gain, GRx is the Rx antenna gain, σ is the RCS of the object, and λ is the wavelength of the carrier.
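Since PR scales as 1/R⁴ in (21), the predicted peak-amplitude change relative to t0 follows 40·log10(R(t0)/R(t)) dB. A quick check against the Ap column of Table III:

```python
from math import log10

def amp_change_db(R0, Rt):
    # Received power scales as R^-4, so the amplitude change in dB
    # relative to the range R0 at t0 is 40*log10(R0/Rt).
    return 40 * log10(R0 / Rt)

# Estimate A (constant v): R goes from 40 m at t0 to 39.4 m at t4.
# Estimate B (constant v): R goes from 10 m at t0 to 7.6 m at t4.
a_t4 = round(amp_change_db(40.0, 39.4), 2)
b_t4 = round(amp_change_db(10.0, 7.6), 2)
print(a_t4, b_t4)  # 0.26 and 4.77 dB, matching Table III
```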
+ Fig. 6: Predicted radar images at time t1-t4 based on estimate A and estimate B obtained at time t0. (a) Real radar image for a vehicle with R = 40 m and constant v = 5 m/s at time t0, and predicted radar images at time t0-t4. (b) Real radar image for a vehicle with R = 10 m and constant v = 20 m/s at time t0, and predicted radar images at time t0-t4. (c) Real radar image for a vehicle with R = 40 m, v = 5 m/s and constant acceleration 5.4 m/s² at time t0, and predicted radar images at time t0-t4. (d) Real radar image for a vehicle with R = 10 m, v = 20 m/s and constant acceleration 5.4 m/s² at time t0, and predicted radar images at time t0-t4.
+ TABLE III: The predicted range and velocity of the vehicle at time t1-t4 based on estimate A and estimate B obtained at time t0
+ Estimate                   | t (ms)   | R(t) (m) | v(t) (m/s) | Ap(t) (dB)
+ Estimate A with constant v | t0 = 0   | 40       | 5          | 0
+                            | t1 = 30  | 39.9     | 5          | 0.06
+                            | t2 = 60  | 39.7     | 5          | 0.13
+                            | t3 = 90  | 39.6     | 5          | 0.19
+                            | t4 = 120 | 39.4     | 5          | 0.26
+ Estimate B with constant v | t0 = 0   | 10       | 20         | 0
+                            | t1 = 30  | 9.4      | 20         | 1.07
+                            | t2 = 60  | 8.8      | 20         | 2.22
+                            | t3 = 90  | 8.2      | 20         | 3.45
+                            | t4 = 120 | 7.6      | 20         | 4.77
+ Estimate A with constant a | t0 = 0   | 40       | 5          | 0
+                            | t1 = 30  | 39.9     | 5.2        | 0.06
+                            | t2 = 60  | 39.7     | 5.3        | 0.13
+                            | t3 = 90  | 39.5     | 5.5        | 0.2
+                            | t4 = 120 | 38.4     | 5.7        | 0.28
+ Estimate B with constant a | t0 = 0   | 10       | 20         | 0
+                            | t1 = 30  | 9.4      | 20.2       | 1.08
+                            | t2 = 60  | 8.8      | 20.3       | 2.24
+                            | t3 = 90  | 8.2      | 20.5       | 3.49
+                            | t4 = 120 | 7.6      | 20.7       | 4.86
+ Based on estimate A and estimate B obtained at time t0 and the linear motion model, the predicted ranges R(t), velocities v(t), and normalized peak amplitudes Ap(t) at time t1-t4 are tabulated in Table III. Ap(t0) is normalized to 0 dB. The real radar images calculated at time t0 and the predicted radar images at time t1-t4 are depicted in Fig. 6. The blue dashed lines indicate the real radar images yielded by the sensing block transmitted at time t0, while the predicted radar images at time t1-t4 are visualized with assorted colors. It can be observed that although a vehicle with R = 40 m and v = 5 m/s produces the same radar image as a vehicle with R = 10 m and v = 20 m/s at time t0, the subsequent radar images at time t1-t4 show different patterns. Fig. 6(a) shows only minor peak shifts over the elapsed time, because the constant velocity leaves fL unchanged, and the quite low velocity (5 m/s) causes only minor range changes at time t1-t4; hence, the changes in fH are less significant. The bin indices at the peak locations at time t0-t4 stay within 79-81 and 133-134, as shown in the zoomed-in sections of Fig. 6(a). Also, the peak amplitude increases only slightly, from 0 dB at t0 to 0.26 dB at t4: the minor range changes cause small received signal power changes, which, in turn, lead to slight peak amplitude fluctuation.
+ It can be observed that the radar images in Fig. 6(b) are quite different from those in Fig. 6(a). The constant velocity produces an unchanged spectral frequency fH in Fig. 6(b). Meanwhile, the high velocity (v = 20 m/s) leads to significant changes in the range, i.e., major changes in the spectral frequency fL. The bin indices at the peak locations at time t0-t4 range within 80-88 and 128-134, which are distinctly different from the bin indices in Fig. 6(a). Furthermore, due to the significant distance changes, the peak amplitude increases rapidly from 0 dB at t0 to 4.77 dB at t4. Therefore, for constant velocity, the correct range-velocity estimate can be identified by comparing the bin indices and peak amplitudes from the predicted radar images at time t1-t4 with the bin indices and peak amplitudes from the actual radar images obtained at time t1-t4.
+ Although the acceleration of vehicles is utterly unpredictable in the traffic monitoring scenario, it is limited: the maximum acceleration of high-performance cars is 5.4 m/s². For the linear motion model with constant acceleration 5.4 m/s², the predicted radar images at time t1-t4 are still significantly different between estimate A and estimate B, as shown in Fig. 6(c) and Fig. 6(d). The non-zero acceleration induces changes in both fH and fL. For estimate A, the velocity changes slowly due to the limited acceleration, and the range also changes slowly due to the low acceleration and low velocity (5 m/s) at time t0. Therefore, the vehicle yields minor changes in both fH and fL at t1-t4, as shown in Fig. 6(c). For a vehicle with R = 10 m, v = 20 m/s, and a = 5.4 m/s² at time t0, enormous range changes due to the high velocity induce major changes in fL at time t1-t4. The major changes in fL shift the peaks significantly, leading to a sparser peak pattern in Fig. 6(d) than in Fig. 6(c). Therefore, the velocity is crucial for filtering out the incorrect estimates, since the range change depends on the velocity; the limited acceleration in the traffic scenario is a minor factor in changing the peaks in the radar image.
+ C. Discussions
+ It has been mentioned that the signal processing complexity of applying a 2D-DFT is high. An OFDM sensing system with the parameters in Table II needs 2N × O(N²) complex multiplications (480 row-wise DFTs and 480 column-wise IDFTs) to generate a range-velocity estimate. In contrast, the computational complexity of the proposed algorithm is relatively low because it avoids the DFT calculations over all rows and all columns.
+ Since the phase shift of the modulation signals along the frequency axis can extract the delay, and the phase shift of the modulation signals along the time axis can extract the Doppler, the signaling overhead of the proposed diagonal structure is reduced by combining the phase shifts along both the frequency axis and the time axis into a single series of sensing signals along the diagonal of the resource block; hence the range and velocity can be derived simultaneously. The sensing overhead of the conventional comb structure is N²/Nc². The sensing overhead of the proposed diagonal structure is N/Nc², reducing the sensing overhead by a factor of N.
+ The only minor disadvantage of the proposed algorithm is that the two unlabeled spectral peaks in the radar image cause ambiguity in the range-velocity estimates. The proposed multi-temporal data fusion method can be used to resolve this ambiguity. Furthermore, stationary objects do not create ambiguity under the proposed algorithm. For example, Fig. 7 shows the radar image of a stationary object with a range of 40 m and a velocity of 0 m/s. Another object with a range of 0 m and a velocity of 20 m/s would produce the same radar image as shown in Fig. 7. However, such an object would be in direct contact with the antenna of the JCAS system, since its range R is zero, which is kinematically infeasible. Therefore, the proposed algorithm can derive the range-velocity estimate of a stationary object without ambiguity by using only one diagonal block of sensing signals.
+ Fig. 7: The radar image of an object with range of 40 m and velocity of 0 m/s.
+ V. CONCLUSIONS
+ The typical processing algorithm of OFDM radar applies a 2D-DFT to extract the range and velocity information. This paper proposes a diagonal waveform structure and a corresponding signal processing algorithm. With this approach, two advantages are achieved. First, the signal processing algorithm is simple in terms of computational complexity and memory requirements. Second, the sensing overhead is significantly reduced by the proposed waveform structure. The minor disadvantage is that the proposed approach obtains the range and velocity in a coupled manner; therefore, range-velocity ambiguity may occur. A multi-temporal data fusion method can be performed to resolve the ambiguity. The simulations have demonstrated the operability of the proposed waveform and signal processing algorithm.
951
+ REFERENCES
+ [1] H. Wymeersch et al., “Deliverable D3.1 Localisation and sensing use cases and gap analysis,” Hexa-X, Dec. 31, 2021.
+ [2] F. Liu, C. Masouros, A. P. Petropulu, H. Griffiths and L. Hanzo, “Joint Radar and Communication Design: Applications, State-of-the-Art, and the Road Ahead,” IEEE Transactions on Communications, vol. 68, no. 6, pp. 3834-3862, June 2020, doi: 10.1109/TCOMM.2020.2973976.
+ [3] T. Wild, V. Braun and H. Viswanathan, “Joint Design of Communication and Sensing for Beyond 5G and 6G Systems,” IEEE Access, vol. 9, pp. 30845-30857, 2021, doi: 10.1109/ACCESS.2021.3059488.
+ [4] C. Sturm and W. Wiesbeck, “Waveform Design and Signal Processing Aspects for Fusion of Wireless Communications and Radar Sensing,” Proceedings of the IEEE, vol. 99, no. 7, pp. 1236-1259, July 2011, doi: 10.1109/JPROC.2011.2131110.
+ [5] C. Sturm, Y. Sit, M. Braun and T. Zwick, “Spectrally interleaved multi-carrier signals for radar network applications and multi-input multi-output radar,” IET Radar, Sonar and Navigation, 2012, doi: 10.1049/iet-rsn.2012.0040.
+ [6] C. Sturm, E. Pancera, T. Zwick and W. Wiesbeck, “A novel approach to OFDM radar processing,” 2009 IEEE Radar Conference, 2009, pp. 1-4, doi: 10.1109/RADAR.2009.4977002.
+ [7] C. Sturm, M. Braun, T. Zwick and W. Wiesbeck, “A multiple target doppler estimation algorithm for OFDM based intelligent radar systems,” The 7th European Radar Conference, 2010, pp. 73-76.
+ [8] A. Behravan et al., “Introducing sensing into future wireless communication systems,” 2022 2nd IEEE International Symposium on Joint Communications & Sensing (JC&S), 2022, pp. 1-5, doi: 10.1109/JCS54387.2022.9743513.
+ [9] R. Schubert, E. Richter and G. Wanielik, “Comparison and evaluation of advanced motion models for vehicle tracking,” 2008 11th International Conference on Information Fusion, 2008, pp. 1-6.
+
FtE1T4oBgHgl3EQfqwWe/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,401 @@
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf, len=400
+ A Novel Waveform Design for OFDM-Based Joint Sensing and Communication System
+ Yi Geng, Cictmobile, China, gengyi@cictmobile.com
+ [arXiv:2301.03347v1 [cs.IT] 9 Jan 2023]
+ Abstract—The dominating waveform in 5G is orthogonal frequency division multiplexing
+ (OFDM). OFDM will remain a promising waveform candidate for joint communication and
+ sensing (JCAS) in 6G, since OFDM can provide excellent data transmission capability
+ and accurate sensing information. This paper proposes a novel OFDM-based diagonal
+ waveform structure and a corresponding signal processing algorithm. This approach
+ allocates the sensing signals along the diagonal of the time-frequency resource block,
+ so the sensing signals in a linear structure span both the frequency and time domains.
+ The range and velocity of an object can be estimated simultaneously by applying a
+ 1D discrete Fourier transform (DFT) to the diagonal sensing signals. Compared to the
+ conventional 2D-DFT OFDM radar algorithm, the computational complexity of the proposed
+ algorithm is low. In addition, the sensing overhead can be substantially reduced. The
+ performance of the proposed waveform is evaluated by simulation and analysis of the
+ results.
+ Index Terms—OFDM, 6G, JCAS, radar, waveform, DFT, IDFT
+ I. INTRODUCTION
+ Sixth generation (6G) will not be only about communication. Joint communication and
+ sensing (JCAS), which will enable a myriad of new use cases, is a significant area of
+ interest within 6G research. To achieve a favorable trade-off between communication
+ and sensing, the waveforms for 6G need to be designed for simultaneous communication
+ and sensing [1]. Orthogonal frequency division multiplexing (OFDM) is a promising
+ waveform candidate for JCAS systems. Compared to other JCAS waveform candidates, e.g.,
+ frequency modulated continuous wave (FMCW), the OFDM waveform naturally supports MIMO
+ processing and has excellent communication performance [2]. In terms of sensing
+ performance, OFDM is well-suited for range and velocity estimation. For example, an
+ OFDM-based periodogram algorithm can obtain a range-velocity estimate by applying a
+ 2D discrete Fourier transform (DFT) to signals in the modulation symbol domain [3]
+ [4]. On the other hand, the OFDM waveform avoids extra hardware complexity and cost
+ compared to dual-waveform systems (e.g., time-multiplexing of OFDM and FMCW) [3].
+ The paper is structured as follows. Section II gives an example of how to design the
+ sensing signal structure according to the requirements of a traffic monitoring
+ scenario. Section III provides an overview of the periodogram algorithm. In Section
+ IV, we propose a novel diagonal waveform structure and the corresponding signal
+ processing algorithm. Section V concludes the paper.
+ TABLE I: Exemplary KPIs for traffic monitoring
+ Range resolution ∆R: 0.4 m | Velocity resolution ∆v: 0.2 m/s |
+ Maximum detection range Rmax: 150 m | Maximum detection velocity vmax: 90 m/s
+ TABLE II: OFDM system parameters under the condition of satisfying the KPIs in Table I
+ Carrier frequency fc: 28 GHz | Bandwidth B: 400 MHz | Subcarrier spacing SCS (∆f):
+ 120 kHz | Total subcarriers Nc: 3360 | OFDM symbol duration Tsym: 8.92 µs | OFDM slot
+ duration Ts: 0.125 ms | Time-domain duration Tb: 240Ts (30 ms) | Total symbols in Tb
+ Nsym: 3360 | Comb size in frequency domain Cf: 7∆f | Comb size in time domain Ct:
+ 7Tsym | Sensing signals in frequency domain Nf: 480 | Sensing signals in time domain
+ Nt: 480 | Sensing signals along diagonal N: 480
+ II. WAVEFORM DESIGN FOR JCAS SYSTEM
+ The waveform design of an OFDM-based JCAS system depends on the key performance
+ indicators (KPIs) requested by the sensing applications, such as range resolution ∆R,
+ velocity resolution ∆v, maximum detection range Rmax, and maximum detection velocity
+ vmax. For example, an OFDM-based JCAS system at 28 GHz carrier frequency (fc) with
+ 120 kHz subcarrier spacing (SCS), deployed for traffic monitoring and communication
+ simultaneously, is designed to meet the KPIs tabulated in Table I. To realize such a
+ system, the OFDM system parameters used in this paper are given in Table II.
+ A waveform structure to meet the KPIs in Table I is shown in Fig. 1(a). To reduce the
+ sensing overhead, the sensing subcarriers are assigned in a comb structure. Nf sensing
+ signals are uniformly distributed at an interval of comb size Cf = 7∆f within the band
+ B. Note that the Nf sensing signals occupy the full bandwidth B, hence no
+ deterioration of range resolution occurs [5]. The range resolution of this waveform is
+ ∆R = c / (2B), (1)
+ where c is the speed of light.
+ The maximum unambiguous detection range of this interleaved scheme is reduced by a
+ factor Nc/Nf compared to the maximum unambiguous range with a classical contiguous
+ subcarrier allocation. Rmax is given by
+ Rmax = c Nf / (2 ∆f Nc). (2)
+ Similarly, in the time domain, Nt sensing signals are uniformly distributed at an
+ interval of comb size Ct = 7Tsym within 240 OFDM slots (30 ms); the sensing signals
+ are transmitted at symbol 2 and symbol 9 of each slot. The velocity resolution of this
+ waveform is thus
+ ∆v = c / (2 fc Tb). (3)
+ The maximum unambiguous velocity is given by
+ vmax = c ∆f Nt / (2 fc Nsym). (4)
+ According to (1)-(4), a sensing system with the parameters in Table II would be
+ suitable to support the KPIs listed in Table I. One sensing block illustrated in
+ Fig. 1(a) can be used to derive one range-velocity estimate. The sensing blocks can be
+ transmitted consecutively to track the objects with an update rate of 33.3 Hz, as
+ illustrated in Fig. 1(b).
+ Fig. 1: A comb structure of sensing signals to meet the KPIs in Table I. (a) A comb
+ structure of sensing signals. (b) Consecutively transmitted blocks of sensing signals.
+ Fig. 2: Tx and Rx scheme of OFDM-based JCAS system.
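The design equations (1)-(4) can be sanity-checked against the Table I KPIs. A quick sketch using the Table II parameter values; the check itself is my own, not from the paper:

```python
# Verify that the Table II parameters satisfy the Table I KPIs via (1)-(4).
c = 3e8            # speed of light, m/s
B = 400e6          # bandwidth, Hz
delta_f = 120e3    # subcarrier spacing, Hz
fc = 28e9          # carrier frequency, Hz
Tb = 30e-3         # time-domain duration, s
Nc = 3360          # total subcarriers
Nsym = 3360        # total symbols in Tb
Nf = Nt = 480      # sensing signals per domain

delta_R = c / (2 * B)                         # (1) -> 0.375 m    (KPI: 0.4 m)
R_max = c * Nf / (2 * delta_f * Nc)           # (2) -> ~178.6 m   (KPI: 150 m)
delta_v = c / (2 * fc * Tb)                   # (3) -> ~0.179 m/s (KPI: 0.2 m/s)
v_max = c * delta_f * Nt / (2 * fc * Nsym)    # (4) -> ~91.8 m/s  (KPI: 90 m/s)

print(delta_R, R_max, delta_v, v_max)
```

All four derived values meet or exceed the corresponding KPI, matching the paper's claim that this parameter set supports the traffic-monitoring scenario.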
+ III. OFDM RANGE-DOPPLER PROCESSING IN THE MODULATION SYMBOL DOMAIN
+ To date, various OFDM-based radar algorithms have been developed. In this section, a
+ periodogram algorithm in the "modulation symbol" domain [4] is presented. Fig. 2
+ illustrates the transmitting and receiving block diagram of an OFDM-based JCAS system
+ using the algorithm of [4] and the waveform shown in Fig. 1. At Tx, sensing signals
+ are converted from serial to parallel, and each parallel symbol stream modulates a
+ 120 kHz subcarrier. Over 30 ms, a 480×480 modulation symbol matrix DTx(m, n) is
+ processed by inverse fast Fourier transform (IFFT). Each row of DTx(m, n) is a vector
+ carrying Doppler information obtained from a sensing subcarrier; the indices of
+ sensing subcarriers m in the frequency domain range from 0 to Nf − 1. Each column of
+ DTx(m, n) is a vector carrying range information obtained from a sensing symbol; the
+ indices of sensing symbols n range from 0 to Nt − 1. After IFFT processing, CP
+ insertion, and digital-to-analog conversion, the matrix DTx(m, n) is transmitted over
+ the air.
+ The objects in the detection range affect the propagation of DTx(m, n). Therefore, the
+ received modulation symbol matrix DRx(m, n) is the combination of DTx(m, n) and the
+ information of the objects. The user data carried by the sensing signals are
+ eliminated by element-wise division between DRx(m, n) and DTx(m, n), yielding
+ D(m, n) = DRx(m, n) / DTx(m, n) = yR(m) ⊗ yD(n), (5)
+ where
+ yR(m) = exp(−j 4π ∆f R m / c), m = 0, ..., Nf − 1, (6)
+ yD(n) = exp(j 4π Tsym fc v n / c), n = 0, ..., Nt − 1, (7)
+ where D(m, n) is an Nf × Nt matrix, the operator ⊗ denotes the dyadic product, and
+ yR(m) and yD(n) are an Nf × 1 vector and a 1 × Nt vector, respectively. R is the range
+ between the JCAS antenna and the object, and v is the radial velocity of the object.
+ The linear phase shifts of yR(m) and yD(n) carry the range and Doppler information of
+ the object.
+ When a D(m, n) is extracted from a sensing block, an inverse discrete Fourier
+ transform (IDFT) and a DFT are performed in the frequency domain and time domain,
+ respectively, to derive the range R as well as the velocity v of the object [6] [7]:
+ Yr(p) = IDFT(yR(m)) = (1/Nf) Σ_{m=0}^{Nf−1} yR(m) exp(j 2π m p / Nf)
+       = (1/Nf) Σ_{m=0}^{Nf−1} exp(−j 4π ∆f R m / c) exp(j 2π m p / Nf),
+       p = 0, ..., Nf − 1, (8)
+ Yv(q) = DFT(yD(n)) = Σ_{n=0}^{Nt−1} yD(n) exp(−j 2π n q / Nt)
+       = Σ_{n=0}^{Nt−1} exp(j 4π Tsym fc v n / c) exp(−j 2π n q / Nt),
+       q = 0, ..., Nt − 1. (9)
+ By performing the IDFT and DFT, the phase shifts of yR(m) and yD(n) are transformed
+ from the time-frequency domain into spectral peaks in the delay domain and Doppler
+ domain [8]. Range R can be calculated from the IDFT bin index p in the delay domain
+ where a peak occurs; similarly, velocity v can be obtained from the DFT bin index q in
+ the Doppler domain where a peak occurs:
+ R = c ppeak / (2 ∆f Nf), (10)
+ v = c qpeak / (2 fc Tsym Nt), (11)
+ where ppeak is the bin index at the peak location in the delay domain and qpeak is the
+ bin index at the peak location in the Doppler domain.
+ Matrix D(m, n) in (5) can be written out as
+ D(m, n) = [ D(0, 0) ... D(0, Nt−1) ; ... ; D(Nf−1, 0) ... D(Nf−1, Nt−1) ]. (12)
+ A 2D-DFT of D(m, n) can be performed to compute the range and velocity of the object.
+ It can be implemented in two stages: column-by-column 1D-IDFTs of length Nf are
+ performed after row-by-row 1D-DFTs of length Nt, as shown in Fig. 3. Then the 2D
+ range-velocity periodogram can be obtained, giving an intuitive indication of the
+ reflecting objects.
+ Fig. 3: Sensing signal processing of a matrix D(m, n) with application of 2D-DFT.
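The two-stage processing of (5)-(11) can be sketched numerically. The sketch below is mine, not the paper's code: it uses reduced sizes (Nf = Nt = 64 instead of 480), assumed effective spacings `df` and `Ts` between sensing signals, and object parameters chosen to land exactly on DFT bins so the peak indices recover them without leakage:

```python
import numpy as np

# Toy periodogram per (5)-(11): build D(m, n) = yR(m) ⊗ yD(n), then IDFT along
# the subcarrier axis and DFT along the symbol axis, and read off the peak bins.
c = 3e8
Nf = Nt = 64
df = 840e3     # effective sensing-subcarrier spacing (assumed value)
Ts = 8.92e-6   # effective sensing-symbol spacing (assumed value)
fc = 28e9

p_true, q_true = 5, 9                        # target bin indices (chosen)
R_true = c * p_true / (2 * df * Nf)          # range that lands on bin p_true
v_true = c * q_true / (2 * fc * Ts * Nt)     # velocity that lands on bin q_true

m = np.arange(Nf)[:, None]
n = np.arange(Nt)[None, :]
yR = np.exp(-1j * 4 * np.pi * df * R_true * m / c)      # (6)
yD = np.exp(1j * 4 * np.pi * Ts * fc * v_true * n / c)  # (7)
D = yR * yD                                             # (5), dyadic product

# Two-stage 2D transform: IDFT over rows' axis (delay), DFT over columns' axis (Doppler).
P = np.abs(np.fft.fft(np.fft.ifft(D, axis=0), axis=1)) ** 2
p_peak, q_peak = np.unravel_index(np.argmax(P), P.shape)

R_est = c * p_peak / (2 * df * Nf)        # (10)
v_est = c * q_peak / (2 * fc * Ts * Nt)   # (11)
print(p_peak, q_peak)
```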
109
+ page_content=' IV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
110
+ page_content=' A NOVEL WAVEFORM DESIGN AND SENSING SIGNAL PROCESSING ALGORITHM A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
111
+ page_content=' Challenges Despite its performance and practicality, the 2D-DFT-based periodogram method suffers from several drawbacks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
112
+ page_content=' First, the 2D-DFT calculation is computationally expensive.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
113
+ page_content=' The complexity of an N-point DFT is O(N 2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
114
+ page_content=' For example, to apply 2D-DFT to one matrix D(m, n) produced by the sensing block illustrated in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
115
+ page_content=' 1(a), 960O(N 2) complex multiplications (480 DFTs in row and 480 IDFTs in column) are needed to generate one range-velocity estimate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
116
+ page_content=' Since (a) A diagonal of sensing signals (b) Consecutively transmitted diagonals of sensing signals Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
117
+ page_content=' 4: A diagonal structure of sensing signals to meet the KPIs in Table I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
118
+ page_content=' the sensing system is “unintelligent” and cannot predict the positions of objects in the detection range, it must scan the environment in each direction to localize and track the objects using narrow beams, leading to an extremely high computational complexity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
119
+ page_content=' Second, 2D-DFT is calculated in stages where all the results of the row-by-row 1D-DFTs must be available before the column-by-column 1D-IDFTs can be performed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
120
+ page_content=' Therefore, an additional intermediate cache is required, thus increasing hardware cost, especially for large DFT size.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
121
+ page_content=' Finally, the sensing signals must be transmitted both in the time and frequency domain in a comb structure, which induces high sensing overhead.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
122
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
123
+ page_content=' Method To tackle these challenges, a novel OFDM-based sensing waveform structure and corresponding signal processing algo- rithm are proposed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
124
+ page_content=' As shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
125
+ page_content=' 4(a), a rectangular time- frequency block with a bandwidth of 400 MHz contains 3360 subcarriers in the frequency domain, and the block spans 30 ms in the time domain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
126
+ page_content=' Uniformly distributed sensing signals are allocated along the diagonal of the block.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
127
+ page_content=' The time domain and the frequency domain contain the same number, N, of sensing signals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
128
+ page_content=' A diagonal of sensing signals produces a transmitted modulation symbol sequence dTx(k) of length N, spanning 240 slots in the time domain and 400 MHz in the frequency domain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
129
+ page_content=' The normalized modulation symbol vector d(k) can be obtained by the element-wise division of the received modulation symbol sequence dRx(k) by dTx(k), yielding d(k) = dRx(k) / dTx(k) = yR(k) ⊗ yD(k), k = 0, · · · , N − 1 (13) where yR(k) = exp(−j4π ∆f R k / c), k = 0, · · · , N − 1 (14)' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
+ page_content=' [Fig. 4: sensing waveform structure — consecutive blocks Block 0, Block 1, Block 2 of B = 400 MHz × Tb = 30 ms; subcarriers 0-3359; slots 0-239 with sensing signal symbols at symbols 2 and 9 of each slot, the remaining symbols carrying communication data.] Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
138
+ page_content=' 5: The radar image of an object with range of 40 m and velocity of 5 m/s using the proposed algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
139
+ page_content=' yD(k) = exp(j4π Tsym fc v k / c), k = 0, · · · , N − 1 (15) The range R of the object causes the phase shifts of the individual elements of vector yR(k).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
140
+ page_content=' The velocity v of the object causes the phase shifts of the individual elements of vector yD(k).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
141
+ page_content=' Since vectors yR(k) and yD(k) contain the same number of elements and are equally spaced in both the frequency domain and the time domain, the range and velocity of the object can be obtained simultaneously by performing the DFT of d(k), which yields Yrv(l) = DFT(d(k)) = Σ_{k=0}^{N−1} yR(k) yD(k) exp(−j2πkl/N), l = 0, · · · , N − 1 (16) Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
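The DFT in (16) and the dual-peak profile of Fig. 5 can be sketched numerically. The two-tone model below, with peaks at fH ± fL, is an assumption made for illustration (the paper's exact construction of d(k) from the diagonal sensing signals may differ), and N, fH and fL are chosen to mimic the values visible in Fig. 5.

```python
import numpy as np

# Toy sketch of (13)-(16): a normalized symbol vector whose DFT shows the
# dual-peak profile of Fig. 5. The two-tone model (peaks at fH +/- fL) and
# the integer frequencies are assumptions chosen to land near Fig. 5's bins.
N = 240                   # assumed number of sensing signals per block
fH, fL = 107, 26          # assumed integer spectral frequencies
k = np.arange(N)

# Range phase ramp times a Doppler "cosine" envelope: the product expands
# into two complex tones at fH - fL and fH + fL.
d = np.exp(2j * np.pi * fH * k / N) * 2 * np.cos(2 * np.pi * fL * k / N)

Y = np.abs(np.fft.fft(d))               # eq. (16)
l1, l2 = sorted(np.argsort(Y)[-2:])     # bin indices of the two peaks
# Here l1 = 81 and l2 = 133, close to the l1 = 81, l2 = 134 of Fig. 5.
```

Recovering the mean and half-difference of the two bins then gives back the assumed fH and fL, which is exactly what (17) and (18) do.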
142
+ page_content=' 5 shows the normalized DFT result of a modulation symbol vector d(k), which is derived from the sensing signals reflected by a moving object with range R = 40 m and velocity v = 5 m/s at time t0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
143
+ page_content=' The x-axis represents DFT bin indices l.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
144
+ page_content=' The number of bins equals the DFT size N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
145
+ page_content=' The integer values on the x-axis correspond to the spectral frequencies sampled by the DFT.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
146
+ page_content=' The DFT result shows a dual-peak-like profile.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
147
+ page_content=' The bin indices at two peak locations are l1 = 81 and l2 = 134.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
148
+ page_content=' Two spectral frequencies fH and fL, which carry the range and Doppler information of the object, can be calculated by fH = (l1 + l2) / 2, (17) fL = (l2 − l1) / 2. (18) The DFT of d(k) translates the modulation signals in the time-frequency domain to two spectral peaks centered at fH and spaced at an interval of 2fL in the radar image.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
150
+ page_content=' One of fH and fL contains range information, whereas the other contains velocity information.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
151
+ page_content=' The range and velocity of the object can be extracted by R = c fH / (2 ∆f Nc), v = c fL / (2 Tsym fc Nc), (19) or R = c fL / (2 ∆f Nc), v = c fH / (2 Tsym fc Nc).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
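A numeric sketch of (17)-(20) follows. All system parameters below (∆f, Tsym, fc, Nc) are assumptions chosen so that the numbers come out near the paper's Fig. 5 example; the paper's exact sensing-signal spacing is not specified here.

```python
# Hedged sketch of (17)-(20): recover both candidate range-velocity
# estimates from the two peak bins of Fig. 5. Parameter values are
# assumptions, not the paper's confirmed configuration.
c = 3e8                    # speed of light (m/s)
delta_f = 400e6 / 3360     # assumed sensing subcarrier spacing (Hz)
T_sym = 1 / 120e3          # assumed sensing symbol spacing (s)
f_c = 28e9                 # assumed carrier frequency (Hz)
N_c = 3360                 # assumed number of sensing subcarriers

l1, l2 = 81, 134           # peak bin indices from Fig. 5
f_H = (l1 + l2) / 2        # eq. (17) -> 107.5
f_L = (l2 - l1) / 2        # eq. (18) -> 26.5

# Eqs. (19) and (20): the two candidates differ only by swapping fH and fL.
est_A = (c * f_H / (2 * delta_f * N_c), c * f_L / (2 * T_sym * f_c * N_c))
est_B = (c * f_L / (2 * delta_f * N_c), c * f_H / (2 * T_sym * f_c * N_c))
print(est_A)   # close to (40 m, 5 m/s)
print(est_B)   # close to (10 m, 20 m/s)
```

The swap structure makes the ambiguity explicit: one block of sensing signals cannot tell which of fH and fL is the range frequency.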
152
+ page_content=' (20) As a side effect of the proposed algorithm, using (19) and (20) yields two range-velocity estimates from one block of sensing signals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
153
+ page_content=' These two estimates include a correct range-velocity estimate and an incorrect range-velocity estimate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
154
+ page_content=' For example, two range-velocity estimates can be extracted from bin indices l1 = 81 and l2 = 134 shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
155
+ page_content=' 5, referred to as estimate A and estimate B below.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
156
+ page_content=' Estimate A: R = 40 m and v = 5 m/s Estimate B: R = 10 m and v = 20 m/s By using the proposed method, an object with R = 40 m and v = 5 m/s produces the same radar image as that of another object with R = 10 m and v = 20 m/s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
157
+ page_content=' The uncertainty is caused by the unlabeled bin indices l1 and l2 at peak locations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
158
+ page_content=' For spectral frequencies fH and fL, it is uncertain which one is indicative of the range and which one indicates the velocity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
159
+ page_content=' These two estimates cannot be discerned using information from one sensing block.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
160
+ page_content=' A multi-temporal data fusion method can be performed to filter out the incorrect estimate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
161
+ page_content=' The main idea of the multi-temporal data fusion method is as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
162
+ page_content=' Based on the two range-velocity estimates acquired in the recent past, the range-velocity estimate in the present can be predicted.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
163
+ page_content=' The incorrect estimate can be identified by comparing the actual estimates obtained in the present with the predicted estimates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
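The fusion idea just described can be sketched as a simple hypothesis test. Everything below is an assumption-laden toy: the "actual" estimates at t1-t4 are simulated from the true trajectory (hypothesis A), and the motion model and error metric are illustrative choices, not the paper's stated algorithm.

```python
# Hedged sketch of multi-temporal data fusion: propagate both candidate
# estimates forward with a linear motion model and keep the hypothesis
# whose predictions match the estimates actually obtained at t1-t4.
def predict(R0, v0, t, a=0.0):
    # Assumed geometry: the vehicle approaches the sensor, so range shrinks.
    return R0 - v0 * t - 0.5 * a * t * t, v0 + a * t

hypotheses = {"A": (40.0, 5.0), "B": (10.0, 20.0)}   # the two candidates
times = [0.03, 0.06, 0.09, 0.12]                     # t1-t4 in seconds

# "Actual" estimates at t1-t4, simulated here from the true trajectory (A).
actual = [predict(40.0, 5.0, t) for t in times]

def error(hyp):
    # Sum of absolute range and velocity mismatches over t1-t4.
    R0, v0 = hypotheses[hyp]
    return sum(abs(predict(R0, v0, t)[0] - Ra) + abs(predict(R0, v0, t)[1] - va)
               for t, (Ra, va) in zip(times, actual))

best = min(hypotheses, key=error)
print(best)   # "A": the incorrect hypothesis B accumulates a large error
```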
164
+ page_content=' A traffic monitoring scenario is considered to show how the incorrect estimate is identified.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
165
+ page_content=' We assume that the vehicles in the scenario comply with the linear motion model, which is widely used in traffic research [9].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
166
+ page_content=' The linear motion model is a motion model where an object’s velocity or acceleration is held constant within a span of time, provided that the time is sufficiently short.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
167
+ page_content=' We assume the JCAS system has no prior range-velocity information on the vehicles.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
168
+ page_content=' The vehicle’s maximum acceleration (a) is assumed to be 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
169
+ page_content='4 m/s2, corresponding to the acceleration from 0 to 60 mph in 5 seconds (acceleration of high-performance cars).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
170
+ page_content=' The consecutive sensing blocks are transmitted at time 0 ms (t0), 30 ms (t1), 60 ms (t2), 90 ms (t3), 120 ms (t4), and so on.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
171
+ page_content=' For the linear motion model with constant velocity, the range and velocity of the vehicle at time t1-t4 can be predicted based on estimate A and estimate B obtained at time t0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
172
+ page_content=' Similarly, the range and velocity of the vehicle at time t1-t4 can be obtained for the linear motion model with a constant acceleration of 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
173
+ page_content='4 m/s2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
174
+ page_content=' The amplitude of the spectral peak in the radar image is determined by range R as it affects the received signal power PR reflected by the object.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
175
+ page_content=' PR is formulated as PR = PTx GTx GRx σ λ^2 / ((4π)^3 R^4 fc^2), (21) where PTx is the transmitted power, GTx is the Tx antenna gain, GRx is the Rx antenna gain, σ is the RCS of the object, and λ is the wavelength of the carrier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
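Since every factor in (21) except R is constant for a given object, the predicted peak-amplitude change reduces to the R^-4 power ratio in dB. A minimal sketch:

```python
import math

# Peak-amplitude change implied by the R^-4 dependence in (21): all other
# factors cancel in the ratio between the initial range R0 and range R.
def amplitude_change_db(R0, R):
    return 40.0 * math.log10(R0 / R)   # 10 * log10((R0 / R)**4)

# Example: moving from 40 m to 10 m quadruples 1/R, i.e. a 256x power gain.
print(round(amplitude_change_db(40.0, 10.0), 2))  # 24.08
```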
176
+ page_content=' [Fig. 5 annotations: Peak 1 at l1 = 81 and Peak 2 at l2 = 134; fH = 107.5, fL = 26.5; y-axis: normalized DFT results in dB; x-axis: bin indices of l.]' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
+ page_content=' (a) Real radar image for a vehicle with R = 40 m and constant v = 5 m/s at time t0, and predicted radar images at time t0-t4 (b) Real radar image for a vehicle with R = 10 m and constant v = 20 m/s at time t0, and predicted radar images at time t0-t4 (c) Real radar image for a vehicle with R = 40 m, v = 5 m/s and constant acceleration 5.4 m/s2 at time t0, and predicted radar images at time t0-t4 (d) Real radar image for a vehicle with R = 10 m, v = 20 m/s and constant acceleration 5.4 m/s2 at time t0, and predicted radar images at time t0-t4 Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
183
+ page_content=' 6: Predicted radar images at time t1-t4 based on estimate A and estimate B obtained at time t0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
184
+ page_content=' TABLE III: The predicted range and velocity of the vehicle at time t1-t4 based on estimate A and estimate B obtained at time t0' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
Estimate                  t (ms)     R(t) (m)   v(t) (m/s)   Ap(t) (dB)
Estimate A (constant v)   t0 = 0     40         5            0
                          t1 = 30    39.9       5            0.06
                          t2 = 60    39.7       5            0.13
                          t3 = 90    39.6       5            0.19
                          t4 = 120   39.4       5            0.26
Estimate B (constant v)   t0 = 0     10         20           0
                          t1 = 30    9.4        20           1.07
                          t2 = 60    8.8        20           2.22
                          t3 = 90    8.2        20           3.45
                          t4 = 120   7.6        20           4.77
Estimate A (constant a)   t0 = 0     40         5            0
                          t1 = 30    39.9       5.2          0.06
                          t2 = 60    39.7       5.3          0.13
                          t3 = 90    39.5       5.5          0.2
                          t4 = 120   39.4       5.7          0.28
Estimate B (constant a)   t0 = 0     10         20           0
                          t1 = 30    9.4        20.2         1.08
                          t2 = 60    8.8        20.3         2.24
                          t3 = 90    8.2        20.5         3.49
                          t4 = 120   7.6        20.7         4.86
+ page_content=' Based on estimate A and estimate B obtained at time t0 and the linear motion model, the predicted ranges R(t), velocities v(t) and normalized peak amplitudes Ap(t) at time t1-t4 are tabulated in Table III.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
225
+ page_content=' Ap(t0) is normalized to 0 dB.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
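The predictions in Table III follow from the linear motion model together with (21). A sketch, under the assumption that the vehicle approaches the sensor (so range shrinks over time) and that Ap(t) is the R^-4 power ratio in dB relative to t0:

```python
import math

# Reproduce Table III's predictions under the linear motion model.
# Assumed kinematics: R(t) = R0 - v0*t - a*t^2/2, v(t) = v0 + a*t;
# Ap(t) = 40*log10(R0/R(t)) follows from the R^-4 law in (21).
def predict_row(R0, v0, t, a=0.0):
    R = R0 - v0 * t - 0.5 * a * t * t
    v = v0 + a * t
    Ap = 40.0 * math.log10(R0 / R)
    return R, v, Ap

for t_ms in (0, 30, 60, 90, 120):
    t = t_ms / 1000.0
    Ra, va, Aa = predict_row(40.0, 5.0, t)    # estimate A, constant v
    Rb, vb, Ab = predict_row(10.0, 20.0, t)   # estimate B, constant v
    print(f"t = {t_ms} ms  A: {Ra:.1f} m, {va:.1f} m/s, {Aa:.2f} dB | "
          f"B: {Rb:.1f} m, {vb:.1f} m/s, {Ab:.2f} dB")
```

Passing `a=5.4` reproduces the constant-acceleration rows of the table in the same way.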
226
+ page_content=' The real radar images calculated at time t0 and the predicted radar images at time t1-t4 in the future are depicted in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
227
+ page_content=' 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
228
+ page_content=' The blue dashed lines indicate the real radar images yielded by the sensing block transmitted at time t0, while the predicted radar images at time t1-t4 are visualized with assorted colors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
229
+ page_content=' It can be observed that although a vehicle with R = 40 m and v = 5 m/s produces the same radar image as that of a vehicle with R = 10 m and v = 20 m/s at time t0, the following radar images at time t1-t4 show different patterns.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
230
+ page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
231
+ page_content=' 6(a) shows minor peak shifts due to the elapsed time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
232
+ page_content=' This is because the constant velocity leaves fL unchanged, and the quite low velocity (5 m/s) causes only minor range changes at time t1-t4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
233
+ page_content=' Hence, the changes in fH are less significant.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
234
+ page_content=' The bin indices at peak locations at time t0-t4 are within 79-81 and 133-134, as shown in the zoomed-in sections in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
235
+ page_content=' 6(a).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
236
+ page_content=' Also, the peak amplitude increases slightly from 0 dB at t0 to 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
237
+ page_content='26 dB at t4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
238
+ page_content=' The minor range changes cause small received signal power changes, which, in turn, lead to slight peak amplitude fluctuation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
239
+ page_content=' It can be observed that the radar images in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
240
+ page_content=' 6(b) differ markedly from the radar images in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
241
+ page_content=' 6(a).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
242
+ page_content=' The constant velocity produces an unchanged spectral frequency fH in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
243
+ page_content=' 6(b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
244
+ page_content=' Meanwhile, the high velocity (v = 20 m/s) leads to significant changes in the range (major changes in spectral frequency fL).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
245
+ page_content=' The bin indices at peak locations at time t0-t4 are within 80-88 and 128-134, which are distinctly different from the bin indices in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
246
+ page_content=' 6(a).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
247
+ page_content=' Furthermore, due to the significant distance changes, the peak amplitude increases rapidly from 0 dB at t0 to 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
248
+ page_content='77 dB at t4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
249
+ page_content=" Therefore, for constant velocity, by comparing the bin indices and peak amplitudes 0 B ul 0 10 results 20 2 Normalized DF' 76 78 80 134 135 136 : R= 40 m, v= 5 m/s 30 t,: R= 39." metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
250
+ page_content='9 m, v= 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
251
+ page_content='2 m/s t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
252
+ page_content=': R= 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
253
+ page_content='7 m, v= 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
254
+ page_content='3 m/s t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
255
+ page_content=': R= 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
256
+ page_content='5 m, v= 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
257
+ page_content='5 m/s 40 0 20 40 60 80 100 120 140 160 180 200 Indices of /B 0 d 4 T results 2 10 0 20 131132133134 80 85 90 : R= 10 m, v= 20 m/s Normalized t,: R= 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
258
+ page_content='4 m, v= 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
259
+ page_content='2 m/s 30 t,: R= 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
260
+ page_content='8 m, v= 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
261
+ page_content='3 m/s t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
262
+ page_content=': R= 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
263
+ page_content='2 m, v= 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
264
+ page_content='5 m/s 40 : R= 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
265
+ page_content='6 m, v= 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
266
+ page_content='7 m/s 4 0 20 40 60 80 100 120 140 160 180 200 Indices of /0 B d ul 0 10 results 20 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
267
+ page_content='2 79 80 81 133 134 DF t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
268
+ page_content=': R= 40 m, v= 5 m/s 0 30 t,: R= 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
269
+ page_content='9 m, v= 5 m/s : R= 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
270
+ page_content='7 m, v= 5 m/s t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
271
+ page_content=': R= 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
272
+ page_content='6 m, v= 5 m/s 40 3 t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
273
+ page_content=': R= 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
274
+ page_content='4 m, v= 5 m/s 0 20 40 60 80 100 120 140 160 180 200 Indices of /B 0 da 4 results 2 2 10 0 0 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
275
+ page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
276
+ page_content=' 2 20 128 130 132 134 80 82 84 86 88 t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
277
+ page_content=': R= 10 m, v= 20 m/s Jormalized 0 t,: R= 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
278
+ page_content='4 m, v= 20 m/s 30 t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
279
+ page_content=': R= 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
280
+ page_content='8 m, v= 20 m/s t,: R= 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
281
+ page_content='2 m, v= 20 m/s 40 t,: R= 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
282
+ page_content="6 m, v= 20 m/s 4' 0 20 40 60 80 100 120 140 160 180 200 Indices of /from predicted radar images at time t1-t4 with the bin indices and peak amplitudes from actual radar images obtained at time t1-t4, the correct range-velocity estimate can be identified." metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
283
+ page_content=' Although the acceleration of vehicles is unpredictable in the traffic monitoring scenario, it is bounded.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
284
+ page_content=' The maximum acceleration of high- performance cars is 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
285
+ page_content='4 m/s2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
286
+ page_content=' For the linear motion model with constant acceleration 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
287
+ page_content='4 m/s2, the predicted radar images at time t1-t4 are still significantly different between estimate A and estimate B, as shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
288
+ page_content=' 6(c) and Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
289
+ page_content=' 6(d).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
290
+ page_content=' The non- zero acceleration will induce changes in both fH and fL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
291
+ page_content=' For estimate A, the velocity changes slowly due to the limited acceleration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
292
+ page_content=' The range also changes slowly due to the low acceleration and low velocity (5 m/s) at time t0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
293
+ page_content=' Therefore, the vehicle yields minor changes in both fH and fL at t1- t4 as shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
294
+ page_content=' 6(c).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
295
+ page_content=' For a vehicle with R = 10 m, v = 20 m/s, and a = 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
296
+ page_content='4 m/s2 at time t0, enormous range changes due to high velocity induce major changes in fL at time t1-t4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
297
+ page_content=' Major changes in fL shift the peaks significantly, leading to a sparser peak pattern in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
298
+ page_content=' 6(d) than the pattern in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
299
+ page_content=' 6(c).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
300
+ page_content=' Therefore, the velocity is crucial to filter out the incorrect estimates herein since the range change depends on the velocity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
301
+ page_content=' The limited acceleration in the traffic scenario is a minor factor in changing the peaks in the radar image.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE1T4oBgHgl3EQfqwWe/content/2301.03347v1.pdf'}
302
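The effect of bounded acceleration on the predicted range and velocity can be sketched numerically. The maximum acceleration a = 5.4 m/s² and estimate B (R = 10 m, v = 20 m/s) follow the text; the sampling instants t1-t4 (0.1-0.4 s) and the initial range assumed for the low-velocity estimate A are hypothetical values chosen for illustration.

```python
# Sketch: evolution of candidate (R, v) estimates under a constant-acceleration
# linear motion model. Estimate A has the low initial velocity (5 m/s) from the
# text; its initial range of 10 m and the sampling instants are assumptions.
A_MAX = 5.4  # maximum acceleration of high-performance cars, m/s^2

def predict(r0, v0, a, t):
    """Range and velocity at time t for a vehicle closing on the radar."""
    r = r0 - (v0 * t + 0.5 * a * t * t)  # range shrinks as the vehicle approaches
    v = v0 + a * t
    return r, v

for label, (r0, v0) in {"A": (10.0, 5.0), "B": (10.0, 20.0)}.items():
    for t in (0.1, 0.2, 0.3, 0.4):  # hypothetical sampling instants t1-t4, seconds
        r, v = predict(r0, v0, A_MAX, t)
        print(f"estimate {label}: t={t:.1f} s -> R={r:5.2f} m, v={v:5.2f} m/s")
```

Even at the maximum acceleration, estimate B's range changes roughly four times faster than estimate A's over the same interval, which is why the high-velocity hypothesis produces the much sparser peak pattern.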
C. Discussions
As mentioned earlier, the signal processing complexity of applying the 2D-DFT is high: an OFDM sensing system with the parameters in Table II needs 2N × O(N²) complex multiplications (480 DFTs over the rows and 480 IDFTs over the columns) to generate a range-velocity estimate. In contrast, the computational complexity of the proposed algorithm is relatively low because it avoids the DFT calculations over all rows and all columns. Since the phase shift of the modulation signals along the frequency axis can extract the delay, and the phase shift along the time axis can extract the Doppler, the proposed diagonal structure reduces the signaling overhead by combining the phase shifts along both axes into a single series of sensing signals along the diagonal of the resource block; hence the range and velocity can be derived simultaneously. The sensing overhead of the conventional comb structure is N²/Nc², whereas that of the proposed diagonal structure is N/Nc², a reduction by a factor of N.
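As a rough numerical illustration of the complexity and overhead figures above: N = 480 matches the 480 DFTs/IDFTs mentioned in the text, while the comb spacing Nc = 4 is a hypothetical value, since Table II is not reproduced in this excerpt.

```python
# Sketch: complexity and overhead comparison; N = 480 follows the text,
# Nc = 4 is an assumed comb spacing for illustration only.
N, Nc = 480, 4

# 2D-DFT processing: 2N transforms of length N, each costing N^2
# complex multiplications when computed as a plain DFT.
mults_2d_dft = 2 * N * N**2

# Sensing overhead (resource elements reserved for sensing, per the text).
overhead_comb = N**2 / Nc**2  # conventional comb structure
overhead_diag = N / Nc**2     # proposed diagonal structure

print(f"2D-DFT complex multiplications: {mults_2d_dft:.2e}")
print(f"comb/diagonal overhead ratio:   {overhead_comb / overhead_diag:.0f}x (= N)")
```

The ratio of the two overheads is exactly N regardless of Nc, which is the factor-of-N reduction claimed above.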
The only minor disadvantage of the proposed algorithm is that two unlabeled spectral peaks in the radar image cause ambiguity in the range-velocity estimates. The proposed multi-temporal data fusion method can be used to resolve this ambiguity. Furthermore, stationary objects do not create ambiguity under the proposed algorithm. For example, Fig. 7 shows the radar image of a stationary object with a range of 40 m and a velocity of 0 m/s. Another object with a range of 0 m and a velocity of 20 m/s would produce the same radar image as in Fig. 7; however, such an object would be in direct contact with the antenna of the JCAS system, since its range R is zero, which is kinematically infeasible. Therefore, the proposed algorithm can derive the range-velocity estimate of a stationary object without ambiguity using only one diagonal block of sensing signals.

Fig. 7: The radar image of an object with a range of 40 m and a velocity of 0 m/s.
V. CONCLUSIONS
The typical processing algorithm of OFDM radar applies the 2D-DFT to extract range and velocity information. This paper proposes a diagonal waveform structure and a corresponding signal processing algorithm, which offers two advantages. First, the signal processing algorithm is simple in terms of computational complexity and memory requirements. Second, the sensing overhead is significantly reduced by the proposed waveform structure. The minor disadvantage is that the proposed approach obtains the range and velocity in a coupled manner, so range-velocity ambiguity may occur; a multi-temporal data fusion method can be applied to resolve it. Simulations have demonstrated the operability of the proposed waveform and signal processing algorithm.
REFERENCES
[1] H. Wymeersch et al., "Deliverable D3.1 Localisation and sensing use cases and gap analysis," Hexa-X, Dec. 31, 2021.
[2] F. Liu, C. Masouros, A. P. Petropulu, H. Griffiths, and L. Hanzo, "Joint Radar and Communication Design: Applications, State-of-the-Art, and the Road Ahead," IEEE Transactions on Communications, vol. 68, no. 6, pp. 3834-3862, June 2020, doi: 10.1109/TCOMM.2020.2973976.
[3] T. Wild, V. Braun, and H. Viswanathan, "Joint Design of Communication and Sensing for Beyond 5G and 6G Systems," IEEE Access, vol. 9, pp. 30845-30857, 2021, doi: 10.1109/ACCESS.2021.3059488.
[4] C. Sturm and W. Wiesbeck, "Waveform Design and Signal Processing Aspects for Fusion of Wireless Communications and Radar Sensing," Proceedings of the IEEE, vol. 99, no. 7, pp. 1236-1259, July 2011, doi: 10.1109/JPROC.2011.2131110.
[5] C. Sturm, Y. Sit, M. Braun, and T. Zwick, "Spectrally interleaved multi-carrier signals for radar network applications and multi-input multi-output radar," IET Radar, Sonar and Navigation, 2012, doi: 10.1049/iet-rsn.2012.0040.
[6] C. Sturm, E. Pancera, T. Zwick, and W. Wiesbeck, "A novel approach to OFDM radar processing," 2009 IEEE Radar Conference, 2009, pp. 1-4, doi: 10.1109/RADAR.2009.4977002.
[7] C. Sturm, M. Braun, T. Zwick, and W. Wiesbeck, "A multiple target Doppler estimation algorithm for OFDM based intelligent radar systems," The 7th European Radar Conference, 2010, pp. 73-76.
[8] A. Behravan et al., "Introducing sensing into future wireless communication systems," 2022 2nd IEEE International Symposium on Joint Communications & Sensing (JC&S), 2022, pp. 1-5, doi: 10.1109/JCS54387.2022.9743513.
[9] R. Schubert, E. Richter, and G. Wanielik, "Comparison and evaluation of advanced motion models for vehicle tracking," 2008 11th International Conference on Information Fusion, 2008, pp. 1-6.
GdE2T4oBgHgl3EQf-gnI/content/tmp_files/2301.04240v1.pdf.txt ADDED
GdE2T4oBgHgl3EQf-gnI/content/tmp_files/load_file.txt ADDED
HtAzT4oBgHgl3EQfHvtN/content/tmp_files/2301.01049v1.pdf.txt ADDED
@@ -0,0 +1,1293 @@
Frequency-Domain Detection for Molecular Communications

Meltem Civas∗†, Ali Abdali∗, Murat Kuscu∗, Ozgur B. Akan∗†
∗Center for neXt-generation Communications (CXC), Department of Electrical and Electronics Engineering, Koç University, 34450, Istanbul, Turkey
{mcivas16, aabdali21, mkuscu, akan}@ku.edu.tr
†Internet of Everything (IoE) Group, Electrical Engineering Division, Department of Engineering, University of Cambridge, CB3 0FA Cambridge, UK
{mc2365, oba21}@cam.ac.uk
Abstract—Molecular Communications (MC) is a bio-inspired communication paradigm which uses molecules as information carriers, thereby requiring unconventional transmitter/receiver architectures and modulation/detection techniques. Practical MC receivers (MC-Rxs) can be implemented based on field-effect transistor biosensor (bioFET) architectures, where surface receptors reversibly react with ligands whose concentration encodes the information. The time-varying concentration of ligand-bound receptors is then translated into electrical signals via field effect, which are used to decode the transmitted information. However, ligand-receptor interactions do not provide ideal molecular selectivity, as similar types of ligands, i.e., interferers, co-existing in the MC channel can interact with the same type of receptors, resulting in cross-talk. Overcoming this molecular cross-talk with time-domain samples of the Rx's electrical output is not always attainable, especially when the Rx has no knowledge of the interferer statistics or operates near saturation. In this study, we propose a frequency-domain detection (FDD) technique for bioFET-based MC-Rxs which exploits the difference in the binding reaction rates of different types of ligands, reflected in the noise spectrum of the ligand-receptor binding fluctuations. We analytically derive the bit error probability (BEP) of the FDD technique and demonstrate its effectiveness in decoding transmitted concentration signals under stochastic molecular interference, in comparison to a widely used time-domain detection (TDD) technique. The proposed FDD method can be applied to any biosensor-based MC-Rx that employs receptor molecules as the channel-Rx interface.

Index Terms—Molecular communications, receiver, frequency-domain detection, biosensor, ligand-receptor interactions
arXiv:2301.01049v1 [eess.SP] 3 Jan 2023

I. INTRODUCTION
Using molecules to encode and transfer information, i.e., Molecular Communications (MC), is nature's way of connecting bio-things, such as natural cells, with each other. Engineering this unconventional communication paradigm to extend our connectivity to synthetic bio-nano things, such as nanobiosensors and artificial cells, is the vision that gave rise to the Internet of Bio-Nano Things (IoBNT), a novel networking framework promising unprecedented healthcare and environmental applications of bionanotechnology [1], [2].

Being fundamentally different from conventional electromagnetic communication techniques, MC requires novel transceiver architectures along with new modulation, coding, and detection techniques that can cope with the highly time-varying, nonlinear, and complex channel characteristics of biochemical environments [3]. The design of MC receivers (MC-Rxs) and detection techniques has unquestionably attracted the most attention in the literature. However, due to the simplicity it provides in modeling, many previous studies considered passive Rx architectures that are physically unlinked from the MC channel and thus of little practical relevance [3]. An emerging trend in MC is to model and design more practical MC-Rxs that employ ligand receptors on their surface as selective biorecognition units, resembling the sensing and communication interface of natural cells. One such design, which was practically implemented in [4], is based on field-effect transistor biosensors (bioFETs), where the ligand-receptor (LR) interactions are translated into electrical signals via field effect for the decoding of the transmitted information.

LR interactions are fundamental to the sensing and communication of natural cells. However, the selectivity of biological receptors for their target ligands is not ideal, and this so-called receptor promiscuity results in cross-talk from other types of molecules co-existing in the biochemical environment [5]. Natural cells often deal with this cross-talk through intracellular chemical reaction networks and multi-state receptor mechanisms, such as kinetic proofreading [6]. The same molecular interference problem also applies to abiotic MC-Rxs that employ ligand receptors, and thus should be addressed in developing reliable detection techniques [7].

Our previous studies on biosynthetic MC-Rxs have addressed the molecular interference problem by developing detection techniques based on sampling the bound time intervals of individual receptors to discriminate between interferer and information molecules [6], [7]. However, this approach is not plausible for biosensor-based MC-Rxs, which have no access to the time trajectory of individual receptor states. On the other hand, decoding information from the time-varying concentration of bound receptors performs poorly due to the indistinguishability of different ligand types in the time domain, especially when the Rx does not have any knowledge of the statistics of the interferer concentration, and when the Rx operates near saturation [7].

In this paper, we develop a frequency-domain detection (FDD) technique for biosensor-based MC-Rxs based on LR binding interactions, which can distinguish different types of ligands co-existing in the channel and estimate their individual concentrations from the power spectral density (PSD) of the fluctuations in receptor occupancy, i.e., the binding noise. Stochastic and reversible LR interactions can be modeled as a two-state continuous-time Markov process at equilibrium, where the state transition rates are given by the binding and unbinding rates of the LR pair [5]. Although many different types of ligands can interact with the same type of receptors, these interactions are typically governed by different binding and unbinding rates. This difference in reaction rates is reflected in a difference in the characteristic frequency fch of the interactions, which is the reciprocal of the correlation time τB of the Markov process at equilibrium and is also a function of the ligand concentration and the LR reaction rates [7]. The characteristic frequency of the LR pair manifests itself as a cut-off frequency in the Lorentzian-shaped PSD of the binding noise. The proposed FDD method exploits this correlation in the frequency domain to estimate the concentration of information molecules in a maximum likelihood (ML) manner and, using the estimated concentration, optimally decodes the transmitted information. We obtain the bit error probability (BEP) for FDD in closed form and compare it to the error performance of a time-domain detection (TDD) technique, which relies on the number of bound receptors sampled at a single sampling point. The results of the performance analysis indicate that the proposed FDD method vastly outperforms the TDD method, especially under high interference conditions.
+ II. SYSTEM MODEL
+ We consider a microfluidic MC system utilizing binary
+ concentration shift keying (CSK) such that the transmitter (Tx)
+ instantly releases N_{m|s} molecules at the beginning of each
+ signaling interval [8]. Here, m stands for information molecules,
+ and s \in \{0, 1\} denotes the transmitted bit. The signaling
+ interval is assumed to be large enough to neglect inter-symbol
+ interference (ISI). The microfluidic channel is abstracted as a
+ 3-dimensional channel with a rectangular cross-section, as shown
+ in Fig. 1(a). Tx is located at the channel inlet, and the
+ molecules are released instantly and uniformly across the
+ cross-section of the channel and propagate through unidirectional
+ fluid flow from Tx to Rx, which is located at the channel bottom.
+ We consider a two-dimensional graphene bioFET-based MC-Rx, as
+ illustrated in Fig. 1(b) [4]. There is a single type of interferer
+ molecule in the channel, which can also bind to the receptors on
+ the Rx, though with different reaction rates. The concentration of
+ the interferer molecules in the Rx's vicinity, c_i, at the
+ sampling time is assumed to follow a log-normal distribution with
+ mean \mu_{c_i} and variance \sigma^2_{c_i}. We assume that Rx
+ knows the number of information molecules transmitted, N_{m|s},
+ and the binding/unbinding rates of the information and interferer
+ molecules.
+ The released molecules propagate along the microfluidic channel
+ through convection and diffusion. While convection results in the
+ uniform and unidirectional drift of the transmitted molecules from
+ Tx to Rx, diffusion acts in all directions
+ molecules from Tx to Rx, diffusion acts in all directions
155
+ hch
156
+ lch
157
+ x
158
+
159
+ z
160
+ Si
161
+ SiO2
162
+ Drain
163
+ electrode
164
+ Source
165
+ electrode
166
+ Receptor
167
+ Information
168
+ molecule
169
+ Interferer
170
+ molecule
171
+ Insulator
172
+ Graphene
173
+ channel
174
+ (a)
175
+ (b)
176
+ MC Receiver
177
+ MC Transmitter
178
+ Flow
179
+ Direction
180
+ x = xR
181
+ y
182
+ x
183
+ z
184
+ hch
185
+ lch
186
+ Fig. 1: a) 3D view of the microfluidic channel; the locations of
187
+ Tx and Rx are shown. b) Graphene FET-based MC-Rx exposed
188
+ to information and interferer molecules.
189
+ causing the dispersion of the molecules as they propagate. The
+ dispersion results in a smooth concentration profile, which can be
+ approximated by a Gaussian distribution. Assuming that the number
+ of ligands binding the receptors is low enough to neglect the
+ change of concentration in the channel, the propagation can be
+ represented as a one-dimensional convection-diffusion problem with
+ the following solution [8]:
+ c_{m|s}(x, t) = \frac{N_{m|s}}{A_{ch}\sqrt{4\pi D t}} \exp\left(-\frac{(x - ut)^2}{4Dt}\right),   (1)
+ where c_{m|s}(x, t) is the ligand concentration at position x and
+ time t, A_{ch} = h_{ch} \times l_{ch} is the cross-sectional area
+ of the channel, with h_{ch} and l_{ch} being the channel height
+ and width, respectively, u is the fluid flow velocity along the
+ x-axis, and D is the effective diffusion coefficient. For channels
+ with rectangular cross-section, D can be expressed as follows [9]:
+ D = \left(1 + \frac{8.5\, u^2 h_{ch}^2 l_{ch}^2}{210\, D_0^2 \left(h_{ch}^2 + 2.4\, h_{ch} l_{ch} + l_{ch}^2\right)}\right) D_0,   (2)
+ where D_0 is the diffusion coefficient of the ligand.
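+ As a quick numerical illustration, (1)-(2) can be evaluated with
+ the Table I parameters (a minimal sketch; the variable names are
+ ours, not the paper's):

```python
import numpy as np

def effective_D(D0, u, h, l):
    # Effective (dispersion-enhanced) diffusion coefficient for a
    # rectangular cross-section, Eq. (2)
    return (1 + 8.5 * u**2 * h**2 * l**2
            / (210 * D0**2 * (h**2 + 2.4 * h * l + l**2))) * D0

def concentration(x, t, N, A, D, u):
    # 1D convection-diffusion solution, Eq. (1)
    return N / (A * np.sqrt(4 * np.pi * D * t)) * np.exp(-(x - u * t)**2 / (4 * D * t))

# Default parameters from Table I
h, l = 5e-6, 10e-6      # channel height and width (m)
u = 10e-6               # average flow velocity (m/s)
D0 = 2e-11              # intrinsic diffusion coefficient (m^2/s)
xR = 1e-3               # Rx center position (m)
N1 = 5e3                # transmitted molecules for bit s = 1
A = h * l               # cross-sectional area (m^2)

D = effective_D(D0, u, h, l)
tD = xR / u                                   # peak arrival time (s)
c_peak = concentration(xR, tD, N1, A, D, u)   # molecules per m^3 at Rx
```

+ The dispersion correction makes D slightly larger than D_0, and
+ the concentration at the Rx position is maximal around t = t_D.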
+ The peak of the ligand concentration profile given by (1) reaches
+ the Rx's center position, x_R, at time t_D = x_R/u. As the MC
+ channel characteristic is similar to a low-pass filter due to
+ diffusion, the concentration signal is slowly varying around the
+ Rx position, thus allowing equilibrium conditions for the LR
+ reactions with steady ligand concentration in a short time window
+ around t_D [8], [9]. Rx can sample the receptor states at time
+ t = t_D, when the ligand concentration is
+ c_{m|s}(x_R, t_D) = \frac{N_{m|s}}{A_{ch}\sqrt{4\pi D t_D}} [10].
+ Therefore, the number of bound receptors, N_{b|s}, follows a
+ Binomial distribution with mean \mu_{N_{b|s}} = p_{b|s} N_r and
+ variance \sigma^2_{N_{b|s}} = p_{b|s}(1 - p_{b|s}) N_r [10], where
+ N_r is the number of independent surface receptors.
+ The bound state probability of a single receptor, p_{b|s}, in the
+ presence of two different types of ligands, i.e., information and
+ interferer molecules, is given as [6]
+ p_{b|s} = \frac{c_{m|s}/K_{Dm} + c_i/K_{Di}}{1 + c_{m|s}/K_{Dm} + c_i/K_{Di}},   (3)
+ where K_{Dm} = k_m^-/k_m^+ and K_{Di} = k_i^-/k_i^+ are the
+ dissociation constants of the information and interferer
+ molecules, respectively.
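+ Eq. (3) is straightforward to evaluate; a minimal sketch using the
+ dissociation constants implied by the Table I rates
+ (K_D = k^-/k^+):

```python
def bound_probability(cm, ci, KDm, KDi):
    # Equilibrium bound-state probability of a receptor shared by
    # information (m) and interferer (i) ligands, Eq. (3)
    x = cm / KDm + ci / KDi
    return x / (1 + x)

# Dissociation constants from Table I: KD = k_minus / k_plus
KDm = 2.0 / 4e-17    # information molecules (molecules/m^3)
KDi = 8.0 / 4e-17    # interferer molecules (molecules/m^3)

p_no_int = bound_probability(KDm, 0.0, KDm, KDi)  # cm = KDm, no interferer
p_int = bound_probability(KDm, KDi, KDm, KDi)     # interferer added at ci = KDi
```

+ With c_m = K_{Dm} and no interference the receptor is bound half
+ of the time; adding interferers raises the occupancy, which is
+ exactly the molecular interference the detector must cope with.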
+ The binding of charged ligands to the receptors creates an
+ effective charge reflected on the graphene channel, expressed as
+ Q_{Gr|s} = N_{b|s}\, q_{eff}\, N_{e^-}, where N_{e^-} is the
+ number of free electrons per ligand molecule. q_{eff} is the
+ effective charge of a single electron of a bound ligand in the
+ presence of ionic screening, i.e., Debye screening:
+ q_{eff} = q \exp\left(-\frac{r}{\lambda_D}\right), where q is the
+ elementary charge, r is the length of a surface receptor, and
+ \lambda_D is the Debye length, given by
+ \lambda_D = \sqrt{(\epsilon \kappa_B T)/(2 N_A q^2 c_{ion})},
+ where \epsilon is the permittivity of the medium, \kappa_B is
+ Boltzmann's constant, and N_A is Avogadro's constant [10]. Then,
+ the mean surface potential due to bound molecules can be written
+ as \Psi_{Gr|s} = Q_{Gr|s}/C_G, where
+ C_G = \left(\frac{1}{C_{Gr}} + \frac{1}{C_Q}\right)^{-1} is the
+ total gate capacitance of the bioFET. C_{Gr} is the electrical
+ double layer capacitance between the graphene and electrolyte
+ channel, C_{Gr} = A_{Gr}\epsilon/\lambda_D, with A_{Gr} being the
+ area of the graphene surface exposed to the electrolyte, and C_Q
+ is the quantum capacitance, C_Q = c_q \times A_{Gr}, where c_q is
+ the quantum capacitance of graphene per unit area [4]. The
+ deviation in the output current due to bound molecules at
+ equilibrium is
+ \Delta I_{b|s} = g \times \Psi_{Gr|s},   (4)
+ where g is the bioFET transconductance. For large N_r, the number
+ of bound receptors N_{b|s} at the sampling time can be
+ approximated as Gaussian distributed [10], i.e.,
+ N_{b|s} \sim \mathcal{N}(\mu_{N_{b|s}}, \sigma^2_{N_{b|s}}). As
+ the transduction process is linear, the change in the output
+ current due to bound molecules can also be approximated as
+ Gaussian with mean \mu_{\Delta I_{b|s}} = \zeta \mu_{N_{b|s}} and
+ variance \sigma^2_{\Delta I_{b|s}} = \zeta^2 \sigma^2_{N_{b|s}},
+ where \zeta = \frac{q_{eff} N_{e^-} g}{C_G}.
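+ The screening and transduction chain above can be sketched
+ numerically. The exposed graphene area below is our assumption
+ (l_gr × l_gr), since Table I only gives the graphene width:

```python
import numpy as np

kB = 1.380649e-23      # Boltzmann constant (J/K)
NA = 6.02214076e23     # Avogadro constant (1/mol)
q = 1.602176634e-19    # elementary charge (C)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def debye_length(eps_r, T, c_ion):
    # lambda_D = sqrt(eps * kB * T / (2 * NA * q^2 * c_ion)), c_ion in mol/m^3
    return np.sqrt(eps_r * eps0 * kB * T / (2 * NA * q**2 * c_ion))

T, eps_r, c_ion = 300.0, 80.0, 30.0      # Table I values
lam = debye_length(eps_r, T, c_ion)      # roughly nm-scale screening length

r = 2e-9                                 # receptor length (m)
q_eff = q * np.exp(-r / lam)             # screened effective charge

# Transduction gain zeta = q_eff * Ne * g / CG, CG = (1/CGr + 1/CQ)^-1
Ne, g = 3, 1.9044e-4
A_gr = (10e-6)**2                        # assumed exposed area: lgr x lgr
C_gr = A_gr * eps_r * eps0 / lam         # double-layer capacitance
C_q = 2e-2 * A_gr                        # quantum capacitance
C_G = 1.0 / (1.0 / C_gr + 1.0 / C_q)
zeta = q_eff * Ne * g / C_G              # current step per bound receptor (A)
```

+ At 30 mol/m^3 ionic strength the Debye length is on the order of a
+ couple of nanometers, so a 2 nm receptor already loses a sizable
+ fraction of the ligand charge to screening.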
+ Another type of noise that contributes to the overall output
+ current fluctuations in low-dimensional semiconductor materials is
+ 1/f noise, which depends on the gate voltage and is independent of
+ the received signal. We use the commonly utilized charge-noise
+ model describing the behavior of 1/f noise in graphene FETs [11]:
+ S_f(f) = S_{f_{1Hz}}/f^\beta, where S_{f_{1Hz}} is the noise power
+ at 1 Hz, and the noise exponent \beta is an empirical parameter,
+ 0.8 \leq \beta \leq 1.2. As discussed in [10], 1/f noise can be
+ approximated as white noise within physically relevant observation
+ windows. Based on this, the variance of 1/f noise can be written
+ as
+ \sigma_f^2 = \int_0^{f_L} S_f(f_L)\, df + \int_{f_L}^{f_H} S_f(f)\, df,   (5)
+ where f_L is the lower frequency of the observation window, below
+ which the noise power is considered constant, and f_H is the upper
+ frequency, beyond which the noise power is assumed to be
+ negligible. Hence, the variance and mean of the total output
+ current are
+ \sigma^2_{\Delta I_s} = \zeta^2 \sigma^2_{N_{b|s}} + \sigma_f^2
+ and \mu_{\Delta I_s} = \mu_{\Delta I_{b|s}}.
+ III. TIME-DOMAIN DETECTION
+ Since Rx has no knowledge of the interferer concentration
+ statistics, it constructs the optimal ML decision threshold for
+ TDD solely based on its knowledge of the received signal
+ statistics corresponding to the transmitted concentration of
+ information molecules [7]:
+ \gamma_{td} = \frac{1}{\sigma^2_{\Delta I_1} - \sigma^2_{\Delta I_0}} \Big[ \sigma^2_{\Delta I_1}\mu_{\Delta I_0} - \sigma^2_{\Delta I_0}\mu_{\Delta I_1} + \sigma_{\Delta I_1}\sigma_{\Delta I_0}
+ \times \sqrt{(\mu_{\Delta I_1} - \mu_{\Delta I_0})^2 + 2(\sigma^2_{\Delta I_1} - \sigma^2_{\Delta I_0}) \ln(\sigma_{\Delta I_1}/\sigma_{\Delta I_0})} \Big].   (6)
+ As Rx does not account for interference statistics in calculating
+ \gamma_{td}, it uses the bound state probability corresponding to
+ the single-molecule case, namely,
+ p_{b|s} = \frac{c_{m|s}/K_{Dm}}{1 + c_{m|s}/K_{Dm}}.
+ To derive the BEP for TDD, we first obtain the statistics of the
+ receiver output. By applying the law of total expectation, we can
+ express the mean number of bound receptors as
+ \mu_{N_{b|s}} = \int_0^\infty N_r\, p_{b|s}(c_i) f(c_i)\, dc_i,
+ where
+ p_{b|s}(c_i) = \frac{c_{m|s}/K_{Dm} + c_i/K_{Di}}{1 + c_{m|s}/K_{Dm} + c_i/K_{Di}},
+ and f(\cdot) is the probability density function of the log-normal
+ distribution. Hence, \mu_{\Delta I_s} = \zeta \mu_{N_{b|s}}.
+ Similarly, by applying the law of total variance, we obtain the
+ output current variance as
+ \sigma^2_{\Delta I_s} = \zeta^2 \Big[ \int_0^\infty \big(1 - p_{b|s}(c_i)\big) p_{b|s}(c_i) N_r f(c_i)\, dc_i
+ + \int_0^\infty \big(p_{b|s}(c_i) N_r\big)^2 f(c_i)\, dc_i \Big] - \mu^2_{\Delta I_s} + \sigma_f^2.   (7)
+ Therefore, given the decision threshold \gamma_{td}, the BEP for
+ the time-domain detection method can be expressed as follows [7]:
+ P_e^{TDD} = \frac{1}{4} \mathrm{erfc}\left(\frac{\gamma_{td} - \mu_{\Delta I_0}}{\sqrt{2\sigma^2_{\Delta I_0}}}\right) + \frac{1}{4} \mathrm{erfc}\left(\frac{\mu_{\Delta I_1} - \gamma_{td}}{\sqrt{2\sigma^2_{\Delta I_1}}}\right).   (8)
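+ The threshold rule (6) and the BEP expression (8) can be sketched
+ as follows for generic Gaussian hypotheses; the numeric means and
+ variances below are purely illustrative, not the paper's values:

```python
import math

def ml_threshold(mu0, s0, mu1, s1):
    # ML decision boundary between N(mu0, s0^2) and N(mu1, s1^2)
    # with unequal variances, Eq. (6)
    if abs(s1**2 - s0**2) < 1e-30:
        return (mu0 + mu1) / 2          # equal-variance limit: midpoint
    rad = math.sqrt((mu1 - mu0)**2
                    + 2 * (s1**2 - s0**2) * math.log(s1 / s0))
    return (s1**2 * mu0 - s0**2 * mu1 + s1 * s0 * rad) / (s1**2 - s0**2)

def bep(mu0, s0, mu1, s1):
    # Bit error probability for equiprobable bits, Eq. (8)
    g = ml_threshold(mu0, s0, mu1, s1)
    return 0.25 * (math.erfc((g - mu0) / math.sqrt(2 * s0**2))
                   + math.erfc((mu1 - g) / math.sqrt(2 * s1**2)))

gamma = ml_threshold(10.0, 2.0, 20.0, 3.0)
p = bep(10.0, 2.0, 20.0, 3.0)
```

+ The same threshold and BEP structure is reused for FDD in (25) and
+ (30), with the estimated concentrations in place of the current
+ statistics.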
+ IV. FREQUENCY-DOMAIN DETECTION
+ In this section, we introduce the FDD method utilizing the model
+ and observed PSD of the overall noise process (binding noise plus
+ 1/f noise of the graphene bioFET-based MC-Rx) to estimate the
+ received concentration of information molecules c_m, which is then
+ used in the symbol decision. Here, the observed PSD is the
+ periodogram of the noise constructed from the time-domain samples.
+ In the sequel, we describe the model PSD and then introduce the
+ proposed estimation method.
+ A. Theoretical Model of Binding Noise PSD
+ This section describes the theoretical model of the binding noise
+ PSD for a particular pair of information and interference
+ concentrations, namely \lambda = [c_m, c_i]. The binding process
+ of the receptors can be described by the Langmuir reaction model
+ with three states, i.e., unbound (R), bound with information
+ molecules (RM), and bound with interferer molecules (RI), with
+ state occupation probabilities p_R, p_{RM}, and p_{RI},
+ respectively [12]:
+ R + M \underset{k_m^-}{\overset{k_m^+}{\rightleftharpoons}} RM, \quad R + I \underset{k_i^-}{\overset{k_i^+}{\rightleftharpoons}} RI.
+ Hence, the
+ chemical master equations are expressed as follows:
+ \frac{d}{dt}\begin{bmatrix} p_{RM} \\ p_{RI} \\ p_R \end{bmatrix} = \begin{bmatrix} -k_m^- & 0 & k_m^+ c_m \\ 0 & -k_i^- & k_i^+ c_i \\ k_m^- & k_i^- & -k_m^+ c_m - k_i^+ c_i \end{bmatrix} \begin{bmatrix} p_{RM} \\ p_{RI} \\ p_R \end{bmatrix}.   (9)
+ The matrix containing the reaction rates and concentrations in (9)
+ has rank 2, since one state probability can be written in terms of
+ the other two state occupation probabilities as
+ p_R + p_{RM} + p_{RI} = 1. Therefore, by setting the left-hand
+ side of (9) to zero, the equilibrium probabilities can be obtained
+ as
+ p_{RM}^0 = \frac{c_m/K_{Dm}}{1 + \frac{c_m}{K_{Dm}} + \frac{c_i}{K_{Di}}}, \quad p_{RI}^0 = \frac{c_i/K_{Di}}{1 + \frac{c_m}{K_{Dm}} + \frac{c_i}{K_{Di}}},   (10)
+ and p_R^0 = 1 - (p_{RM}^0 + p_{RI}^0). Under equilibrium
+ conditions, the state occupation probabilities can be expressed in
+ terms of the equilibrium state probability and the fluctuations
+ around this probability [12], [13] as
+ p_j(t) = p_j^0 + \Delta p_j(t), \quad j \in \{RM, RI, R\}.   (11)
+ Putting (11) into (9) and using Taylor's expansion, the state
+ fluctuations can be expressed as follows [12]:
+ \frac{d\Delta p'(t)}{dt} = \Omega \Delta p'(t).   (12)
+ In (12), \Delta p'(t) = [\Delta p_{RM}(t); \Delta p_{RI}(t)] is
+ the reduced form of the vector containing the state occupation
+ probabilities, where \Omega is
+ \Omega = \begin{bmatrix} -k_m^+ c_m - k_m^- & -k_m^+ c_m \\ -k_i^+ c_i & -k_i^+ c_i - k_i^- \end{bmatrix}.   (13)
+ The deviation in the output current of the MC-Rx due to the
+ stochastic binding reactions, i.e., \Delta I_b(t), is then
+ obtained as
+ \Delta I_b(t) = \frac{q_{eff}\, g}{C_G}\, z^T R\, \Delta p'(t),   (14)
+ where z = [N_{e^-}; N_{e^-}; 0] is the vector containing the
+ number of elementary charges corresponding to each state, and R is
+ the transformation matrix such that \Delta p(t) = R \Delta p'(t).
+ As \Delta I_b(t) is a stationary process, the theoretical PSD of
+ the binding noise fluctuations can be found by setting t = 0 as
+ follows [12]:
+ S_b(f) = 2\, \mathcal{F}\{E[\Delta I_b(t)\Delta I_b(t+\tau)]\} = 2\, \mathcal{F}\{E[\Delta I_b(0)\Delta I_b(\tau)]\}   (15)
+ = 4 N_r \left(\frac{q_{eff}\, g}{C_G}\right)^2 z^\intercal R \Gamma \left[\mathrm{Re}\{(j2\pi f I_{2\times 2} - \Omega)^{-1}\}\right]^\intercal R^\intercal z,
+ where \mathcal{F}\{\cdot\} stands for the Fourier transform,
+ I_{2\times 2} is the identity matrix, and \Gamma is the matrix
+ containing the expected state
+ probabilities, which is given as follows [12]:
+ \Gamma = \begin{bmatrix} p_{RM}^0 (1 - p_{RM}^0) & -p_{RM}^0 p_{RI}^0 \\ -p_{RM}^0 p_{RI}^0 & p_{RI}^0 (1 - p_{RI}^0) \end{bmatrix}.   (16)
+ Therefore, the theoretical PSD of the total current noise
+ corresponding to a particular (c_m, c_i) pair can be written as
+ S(f) = S_b(f) + S_f(f).   (17)
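+ The matrix form of (13)-(17) can be evaluated directly. In this
+ sketch the transduction prefactor (q_eff g/C_G)^2 N_{e^-}^2 is
+ lumped into a single illustrative amplitude `amp` (our shorthand):

```python
import numpy as np

def model_psd(f, cm, ci, km_on, km_off, ki_on, ki_off, Nr, amp,
              S1Hz=0.0, beta=1.0):
    """Binding-noise PSD from Eqs. (13)-(17); `amp` lumps the
    transduction factors (illustrative scaling, not a fitted value)."""
    KDm, KDi = km_off / km_on, ki_off / ki_on
    den = 1 + cm / KDm + ci / KDi
    pM, pI = (cm / KDm) / den, (ci / KDi) / den          # Eq. (10)
    Omega = np.array([[-km_on * cm - km_off, -km_on * cm],
                      [-ki_on * ci, -ki_on * ci - ki_off]])  # Eq. (13)
    Gamma = np.array([[pM * (1 - pM), -pM * pI],
                      [-pM * pI, pI * (1 - pI)]])            # Eq. (16)
    z = np.array([1.0, 1.0])    # z^T R reduces to [Ne, Ne]; Ne is in `amp`
    M = np.linalg.inv(1j * 2 * np.pi * f * np.eye(2) - Omega)
    Sb = 4 * Nr * amp * (z @ Gamma @ np.real(M).T @ z)       # Eq. (15)
    return Sb + S1Hz / f**beta                               # Eq. (17)

# Table I rates, cm = ci = 5e16 /m^3, unit amplitude
S_lo = model_psd(0.01, 5e16, 5e16, 4e-17, 2.0, 4e-17, 8.0, 120, 1.0)
S_hi = model_psd(100.0, 5e16, 5e16, 4e-17, 2.0, 4e-17, 8.0, 120, 1.0)
```

+ As expected for a sum of Lorentzians, the PSD is flat at low
+ frequency and rolls off beyond the characteristic frequencies.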
+ B. Maximum Likelihood Estimation of PSD Parameters
+ In the following, we describe the parameter value extraction,
+ namely the estimation of the information and interfering molecule
+ concentrations, \lambda = [c_m, c_i], from the noise PSD. The
+ detector uses the estimated information molecule concentration
+ \hat{c}_m for the symbol decision, as explained in Sec. IV-C. Our
+ analysis is based on the following assumptions:
+ • The total noise process, namely the binding fluctuations
+ combined with 1/f noise, is stationary and zero-mean with a
+ single-sided spectrum.
+ • Rx is given the model PSD function expressed by (17) and the
+ binding/unbinding rates of the information and interferer
+ molecules. Rx also knows the number of information molecules
+ transmitted for bits s = 0 and s = 1, as mentioned in Sec. II.
+ Therefore, Rx estimates the steady information and interferer
+ concentrations by taking time samples of the output current
+ \Delta I_b in a sampling window, where we consider a single
+ realization of the interferer concentration c_i following the
+ log-normal distribution mentioned in Sec. II. The DC component of
+ \Delta I_b is discarded to isolate the noise.
+ • The information and interferer concentrations are considered
+ constant in the sampling window, based on the equilibrium
+ assumption discussed in Sec. II [10].
+ • The observed PSD of the time-domain samples and the parametric
+ model of the PSD expressed by (17) are used in the ML estimation
+ of \lambda = [c_m, c_i]. It is assumed that the observed PSD is
+ calculated with the periodogram method.
+ For each transmitted symbol, we have N noise samples
+ x = (x_1, x_2, ..., x_N) taken with sampling period \Delta t.
+ Hence, the total duration of sampling per symbol, namely the
+ length of the sampling window, is T_d = N\Delta t. The periodogram
+ of the sampled signal can be computed from the discrete Fourier
+ transform (DFT) of the samples x. For even N, the periodogram
+ values are then expressed as Y_k = \frac{2\Delta t}{N}|X_k|^2,
+ where k = 1, ..., N/2 - 1 and |X_k| are the magnitudes of the DFT
+ components of x.
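+ The one-sided periodogram Y_k = (2\Delta t/N)|X_k|^2 can be
+ sketched as follows; the white-noise input is used only as a
+ sanity check of the scaling:

```python
import numpy as np

def periodogram(x, dt):
    # One-sided periodogram Y_k = (2*dt/N) |X_k|^2 for k = 1..N/2-1
    N = len(x)
    X = np.fft.fft(x)
    k = np.arange(1, N // 2)
    return k / (N * dt), (2 * dt / N) * np.abs(X[k])**2

# Sanity check: for unit-variance white noise, E[Y_k] = 2*dt
rng = np.random.default_rng(0)
N, dt = 700, 0.005              # Table I values
x = rng.standard_normal(N)
f, Y = periodogram(x, dt)
```

+ The frequencies f_k = k/(N\Delta t) run from 1/T_d up to just
+ below the Nyquist frequency 1/(2\Delta t).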
+ For a stochastic time series of length N, the random variable
+ W_k = 2 Y_k / S(f_k) follows a chi-squared (\chi^2) distribution
+ [14], where S(f_k), given by Eq. (17), is the true PSD at
+ frequency f_k, with f_k = \frac{k}{N\Delta t} and
+ k = 1, ..., N/2 - 1. The \chi^2 distribution with two degrees of
+ freedom is in fact the exponential distribution [15]. Therefore,
+ the periodogram values are exponentially distributed about the
+ true PSD, with the following probability given the model PSD value
+ at a given frequency:
+ p(Y_k | S(f_k)) = \frac{1}{S(f_k)} e^{-\frac{Y_k}{S(f_k)}},   (18)
+ noting that S(f_k) is also the expected value of Y_k at f_k [15].
+ Based on (18), the likelihood of a particular pair of information
+ and interferer concentrations, \lambda = [c_m, c_i], is
+ L(\lambda) = \prod_{k=1}^{N/2-1} p(Y_k | S(f_k, \lambda)) = \prod_{k=1}^{N/2-1} \frac{1}{S(f_k, \lambda)} e^{-\frac{Y_k}{S(f_k, \lambda)}},   (19)
+ Algorithm 1 Algorithm for the FDD
+ 1: Run Newton's method with initial guess \lambda_0 = [c_m^0, c_i^0]
+    to find the optimal \lambda^* = [c_m^*, c_i^*] satisfying (21).
+ 2: \hat{c}_m \leftarrow c_m^*
+ 3: Find the decision threshold \gamma_{fd}.
+ 4: Run the threshold operation:
+ 5: if \hat{c}_m > \gamma_{fd} then estimated bit \hat{s} \leftarrow 1
+ 6: else estimated bit \hat{s} \leftarrow 0
+ 7: end if
+ where \lambda = [c_m, c_i] are the parameters to be estimated.
+ Here, we use the Whittle likelihood, which is asymptotically a
+ good approximation to the exact likelihood and also provides
+ computational efficiency, i.e., O(n \log n) compared to O(n^2) for
+ the exact likelihood [15], [16]. Accordingly, the quasi-log
+ likelihood can be written as follows:
+ \ln L(\lambda) = -\sum_{k=1}^{N/2-1} \left[ \frac{Y_k}{S(f_k, \lambda)} + \ln S(f_k, \lambda) \right].   (20)
+ The ML estimator extracts the value \hat{\lambda} of \lambda that
+ maximizes (20). Maximizing \ln L(\lambda) is equivalent to
+ minimizing l = -\ln L(\lambda) [17], such that
+ \hat{\lambda} = \arg\min_\lambda \{l\}.   (21)
+ Eq. (21) can be solved using numerical methods such as the
+ Newton-Raphson method, attaining the ML estimate within a few
+ iterations [18].
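+ A minimal sketch of the Whittle fit (20)-(21), using a toy
+ single-parameter Lorentzian in place of the full two-ligand model
+ (17), and a generic simplex optimizer instead of Newton's method:

```python
import numpy as np
from scipy.optimize import minimize

def lorentzian_psd(f, c, k_on=1.0, k_off=2.0):
    # Single-ligand Lorentzian PSD: a toy stand-in for S(f, lambda)
    inv_tau = c * k_on + k_off
    return inv_tau / ((2 * np.pi * f)**2 + inv_tau**2)

def neg_whittle(theta, f, Y):
    # l = -ln L = sum_k [ Y_k / S_k + ln S_k ], Eq. (20);
    # log-parametrization keeps the concentration positive
    S = lorentzian_psd(f, np.exp(theta[0]))
    return np.sum(Y / S + np.log(S))

rng = np.random.default_rng(1)
c_true = 3.0
f = np.arange(1, 350) / (700 * 0.005)       # periodogram frequencies f_k
S_true = lorentzian_psd(f, c_true)
Y = S_true * rng.exponential(size=f.size)   # Y_k ~ Exp(mean S(f_k)), Eq. (18)

res = minimize(neg_whittle, x0=[0.0], args=(f, Y), method="Nelder-Mead")
c_hat = float(np.exp(res.x[0]))
```

+ The estimate concentrates around the true concentration as the
+ number of frequency bins grows, which is the asymptotic
+ unbiasedness exploited in Sec. IV-C.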
+ C. Symbol Detection
+ The ML estimator described in Sec. IV-B is asymptotically
+ unbiased, such that \hat{\lambda} tends to a multivariate normal
+ distribution [19] with E[\hat{\lambda}] = \lambda, and the
+ respective variances of the estimated parameters are the diagonal
+ elements of the inverse Fisher information matrix (FIM)
+ F(\lambda):
+ \sigma^2_{\hat{\lambda}_i} = (F(\lambda))^{-1}_{(ii)}, \quad F(\lambda)_{(ij)} = E\left[\frac{\partial^2 l}{\partial \lambda_i \partial \lambda_j}\right],   (22)
+ where the expectation is taken with respect to the probability
+ distribution of the observed spectrum p(Y_1, Y_2, ..., Y_N).
+ Putting l = -\ln L(\lambda) into (22), the FIM can be expanded as
+ F_{(ij)} = E\left[ \sum_{k=1}^{N/2-1} \frac{S(f_k) - Y_k}{S(f_k)^2} \frac{\partial^2 S}{\partial \lambda_i \partial \lambda_j} + \frac{2Y_k - S(f_k)}{S(f_k)^3} \frac{\partial S}{\partial \lambda_i} \frac{\partial S}{\partial \lambda_j} \right].   (23)
+ Considering that S(f) is a slowly varying function, there is no
+ need to calculate the individual periodogram values in (23),
+ because the periodogram values can be smoothed by summing over
+ frequency, such that
+ \sum_{k=1}^{N/2-1} Y_k \phi(f_k) \approx \sum_{k=1}^{N/2-1} S(f_k) \phi(f_k)
+ for any smooth function \phi(f_k) [19], [20]. Based on this,
+ Eq. (23) can be simplified as [19]
+ F_{(ij)} \simeq \sum_{k=1}^{N/2-1} \frac{1}{S(f_k)^2} \frac{\partial S}{\partial \lambda_i} \frac{\partial S}{\partial \lambda_j},   (24)
+ where the derivatives are taken at the true values of the
+ parameters. This is a good approximation for a large number of
+ samples, such that the periodogram values can be approximated as
+ Gaussian by the central limit theorem [21]. Rx decides the
+ transmitted bit by applying the ML decision rule to the estimated
+ information molecule concentration \hat{c}_m, as described by the
+ pseudo-algorithm for FDD in Algorithm 1. The ML decision threshold
+ for FDD is
+ \gamma_{fd} = \frac{1}{\sigma^2_{\hat{c}_m|1} - \sigma^2_{\hat{c}_m|0}} \Big[ \sigma^2_{\hat{c}_m|1} c_{m|0} - \sigma^2_{\hat{c}_m|0} c_{m|1} + \sigma_{\hat{c}_m|1}\sigma_{\hat{c}_m|0}
+ \times \sqrt{(c_{m|1} - c_{m|0})^2 + 2(\sigma^2_{\hat{c}_m|1} - \sigma^2_{\hat{c}_m|0}) \ln\left(\sigma_{\hat{c}_m|1}/\sigma_{\hat{c}_m|0}\right)} \Big],   (25)
+ where \sigma^2_{\hat{c}_m|s} is the variance and c_{m|s} is the
+ expected value of the estimated information molecule concentration
+ when the transmitted bit is s \in \{0, 1\}. Since Rx does not know
+ the true value of the interfering molecule concentration, it
+ computes the decision threshold as if there were no interference.
+ Therefore, the following model PSD is used while computing the
+ threshold \gamma_{fd}:
+ S(f, c_m) = 4 N_r \zeta^2 \frac{1/\tau_m}{(2\pi f)^2 + (1/\tau_m)^2}\, p_b (1 - p_b) + S_f(f),   (26)
+ where \tau_m = 1/(c_m k_m^+ + k_m^-) and
+ p_b = \frac{c_m}{K_{Dm} + c_m}. Hence, using (26), and (24) with
+ \lambda = [c_m], the variance of the estimated information
+ molecule concentration corresponding to the transmitted bit
+ s \in \{0, 1\} can be written as
+ \sigma^2_{\hat{c}_m|s} = 1 \Big/ \sum_{k=1}^{N/2-1} \frac{1}{S(f_k)^2} \left(\frac{\partial S}{\partial c_m}\right)^2 \Big|_{c_m = c_{m|s}}.
+ Note that this expression for \sigma^2_{\hat{c}_m|s} does not give
+ the actual asymptotic variances, since Rx estimates the value of
+ c_m based on the model PSD described by (17).
+ D. Asymptotic Bit Error Probability
+ To calculate the BEP for FDD, we need the actual values of the
+ variance of the estimated information molecule concentration
+ corresponding to s = 0 and s = 1, i.e., \sigma^2_{\hat{c}_m|s}.
+ Using the model PSD S(f, (c_m, c_i)) given by (17), and (24) with
+ \lambda = [c_m, c_i], the variance can be expressed as
+ \sigma^2_{\hat{c}_m|s} = (F_s(\lambda))^{-1}_{(11)}, where the
+ elements of F_s are:
+ F_{s(11)} = \sum_{k=1}^{N/2-1} \frac{1}{S(f_k)^2} \left(\frac{\partial S}{\partial c_m}\right)^2 \Big|_{c_m = c_{m|s},\, c_i = \mu_{c_i}},   (27)
+ F_{s(22)} = \sum_{k=1}^{N/2-1} \frac{1}{S(f_k)^2} \left(\frac{\partial S}{\partial c_i}\right)^2 \Big|_{c_m = c_{m|s},\, c_i = \mu_{c_i}},   (28)
+ F_{s(12),(21)} = \sum_{k=1}^{N/2-1} \frac{1}{S(f_k)^2} \left(\frac{\partial S}{\partial c_m}\right) \left(\frac{\partial S}{\partial c_i}\right) \Big|_{c_m = c_{m|s},\, c_i = \mu_{c_i}}.   (29)
+ As a result, the BEP for FDD can be written as
+ P_e^{FDD} = \frac{1}{4} \mathrm{erfc}\left(\frac{\gamma_{fd} - c_{m|0}}{\sqrt{2\sigma^2_{\hat{c}_m|0}}}\right) + \frac{1}{4} \mathrm{erfc}\left(\frac{c_{m|1} - \gamma_{fd}}{\sqrt{2\sigma^2_{\hat{c}_m|1}}}\right).   (30)
+ It should be noted that (30) is an asymptotic expression based on
+ the Gaussian distribution assumption in Sec. IV-C.
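+ The approximation (24) and the inversion in (27)-(29) can be
+ sketched with numerical derivatives; the two-Lorentzian model
+ below is a toy stand-in for S(f, [c_m, c_i]), not the paper's
+ calibrated PSD:

```python
import numpy as np

def fisher_variance(S_func, lam, f, eps=1e-6):
    """Variance of the first parameter: build F_ij per Eq. (24)
    with central differences, then return (F^-1)_(11)."""
    lam = np.asarray(lam, dtype=float)
    S0 = S_func(f, lam)
    d = []
    for i in range(lam.size):
        h = eps * max(abs(lam[i]), 1.0)
        lp, lm = lam.copy(), lam.copy()
        lp[i] += h
        lm[i] -= h
        d.append((S_func(f, lp) - S_func(f, lm)) / (2 * h))
    F = np.array([[np.sum(d[i] * d[j] / S0**2) for j in range(lam.size)]
                  for i in range(lam.size)])
    return np.linalg.inv(F)[0, 0]

def two_lorentzians(f, lam):
    # Toy two-parameter PSD: two Lorentzian components with corner
    # rates a = cm + 2 and b = ci + 8 (illustrative rate constants)
    cm, ci = lam
    a, b = cm + 2.0, ci + 8.0
    return (a / ((2 * np.pi * f)**2 + a**2)
            + b / ((2 * np.pi * f)**2 + b**2))

f = np.arange(1, 350) / 3.5
var_lowN = fisher_variance(two_lorentzians, [3.0, 3.0], f[::4])
var_fullN = fisher_variance(two_lorentzians, [3.0, 3.0], f)
```

+ Since each frequency bin adds a positive-semidefinite term to the
+ FIM, using more bins can only shrink the asymptotic variance,
+ which is the trend reported for increasing N in Sec. V.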
+
906
+ 10-2
907
+ 10-1
908
+ 100
909
+ 101
910
+ 102
911
+ Frequency (Hz)
912
+ 10-23
913
+ 10-22
914
+ 10-21
915
+ PSD (A2/Hz)
916
+ Interferer molecule
917
+ Information molecule
918
+ fch|m
919
+ fch|i
920
+ Fig. 2: The model PSD with highlighted characteristic fre-
921
+ quencies.
922
+ V. PERFORMANCE EVALUATION
+ In this section, we analyze the performance of FDD and TDD in
+ terms of BEP. The default values of the system parameters are
+ given in Table I, with the reaction rates adopted from [7]. In the
+ rest of the paper, saturation and non-saturation correspond to the
+ Rx's receptors being saturated due to high ligand concentrations
+ and being far from saturation, respectively. To simulate
+ saturation, the number of transmitted information molecules is
+ taken as N_{m|s \in \{0,1\}} = [2, 5] \times 10^4. Otherwise,
+ default values are used.
+ We first consider the effect of the mean interference
+ concentration, \mu_{c_i}, on the BEP performance of TDD and FDD
+ under saturation and non-saturation conditions. We define a tuning
+ parameter \gamma such that the mean interferer concentration is
+ given by \mu_{c_i} = \gamma\, c_{m|s=1}. As shown in Fig. 3a, FDD
+ outperforms TDD in both scenarios. The performance of TDD degrades
+ dramatically due to Rx saturation with increasing \mu_{c_i}. In
+ non-saturation, the performance of FDD improves with increasing
+ TABLE I: Default Values of System Parameters
+ Temperature (T): 300 K
+ Microfluidic channel height (h_ch), width (l_ch): 5 µm, 10 µm
+ Average flow velocity (u): 10 µm/s
+ Distance of Rx's center position to Tx (x_R): 1 mm
+ Ionic concentration of medium (c_ion): 30 mol/m^3
+ Relative permittivity of medium (ϵ/ϵ0): 80
+ Intrinsic diffusion coefficient (D_0): 2 × 10^-11 m^2/s
+ Binding rate of information and interferer molecules (k_m^+, k_i^+): 4 × 10^-17 m^3/s
+ Unbinding rate of information molecules (k_m^-): 2 s^-1
+ Unbinding rate of interferers (k_i^-): 8 s^-1
+ Average # of electrons in a ligand (N_e-): 3
+ Number of independent receptors (N_r): 120
+ Length of a surface receptor (r): 2 nm
+ Transconductance of graphene bioFET (g): 1.9044 × 10^-4 A/V
+ Width of graphene in transistor (l_gr): 10 µm
+ Quantum capacitance of graphene per unit area (c_q): 2 × 10^-2 F/m^2
+ # of transmitted ligands for s = 0, 1 (N_m|s): [1, 5] × 10^3
+ # of noise samples (N): 700
+ Sampling period (∆t): 0.005 s
+ Mean interference to information concentration ratio (γ = µ_ci/c_m|s=1): 1
+ Interference mean/std ratio (µ_ci/σ_ci): 10
+ Power of 1/f noise at 1 Hz (S_f1Hz): 10^-23 A^2/Hz
+ \mu_{c_i} up to a certain point, beyond which a further increase
+ in \mu_{c_i} degrades the performance of FDD, because when Rx is
+ not saturated, the variance of the estimated information molecule
+ concentration \sigma^2_{\hat{c}_m|s} is minimized at a certain
+ \mu_{c_i}, beyond which its value increases with increasing
+ \mu_{c_i}. In saturation, however, \sigma^2_{\hat{c}_m|s}
+ monotonically increases with \mu_{c_i}.
+ Next, we consider the effect of the similarity parameter, namely
+ the affinity ratio of information and interferer molecules,
+ \eta = K_{Di}/K_{Dm}, on the BEP performance for the saturation
+ and non-saturation cases. As displayed in Fig. 3b, FDD outperforms
+ TDD in both the saturation and non-saturation cases. Regarding the
+ saturation case, the performances of both detection methods
+ improve with increasing similarity up to a certain point, because
+ the effect of interference on the detection performance weakens;
+ namely, the bound state probability for interferer molecules
+ decreases. However, when the similarity is further increased, the
+ performance of FDD degrades. Intuitively, this is because the
+ characteristic frequencies corresponding to bits s = 0 and
+ s = 1 [22],
+ f_{ch|s} = [f_{ch|m}, f_{ch|i}] = \frac{1}{4\pi} \left[ \frac{1}{\tau_{m|s}} + \frac{1}{\tau_i} \pm \sqrt{\left(\frac{1}{\tau_{m|s}} - \frac{1}{\tau_i}\right)^2 + 4 k_m^+ c_{m|s} k_i^+ c_i} \right],   (31)
+ where \tau_{m|s} = 1/(c_{m|s} k_m^+ + k_m^-) and
+ \tau_i = 1/(c_i k_i^+ + k_i^-), approach each other in the
+ spectrum, making it difficult to distinguish the bits. As shown in
+ Fig. 2 for an example scenario, two characteristic frequencies,
+ f_{ch|m} and f_{ch|i}, appear in the spectrum for each transmitted
+ bit due to the binding of the two types of molecules, namely the
+ information and interferer molecules, with their order depending
+ on the concentrations and binding/unbinding rates of the
+ individual molecule types. In the non-saturation case, we do not
+ observe this phenomenon, because the characteristic frequencies do
+ not come close enough to each other to degrade the detection
+ performance with increasing similarity. We also consider the
+ effect of the number of time samples, N, and the sampling period
+ \Delta t on the BEP performance. As shown in Fig. 3c, the
+ performance of FDD improves with N. This is expected, as taking
+ more samples decreases the variance of the estimated information
+ molecule concentration \sigma^2_{\hat{c}_m|s} and, hence, the BEP.
+ For TDD, the performance does not change with N, as Rx takes one
+ sample in the sampling window. For varying \Delta t, as shown in
+ Fig. 4, the performance of FDD improves with increasing \Delta t
+ in the non-saturation case. Note that \Delta t should be shorter
+ than the characteristic time scale of any reaction to be able to
+ capture the fluctuations and to satisfy the
+ sampling-at-equilibrium assumption discussed in Sec. II.
+ Therefore, we consider \Delta t values satisfying this condition.
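+ Eq. (31) follows from the eigenvalues of \Omega in (13); a sketch
+ with the Table I rates, cross-checked against a direct eigenvalue
+ computation:

```python
import numpy as np

def characteristic_freqs(cm, ci, km_on, km_off, ki_on, ki_off):
    # Eq. (31): the two cut-off frequencies are the magnitudes of the
    # eigenvalues of Omega (Eq. (13)) divided by 2*pi
    inv_tm = cm * km_on + km_off
    inv_ti = ci * ki_on + ki_off
    rad = np.sqrt((inv_tm - inv_ti)**2 + 4 * km_on * cm * ki_on * ci)
    return ((inv_tm + inv_ti - rad) / (4 * np.pi),
            (inv_tm + inv_ti + rad) / (4 * np.pi))

# Table I rates, with cm = ci = 5e16 molecules/m^3
f1, f2 = characteristic_freqs(5e16, 5e16, 4e-17, 2.0, 4e-17, 8.0)

# Cross-check: eigenvalues of Omega for the same concentrations,
# where k_m^+ cm = k_i^+ ci = 2.0
Omega = np.array([[-4.0, -2.0], [-2.0, -10.0]])
eig = np.sort(np.abs(np.linalg.eigvals(Omega))) / (2 * np.pi)
```

+ The coupling term 4 k_m^+ c_{m|s} k_i^+ c_i in the square root is
+ what pushes the two cut-off frequencies apart or together, which
+ underlies the similarity-dependent behavior discussed above.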
+ VI. CONCLUSION
+ In this paper, we proposed an FDD method for the FET-based MC-Rx,
+ which utilizes the output noise PSD to extract the transmitted
+ bit. We derived the BEP for the proposed method and for a one-shot
+ TDD method, considering the existence
+ Fig. 3: BEP for varying (a) mean interference concentration level,
+ (b) similarity of affinities for information and interferer
+ molecules, and (c) number of time samples N.
+ Fig. 4: BEP for varying sampling period \Delta t.
+ of a single type of interferer molecules in a microfluidic
+ channel. Our analysis reveals that the proposed detection
+ method significantly outperforms the TDD, primarily when
+ high interference exists in the channel.
+ ACKNOWLEDGMENT
+ This work was supported in part by the AXA Research Fund
+ (AXA Chair for Internet of Everything at Koç University), the
+ Horizon 2020 Marie Skłodowska-Curie Individual Fellowship
+ under Grant Agreement 101028935, The Scientific and
+ Technological Research Council of Turkey (TUBITAK) under
+ Grant #120E301, and the Huawei Graduate Research Scholarship.
+ REFERENCES
+ [1] O. B. Akan, et al., “Fundamentals of molecular information and communication science,” Proceedings of the IEEE, vol. 105, no. 2, pp. 306–318, 2016.
+ [2] I. F. Akyildiz, et al., “Panacea: An internet of bio-nanothings application for early detection and mitigation of infectious diseases,” IEEE Access, vol. 8, pp. 140512–140523, 2020.
+ [3] M. Kuscu, et al., “Transmitter and receiver architectures for molecular communications: A survey on physical design with modulation, coding, and detection techniques,” Proceedings of the IEEE, vol. 107, no. 7, pp. 1302–1341, 2019.
+ [4] M. Kuscu, et al., “Fabrication and microfluidic analysis of graphene-based molecular communication receiver for internet of nano things (IoNT),” Scientific Reports, vol. 11, no. 1, pp. 1–20, 2021.
+ [5] T. Mora, “Physical limit to concentration sensing amid spurious ligands,” Physical Review Letters, vol. 115, no. 3, p. 038102, 2015.
+ [6] M. Kuscu and O. B. Akan, “Channel sensing in molecular communications with single type of ligand receptors,” IEEE Transactions on Communications, vol. 67, no. 10, pp. 6868–6884, 2019.
+ [7] M. Kuscu and O. B. Akan, “Detection in molecular communications with ligand receptors under molecular interference,” Digital Signal Processing, vol. 124, p. 103186, 2022.
+ [8] M. Kuscu and O. B. Akan, “Modeling convection-diffusion-reaction systems for microfluidic molecular communications with surface-based receivers in internet of bio-nano things,” PLoS ONE, vol. 13, no. 2, p. e0192202, 2018.
+ [9] A. O. Bicen and I. F. Akyildiz, “System-theoretic analysis and least-squares design of microfluidic channels for flow-induced molecular communication,” IEEE Transactions on Signal Processing, vol. 61, no. 20, pp. 5000–5013, 2013.
+ [10] M. Kuscu and O. B. Akan, “Modeling and analysis of SiNW FET-based molecular communication receiver,” IEEE Transactions on Communications, vol. 64, no. 9, pp. 3708–3721, 2016.
+ [11] I. Heller, et al., “Charge noise in graphene transistors,” Nano Letters, vol. 10, no. 5, pp. 1563–1567, 2010.
+ [12] L. J. Mele, et al., “General model and equivalent circuit for the chemical noise spectrum associated to surface charge fluctuation in potentiometric sensors,” IEEE Sensors Journal, vol. 21, no. 5, pp. 6258–6269, 2020.
+ [13] J. Mucksch, et al., “Quantifying reversible surface binding via surface-integrated fluorescence correlation spectroscopy,” Nano Letters, vol. 18, no. 5, pp. 3185–3192, 2018.
+ [14] S. Vaughan, “A Bayesian test for periodic signals in red noise,” Monthly Notices of the Royal Astronomical Society, vol. 402, no. 1, pp. 307–320, 2010.
+ [15] D. Barret and S. Vaughan, “Maximum likelihood fitting of X-ray power density spectra: application to high-frequency quasi-periodic oscillations from the neutron star X-ray binary 4U1608-522,” The Astrophysical Journal, vol. 746, no. 2, p. 131, 2012.
+ [16] A. M. Sykulski, et al., “The debiased Whittle likelihood,” Biometrika, vol. 106, no. 2, pp. 251–266, 2019.
+ [17] E. R. Anderson, et al., “Modeling of solar oscillation power spectra,” The Astrophysical Journal, vol. 364, pp. 699–705, 1990.
+ [18] D. Pfefferlé and S. I. Abarzhi, “Whittle maximum likelihood estimate of spectral properties of Rayleigh-Taylor interfacial mixing using hot-wire anemometry experimental data,” Physical Review E, vol. 102, no. 5, p. 053107, 2020.
+ [19] T. Toutain and T. Appourchaux, “Maximum likelihood estimators: An application to the estimation of the precision of helioseismic measurements,” Astronomy and Astrophysics, vol. 289, pp. 649–658, 1994.
+ [20] M. Levin, “Power spectrum parameter estimation,” IEEE Transactions on Information Theory, vol. 11, no. 1, pp. 100–107, 1965.
+ [21] K. Libbrecht, “On the ultimate accuracy of solar oscillation frequency measurements,” The Astrophysical Journal, vol. 387, pp. 712–714, 1992.
+ [22] M. Frantlović, et al., “Analysis of the competitive adsorption and mass transfer influence on equilibrium mass fluctuations in affinity-based biosensors,” Sensors and Actuators B: Chemical, vol. 189, pp. 71–79, 2013.
+
HtAzT4oBgHgl3EQfHvtN/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,419 @@
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf,len=418
+ Frequency-Domain Detection for Molecular Communications
+ Meltem Civas∗†, Ali Abdali∗, Murat Kuscu∗, Ozgur B. Akan∗†
+ ∗Center for neXt-generation Communications (CXC), Department of Electrical and Electronics Engineering, Koç University, 34450, Istanbul, Turkey, {mcivas16, aabdali21, mkuscu, akan}@ku.edu.tr
+ †Internet of Everything (IoE) Group, Electrical Engineering Division, Department of Engineering, University of Cambridge, CB3 0FA Cambridge, UK, {mc2365, oba21}@cam.ac.uk
+ Abstract—Molecular Communications (MC) is a bio-inspired communication paradigm which uses molecules as information carriers, thereby requiring unconventional transmitter/receiver architectures and modulation/detection techniques.
+ Practical MC receivers (MC-Rxs) can be implemented based on field-effect transistor biosensor (bioFET) architectures, where surface receptors reversibly react with ligands, whose concentration encodes the information. The time-varying concentration of ligand-bound receptors is then translated into electrical signals via field-effect, which is used to decode the transmitted information. However, ligand-receptor interactions do not provide an ideal molecular selectivity, as similar types of ligands, i.e., interferers, co-existing in the MC channel can interact with the same type of receptors, resulting in cross-talk. Overcoming this molecular cross-talk with time-domain samples of the Rx’s electrical output is not always attainable, especially when the Rx has no knowledge of the interferer statistics or operates near saturation. In this study, we propose a frequency-domain detection (FDD) technique for bioFET-based MC-Rxs, which exploits the difference in the binding reaction rates of different types of ligands, reflected in the noise spectrum of the ligand-receptor binding fluctuations. We analytically derive the bit error probability (BEP) of the FDD technique and demonstrate its effectiveness in decoding transmitted concentration signals under stochastic molecular interference, in comparison to a widely used time-domain detection (TDD) technique. The proposed FDD method can be applied to any biosensor-based MC-Rx that employs receptor molecules as the channel-Rx interface.
+ Index Terms—Molecular communications, receiver, frequency-domain detection, biosensor, ligand-receptor interactions
+ arXiv:2301.01049v1 [eess.SP] 3 Jan 2023
+ I. INTRODUCTION
+ Using molecules to encode and transfer information, i.e., Molecular Communications (MC), is nature’s way of connecting bio things, such as natural cells, with each other. Engineering this unconventional communication paradigm to extend our connectivity to synthetic bio-nano things, such as nanobiosensors and artificial cells, is the vision that gave rise to the Internet of Bio-Nano Things (IoBNT), a novel networking framework promising unprecedented healthcare and environmental applications of bionanotechnology [1], [2].
+ Being fundamentally different from conventional electromagnetic communication techniques, MC requires novel transceiver architectures along with new modulation, coding, and detection techniques that can cope with the highly time-varying, nonlinear, and complex channel characteristics of biochemical environments [3]. The design of MC receivers (MC-Rxs) and detection techniques has unquestionably attracted the most attention in the literature. However, due to the simplicity it provides in modeling, many of the previous studies considered passive Rx architectures that are physically unlinked from the MC channel, and thus of little practical relevance [3]. An emerging trend in MC is to model and design more practical MC-Rxs that employ ligand receptors on their surface as selective biorecognition units, resembling the sensing and communication interface of natural cells. One such design, which was practically implemented in [4], is based on field-effect transistor biosensors (bioFETs), where the ligand-receptor (LR) interactions are translated into electrical signals via field-effect for the decoding of the transmitted information.
+ LR interactions are fundamental to the sensing and communication of natural cells. However, the selectivity of biological receptors against their target ligands is not ideal, and this so-called receptor promiscuity results in cross-talk from other types of molecules co-existing in the biochemical environment [5]. This cross-talk is often dealt with by natural cells through intracellular chemical reaction networks and multi-state receptor mechanisms, such as kinetic proofreading [6]. The same molecular interference problem also applies to abiotic MC-Rxs that employ ligand receptors, and thus should be addressed in developing reliable detection techniques [7].
+ Our previous studies on biosynthetic MC-Rxs have addressed the molecular interference problem by developing detection techniques based on sampling the bound time intervals of individual receptors to discriminate between interferer and information molecules [6], [7]. However, this approach is not plausible for biosensor-based MC-Rxs, which have no access to the time-trajectory of individual receptor states. On the other hand, decoding information from the time-varying concentration of bound receptors performs poorly due to the indistinguishability of different ligand types in the time domain, especially when the Rx does not have any knowledge of the statistics of the interferer concentration, and when the Rx operates near saturation [7].
+ In this paper, we develop a frequency-domain detection (FDD) technique for biosensor-based MC-Rxs based on LR binding interactions, which can distinguish different types of ligands co-existing in the channel and estimate their individual concentrations from the power spectral density (PSD) of the fluctuations in receptor occupancy, i.e., binding noise.
+ Stochastic and reversible LR interactions can be modeled as a two-state continuous-time Markov process at equilibrium, where the state transition rates are given by the binding and unbinding rates of the LR pair [5]. Although many different types of ligands can interact with the same type of receptors, these interactions are typically governed by different binding and unbinding rates. This difference in reaction rates is reflected in a difference in the characteristic frequency fch of the interactions, which is the reciprocal of the correlation time τB of the Markov process at equilibrium, and also a function of the ligand concentration and the LR reaction rates [7]. The characteristic frequency of the LR pair manifests itself as a cut-off frequency in the Lorentzian-shaped PSD of the binding noise. The proposed FDD method exploits this correlation in the frequency domain to estimate the concentration of information molecules in a Maximum Likelihood (ML) manner, and using the estimated concentration, it optimally decodes the transmitted information. We obtained the bit error probability (BEP) for FDD in closed form and compared it to the error performance of a time-domain detection (TDD) technique, which relies on the number of bound receptors sampled at a single sampling point. The results of the performance analysis indicate that the proposed FDD method vastly outperforms the TDD method, especially at high interference conditions.
+ II. SYSTEM MODEL
+ We consider a microfluidic MC system utilizing binary concentration shift keying (CSK) such that the transmitter (Tx) instantly releases Nm|s molecules at the beginning of each signaling interval [8]. Here m stands for information molecules, and s ∈ {0, 1} denotes the transmitted bit. The signaling interval is assumed to be large enough to neglect inter-symbol interference (ISI). The microfluidic channel is abstracted as a 3-dimensional channel with a rectangular cross-section, as shown in Fig. 1(a). The Tx is located at the channel inlet, and the molecules are released instantly and uniformly across the cross-section of the channel and propagate through unidirectional fluid flow from Tx to Rx, which is located at the channel bottom. We consider a two-dimensional graphene bioFET-based MC-Rx as illustrated in Fig. 1(b) [4]. There is a single type of interferer molecules in the channel, which can also bind the receptors on the Rx, though with different reaction rates. The concentration of the interferer molecules in the Rx’s vicinity, ci, at the sampling time is assumed to follow a log-normal distribution with mean µci and variance σ²ci. We assume that the Rx has knowledge of the number of information molecules transmitted, Nm|s, and the binding/unbinding rates of information and interferer molecules.
+ [Fig. 1 diagram omitted: a) 3D view of the microfluidic channel, showing the locations of Tx and Rx; b) Graphene FET-based MC-Rx exposed to information and interferer molecules.]
+ The released molecules propagate along the microfluidic channel through convection and diffusion. While convection results in the uniform and unidirectional drift of the transmitted molecules from Tx to Rx, diffusion acts in all directions, causing the dispersion of the molecules as they propagate. The dispersion results in a smooth concentration profile which can be approximated by a Gaussian distribution.
+ Assuming that the number of ligands binding the receptors is low enough to neglect the change of concentration in the channel, the propagation can be represented as a one-dimensional convection-diffusion problem with the following solution [8]:
+ cm|s(x, t) = Nm|s / (Ach √(4πDt)) × exp(−(x − ut)² / (4Dt)),   (1)
+ where cm|s(x, t) is the ligand concentration at position x and time t, Ach = hch × lch is the cross-sectional area of the channel with hch and lch being the channel height and width, respectively, u is the fluid flow velocity along the x-axis, and D is the effective diffusion coefficient.
+ For channels with a rectangular cross-section, D can be expressed as follows [9]:
+ D = [1 + 8.5 u² hch² lch² / (210 D0² (hch² + 2.4 hch lch + lch²))] × D0,   (2)
+ where D0 is the diffusion coefficient of the ligand.
+ The peak of the ligand concentration profile given by (1) reaches the Rx’s center position, xR, at time tD = xR/u. As the MC channel characteristic is similar to a low-pass filter due to diffusion, the concentration signal is slowly varying around the Rx position, thus allowing equilibrium conditions for the LR reactions with a steady ligand concentration in a short time window around tD [8], [9]. The Rx can sample the receptor states at time t = tD, when the ligand concentration is cm|s(xR, tD) = Nm|s / (Ach √(4πDtD)) [10]. Therefore, the number of bound receptors, Nb|s, follows a Binomial distribution with mean µNb|s = pb|s Nr and variance σ²Nb|s = pb|s (1 − pb|s) Nr [10], where Nr is the number of independent surface receptors.
+ The bound state probability of a single receptor, pb|s, in the presence of two different types of ligands, i.e., information and interferer molecules, is given as [6]
+ pb|s = (cm|s/KDm + ci/KDi) / (1 + cm|s/KDm + ci/KDi),   (3)
+ where KDm = k−m/k+m and KDi = k−i/k+i are the dissociation constants of the information and interferer molecules, respectively.
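A minimal numerical sketch of Eq. (3) and the resulting bound-receptor statistics; the concentrations and dissociation constants below are arbitrary illustrative values, not taken from the paper.

```python
# Eq. (3): bound-state probability with information and interferer ligands.
def p_bound(c_m, c_i, K_Dm, K_Di):
    """Competitive occupancy of a single receptor by two ligand types."""
    x = c_m / K_Dm + c_i / K_Di
    return x / (1.0 + x)

N_r = 1000                              # number of independent surface receptors
p_b = p_bound(c_m=2e-6, c_i=1e-6, K_Dm=4e-6, K_Di=2e-5)
mu_Nb = p_b * N_r                       # Binomial mean of bound receptors
var_Nb = p_b * (1.0 - p_b) * N_r        # Binomial variance
```

As pb approaches 1 (saturation) the variance collapses, which is one reason time-domain detection degrades near saturation.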
+ The binding of charged ligands to the receptors creates an effective charge reflected on the graphene channel, expressed as QGr|s = Nb|s qeff Ne−, where Ne− is the number of free electrons per ligand molecule. qeff is the effective charge of a single electron of a bound ligand in the presence of ionic screening, i.e., Debye screening: qeff = q × exp(−r/λD), where q is the elementary charge, r is the length of a surface receptor, and λD is the Debye length, given by λD = √(ϵκBT / (2NAq²cion)), where ϵ is the permittivity of the medium, κB is the Boltzmann constant, and NA is Avogadro’s constant [10]. Then, the mean surface potential due to bound molecules can be written as ΨGr|s = QGr|s/CG, where CG = (1/CGr + 1/CQ)⁻¹ is the total gate capacitance of the bioFET. CGr is the electrical double layer capacitance between the graphene and electrolyte channel, CGr = AGr ϵ/λD, with AGr being the area of the graphene surface exposed to the electrolyte, and CQ is the quantum capacitance, CQ = cq × AGr, where cq is the quantum capacitance of graphene per unit area [4].
+ The deviation in the output current due to bound molecules at equilibrium is
+ ∆Ib|s = g × ΨGr|s,   (4)
+ where g is the bioFET transconductance. For large Nr, the number of bound receptors Nb|s at the sampling time can be approximated as Gaussian distributed [10], i.e., Nb|s ∼ N(µNb|s, σ²Nb|s). As the transduction process is linear, the change in the output current due to bound molecules can also be approximated as Gaussian with mean µ∆Ib|s = ζ µNb|s and variance σ²∆Ib|s = ζ² σ²Nb|s, where ζ = qeff Ne− g / CG.
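The transduction chain from bound receptors to the output-current shift of Eq. (4) can be sketched end to end as below; every physical parameter value here is an assumed illustrative number, not a value from the paper.

```python
import numpy as np

# Sketch of the transduction chain leading to Eq. (4); assumed parameter values.
q = 1.602e-19               # elementary charge (C)
kB = 1.381e-23              # Boltzmann constant (J/K)
NA = 6.022e23               # Avogadro's constant (1/mol)
eps = 80 * 8.854e-12        # permittivity of the aqueous medium (F/m)
T = 300.0                   # temperature (K)
c_ion = 1.0                 # ionic concentration (mol/m^3)
r = 2e-9                    # receptor length (m)

lam_D = np.sqrt(eps * kB * T / (2 * NA * q**2 * c_ion))  # Debye length
q_eff = q * np.exp(-r / lam_D)       # screened effective charge per electron

A_gr = 1e-9                 # graphene area exposed to electrolyte (m^2)
c_q = 2e-2                  # quantum capacitance per unit area (F/m^2)
C_gr = A_gr * eps / lam_D   # electrical double-layer capacitance
C_q = c_q * A_gr            # quantum capacitance
C_G = 1.0 / (1.0 / C_gr + 1.0 / C_q)  # series total gate capacitance

g = 1e-4                    # bioFET transconductance (A/V)
N_e = 3                     # free electrons per bound ligand
zeta = q_eff * N_e * g / C_G          # current shift per bound receptor
delta_I = zeta * 350        # Eq. (4) evaluated for an assumed 350 bound receptors
```

Because the chain is linear in the number of bound receptors, the Gaussian statistics of Nb|s carry through to ∆Ib|s with scale factor ζ, as stated in the text.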
Another type of noise that contributes to the overall output current fluctuations in low-dimensional semiconductor materials is 1/f noise, which depends on the gate voltage and is independent of the received signal. We use the commonly utilized charge-noise model describing the behavior of 1/f noise in graphene FETs [11]: $S_f(f) = S_{f_{1\mathrm{Hz}}}/f^\beta$, where $S_{f_{1\mathrm{Hz}}}$ is the noise power at 1 Hz, and the noise exponent $\beta$ is an empirical parameter with $0.8 \leq \beta \leq 1.2$. As discussed in [10], 1/f noise can be approximated as white noise within physically relevant observation windows. Based on this, the variance of the 1/f noise can be written as

\sigma_f^2 = \int_0^{f_L} S_f(f_L)\,df + \int_{f_L}^{f_H} S_f(f)\,df,    (5)

where $f_L$ is the lower frequency of the observation window, below which the noise power is considered constant, and $f_H$ is the upper frequency, beyond which the noise power is assumed to be negligible. Hence, the mean and variance of the total output current deviation are $\mu_{\Delta I_s} = \mu_{\Delta I_{b|s}}$ and $\sigma^2_{\Delta I_s} = \zeta^2 \sigma^2_{N_{b|s}} + \sigma_f^2$.
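Both integrals in (5) have closed forms under the charge-noise model, so the variance can be evaluated directly. A sketch with an illustrative function name; the $\beta = 1$ case uses the logarithmic antiderivative:

```python
import math

def flicker_noise_variance(S_1Hz, beta, f_L, f_H):
    """Variance of 1/f noise per Eq. (5): PSD held constant at S_f(f_L)
    below f_L, S_1Hz / f**beta between f_L and f_H, negligible above f_H."""
    low = f_L * S_1Hz / f_L**beta                 # flat part: f_L * S_f(f_L)
    if abs(beta - 1.0) < 1e-12:
        high = S_1Hz * math.log(f_H / f_L)        # integral of S_1Hz / f
    else:
        high = S_1Hz * (f_H**(1 - beta) - f_L**(1 - beta)) / (1 - beta)
    return low + high
```

For the Table I value $S_{f_{1\mathrm{Hz}}} = 10^{-23}$ A²/Hz with $\beta = 1$ and a window of 0.01–100 Hz, the flat part contributes $10^{-23}$ and the 1/f part $10^{-23}\ln(10^4)$, about $1.02 \times 10^{-22}$ A² in total.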
III. TIME-DOMAIN DETECTION

Since Rx has no knowledge of the interferer concentration statistics, it constructs the optimal ML decision threshold for TDD solely based on its knowledge of the received signal statistics corresponding to the transmitted concentration of information molecules [7]:

\gamma_{td} = \frac{1}{\sigma^2_{\Delta I_1} - \sigma^2_{\Delta I_0}} \Big[ \sigma^2_{\Delta I_1}\mu_{\Delta I_0} - \sigma^2_{\Delta I_0}\mu_{\Delta I_1} + \sigma_{\Delta I_1}\sigma_{\Delta I_0} \sqrt{(\mu_{\Delta I_1} - \mu_{\Delta I_0})^2 + 2(\sigma^2_{\Delta I_1} - \sigma^2_{\Delta I_0})\ln(\sigma_{\Delta I_1}/\sigma_{\Delta I_0})} \Big].    (6)

As Rx does not account for interference statistics in calculating $\gamma_{td}$, it uses the bound state probability corresponding to a single molecule case, namely $p_{b|s} = \frac{c_{m|s}/K_{Dm}}{1 + c_{m|s}/K_{Dm}}$.

To derive the BEP for TDD, we first obtain the statistics of the receiver output. By applying the law of total expectation, we can express the mean number of bound receptors as $\mu_{N_{b|s}} = \int_0^\infty N_r\, p_{b|s}(c_i) f(c_i)\,dc_i$, where $p_{b|s}(c_i) = \frac{c_{m|s}/K_{Dm} + c_i/K_{Di}}{1 + c_{m|s}/K_{Dm} + c_i/K_{Di}}$ and $f(\cdot)$ is the probability density function of the log-normal distribution. Hence, $\mu_{\Delta I_s} = \zeta \mu_{N_{b|s}}$. Similarly, by applying the law of total variance, we obtain the output current variance as

\sigma^2_{\Delta I_s} = \zeta^2 \Big[ \int_0^\infty \big(1 - p_{b|s}(c_i)\big) p_{b|s}(c_i) N_r f(c_i)\,dc_i + \int_0^\infty \big(p_{b|s}(c_i) N_r\big)^2 f(c_i)\,dc_i \Big] - \mu^2_{\Delta I_s} + \sigma_f^2.    (7)

Therefore, given the decision threshold $\gamma_{td}$, the BEP for the time-domain detection method can be expressed as follows [7]:

P_e^{TDD} = \frac{1}{4}\,\mathrm{erfc}\!\left(\frac{\gamma_{td} - \mu_{\Delta I_0}}{\sqrt{2\sigma^2_{\Delta I_0}}}\right) + \frac{1}{4}\,\mathrm{erfc}\!\left(\frac{\mu_{\Delta I_1} - \gamma_{td}}{\sqrt{2\sigma^2_{\Delta I_1}}}\right).    (8)
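For two Gaussian hypotheses, the threshold (6) and the error probability (8) are direct to evaluate. A sketch with illustrative function names; the equal-variance case falls back to the midpoint of the means, since (6) is indeterminate there:

```python
import math

def ml_threshold(mu0, mu1, s0, s1):
    """ML decision threshold between N(mu0, s0^2) and N(mu1, s1^2), Eq. (6)."""
    if abs(s1 - s0) < 1e-15 * max(s0, s1):
        return 0.5 * (mu0 + mu1)          # equal variances: midpoint
    root = math.sqrt((mu1 - mu0) ** 2
                     + 2.0 * (s1**2 - s0**2) * math.log(s1 / s0))
    return (s1**2 * mu0 - s0**2 * mu1 + s0 * s1 * root) / (s1**2 - s0**2)

def bep(gamma, mu0, mu1, s0, s1):
    """Bit error probability for equiprobable bits, Eq. (8)."""
    return 0.25 * math.erfc((gamma - mu0) / math.sqrt(2.0 * s0**2)) \
         + 0.25 * math.erfc((mu1 - gamma) / math.sqrt(2.0 * s1**2))
```

For equal variances the threshold sits midway between the two means and (8) reduces to the familiar $\frac{1}{2}\mathrm{erfc}\big((\mu_1-\mu_0)/(2\sqrt{2}\sigma)\big)$.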
IV. FREQUENCY-DOMAIN DETECTION

In this section, we introduce the FDD method, which utilizes the model and the observed PSD of the overall noise process (binding noise plus the 1/f noise of the graphene bioFET-based MC-Rx) to estimate the received concentration of information molecules $c_m$, which is then used in the symbol decision. Here, the observed PSD is the periodogram of the noise constructed from the time-domain samples. In the sequel, we describe the model PSD and then introduce the proposed estimation method.
A. Theoretical Model of Binding Noise PSD

This section describes the theoretical model of the binding noise PSD for a particular pair of information and interferer concentrations, namely $\lambda = [c_m, c_i]$. The binding process of the receptors can be described by the Langmuir reaction model with three states, i.e., unbound (R), bound with an information molecule (RM), and bound with an interferer molecule (RI), with state occupation probabilities $p_R$, $p_{RM}$ and $p_{RI}$, respectively [12]:

R + M \overset{k_m^+}{\underset{k_m^-}{\rightleftharpoons}} RM, \qquad R + I \overset{k_i^+}{\underset{k_i^-}{\rightleftharpoons}} RI.

Hence, the chemical master equations are expressed as follows:

\begin{bmatrix} \frac{dp_{RM}}{dt} \\ \frac{dp_{RI}}{dt} \\ \frac{dp_R}{dt} \end{bmatrix} = \begin{bmatrix} -k_m^- & 0 & k_m^+ c_m \\ 0 & -k_i^- & k_i^+ c_i \\ k_m^- & k_i^- & -k_m^+ c_m - k_i^+ c_i \end{bmatrix} \begin{bmatrix} p_{RM} \\ p_{RI} \\ p_R \end{bmatrix}.    (9)
The matrix containing the reaction rates and concentrations in (9) has rank 2, since one state probability can be written in terms of the other two via $p_R + p_{RM} + p_{RI} = 1$. Therefore, by setting the left-hand side of (9) to zero, the equilibrium probabilities can be obtained as

p^0_{RM} = \frac{c_m/K_{Dm}}{1 + \frac{c_m}{K_{Dm}} + \frac{c_i}{K_{Di}}}, \qquad p^0_{RI} = \frac{c_i/K_{Di}}{1 + \frac{c_m}{K_{Dm}} + \frac{c_i}{K_{Di}}},    (10)

and $p^0_R = 1 - (p^0_{RM} + p^0_{RI})$. At equilibrium, the state occupation probabilities can be expressed in terms of the equilibrium state probability and the fluctuations around it [12], [13] as

p_j(t) = p^0_j + \Delta p_j(t), \quad j \in \{RM, RI, R\}.    (11)

Putting (11) into (9) and using Taylor expansion, the state fluctuations can be expressed as follows [12]:

\frac{d\Delta p'(t)}{dt} = \Omega\, \Delta p'(t).    (12)

In (12), $\Delta p'(t) = [\Delta p_{RM}(t); \Delta p_{RI}(t)]$ is the reduced form of the vector containing the state occupation probabilities, and $\Omega$ is

\Omega = \begin{bmatrix} -k_m^+ c_m - k_m^- & -k_m^+ c_m \\ -k_i^+ c_i & -k_i^+ c_i - k_i^- \end{bmatrix}.    (13)
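The equilibrium occupancies (10) and the relaxation matrix (13) are simple to compute. A minimal sketch with illustrative names; note that the (1,2) entry of $\Omega$, $-k_m^+ c_m$, follows from eliminating $p_R = 1 - p_{RM} - p_{RI}$ before linearizing:

```python
def equilibrium_probs(cm, ci, KDm, KDi):
    """Equilibrium occupation probabilities (p0_RM, p0_RI, p0_R), Eq. (10)."""
    denom = 1.0 + cm / KDm + ci / KDi
    p_rm = (cm / KDm) / denom
    p_ri = (ci / KDi) / denom
    return p_rm, p_ri, 1.0 - p_rm - p_ri

def omega(cm, ci, km_on, km_off, ki_on, ki_off):
    """Relaxation matrix Omega of Eq. (13), obtained by eliminating p_R
    from the master equations (9) and linearizing around equilibrium."""
    return [[-km_on * cm - km_off, -km_on * cm],
            [-ki_on * ci,          -ki_on * ci - ki_off]]
```

With $c_m = K_{Dm}$ and no interferer, half of the receptors are bound on average, and the information branch relaxes at rate $k_m^+ c_m + k_m^-$.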
The deviation in the output current of the MC-Rx due to stochastic binding reactions, i.e., $\Delta I_b(t)$, is then obtained as

\Delta I_b(t) = \frac{q_\mathrm{eff}\, g}{C_G}\, z^\intercal R\, \Delta p'(t),    (14)

where $z = [N_{e^-}; N_{e^-}; 0]$ is the vector containing the number of elementary charges corresponding to each state, and $R$ is the transformation matrix such that $\Delta p(t) = R \Delta p'(t)$. As $\Delta I_b(t)$ is a stationary process, the theoretical PSD of the binding noise fluctuations can be found by setting $t = 0$ as follows [12]:

S_b(f) = 2\,\mathcal{F}\{\mathrm{E}[\Delta I_b(t)\Delta I_b(t+\tau)]\} = 2\,\mathcal{F}\{\mathrm{E}[\Delta I_b(0)\Delta I_b(\tau)]\} = 4 N_r \left(\frac{q_\mathrm{eff}\, g}{C_G}\right)^2 z^\intercal R\, \Gamma \left(\mathrm{Re}\{(j 2\pi f I_{2\times 2} - \Omega)^{-1}\}\right)^\intercal R^\intercal z,    (15)

where $\mathcal{F}\{\cdot\}$ stands for the Fourier transform, $I_{2\times 2}$ is the identity matrix, and $\Gamma$ is the matrix containing the expected state probabilities, given as follows [12]:

\Gamma = \begin{bmatrix} p^0_{RM}(1 - p^0_{RM}) & -p^0_{RM} p^0_{RI} \\ -p^0_{RM} p^0_{RI} & p^0_{RI}(1 - p^0_{RI}) \end{bmatrix}.    (16)

Therefore, the theoretical PSD of the total current noise corresponding to a particular $(c_m, c_i)$ pair can be written as

S(f) = S_b(f) + S_f(f).    (17)
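Eq. (15) can be evaluated directly with a 2×2 complex inversion. A sketch under the stated model, assuming $z^\intercal R = [N_{e^-}, N_{e^-}]$ (since $R$ maps the reduced state vector back to the three states with $\Delta p_R = -\Delta p_{RM} - \Delta p_{RI}$); `kappa` stands for the prefactor $q_\mathrm{eff} g / C_G$ and the function name is illustrative:

```python
import math

def binding_psd(f, Nr, kappa, Ne, cm, ci, km_on, km_off, ki_on, ki_off):
    """Theoretical binding-noise PSD S_b(f), Eq. (15)."""
    KDm, KDi = km_off / km_on, ki_off / ki_on
    denom = 1.0 + cm / KDm + ci / KDi
    p_rm, p_ri = (cm / KDm) / denom, (ci / KDi) / denom
    # Gamma, Eq. (16)
    G = [[p_rm * (1 - p_rm), -p_rm * p_ri],
         [-p_rm * p_ri, p_ri * (1 - p_ri)]]
    # Omega, Eq. (13)
    O = [[-km_on * cm - km_off, -km_on * cm],
         [-ki_on * ci, -ki_on * ci - ki_off]]
    # Invert M = (j*2*pi*f*I - Omega) explicitly and keep the real part
    jw = 2j * math.pi * f
    a, b = jw - O[0][0], -O[0][1]
    c, d = -O[1][0], jw - O[1][1]
    det = a * d - b * c
    ReM = [[(d / det).real, (-b / det).real],
           [(-c / det).real, (a / det).real]]
    v = [Ne, Ne]                                   # z^T R
    # quadratic form v . (Gamma . Re{M}^T) . v
    q = sum(v[i] * G[i][k] * ReM[j][k] * v[j]
            for i in range(2) for j in range(2) for k in range(2))
    return 4 * Nr * kappa**2 * q
```

As a sanity check, with no interferer ($c_i = 0$) this collapses to the single-species Lorentzian $4 N_r \zeta^2 p_b(1-p_b)\,(1/\tau_m)/\big((2\pi f)^2 + (1/\tau_m)^2\big)$ used later for the threshold computation.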
B. Maximum Likelihood Estimation of PSD Parameters

In the following part, we describe the parameter value extraction, namely the estimation of the information and interferer molecule concentrations, $\lambda = [c_m, c_i]$, from the noise PSD. The detector uses the estimated information molecule concentration $\hat{c}_m$ for the symbol decision, as will be explained in Sec. IV-C. Our analysis is based on the following assumptions:

- The total noise process, namely the binding fluctuations combined with the 1/f noise, is stationary and zero-mean, with a single-sided spectrum.
- Rx is given the model PSD function expressed by (17) and the binding/unbinding rates of the information and interferer molecules. Rx also has knowledge of the number of information molecules transmitted for bits s = 0 and s = 1, as mentioned in Sec. II. Therefore, Rx estimates the steady information and interferer concentrations by taking time samples of the output current $\Delta I_b$ in a sampling window, where we consider a single realization of the interferer concentration $c_i$ following the log-normal distribution mentioned in Sec. II.
- The DC component of $\Delta I_b$ is discarded to isolate the noise.
- The information and interferer concentrations are considered constant within the sampling window, based on the equilibrium assumption discussed in Sec. II [10].
The observed PSD of the time-domain samples and the parametric model of the PSD expressed by (17) are used in the ML estimation of $\lambda = [c_m, c_i]$. It is assumed that the observed PSD is calculated with the periodogram method. For each transmitted symbol, we have $N$ noise samples $x = (x_1, x_2, \ldots, x_N)$ taken with the sampling period $\Delta t$. Hence, the total duration of sampling per symbol, namely the length of the sampling window, is $T_d = N \Delta t$. The periodogram of the sampled signal can be computed from the discrete Fourier transform (DFT) of the samples $x$. With even $N$, the periodogram values are then expressed as $Y_k = \frac{2\Delta t}{N} |X_k|^2$, where $k = 1, \ldots, N/2 - 1$ and $|X_k|$ are the DFT components of $x$. For a stochastic time series of length $N$, the random variable $W_k = 2 Y_k / S(f_k)$ follows the chi-squared distribution $\chi^2$ [14], where $S(f_k)$, given by Eq. (17), is the true PSD at frequency $f_k = \frac{k}{N \Delta t}$, $k = 1, \ldots, N/2 - 1$. The $\chi^2$ distribution with two degrees of freedom is in fact the exponential distribution [15]. Therefore, the periodogram values are exponentially distributed about the true PSD, with the following probability given the model PSD value at a given frequency:

p(Y_k | S(f_k)) = \frac{1}{S(f_k)}\, e^{-\frac{Y_k}{S(f_k)}},    (18)

from which it follows that $S(f_k)$ is also the expectation of $Y_k$ at $f_k$ [15].
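The periodogram definition above can be sketched directly from the DFT. A minimal, pure-Python version (an FFT would be used in practice for the $N = 700$ samples of Table I); the function name is illustrative:

```python
import cmath
import math

def periodogram(x, dt):
    """One-sided periodogram Y_k = (2*dt/N)*|X_k|^2 for k = 1..N/2-1 (N even),
    with the DC component discarded, at frequencies f_k = k/(N*dt)."""
    N = len(x)
    mean = sum(x) / N
    xc = [v - mean for v in x]              # discard DC
    freqs, Y = [], []
    for k in range(1, N // 2):
        Xk = sum(xc[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                 for n in range(N))
        freqs.append(k / (N * dt))
        Y.append(2 * dt / N * abs(Xk) ** 2)
    return freqs, Y
```

A pure tone at bin $k_0$ has $|X_{k_0}| = N/2$, so all of its power lands in a single periodogram bin, which is a quick way to validate the scaling.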
Based on (18), the likelihood of observing a particular pair of information and interferer concentrations, $\lambda = [c_m, c_i]$, is

L(\lambda) = \prod_{k=1}^{N/2-1} p(Y_k | S(f_k, \lambda)) = \prod_{k=1}^{N/2-1} \frac{1}{S(f_k, \lambda)}\, e^{-\frac{Y_k}{S(f_k, \lambda)}},    (19)

where $\lambda = [c_m, c_i]$ is the vector of parameters to be estimated.

Algorithm 1: Algorithm for the FDD
1: Run the Newton method with initial guess $\lambda_0 = [c_m^0, c_i^0]$ to find the optimal $\lambda^* = [c_m^*, c_i^*]$ satisfying (21).
2: $\hat{c}_m \leftarrow c_m^*$
3: Find the decision threshold $\gamma_{fd}$.
4: Run the threshold operation:
5: if $\hat{c}_m > \gamma_{fd}$ then estimated bit $\hat{s} \leftarrow 1$
6: else estimated bit $\hat{s} \leftarrow 0$
7: end if

Here, we use the Whittle likelihood, which is asymptotically a good approximation to the exact likelihood and also provides computational efficiency, i.e., O(n log n) compared to O(n²) for the exact likelihood [15], [16].
Accordingly, the quasi-log likelihood can be written as follows:

\ln L(\lambda) = -\sum_{k=1}^{N/2-1} \left[ \frac{Y_k}{S(f_k, \lambda)} + \ln S(f_k, \lambda) \right].    (20)

The ML estimator extracts the value of $\lambda$, i.e., $\hat{\lambda}$, that maximizes (20). Maximizing $\ln L(\lambda)$ is equivalent to minimizing $l = -\ln L(\lambda)$ [17], such that

\hat{\lambda} = \arg\min_{\lambda} \{l\}.    (21)

Eq. (21) can be solved using numerical methods such as the Newton–Raphson method, attaining the ML within a few iterations [18].
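The Whittle objective (20) is cheap to evaluate and minimize. A toy sketch: instead of the paper's Newton iteration over $\lambda = [c_m, c_i]$, we fit a single scale parameter by grid search, which exercises the same quasi-likelihood; all names here are illustrative:

```python
import math

def whittle_nll(S, Y, freqs):
    """Negative quasi-log likelihood l = -ln L, Eq. (20):
    sum_k [ Y_k / S(f_k) + ln S(f_k) ]."""
    return sum(Yk / S(fk) + math.log(S(fk)) for Yk, fk in zip(Y, freqs))

def fit_scale(Y, freqs, shape):
    """Toy 1-D ML fit of S(f) = A * shape(f) over a logarithmic grid of A."""
    grid = [10.0 ** (e / 50.0) for e in range(-200, 201)]
    return min(grid,
               key=lambda A: whittle_nll(lambda f: A * shape(f), Y, freqs))
```

For a flat model shape, the Whittle ML estimate of the scale is the sample mean of the periodogram values, which gives a simple correctness check.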
C. Symbol Detection

The ML estimator described in Sec. IV-B is asymptotically unbiased, such that $\hat{\lambda}$ tends to a multivariate normal distribution [19] with $\mathrm{E}[\hat{\lambda}] = \lambda$; the respective variances of the estimated parameters are the diagonal elements of the inverse Fisher information matrix (FIM) $F(\lambda)$:

\sigma^2_{\hat{\lambda}_i} = (F(\lambda))^{-1}_{(ii)}, \qquad F(\lambda)_{(ij)} = \mathrm{E}\left[ \frac{\partial^2 l}{\partial \lambda_i \partial \lambda_j} \right],    (22)

where the expectation is taken with respect to the probability distribution of the observed spectrum $p(Y_1, Y_2, \ldots, Y_N)$. Putting $l = -\ln L(\lambda)$ into (22), the FIM can be expanded as

F_{(ij)} = \mathrm{E}\left[ \sum_{k=1}^{N/2-1} \frac{S(f_k) - Y_k}{S(f_k)^2} \frac{\partial^2 S}{\partial \lambda_i \partial \lambda_j} + \frac{2 Y_k - S(f_k)}{S(f_k)^3} \frac{\partial S}{\partial \lambda_i} \frac{\partial S}{\partial \lambda_j} \right].    (23)

Considering that $S(f)$ is a slowly varying function, there is no need to calculate the individual periodogram values in (23), because the periodogram values can be smoothed by summing over frequency, such that $\sum_{k=1}^{N/2-1} Y_k \phi(f_k) \simeq \sum_{k=1}^{N/2-1} S(f_k) \phi(f_k)$ for any smooth function $\phi(f_k)$ [19], [20]. Based on this, Eq. (23) can be simplified as [19]

F_{(ij)} \simeq \sum_{k=1}^{N/2-1} \frac{1}{S(f_k)^2} \frac{\partial S}{\partial \lambda_i} \frac{\partial S}{\partial \lambda_j},    (24)

where the derivatives are taken at the true values of the parameters. This is a good approximation for a large number of samples, such that the periodogram values can be approximated as Gaussian by the central limit theorem [21].
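The simplified FIM (24) needs only the model PSD and its parameter gradients. A sketch with illustrative names; the inverse of the resulting 2×2 matrix gives the asymptotic variances used via (22):

```python
def fisher_matrix(S, grads, freqs):
    """Approximate FIM of Eq. (24): F_ij ~= sum_k dS_i(f_k) dS_j(f_k) / S(f_k)^2,
    for a PSD model S(f) with a list of parameter gradient functions."""
    m = len(grads)
    F = [[0.0] * m for _ in range(m)]
    for f in freqs:
        s2 = S(f) ** 2
        g = [d(f) for d in grads]
        for i in range(m):
            for j in range(m):
                F[i][j] += g[i] * g[j] / s2
    return F

def inv2(F):
    """Inverse of a 2x2 matrix; its diagonal entries are the variances of Eq. (22)."""
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return [[F[1][1] / det, -F[0][1] / det],
            [-F[1][0] / det, F[0][0] / det]]
```

Note that the off-diagonal term couples the two concentration estimates, which is why Sec. IV-D keeps the full 2×2 matrix rather than inverting the diagonal entries separately.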
Rx decides the transmitted bit by applying the ML decision rule to the estimated information molecule concentration $\hat{c}_m$, as described by the pseudo-algorithm for FDD in Algorithm 1. The ML decision threshold for FDD is

\gamma_{fd} = \frac{1}{\sigma^2_{\hat{c}_m|1} - \sigma^2_{\hat{c}_m|0}} \Big[ \sigma^2_{\hat{c}_m|1} c_{m|0} - \sigma^2_{\hat{c}_m|0} c_{m|1} + \sigma_{\hat{c}_m|1}\sigma_{\hat{c}_m|0} \sqrt{(c_{m|1} - c_{m|0})^2 + 2(\sigma^2_{\hat{c}_m|1} - \sigma^2_{\hat{c}_m|0}) \ln\big(\sigma_{\hat{c}_m|1}/\sigma_{\hat{c}_m|0}\big)} \Big],    (25)

where $\sigma^2_{\hat{c}_m|s}$ is the variance and $c_{m|s}$ the expected value of the estimated information molecule concentration when the transmitted bit is $s \in \{0, 1\}$. Since Rx does not know the true value of the interferer molecule concentration, it computes the decision threshold as if there were no interference. Therefore, the following model PSD is used while computing the threshold $\gamma_{fd}$:

S(f, c_m) = 4 N_r \zeta^2\, \frac{1/\tau_m}{(2\pi f)^2 + (1/\tau_m)^2}\, p_b (1 - p_b) + S_f(f),    (26)

where $\tau_m = 1/(c_m k_m^+ + k_m^-)$ and $p_b = \frac{c_m}{K_{Dm} + c_m}$. Hence, using (26) and (24) for $\lambda = [c_m]$, the variance of the estimated information molecule concentration corresponding to the transmitted bit $s \in \{0, 1\}$ can be written as $\sigma^2_{\hat{c}_m|s} = 1 \big/ \sum_{k=1}^{N/2-1} \frac{1}{S(f_k)^2} \left( \frac{\partial S}{\partial c_m} \right)^2 \big|_{c_m = c_{m|s}}$. Note that this expression for $\sigma^2_{\hat{c}_m|s}$ does not give the actual asymptotic variances, since Rx estimates the value of $c_m$ based on the model PSD described by (17).
D. Asymptotic Bit Error Probability

To calculate the BEP for FDD, we need the actual values of the variance of the estimated information molecule concentration corresponding to s = 0 and s = 1, i.e., $\sigma^2_{\hat{c}_m|s}$. Using the model PSD $S(f, (c_m, c_i))$ given by (17), and (24) with $\lambda = [c_m, c_i]$, the variance can be expressed as $\sigma^2_{\hat{c}_m|s} = (F_s(\lambda))^{-1}_{(11)}$, where the elements of $F_s$ are

F_{s(11)} = \sum_{k=1}^{N/2-1} \frac{1}{S(f_k)^2} \left( \frac{\partial S}{\partial c_m} \right)^2 \Big|_{c_m = c_{m|s},\, c_i = \mu_{c_i}},    (27)

F_{s(22)} = \sum_{k=1}^{N/2-1} \frac{1}{S(f_k)^2} \left( \frac{\partial S}{\partial c_i} \right)^2 \Big|_{c_m = c_{m|s},\, c_i = \mu_{c_i}},    (28)

F_{s(12),(21)} = \sum_{k=1}^{N/2-1} \frac{1}{S(f_k)^2} \left( \frac{\partial S}{\partial c_m} \right) \left( \frac{\partial S}{\partial c_i} \right) \Big|_{c_m = c_{m|s},\, c_i = \mu_{c_i}}.    (29)

As a result, the BEP for FDD can be written as

P_e^{FDD} = \frac{1}{4}\,\mathrm{erfc}\!\left(\frac{\gamma_{fd} - c_{m|0}}{\sqrt{2\sigma^2_{\hat{c}_m|0}}}\right) + \frac{1}{4}\,\mathrm{erfc}\!\left(\frac{c_{m|1} - \gamma_{fd}}{\sqrt{2\sigma^2_{\hat{c}_m|1}}}\right).    (30)

It should be noted that (30) is an asymptotic expression based on the Gaussian distribution assumption in Sec. IV-C.
Fig. 2: The model PSD (A²/Hz) versus frequency (Hz), with the characteristic frequencies of the information molecule ($f_{ch|m}$) and the interferer molecule ($f_{ch|i}$) highlighted.
V. PERFORMANCE EVALUATION

In this section, we analyze the performance of FDD and TDD in terms of BEP. The default values of the system parameters are given in Table I, with the reaction rates adopted from [7]. In the rest of the paper, saturation and non-saturation correspond to the Rx's receptors being saturated due to high ligand concentrations and being far from saturation, respectively. To simulate saturation, the number of transmitted information molecules is taken as $N_{m|s \in \{0,1\}} = [2, 5] \times 10^4$. Otherwise, the default values are used.
We first consider the effect of the mean interferer concentration $\mu_{c_i}$ on the BEP performance of TDD and FDD under saturation and non-saturation conditions. We define a tuning parameter $\gamma$ such that the mean interferer concentration is given by $\mu_{c_i} = \gamma\, c_{m|s=1}$. As shown in Fig. 3a, FDD outperforms TDD in both scenarios. The performance of TDD degrades dramatically due to Rx saturation with increasing $\mu_{c_i}$. In non-saturation, the performance of FDD improves with increasing $\mu_{c_i}$ up to a certain point, beyond which a further increase in $\mu_{c_i}$ degrades the performance of FDD: when Rx is not saturated, the variance of the estimated information molecule concentration $\sigma^2_{\hat{c}_m|s}$ is minimized at a certain $\mu_{c_i}$, beyond which its value increases with increasing $\mu_{c_i}$. In saturation, however, $\sigma^2_{\hat{c}_m|s}$ monotonically increases with $\mu_{c_i}$.

TABLE I: Default Values of System Parameters
- Temperature (T): 300 K
- Microfluidic channel height (hch), width (lch): 5 µm, 10 µm
- Average flow velocity (u): 10 µm/s
- Distance of Rx's center position to Tx (xR): 1 mm
- Ionic concentration of medium (cion): 30 mol/m^3
- Relative permittivity of medium (ϵ/ϵ0): 80
- Intrinsic diffusion coefficient (D0): 2 × 10^-11 m^2/s
- Binding rate of information and interferer molecules (k+m, k+i): 4 × 10^-17 m^3/s
- Unbinding rate of information molecules (k-m): 2 s^-1
- Unbinding rate of interferers (k-i): 8 s^-1
- Average # of electrons in a ligand (Ne-): 3
- Number of independent receptors (Nr): 120
- Length of a surface receptor (r): 2 nm
- Transconductance of graphene bioFET (g): 1.9044 × 10^-4 A/V
- Width of graphene in transistor (lgr): 10 µm
- Quantum capacitance of graphene per unit area (cq): 2 × 10^-2 F/m^2
- # of transmitted ligands for s = 0, 1 (Nm|s): [1, 5] × 10^3
- # of noise samples (N): 700
- Sampling period (∆t): 0.005 s
- Mean interference to information concentration ratio (γ = µci/cm|s=1): 1
- Interference mean/std ratio (µci/σci): 10
- Power of 1/f noise at 1 Hz (Sf1Hz): 10^-23 A^2/Hz
Next, we consider the effect of the similarity parameter, namely the affinity ratio of information and interferer molecules, η = K_Di/K_Dm, on the BEP performance for the saturation and non-saturation cases. As displayed in Fig. 3b, FDD outperforms TDD in both cases. Regarding the non-saturation case, the performance of both detection methods improves with increasing similarity up to a certain point, because the effect of interference on the detection performance weakens; namely, the bound state probability for interferer molecules decreases. However, when the similarity is further increased, the performance of FDD degrades. Intuitively, this is because the characteristic frequencies corresponding to bits s = 0 and s = 1 [22],

f_{ch|s} = [f_{ch|m}, f_{ch|i}] = \frac{1}{4\pi} \Bigg[ \frac{1}{\tau_{m|s}} + \frac{1}{\tau_i} \pm \sqrt{\Big(\frac{1}{\tau_{m|s}} - \frac{1}{\tau_i}\Big)^2 + 4 k_m^+ c_{m|s} k_i^+ c_i} \Bigg],    (31)

where \tau_{m|s} = 1/(c_{m|s} k_m^+ + k_m^-) and \tau_i = 1/(c_i k_i^+ + k_i^-), approach each other in the spectrum, making it difficult to distinguish the bits.
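For intuition, the two roots of Eq. (31) can be evaluated numerically from the binding/unbinding rates. The sketch below uses the rate constants of Table I; the concentrations (and the function name) are illustrative assumptions chosen so that k^+·c = 4 s^-1 for both molecule types, not values from the paper:

```python
import math

def characteristic_frequencies(c_m, c_i, k_on_m, k_off_m, k_on_i, k_off_i):
    """Two characteristic frequencies f_ch|m, f_ch|i of Eq. (31), in Hz.

    c_m, c_i: information/interferer concentrations (molecules per m^3)
    k_on_*: binding rates (m^3/s); k_off_*: unbinding rates (1/s)
    """
    inv_tau_m = c_m * k_on_m + k_off_m          # 1/tau_m|s
    inv_tau_i = c_i * k_on_i + k_off_i          # 1/tau_i
    root = math.sqrt((inv_tau_m - inv_tau_i) ** 2
                     + 4.0 * k_on_m * c_m * k_on_i * c_i)
    f_low = (inv_tau_m + inv_tau_i - root) / (4.0 * math.pi)
    f_high = (inv_tau_m + inv_tau_i + root) / (4.0 * math.pi)
    return f_low, f_high

# Rate constants from Table I; the concentrations are hypothetical examples.
f_low, f_high = characteristic_frequencies(
    c_m=1e17, c_i=1e17,            # hypothetical, gives k_on * c = 4 1/s
    k_on_m=4e-17, k_off_m=2.0,
    k_on_i=4e-17, k_off_i=8.0,
)
```

With these values the two modes sit at roughly 0.64 Hz and 2.23 Hz; as the affinities of the two molecule types become more similar, the two roots move toward each other, which is the merging effect described above.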
As shown in Fig. 2 for an example scenario, two characteristic frequencies, f_ch|m and f_ch|i, appear in the spectrum for each transmitted bit due to the binding of the two types of molecules, namely information and interferer molecules, with their order depending on the concentrations and binding/unbinding rates of the individual molecule types. In the non-saturation case, we do not observe this phenomenon, because the characteristic frequencies do not come close enough to each other to degrade the detection performance with increasing similarity.
We also consider the effect of the number of time samples, N, and the sampling period, ∆t, on the BEP performance. As shown in Fig. 3c, the performance of FDD improves with N. This is expected, as taking more samples decreases the variance σ²_ĉm|s of the estimated information molecule concentration and hence decreases the BEP.
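The variance reduction behind this trend can be illustrated with a toy Monte Carlo experiment. The Gaussian noise model and all parameter values here are illustrative assumptions, not the receiver model of the paper; the point is only the ~1/N scaling of an averaged estimator:

```python
import random
import statistics

def estimator_variance(n_samples, trials=2000, true_mean=1.0,
                       noise_std=0.5, seed=7):
    """Empirical variance of a sample-mean estimator built from n_samples
    i.i.d. noisy observations; it shrinks roughly as 1/n_samples."""
    rng = random.Random(seed)
    estimates = [
        statistics.fmean(rng.gauss(true_mean, noise_std)
                         for _ in range(n_samples))
        for _ in range(trials)
    ]
    return statistics.pvariance(estimates)
```

In the same spirit, averaging more noise PSD samples tightens the concentration estimate, which is why the BEP of FDD falls with N.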
For TDD, the performance does not change with N, as Rx takes a single sample in the sampling window. For varying ∆t, the performance of FDD improves with increasing ∆t in the non-saturation case. Note that ∆t should be shorter than the characteristic time scale of any reaction in order to capture the fluctuations and to satisfy the sampling-at-equilibrium assumption discussed in Sec. II. Therefore, we consider ∆t values satisfying this condition.
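One illustrative reading of this condition: ∆t must stay below the smallest characteristic time scale, i.e. the reciprocal of the largest rate sum c·k^+ + k^-. The helper below is a sketch under that assumption (the concentrations are again hypothetical, chosen so k^+·c = 4 s^-1):

```python
def max_sampling_period(rate_sums):
    """Upper bound on the sampling period: the smallest characteristic
    time scale, 1 / max(c*k_on + k_off) over all reactions, in seconds."""
    return 1.0 / max(rate_sums)

# Table I rates with hypothetical concentrations (k_on * c = 4 1/s each):
# information molecules: 4 + 2 = 6 1/s; interferers: 4 + 8 = 12 1/s.
bound = max_sampling_period([4.0 + 2.0, 4.0 + 8.0])  # tau_i = 1/12 s is fastest
```

Under these example rates the bound is about 83 ms, so the default ∆t = 5 ms of Table I qualifies comfortably.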
Fig. 3: BEP for varying (a) mean interference concentration level, (b) similarity of affinities for information and interferer molecules, (c) number of time samples N. [Curves compare TDD and FDD in the saturation and non-saturation cases; the bound state probability P_B for s = 0, 1 is also shown.]

Fig. 4: BEP for varying sampling period.

VI. CONCLUSION

In this paper, we proposed an FDD method for the FET-based MC-Rx, which utilizes the output noise PSD to extract the transmitted bit. We derived the BEP for the proposed method and for a one-shot TDD method, considering the existence of a single type of interferer molecules in a microfluidic channel. Our analysis reveals that the proposed detection method significantly outperforms TDD, primarily when high interference exists in the channel.
ACKNOWLEDGMENT

This work was supported in part by the AXA Research Fund (AXA Chair for Internet of Everything at Koç University), the Horizon 2020 Marie Skłodowska-Curie Individual Fellowship under Grant Agreement 101028935, The Scientific and Technological Research Council of Turkey (TUBITAK) under Grant #120E301, and the Huawei Graduate Research Scholarship.
REFERENCES

[1] O. B. Akan et al., "Fundamentals of molecular information and communication science," Proceedings of the IEEE, vol. 105, no. 2, pp. 306-318, 2016.
[2] I. F. Akyildiz et al., "Panacea: An internet of bio-nanothings application for early detection and mitigation of infectious diseases," IEEE Access, vol. 8, pp. 140512-140523, 2020.
[3] M. Kuscu et al., "Transmitter and receiver architectures for molecular communications: A survey on physical design with modulation, coding, and detection techniques," Proceedings of the IEEE, vol. 107, no. 7, pp. 1302-1341, 2019.
[4] M. Kuscu et al., "Fabrication and microfluidic analysis of graphene-based molecular communication receiver for internet of nano things (IoNT)," Scientific Reports, vol. 11, no. 1, pp. 1-20, 2021.
[5] T. Mora, "Physical limit to concentration sensing amid spurious ligands," Physical Review Letters, vol. 115, no. 3, p. 038102, 2015.
[6] M. Kuscu and O. B. Akan, "Channel sensing in molecular communications with single type of ligand receptors," IEEE Transactions on Communications, vol. 67, no. 10, pp. 6868-6884, 2019.
[7] M. Kuscu and O. B. Akan, "Detection in molecular communications with ligand receptors under molecular interference," Digital Signal Processing, vol. 124, p. 103186, 2022.
[8] M. Kuscu and O. B. Akan, "Modeling convection-diffusion-reaction systems for microfluidic molecular communications with surface-based receivers in internet of bio-nano things," PLoS ONE, vol. 13, no. 2, p. e0192202, 2018.
[9] A. O. Bicen and I. F. Akyildiz, "System-theoretic analysis and least-squares design of microfluidic channels for flow-induced molecular communication," IEEE Transactions on Signal Processing, vol. 61, no. 20, pp. 5000-5013, 2013.
[10] M. Kuscu and O. B. Akan, "Modeling and analysis of SiNW FET-based molecular communication receiver," IEEE Transactions on Communications, vol. 64, no. 9, pp. 3708-3721, 2016.
[11] I. Heller et al., "Charge noise in graphene transistors," Nano Letters, vol. 10, no. 5, pp. 1563-1567, 2010.
[12] L. J. Mele et al., "General model and equivalent circuit for the chemical noise spectrum associated to surface charge fluctuation in potentiometric sensors," IEEE Sensors Journal, vol. 21, no. 5, pp. 6258-6269, 2020.
[13] J. Mucksch et al., "Quantifying reversible surface binding via surface-integrated fluorescence correlation spectroscopy," Nano Letters, vol. 18, no. 5, pp. 3185-3192, 2018.
[14] S. Vaughan, "A Bayesian test for periodic signals in red noise," Monthly Notices of the Royal Astronomical Society, vol. 402, no. 1, pp. 307-320, 2010.
[15] D. Barret and S. Vaughan, "Maximum likelihood fitting of X-ray power density spectra: application to high-frequency quasi-periodic oscillations from the neutron star X-ray binary 4U1608-522," The Astrophysical Journal, vol. 746, no. 2, p. 131, 2012.
[16] A. M. Sykulski et al., "The debiased Whittle likelihood," Biometrika, vol. 106, no. 2, pp. 251-266, 2019.
[17] E. R. Anderson et al., "Modeling of solar oscillation power spectra," The Astrophysical Journal, vol. 364, pp. 699-705, 1990.
[18] D. Pfefferlé and S. I. Abarzhi, "Whittle maximum likelihood estimate of spectral properties of Rayleigh-Taylor interfacial mixing using hot-wire anemometry experimental data," Physical Review E, vol. 102, no. 5, p. 053107, 2020.
[19] T. Toutain and T. Appourchaux, "Maximum likelihood estimators: An application to the estimation of the precision of helioseismic measurements," Astronomy and Astrophysics, vol.
404
+ page_content=' 289, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
405
+ page_content=' 649–658, 1994.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
406
+ page_content=' [20] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
407
+ page_content=' Levin, “Power spectrum parameter estimation,” IEEE Transactions on Information Theory, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
408
+ page_content=' 11, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
409
+ page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
410
+ page_content=' 100–107, 1965.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
411
+ page_content=' [21] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
412
+ page_content=' Libbrecht, “On the ultimate accuracy of solar oscillation frequency measurements,” The Astrophysical Journal, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
413
+ page_content=' 387, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
414
+ page_content=' 712–714, 1992.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
415
+ page_content=' [22] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
416
+ page_content=' Frantlovi´c, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
417
+ page_content=', “Analysis of the competitive adsorption and mass transfer influence on equilibrium mass fluctuations in affinity-based biosensors,” Sensors and Actuators B: Chemical, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
418
+ page_content=' 189, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
419
+ page_content=' 71–79, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/HtAzT4oBgHgl3EQfHvtN/content/2301.01049v1.pdf'}
I9E0T4oBgHgl3EQfSAAQ/content/tmp_files/2301.02214v1.pdf.txt ADDED
@@ -0,0 +1,876 @@
AUTOMATIC SOUND EVENT DETECTION AND CLASSIFICATION OF
GREAT APE CALLS USING NEURAL NETWORKS

Zifan Jiang1,2, Adrian Soldati1, Isaac Schamberg2, Adriano R. Lameira3, Steven Moran1
1University of Neuchâtel, 2University of Zurich, 3University of Warwick
[email protected], {adrian.soldati, steven.moran}@unine.ch, [email protected], [email protected]

ABSTRACT

We present a novel approach to automatically detect and classify great ape calls from continuous raw audio recordings collected during field research. Our method leverages deep pretrained and sequential neural networks, including wav2vec 2.0 and LSTM, and is validated on three data sets from three different great ape lineages (orangutans, chimpanzees, and bonobos). The recordings were collected by different researchers and follow different annotation schemes, which our pipeline preprocesses and trains on in a uniform fashion. Our results for call detection and classification attain high accuracy. Our method is intended to generalize to other animal species and, more generally, to sound event detection tasks. To foster future research, we make our pipeline and methods publicly available.1

Keywords: sound event detection, neural networks, primatology, phonetics, computational linguistics
1. INTRODUCTION

In primatology, as in documentary linguistics, the collection, annotation, and analysis of primary field data are time-consuming and expensive. Recordings of primate calls are also often undertaken in less than ideal environmental conditions (e.g., in a dense and noisy forest where other vocal species are present), making the process even more challenging. It is therefore typical that primatologists first manually annotate their recordings and then conduct acoustic analyses on their species' specific calls.

Our goal in this paper is to automatically and accurately detect and classify primate calls from raw audio data. After discussing related work (§2), we describe our data sets (§3) and present our method, which leverages pretrained and sequential neural networks (§4). We carry out reliable and reproducible experiments (§5) to support the effectiveness of our approach and compare results between different architectural choices on the data sets of three great ape lineages. Overall, our models achieve more than 80% frame-level classification accuracy and weighted F1-score. Interestingly, we find that the wav2vec 2.0 [1] model – even though pretrained on a large corpus of human speech [2] – generalizes surprisingly well as an audio feature representation layer for great ape calls without additional fine-tuning. This finding perhaps bodes well with the similar vocal tract morphology between extant great apes and humans, and the larger prediction that ancestral ape-like calls evolved to become the building blocks of human speech.
2. RELATED WORK

To the best of our knowledge, there is no precedent research on the automatic detection and classification of great ape calls from raw audio. Our work therefore lies between general-purpose sound event detection and human speech recognition. We briefly discuss both, also in light of existing research on animal sound classification.

Sound Event Detection (SED) aims at automatically recognizing what is happening in an audio signal and when it is happening [3]. SED tasks are usually general-purpose (e.g., birds singing or footsteps) as opposed to domain-specific tasks like human speech or music analysis, and thus encounter particular challenges, such as the cocktail party effect [4]. Notably, the DCASE challenge held in recent years involves relevant SED tasks [5]. A variant of SED is the audio tagging task [6], in which only what is happening in an audio signal is annotated and recognized, but not when.

Prominent work on both tasks [7, 8] tends to consist of common components, including: (1) an acoustic feature representation layer, either (a) traditional spectrogram-based approaches or (b) convolutional neural networks (CNN) on spectrogram features or on raw waveforms [9]; (2) a sequence modeling layer that captures temporal interaction between frame-level features, which takes the form of mean/max pooling strategies, recurrent neural networks (RNN), or Transformers [10]; and (3) an objective function, usually cross-entropy classification at the frame level (for SED) or clip level (for audio tagging), or occasionally CTC [11] for "sequentially labeled data" [12].

Animal sound classification involves related research that addresses animal sounds specifically, e.g., [13, 14] on the classification of general animal species and [15, 16, 17] on species, call types, and caller identities of primates. While this line of research uses similar acoustic feature representation techniques to those seen in SED tasks, the models tend to work on top of individual short audio units of animal vocalizations that are manually selected from raw audio recordings by human experts, and thus fall short on dealing with a continuous audio signal that contains many audio events as well as noise. Instead, our method detects and classifies great ape calls directly from raw recordings ranging from seconds to minutes (recurrent models theoretically also extend to hours, though this could pose challenges for model training time).

Automatic Speech Recognition (ASR) on humans – one of the five extant great apes – has seen significant progress [18] (while animal calls remain to be fully deciphered), including the recent wav2vec 2.0 [1]. Wav2vec 2.0 is pretrained on Librispeech [2] in a self-supervised way, and it demonstrates the feasibility of ASR with limited amounts of labeled data. We are interested in whether the speech representation learned by wav2vec 2.0 can be applied successfully to great ape call detection and classification.

arXiv:2301.02214v1 [eess.AS] 5 Jan 2023

data set    | # audio clips | mean / call / total duration | # indiv. (♂/♀) | # call types (duration ratio) / units
chimpanzee  | 235           | ~8s / 1,955s / 1,964s        | 11 (11 / 0)    | 4 (6:3:3:1) / 686
orangutan   | 65            | ~74s / 2,793s / 4,817s       | 10 (10 / 0)    | 7 (700:40:200:100:20:1:1) / 9016
bonobo      | 28            | ~24s / 62s / 677s            | 7 (3 / 4)      | 18 (20:40:10:20:...:200:...:5:1) / 356

Table 1: Data set overview and stats. Shown in the table are the number of audio clips, the mean duration of each clip, the total duration of calls across all clips, the total duration of all clips, the number of individuals (male / female), the number of call types (with their approximate total duration ratio, partially omitted for bonobo due to space limits), and the number of individually annotated call units.
+ call detection and classification.
182
+ 3. DATA
183
+ Our data consists of recordings of chimpanzee pant-
184
+ hoots, orangutan long calls, and bonobo high-hoots.
185
+ Table 1 provides an overview of these data sets.
186
+ Chimpanzee pant-hoots are vocal sequences
187
+ composed of up to four acoustically distinct phases,
188
+ typically produced in this order: introduction, build-
189
+ up, climax, and let-down [19].
190
+ Most pant-hoots
191
+ contain two or more phases, although single phases
192
+ can be produced in specific contexts [20].
193
+ The
194
+ successive nature and well-balanced phase duration
195
+ facilitate the training of a classifier (§5.1). We use
196
+ annotated recordings of isolated pant-hoots (i.e., no
197
+ temporal overlap with others’ calls) produced during
198
+ feeding, traveling, and resting context.
199
+ Orangutan long calls are composed by a
200
+ full pulse, which is sub-divided into a sub-pulse
201
+ transitory element and a pulse body,
202
+ or into
203
+ sequences of bubble sub-pulse or grumble sub-pulse
204
+ [21, 22].
205
+ The complex temporal interrelationship
206
+ between phases and the overlapping and class-
207
+ unbalanced
208
+ characteristics
209
+ (some
210
+ phases
211
+ are
212
+ extremely short) pose challenges for training a
213
+ multi-class classifier.
214
+ On the other hand, the
215
+ duration of calls and non-calls in the data set are
216
+ well balanced, which opens the gate to a binary
217
+ call detection model (§5.2). Annotated recordings
218
+ include spontaneous long calls and long calls
219
+ produced in response to other males’ vocal presence
220
+ or environmental disturbances.
221
+ Bonobo high-hoots – which make up the
222
+ plurality of our bonobo call data set – are loud,
223
+ tonal vocalizations [23].
224
+ Unlike the chimpanzee
225
+ and orangutan data sets, call types other than the
226
+ homologous loud calls are also present in the data
227
+ set but are rather minor and diverse (e.g., peep-yelp,
228
+ soft barks), which could be challenging to find. The
229
+ total call duration is relatively short.
230
4. METHOD

This section introduces our method, which is illustrated in Fig. 1. We describe each step in turn.

4.1. Audio Preprocessing

First, all audio clips are converted to .wav format and resampled to a 16 kHz sample rate. We then segment them into 20 ms frames and pad with zeros if necessary.
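A minimal sketch of this framing step (illustrative only, not the authors' released pipeline; the frame length of 320 samples follows from 0.02 s × 16,000 Hz):

```python
import numpy as np

def frame_audio(waveform: np.ndarray, sample_rate: int = 16000,
                frame_ms: float = 20.0) -> np.ndarray:
    """Split a 1-D waveform into non-overlapping frames,
    zero-padding the final frame if it is incomplete."""
    frame_len = int(sample_rate * frame_ms / 1000)  # 320 samples at 16 kHz
    n_frames = -(-len(waveform) // frame_len)       # ceiling division
    padded = np.zeros(n_frames * frame_len, dtype=waveform.dtype)
    padded[:len(waveform)] = waveform
    return padded.reshape(n_frames, frame_len)

frames = frame_audio(np.ones(16000))  # 1 s of audio -> 50 frames of 320 samples
print(frames.shape)  # (50, 320)
```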
4.2. Feature and Label Extraction

Next, we extract three types of acoustic features with the torchaudio Python package. For an audio clip of T frames, we get a sequence of frame-level features I1:T, in the shape of T × feature_dim:

• Raw waveform: the original amplitude values over time. feature_dim = 0.02 × 16000 = 320.
• Spectrogram: calculated from the raw waveform, with feature_dim = 201.
• wav2vec 2.0: inferred from the raw waveform by the WAV2VEC2_BASE model on a CPU device, with feature_dim = 768.
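The spectrogram dimension of 201 corresponds to n_fft/2 + 1 frequency bins for a 400-point FFT (torchaudio's default for its Spectrogram transform). A hedged numpy sketch of that relationship, not the paper's actual torchaudio call:

```python
import numpy as np

def power_spectrogram(waveform: np.ndarray, n_fft: int = 400,
                      hop: int = 200) -> np.ndarray:
    """Power spectrogram via a sliding Hann-windowed real FFT.
    Returns shape (n_windows, n_fft // 2 + 1)."""
    window = np.hanning(n_fft)
    starts = range(0, len(waveform) - n_fft + 1, hop)
    windows = np.stack([waveform[s:s + n_fft] * window for s in starts])
    return np.abs(np.fft.rfft(windows, axis=1)) ** 2

spec = power_spectrogram(np.random.default_rng(0).normal(size=16000))
print(spec.shape[1])  # 201 frequency bins, matching feature_dim = 201
```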
We then extract frame-level labels from our annotations. For each annotated unit, we mark all frames inside the annotated time span with a positive class index indicating the call type (e.g., 1 for intro); unmarked frames are 0s by default, indicating non-calls. The labels of a clip, L1:T, have shape T × 1.
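Label extraction can be sketched as follows; the (start, end, class) annotation tuple format and the helper name are assumptions for illustration:

```python
import math

def extract_frame_labels(annotations, n_frames, frame_ms=20.0):
    """Convert annotated (start_s, end_s, class_index) spans into a
    per-frame label sequence; unmarked frames default to 0 (non-call)."""
    labels = [0] * n_frames
    for start_s, end_s, cls in annotations:
        first = int(start_s * 1000 // frame_ms)
        last = min(n_frames, math.ceil(end_s * 1000 / frame_ms))
        for t in range(first, last):
            labels[t] = cls
    return labels

# 1.0 s clip = 50 frames; an "intro" (class 1) call from 0.10 s to 0.30 s
labels = extract_frame_labels([(0.10, 0.30, 1)], n_frames=50)
print(labels[:20])
```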
4.3. Data Split

We shuffle the clips (features and labels) of each data set and split them into 80%, 10%, and 10% for training, validation, and test sets, respectively. We make three different splits with three different random seeds, 0, 42, and 3407 [24], to allow multiple experiments and to study the effect of randomness.
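A sketch of the split protocol, assuming clips are identified by integer ids (the seeded shuffle mirrors the description above, not the exact released code):

```python
import random

def split_clips(clip_ids, seed):
    """Shuffle clip ids with a fixed seed and split 80/10/10 into
    train/validation/test sets."""
    ids = list(clip_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(0.8 * len(ids))
    n_val = int(0.1 * len(ids))
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])

train, val, test = split_clips(range(235), seed=0)  # chimpanzee: 235 clips
print(len(train), len(val), len(test))  # 188 23 24
```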
4.4. Sequence Modeling

Since our goal is to learn a function f from I1:T to L1:T, we first feed I1:T into a sequence modeling function f_m that outputs a hidden sequence H1:T (T × hidden_dim), which captures inter-frame interaction. H1:T then passes through a dense linear function f_d (output D1:T) followed by a Softmax function f_s to produce the probability distribution over target classes, i.e., P1:T (T × num_class). Specifically, f_m takes the following forms:

• RNN: a bidirectional LSTM with hidden size 1024. hidden_dim = 1024 ∗ 2 = 2048.
• Transformer encoder: a Transformer encoder with 8 heads and 6 layers. hidden_dim = 1024.
• Autoregressive model: we optionally add autoregressive connections between time steps to encourage consistent output labels. The output of f_d at time step t, i.e., Dt, is concatenated to the next time step's input It+1.
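The shape bookkeeping of f_m, f_d, and f_s can be illustrated with numpy; the single matrix standing in for f_m below is purely to show the T × feature_dim → T × hidden_dim → T × num_class flow, not the BiLSTM used in the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T, feature_dim, hidden_dim, num_class = 10, 768, 2048, 5
rng = np.random.default_rng(0)

I = rng.normal(size=(T, feature_dim))              # frame features I_{1:T}
W_m = rng.normal(size=(feature_dim, hidden_dim)) * 0.01
H = np.tanh(I @ W_m)                               # stand-in for f_m (BiLSTM in the paper)
W_d = rng.normal(size=(hidden_dim, num_class)) * 0.01
D = H @ W_d                                        # dense layer f_d
P = softmax(D)                                     # f_s: per-frame class distribution

print(P.shape)                                     # (10, 5)
```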
Lastly, a cross-entropy loss is computed between the model output P1:T and the gold labels L1:T, then backpropagated to train the models. To mitigate the class imbalance problem, we set each class's weight to the reciprocal of its number of occurrences.
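The reciprocal class weighting and the weighted cross-entropy objective can be sketched in plain Python (an illustration of the formula, not the PyTorch implementation used in the paper):

```python
import math
from collections import Counter

def class_weights(labels):
    """Weight each class by the reciprocal of its occurrence count."""
    return {cls: 1.0 / n for cls, n in Counter(labels).items()}

def weighted_cross_entropy(probs, labels, weights):
    """Mean weighted negative log-likelihood over frames."""
    terms = [-weights[l] * math.log(p[l]) for p, l in zip(probs, labels)]
    return sum(terms) / len(terms)

labels = [0, 0, 0, 1]              # imbalanced: three non-call frames, one call
w = class_weights(labels)          # class 0 -> 1/3, class 1 -> 1.0
loss = weighted_cross_entropy([[0.5, 0.5]] * 4, labels, w)
print(round(loss, 4))  # ln(2)/2 ≈ 0.3466
```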
5. EXPERIMENTS AND RESULTS

Our experiments were run in PyTorch [25] with Python 3.8 on an Nvidia Tesla V100 GPU (32 GB RAM). Most models have ∼40 million parameters and finish training of up to 200 epochs, with early stopping on validation F1-score, within one hour (or a few hours, because autoregressive recurrent models run slowly in PyTorch). Table 2 presents our results.

[Figure 1 appears here: a bottom-up pipeline diagram showing the raw waveform, spectrogram features, wav2vec 2.0 features, the autoregressive frame-level sequence model with strong-label predictions (ar vs. non-ar), and the label distribution via Softmax over the classes intro, build up, climax, and let down.]
Figure 1: Our method (bottom-up). The audio clip shown in the figure is from the chimpanzee data set. (Non-)ar stands for (non-)autoregressive.
      | data   | feature     | model                       | dev acc. | dev f1   | test acc. | test f1  | aucpr
Explore the best feature and model combination
E1    | chimp  | waveform    | lstm (baseline)             | 51.7±2.1 | 35.4±2.5 | 51.0±3.6  | 34.7±4.3 | -
E1.1  | chimp  | spectrogram | lstm                        | 60.3±1.5 | 55.7±2.3 | 58.7±4.7  | 53.9±5.4 | -
E2    | chimp  | wav2vec2    | lstm                        | 81.0±4.0 | 79.9±4.6 | 79.3±2.3  | 77.9±3.6 | -
E2.1  | chimp  | wav2vec2    | transformer                 | 71.3±0.6 | 68.3±0.2 | 75.3±0.6  | 72.1±0.5 | -
Explore the hyper-parameters
E3.1  | chimp  | wav2vec2    | lstm (E2 + batch_size = 4)  | 69.7±1.5 | 71.8±2.6 | 67.7±4.0  | 69.6±4.0 | -
E3.2  | chimp  | wav2vec2    | lstm (E2 + batch_size = 8)  | 63.3±0.6 | 62.6±1.0 | 62.0±4.4  | 61.5±4.0 | -
E3.3  | chimp  | wav2vec2    | lstm (E2 + dropout = 0.2)   | 80.7±3.5 | 80.0±4.4 | 78.0±1.7  | 76.8±2.7 | -
E3.4  | chimp  | wav2vec2    | lstm (E2 + dropout = 0.1)   | 81.0±4.0 | 80.2±4.8 | 78.7±2.9  | 77.3±3.9 | -
E3.5  | chimp  | wav2vec2    | lstm (E2 - balance_weights) | 81.0±3.6 | 79.6±4.4 | 79.3±2.3  | 78.3±3.6 | -
Explore autoregressive modeling
E4    | chimp  | wav2vec2    | lstm (E2 + autoregressive)  | 87.7±1.2 | 87.1±1.8 | 85.7±2.1  | 85.6±2.5 | -
Extend to orangutan long calls and a binary setting
E5    | orang  | wav2vec2    | lstm (= E4)                 | 83.0±1.0 | 82.7±1.4 | 81.7±3.1  | 82.0±2.6 | -
E5.1  | orang  | wav2vec2    | lstm (E5 + binary target)   | 92.3±2.5 | 92.1±2.5 | 92.0±1.0  | 91.9±1.1 | 0.96
Extend to bonobo calls and a binary setting
E6    | bonobo | wav2vec2    | lstm (= E4)                 | 87.0±4.6 | 85.9±6.3 | 83.7±3.8  | 82.3±2.2 | -
E6.1  | bonobo | wav2vec2    | lstm (E6 + binary target)   | 92.0±3.6 | 91.9±3.4 | 87.7±3.5  | 87.8±2.9 | 0.87
Zero-shot transferring from orangutan to bonobo
E7    | bonobo | wav2vec2    | lstm (= E5.1)               | 63.0±13  | 69.2±10  | 72.0±4.0  | 74.2±3.1 | 0.55

Table 2: Experimental results. We run all experiments three times with different random seeds and report the mean and standard deviation. acc. stands for frame-level accuracy; f1 stands for the frame-level average F1-score weighted by the number of true instances per class; aucpr stands for the area under the precision-recall curve for the positive class in the binary case at test time with random seed 0. For hyper-parameters, we start E1 with batch_size = 1 and dropout = 0.4 and keep these defaults unless otherwise specified in the table.

5.1. Initial Exploration with Chimpanzee Data

We first test the viability of our approach on the chimpanzee data in light of the simplicity of the pant-hoot annotation scheme, although phylogenetically orangutans are closer to humans. We start from E1, a simple waveform + LSTM baseline, and observe in E2 that wav2vec 2.0 outperforms the raw waveform and the spectrogram by a large margin, which demonstrates the power of transfer learning from pretraining on human speech. In E2.1, we find that the Transformer encoder does not outperform the LSTM. Hence, we infer that the Transformer's ability to capture arbitrarily long-range dependencies is not beneficial to our task.
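The frame-level accuracy and the support-weighted F1-score reported in Table 2 can be computed as follows (an illustrative reimplementation; the paper presumably relies on standard library routines):

```python
def frame_accuracy(pred, gold):
    """Fraction of frames whose predicted class matches the gold label."""
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

def weighted_f1(pred, gold):
    """Per-class F1, averaged with weights equal to each class's support."""
    total = 0.0
    for c in set(gold):
        tp = sum(p == c and g == c for p, g in zip(pred, gold))
        fp = sum(p == c and g != c for p, g in zip(pred, gold))
        fn = sum(p != c and g == c for p, g in zip(pred, gold))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += f1 * (tp + fn) / len(gold)
    return total

gold = [0, 0, 1, 1, 1, 2]
pred = [0, 1, 1, 1, 2, 2]
print(round(frame_accuracy(pred, gold), 3))  # 0.667
```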
Next, we explore the hyper-parameters in the second experimental group, and we find in E3.5 that balancing class weights (§4.4) has only a small impact. Lastly, we show in E4 that the autoregressive connections are beneficial for consistent output, as illustrated by the tiny gaps in the non-ar output in Fig. 1.

5.2. Extending to Orangutan and Bonobo Data

We successfully extend the model in E4 to the orangutan (E5) and bonobo (E6) data sets. We note that some minority classes perform less well due to data scarcity, in contrast to the well-balanced situation in the chimpanzee data set (see Table 1).

We further reduce the task to a binary (call vs. non-call) classification that resembles voice activity detection in human speech. This is a useful tool for automatically extracting calls from raw recordings for further bioacoustic analysis (e.g., studying the repertoire of a given species).

Finally, to understand the generalizability of our models, we try zero-shot transfer of the model trained in E5.1 on orangutans directly to unseen bonobo data. The results show that it is promising to build a general-purpose sound event detection model for all great ape calls.
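The binary reduction amounts to collapsing all positive call-type labels into a single call class, e.g.:

```python
def to_binary(labels):
    """Collapse call-type labels to a call (1) vs. non-call (0) target."""
    return [1 if l > 0 else 0 for l in labels]

print(to_binary([0, 0, 1, 3, 0, 2]))  # [0, 0, 1, 1, 0, 1]
```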
6. DISCUSSION

We have addressed a gap in the bioacoustics research of non-human great apes by developing an approach for automatically identifying sound events and classifying great ape calls using a neural network architecture. Our method successfully and accurately identifies and classifies calls in three species of non-human great apes, and provides a tool for primatologists to bootstrap call identification and analysis from raw unannotated audio recordings.

Our method also shows the general applicability of the wav2vec 2.0 model trained on human speech for identifying vocalizations and call types in other species. Future work may apply our approach to more animals, as part of the goal to decode the communication systems of great apes and other non-human animals more broadly.2

1 https://github.com/J22Melody/sed_great_ape
2 For example: https://www.earthspecies.org
7. REFERENCES

[1] A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, "wav2vec 2.0: A framework for self-supervised learning of speech representations," Advances in Neural Information Processing Systems, vol. 33, pp. 12449–12460, 2020.
[2] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "Librispeech: An ASR corpus based on public domain audio books," in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015, pp. 5206–5210.
[3] A. Mesaros, T. Heittola, T. Virtanen, and M. D. Plumbley, "Sound event detection: A tutorial," IEEE Signal Processing Magazine, vol. 38, no. 5, pp. 67–83, 2021.
[4] B. Arons, "A review of the cocktail party effect," Journal of the American Voice I/O Society, vol. 12, no. 7, pp. 35–50, 1992.
[5] N. Turpault, R. Serizel, A. Parag Shah, and J. Salamon, "Sound event detection in domestic environments with weakly labeled data and soundscape synthesis," in Workshop on Detection and Classification of Acoustic Scenes and Events, New York City, United States, October 2019. [Online]. Available: https://hal.inria.fr/hal-02160855
[6] A. Mesaros, T. Heittola, E. Benetos, P. Foster, M. Lagrange, T. Virtanen, and M. D. Plumbley, "Detection and classification of acoustic scenes and events: Outcome of the DCASE 2016 challenge," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 2, pp. 379–393, Feb 2018.
[7] J. Ebbers and R. Haeb-Umbach, "Self-trained audio tagging and sound event detection in domestic environments," in Proceedings of the 6th Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021), 2021.
[8] Y. Gong, Y.-A. Chung, and J. Glass, "AST: Audio Spectrogram Transformer," arXiv preprint arXiv:2104.01778, 2021.
[9] W. Dai, C. Dai, S. Qu, J. Li, and S. Das, "Very deep convolutional neural networks for raw waveforms," in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017, pp. 421–425.
[10] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
[11] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks," in Proceedings of the 23rd International Conference on Machine Learning, 2006, pp. 369–376.
[12] Y. Hou, Q. Kong, J. Wang, and S. Li, "Polyphonic audio tagging with sequentially labelled data using CRNN with learnable gated linear units," arXiv preprint arXiv:1811.07072, 2018.
[13] Y. Sun, T. M. Maeda, C. Solis-Lemus, D. Pimentel-Alarcon, and Z. Burivalova, "Classification of animal sounds in a hyperdiverse rainforest using convolutional neural networks," arXiv preprint arXiv:2111.14971, 2021.
[14] E. Şaşmaz and F. B. Tek, "Animal sound classification using a convolutional neural network," in 2018 3rd International Conference on Computer Science and Engineering (UBMK). IEEE, 2018, pp. 625–629.
[15] L. Pozzi, M. Gamba, and C. Giacoma, "The use of artificial neural networks to classify primate vocalizations: A pilot study on black lemurs," American Journal of Primatology, vol. 72, no. 4, pp. 337–348, 2010.
[16] A. Mielke and K. Zuberbühler, "A method for automated individual, species and call type recognition in free-ranging animals," Animal Behaviour, vol. 86, no. 2, pp. 475–482, 2013.
[17] P. Fedurek, K. Zuberbühler, and C. D. Dahl, "Sequential information in a great ape utterance," Scientific Reports, vol. 6, no. 1, pp. 1–11, 2016.
[18] J. Li et al., "Recent advances in end-to-end automatic speech recognition," APSIPA Transactions on Signal and Information Processing, vol. 11, no. 1, 2022.
[19] P. Marler and L. Hobbett, "Individuality in a long-range vocalization of wild chimpanzees," Zeitschrift für Tierpsychologie, vol. 38, no. 1, pp. 97–109, 1975.
[20] A. Soldati, P. Fedurek, G. Dezecache, J. Call, and K. Zuberbühler, "Audience sensitivity in chimpanzee display pant hoots," Animal Behaviour, vol. 190, pp. 23–40, 2022.
[21] M. Hardus, "A description of the orangutan's vocal and sound repertoire, with a focus on geographical variation," in Orangutans: Geographic Variation in Behavioral Ecology and Conservation, pp. 49–64, 2009.
[22] A. R. Lameira and S. A. Wich, "Orangutan long call degradation and individuality over distance: A playback approach," International Journal of Primatology, vol. 29, no. 3, pp. 615–625, 2008.
[23] I. Schamberg, D. L. Cheney, Z. Clay, G. Hohmann, and R. M. Seyfarth, "Call combinations, vocal exchanges and interparty movement in wild bonobos," Animal Behaviour, vol. 122, pp. 109–116, 2016.
[24] D. Picard, "Torch.manual_seed(3407) is all you need: On the influence of random seeds in deep learning architectures for computer vision," arXiv preprint arXiv:2109.08203, 2021.
[25] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga et al., "PyTorch: An imperative style, high-performance deep learning library," Advances in Neural Information Processing Systems, vol. 32, 2019.
I9E0T4oBgHgl3EQfSAAQ/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,503 @@
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf,len=502
+ AUTOMATIC SOUND EVENT DETECTION AND CLASSIFICATION OF GREAT APE CALLS USING NEURAL NETWORKS
+ Zifan Jiang (1,2), Adrian Soldati (1), Isaac Schamberg (2), Adriano R. Lameira (3), Steven Moran (1)
+ (1) University of Neuchâtel, (2) University of Zurich, (3) University of Warwick
+ jiang@cl.uzh.ch, {adrian.soldati, steven.moran}@unine.ch, isaac.schamberg@uzh.ch, adriano.lameira@warwick.ac.uk
+ ABSTRACT
+ We present a novel approach to automatically detect and classify great ape calls from continuous raw audio recordings collected during field research. Our method leverages deep pretrained and sequential neural networks, including wav2vec 2.0 and LSTM, and is validated on three data sets from three different great ape lineages (orangutans, chimpanzees, and bonobos). The recordings were collected by different researchers and include different annotation schemes, which our pipeline preprocesses and trains in a uniform fashion. Our results for call detection and classification attain high accuracy. Our method is aimed to be generalizable to other animal species, and more generally, to sound event detection tasks. To foster future research, we make our pipeline and methods publicly available.
+ Keywords: sound event detection, neural networks, primatology, phonetics, computational linguistics
+ 1. INTRODUCTION
+ In primatology, as in documentary linguistics, the collection, annotation, and analysis of primary field data are time-consuming and expensive. Recordings of primate calls are also often made in less-than-ideal environmental conditions (e.g., in a dense and noisy forest where other vocal species are present), making the process even more challenging. It is therefore typical that primatologists first manually annotate their recordings and then conduct acoustic analyses on their species' specific calls. Our goal in this paper is to automatically and accurately detect and classify primate calls from raw audio data. After discussing related work (§2), we describe our data sets (§3) and present our method, which leverages pretrained and sequential neural networks (§4). We carry out reliable and reproducible experiments (§5) to support the effectiveness of our approach and compare results across different architectural choices on the data sets of three great ape lineages. Overall, our models achieve more than 80.0% frame-level classification accuracy and weighted F1-score. Interestingly, we find that the wav2vec 2.0 [1] model – even though it was pretrained on a large corpus of human speech [2] – generalizes surprisingly well as an audio feature representation layer for great ape calls without additional fine-tuning. This finding perhaps bodes well with the similar vocal tract morphology of extant great apes and humans, and with the larger prediction that ancestral ape-like calls evolved to become the building blocks of human speech.
+ 2. RELATED WORK
+ To the best of our knowledge, there is no precedent research on the automatic detection and classification of great ape calls from raw audio. We therefore situate our work between general-purpose sound event detection and human speech recognition, and briefly discuss both in light of existing research on animal sound classification.
+ Sound Event Detection (SED) aims at automatically recognizing what is happening in an audio signal and when it is happening [3]. SED tasks are usually general-purpose (e.g., birds singing or footsteps) as opposed to domain-specific tasks like human speech or music analysis, and thus encounter particular challenges, such as the cocktail party effect [4]. Notably, the DCASE challenge held in recent years involves relevant SED tasks [5]. A variant of SED is the audio tagging task [6], in which only what is happening in an audio signal is annotated and recognized, but not when. Prominent work on both tasks [7, 8] tends to consist of common components, including: (1) an acoustic feature representation layer, either (a) traditional spectrogram-based approaches or (b) convolutional neural networks (CNN) on spectrogram features or on raw waveforms [9]; (2) a sequence modeling layer that captures temporal interaction between frame-level features, taking the form of mean/max pooling strategies, recurrent neural networks (RNN), or Transformers [10]; and (3) an objective function, usually cross-entropy classification at the frame level (for SED) or clip level (for audio tagging), or occasionally CTC [11] for "sequentially labeled data" [12].
+ arXiv:2301.02214v1 [eess.AS] 5 Jan 2023
+ Table 1: Data set overview and stats. Shown in the table are the number of audio clips, mean duration of each clip, total duration of calls across all clips, total duration of all clips, number of individuals (male and female), number of call types (approximate total duration ratio, partially omitted for bonobo due to space limits), and number of individually annotated call units.
+ data set | # audio clips | mean / call / total duration | # indiv. (♂ / ♀) | # call types (duration ratio) / units
+ chimpanzee | 235 | ~8s / 1,955s / 1,964s | 11 (11 / 0) | 4 (6:3:3:1) / 686
+ orangutan | 65 | ~74s / 2,793s / 4,817s | 10 (10 / 0) | 7 (700:40:200:100:20:1:1) / 9016
+ bonobo | 28 | ~24s / 62s / 677s | 7 (3 / 4) | 18 (20:40:10:20:...:200:...:5:1) / 356
+ Animal sound classification involves related research that addresses animal sounds specifically, e.g., [13, 14] on the classification of general animal species and [15, 16, 17] on the species, call types, and caller identities of primates. While this line of research uses similar acoustic feature representation techniques as seen in SED tasks, the models tend to work on top of individual short audio units of animal vocalizations that are manually selected from raw audio recordings by human experts, and thus fall short of dealing with a continuous audio signal that contains many audio events as well as noise. Instead, our method detects and classifies great ape calls directly from raw recordings ranging from seconds to minutes (recurrent models theoretically also extend to hours, though this could pose challenges for model training time).
+ Automatic Speech Recognition (ASR) on humans – one of the five extant great apes – has seen significant progress [18] (while animal calls remain difficult to fully decipher), including the recent wav2vec 2.0 [1]. Wav2vec 2.0 is pretrained on Librispeech [2] in a self-supervised way, and it demonstrates the feasibility of ASR with limited amounts of labeled data. We are interested in whether the speech representation learned by wav2vec 2.0 can be applied successfully to great ape call detection and classification.
+ 3. DATA
+ Our data consists of recordings of chimpanzee pant-hoots, orangutan long calls, and bonobo high-hoots. Table 1 provides an overview of these data sets.
+ Chimpanzee pant-hoots are vocal sequences composed of up to four acoustically distinct phases, typically produced in this order: introduction, build-up, climax, and let-down [19]. Most pant-hoots contain two or more phases, although single phases can be produced in specific contexts [20]. The successive nature and well-balanced phase durations facilitate the training of a classifier (§5.1). We use annotated recordings of isolated pant-hoots (i.e., with no temporal overlap with others' calls) produced during feeding, traveling, and resting contexts.
+ Orangutan long calls are composed of a full pulse, which is sub-divided into a sub-pulse transitory element and a pulse body, or into sequences of bubble sub-pulses or grumble sub-pulses [21, 22]. The complex temporal interrelationship between phases and the overlapping, class-unbalanced characteristics (some phases are extremely short) pose challenges for training a multi-class classifier. On the other hand, the durations of calls and non-calls in the data set are well balanced, which opens the gate to a binary call detection model (§5.2). Annotated recordings include spontaneous long calls and long calls produced in response to other males' vocal presence or environmental disturbances.
+ Bonobo high-hoots – which make up the plurality of our bonobo call data set – are loud, tonal vocalizations [23]. Unlike the chimpanzee and orangutan data sets, call types other than the homologous loud calls are also present, but they are rather minor and diverse (e.g., peep-yelp, soft barks), which can make them challenging to find. The total call duration is relatively short.
+ 4. METHOD
+ This section introduces our method, which is illustrated in Fig. 1. We describe each step in turn.
+ 4.1. Audio Preprocessing
+ First, all audio clips are converted to .wav format and resampled to a 16 kHz sample rate. We segment them into 20 ms frames and pad with zeros where necessary.
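The framing step described above can be sketched in a few lines. The following is a minimal NumPy illustration (the function name and test signal are ours, not the authors' code); at 16 kHz, a 20 ms frame holds 320 samples:

```python
import numpy as np

def frame_audio(waveform, sr=16000, frame_ms=20):
    """Cut a 1-D signal into non-overlapping 20 ms frames, zero-padding the tail."""
    spf = sr * frame_ms // 1000          # samples per frame: 320 at 16 kHz
    pad = (-len(waveform)) % spf         # zeros needed to fill the last frame
    padded = np.pad(waveform, (0, pad))
    return padded.reshape(-1, spf)       # shape (T, 320)

frames = frame_audio(np.zeros(16100))
print(frames.shape)  # (51, 320): 50 full frames + 1 zero-padded frame
```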
+ 4.2. Feature and Label Extraction
+ Next, we extract three types of acoustic features with the torchaudio Python package. For an audio clip of T frames, we get a sequence of frame-level features I_{1:T}, with shape T × feature_dim:
+ Raw waveform: the original amplitude values over time; feature_dim = 0.02 × 16000 = 320.
+ Spectrogram: calculated from the raw waveform, with feature_dim = 201.
+ wav2vec 2.0: inferred from the raw waveform by the WAV2VEC2_BASE model on a CPU device, with feature_dim = 768.
+ We then extract frame-level labels from our annotations. For each annotated unit, we mark all frames inside the annotated time span with a positive class index indicating the call type (e.g., 1 for intro); unmarked frames are 0 by default, indicating non-calls. The labels of a clip L_{1:T} have shape T × 1.
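The label extraction just described can be sketched in plain Python. The annotation format assumed here, (start second, end second, class index), is our illustrative choice, not the authors' actual schema:

```python
def frame_labels(annotations, num_frames, frame_ms=20):
    """Build per-frame labels L_{1:T} from (start_s, end_s, class_idx) units.

    Frames not covered by any annotated unit keep label 0 (non-call).
    """
    labels = [0] * num_frames
    for start_s, end_s, cls in annotations:
        first = int(start_s * 1000 // frame_ms)
        last = min(num_frames, int(end_s * 1000 // frame_ms) + 1)
        for t in range(first, last):
            labels[t] = cls
    return labels

# A 2 s clip (100 frames) with an 'intro' unit (class 1) from 0.50 s to 0.99 s:
labels = frame_labels([(0.50, 0.99, 1)], num_frames=100)
print(labels[24], labels[25], labels[49], labels[50])  # 0 1 1 0
```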
+ 4.3. Data Split
+ We shuffle the clips (features and labels) of each data set and split them into 80%, 10%, and 10% for the training, validation, and test sets, respectively. We make three different splits with three different random seeds (0, 42, and 3407 [24]) to allow multiple experiments and to study the effect of randomness.
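A minimal sketch of this split procedure, assuming clips are identified by integer indices (function and variable names are illustrative):

```python
import random

def split_clips(clip_ids, seed):
    """Shuffle clips deterministically and split 80/10/10 into train/val/test."""
    ids = list(clip_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(0.8 * len(ids))
    n_val = int(0.1 * len(ids))
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

# One split per seed, as in the paper (seeds 0, 42, and 3407):
splits = {seed: split_clips(range(235), seed) for seed in (0, 42, 3407)}
train, val, test = splits[42]
print(len(train), len(val), len(test))  # 188 23 24
```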
+ 4.4. Sequence Modeling
+ Since our goal is to learn a function f from I_{1:T} to L_{1:T}, we first input I_{1:T} to a sequence modeling function f_m that outputs a hidden sequence H_{1:T} (T × hidden_dim), which captures inter-frame interaction. H_{1:T} then goes through a dense linear function f_d (output D_{1:T}) followed by a Softmax function f_s to produce the probability distribution over target classes, i.e., P_{1:T} (T × num_class). Specifically, f_m takes the following forms:
+ RNN: a bidirectional LSTM with hidden size 1024; hidden_dim = 1024 × 2 = 2048.
+ Transformer encoder: a Transformer encoder with 8 heads and 6 layers; hidden_dim = 1024.
+ Autoregressive model: we optionally add autoregressive connections between time steps to encourage consistent output labels. The output of f_d at time step t, i.e., D_t, is concatenated to the next time step's input I_{t+1}.
+ Lastly, a cross-entropy loss is computed between the model output P_{1:T} and the gold labels L_{1:T}, then backpropagated to train the models. To mitigate the class imbalance problem, we set class weights to the reciprocal of each class's occurrence counts.
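The RNN (non-autoregressive) variant of f_m together with the weighted cross-entropy objective can be sketched in PyTorch as follows. The hidden size, bidirectionality, and reciprocal class weights mirror the text; the class counts, number of classes, and clip sizes are made-up illustrations:

```python
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """BiLSTM f_m followed by linear f_d; Softmax f_s is folded into the loss."""
    def __init__(self, feature_dim=768, hidden=1024, num_class=5):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden, batch_first=True,
                            bidirectional=True)    # hidden_dim = 2 * 1024 = 2048
        self.out = nn.Linear(2 * hidden, num_class)

    def forward(self, x):                          # x: (B, T, feature_dim)
        h, _ = self.lstm(x)                        # H_{1:T}: (B, T, 2048)
        return self.out(h)                         # D_{1:T}: (B, T, num_class)

model = FrameClassifier()
feats = torch.randn(2, 100, 768)                   # 2 clips, 100 frames each
logits = model(feats)

# Class weights = reciprocal of each class's frame count, to counter imbalance.
counts = torch.tensor([800., 60., 30., 30., 10.])
loss_fn = nn.CrossEntropyLoss(weight=1.0 / counts)
labels = torch.randint(0, 5, (2, 100))
loss = loss_fn(logits.reshape(-1, 5), labels.reshape(-1))
print(logits.shape, loss.item() > 0)
```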
131
5. EXPERIMENTS AND RESULTS

Our experiments were done in PyTorch [25], with Python 3.8 on an Nvidia Tesla V100 GPU (32 GB RAM). Most models have ∼40 million parameters and finish training of up to 200 epochs, with early stopping on validation F1-score, within one hour (or a few hours, because autoregressive recurrent models run slowly in PyTorch). Table 2 presents our results.

Figure 1: Our method (bottom-up): raw waveform → spectrogram features and wav2vec 2.0 features → an (ar vs. non-ar) frame-level sequence model → strong label predictions (intro, build up, climax, let down) via a Softmax distribution. The audio clip shown in the figure is from the chimpanzee data set. (Non-)ar stands for (non-)autoregressive.
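The training protocol (up to 200 epochs with early stopping on validation F1-score) can be sketched as below. This is our illustration, not the authors' training loop, and the `patience` value is an assumption, since no value is stated here:

```python
def train_with_early_stopping(train_epoch, val_f1, max_epochs=200, patience=10):
    """Run up to max_epochs; stop once validation F1 has not
    improved for `patience` consecutive epochs."""
    best_f1, best_epoch, stale = -1.0, 0, 0
    for epoch in range(1, max_epochs + 1):
        train_epoch(epoch)          # one pass over the training data
        f1 = val_f1(epoch)          # evaluate on the dev set
        if f1 > best_f1:
            best_f1, best_epoch, stale = f1, epoch, 0
        else:
            stale += 1
            if stale >= patience:
                break               # stop early, keep the best checkpoint
    return best_epoch, best_f1
```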
5.1. Initial Exploration with Chimpanzee Data

We first test the viability of our approach on the chimpanzee data in light of the simplicity of the pant-hoot annotation scheme, although phylogenetically orangutans are closer to humans. We start from E1, a simple waveform + LSTM baseline, and observe in E2 that wav2vec 2.0 outperforms the raw waveform and spectrogram by a large margin, which demonstrates the power of transfer learning from pretraining on human speech. In E2.1, we find that the Transformer encoder does not outperform the LSTM.
Hence, we infer that the Transformer's ability to capture arbitrary long-range dependencies is not beneficial to our task.

id    data    feature      model                          dev acc.  dev f1    test acc.  test f1   aucpr
Explore the best feature and model combination
E1    chimp   waveform     lstm (baseline)                51.7±2.1  35.4±2.5  51.0±3.6   34.7±4.3
E1.1  chimp   spectrogram  lstm                           60.3±1.5  55.7±2.3  58.7±4.7   53.9±5.4
E2    chimp   wav2vec2     lstm                           81.0±4.0  79.9±4.6  79.3±2.3   77.9±3.6
E2.1  chimp   wav2vec2     transformer                    71.3±0.6  68.3±0.2  75.3±0.6   72.1±0.5
Explore the hyper-parameters
E3.1  chimp   wav2vec2     lstm (E2 + batch_size = 4)     69.7±1.5  71.8±2.6  67.7±4.0   69.6±4.0
E3.2  chimp   wav2vec2     lstm (E2 + batch_size = 8)     63.3±0.6  62.6±1.0  62.0±4.4   61.5±4.0
E3.3  chimp   wav2vec2     lstm (E2 + dropout = 0.2)      80.7±3.5  80.0±4.4  78.0±1.7   76.8±2.7
E3.4  chimp   wav2vec2     lstm (E2 + dropout = 0.1)      81.0±4.0  80.2±4.8  78.7±2.9   77.3±3.9
E3.5  chimp   wav2vec2     lstm (E2 - balance_weights)    81.0±3.6  79.6±4.4  79.3±2.3   78.3±3.6
Explore autoregressive modeling
E4    chimp   wav2vec2     lstm (E2 + autoregressive)     87.7±1.2  87.1±1.8  85.7±2.1   85.6±2.5
Extend to orangutan long calls and a binary setting
E5    orang   wav2vec2     lstm (= E4)                    83.0±1.0  82.7±1.4  81.7±3.1   82.0±2.6
E5.1  orang   wav2vec2     lstm (E5 + binary target)      92.3±2.5  92.1±2.5  92.0±1.0   91.9±1.1  0.96
Extend to bonobo calls and a binary setting
E6    bonobo  wav2vec2     lstm (= E4)                    87.0±4.6  85.9±6.3  83.7±3.8   82.3±2.2
E6.1  bonobo  wav2vec2     lstm (E6 + binary target)      92.0±3.6  91.9±3.4  87.7±3.5   87.8±2.9  0.87
Zero-shot transferring from orangutan to bonobo
E7    bonobo  wav2vec2     lstm (= E5.1)                  63.0±13   69.2±10   72.0±4.0   74.2±3.1  0.55

Table 2: Experimental results. We run all experiments three times with different random seeds and report the mean and standard deviation. acc. stands for frame-level accuracy, f1 stands for the frame-level average F1-score weighted by the number of true instances per class, and aucpr stands for the area under the precision-recall curve for the positive class in the binary case at test time when the random seed is set to 0. For hyper-parameters, we start E1 with batch_size = 1 and dropout = 0.4 and keep these defaults unless otherwise specified in the table.
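The weighted frame-level F1 described in the Table 2 caption can be computed from scratch as below (equivalent to scikit-learn's `f1_score(..., average='weighted')`); this is our illustrative sketch, not the paper's evaluation code:

```python
def weighted_f1(gold, pred):
    """Per-class F1 over frame labels, averaged with each class
    weighted by its number of true (gold) frames."""
    total = 0.0
    for c in set(gold):
        tp = sum(1 for g, p in zip(gold, pred) if g == c and p == c)
        fp = sum(1 for g, p in zip(gold, pred) if g != c and p == c)
        fn = sum(1 for g, p in zip(gold, pred) if g == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += f1 * (tp + fn)  # weight by class support
    return total / len(gold)     # supports sum to the number of frames
```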
Next, we explore the hyper-parameters in the second experimental group, and we find in E3.5 that balancing class weights (§4.4) has a small impact. Lastly, we show in E4 that the autoregressive connections are beneficial for consistent output, as illustrated by the tiny gaps in the non-ar output in Fig. 1.
5.2. Extending to Orangutan and Bonobo Data

We successfully extend the model configuration of E4 to the orangutan (E5) and bonobo (E6) data sets. We note that some minority classes perform less well due to data scarcity, in contrast to the well-balanced situation in the chimpanzee data set (see Table 1). We further reduce the task to a binary (call vs. non-call) classification that resembles voice activity detection of human speech. This is a useful tool to automatically extract calls from raw recordings for further bioacoustic analysis (e.g., studying the repertoire of a given species). Finally, to understand the generalizability of our models, we try zero-shot transferring the model trained in E5.1 on orangutans directly to unseen bonobo data. The results show that it is promising to build a potential general-purpose sound event detection model for all great ape calls.
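Turning the binary frame-level output into time-stamped call segments, as such an extraction tool would, can be sketched with a simple run-length pass. The frame rate is a placeholder (wav2vec 2.0 emits roughly 50 feature frames per second, but the exact output resolution used here is not stated):

```python
def frames_to_segments(pred, frame_rate_hz=50.0):
    """Collapse per-frame binary labels (1 = call) into
    (start_sec, end_sec) segments of contiguous call frames."""
    segments, start = [], None
    for i, label in enumerate(pred):
        if label == 1 and start is None:
            start = i                                   # segment opens
        elif label != 1 and start is not None:
            segments.append((start / frame_rate_hz, i / frame_rate_hz))
            start = None                                # segment closes
    if start is not None:                               # call runs to the end
        segments.append((start / frame_rate_hz, len(pred) / frame_rate_hz))
    return segments
```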
6. DISCUSSION

We have addressed a gap in the bioacoustics research of non-human great apes by developing an approach for automatically identifying sound events and classifying great ape calls using a neural network architecture. Our method successfully and accurately identifies and classifies calls in three species of non-human great apes, and provides a tool for primatologists to bootstrap call identification and analysis from raw unannotated audio recordings. Our method also shows the general applicability of the wav2vec 2.0 model trained on human speech for identifying vocalizations and call types in other species. Future work may apply our approach to more animals, as part of the goal to decode the communication systems of great apes and other non-human animals more broadly.2
1 https://github.com/J22Melody/sed_great_ape
2 For example: https://www.earthspecies.org
+ page_content=' REFERENCES [1] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
324
+ page_content=' Baevski, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
325
+ page_content=' Zhou, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
326
+ page_content=' Mohamed, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
327
+ page_content=' Auli, “wav2vec 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
328
+ page_content='0: A framework for self-supervised learning of speech representations,” Advances in Neural Information Processing Systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
329
+ page_content=' 33, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
330
+ page_content=' 12 449–12 460, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
331
+ page_content=' [2] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
332
+ page_content=' Panayotov, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
333
+ page_content=' Chen, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
334
+ page_content=' Povey, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
335
+ page_content=' Khudanpur, “Librispeech: an asr corpus based on public domain audio books,” in 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
336
+ page_content=' IEEE, 2015, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
337
+ page_content=' 5206–5210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
338
+ page_content=' [3] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
339
+ page_content=' Mesaros, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
340
+ page_content=' Heittola, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
341
+ page_content=' Virtanen, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
342
+ page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
343
+ page_content=' Plumbley, “Sound event detection: A tutorial,” IEEE Signal Processing Magazine, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
344
+ page_content=' 38, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
345
+ page_content=' 5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
346
+ page_content=' 67–83, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
347
+ page_content=' [4] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
348
+ page_content=' Arons, “A review of the cocktail party effect,” Journal of the American Voice I/O society, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
349
+ page_content=' 12, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
350
+ page_content=' 7, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
351
+ page_content=' 35–50, 1992.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
352
+ page_content=' [5] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
353
+ page_content=' Turpault, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
354
+ page_content=' Serizel, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
355
+ page_content=' Parag Shah, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
356
+ page_content=' Salamon, “Sound event detection in domestic environments with weakly labeled data and soundscape synthesis,” in Workshop on Detection and Classification of Acoustic Scenes and Events, New York City, United States, October 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
357
+ page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
358
+ page_content=' Available: https://hal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
359
+ page_content='inria.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
360
+ page_content='fr/hal-02160855 [6] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
361
+ page_content=' Mesaros, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
362
+ page_content=' Heittola, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
363
+ page_content=' Benetos, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
364
+ page_content=' Foster, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
365
+ page_content=' Lagrange, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
366
+ page_content=' Virtanen, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
367
+ page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
368
+ page_content=' Plumbley, “Detection and classification of acoustic scenes and events: Outcome of the DCASE 2016 challenge,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
369
+ page_content=' 26, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
370
+ page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
371
+ page_content=' 379–393, Feb 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
372
+ page_content=' [7] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
373
+ page_content=' Ebbers and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
374
+ page_content=' Haeb-Umbach, “Self-trained audio tagging and sound event detection in domestic environments,” in Proceedings of the 6th Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
375
+ page_content=' [8] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
376
+ page_content=' Gong, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
377
+ page_content='-A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
378
+ page_content=' Chung, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
379
+ page_content=' Glass, “Ast: Audio spectrogram transformer,” arXiv preprint arXiv:2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
380
+ page_content='01778, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
381
+ page_content=' [9] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
382
+ page_content=' Dai, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
383
+ page_content=' Dai, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
384
+ page_content=' Qu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
385
+ page_content=' Li, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
386
+ page_content=' Das, “Very deep convolutional neural networks for raw waveforms,” in 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
387
+ page_content=' IEEE, 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
388
+ page_content=' 421–425.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
389
+ page_content=' [10] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
390
+ page_content=' Vaswani, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
391
+ page_content=' Shazeer, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
392
+ page_content=' Parmar, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
393
+ page_content=' Uszkoreit, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
394
+ page_content=' Jones, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
395
+ page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
396
+ page_content=' Gomez, Ł.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
397
+ page_content=' Kaiser, and I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
398
+ page_content=' Polosukhin, “Attention is all you need,” Advances in neural information processing systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
399
+ page_content=' 30, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
400
+ page_content=' [11] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
401
+ page_content=' Graves, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
402
+ page_content=' Fernández, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
403
+ page_content=' Gomez, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
404
+ page_content=' Schmidhuber, “Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks,” in Proceedings of the 23rd international conference on Machine learning, 2006, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
405
+ page_content=' 369–376.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
406
+ page_content=' [12] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
407
+ page_content=' Hou, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
408
+ page_content=' Kong, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
409
+ page_content=' Wang, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
410
+ page_content=' Li, “Polyphonic audio tagging with sequentially labelled data using crnn with learnable gated linear units,” arXiv preprint arXiv:1811.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
411
+ page_content='07072, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
412
+ page_content=' [13] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
413
+ page_content=' Sun, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
414
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
415
+ page_content=' Maeda, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
416
+ page_content=' Solis-Lemus, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
417
+ page_content=' Pimentel- Alarcon, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
418
+ page_content=' Burivalova, “Classification of animal sounds in a hyperdiverse rainforest using convolutional neural networks,” arXiv preprint arXiv:2111.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
419
+ page_content='14971, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
420
+ page_content=' [14] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
421
+ page_content=' ¸Sa¸smaz and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
422
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
423
+ page_content=' Tek, “Animal sound classification using a convolutional neural network,” in 2018 3rd International Conference on Computer Science and Engineering (UBMK).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
424
+ page_content=' IEEE, 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
425
+ page_content=' 625–629.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
426
+ page_content=' [15] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
427
+ page_content=' Pozzi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
428
+ page_content=' Gamba, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
429
+ page_content=' Giacoma, “The use of artificial neural networks to classify primate vocalizations: a pilot study on black lemurs,” American Journal of Primatology: Official Journal of the American Society of Primatologists, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
430
+ page_content=' 72, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
431
+ page_content=' 4, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
432
+ page_content=' 337–348, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
433
+ page_content=' [16] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
434
+ page_content=' Mielke and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
435
+ page_content=' Zuberbühler, “A method for automated individual, species and call type recognition in free-ranging animals,” Animal Behaviour, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
436
+ page_content=' 86, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
437
+ page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
438
+ page_content=' 475–482, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
439
+ page_content=' [17] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
440
+ page_content=' Fedurek, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
441
+ page_content=' Zuberbühler, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
442
+ page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
443
+ page_content=' Dahl, “Sequential information in a great ape utterance,” Scientific reports, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
444
+ page_content=' 6, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
445
+ page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
446
+ page_content=' 1–11, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
447
+ page_content=' [18] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
448
+ page_content=' Li et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
449
+ page_content=', “Recent advances in end-to- end automatic speech recognition,” APSIPA Transactions on Signal and Information Processing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
450
+ page_content=' 11, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
451
+ page_content=' 1, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
452
+ page_content=' [19] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
453
+ page_content=' Marler and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
454
+ page_content=' Hobbett, “Individuality in a long-range vocalization of wild chimpanzees,” Zeitschrift für Tierpsychologie, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
455
+ page_content=' 38, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
456
+ page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
457
+ page_content=' 97–109, 1975.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
458
+ page_content=' [20] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
459
+ page_content=' Soldati, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
460
+ page_content=' Fedurek, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
461
+ page_content=' Dezecache, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
462
+ page_content=' Call, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
463
+ page_content=' Zuberbühler, “Audience sensitivity in chimpanzee display pant hoots,” Animal Behaviour, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
464
+ page_content=' 190, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
465
+ page_content=' 23–40, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
466
+ page_content=' [21] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
467
+ page_content=' Hardus, “A description of the orangutan’s vocal and sound repertoire, with a focus on geographical variation,” Orangutans: Geographic variation in behavioral ecology and conservation, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
468
+ page_content=' 49–64, 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
469
+ page_content=' [22] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
470
+ page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
471
+ page_content=' Lameira and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
472
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
473
+ page_content=' Wich, “Orangutan long call degradation and individuality over distance: a playback approach,” International Journal of Primatology, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
474
+ page_content=' 29, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
475
+ page_content=' 3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
476
+ page_content=' 615–625, 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
477
+ page_content=' [23] I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
478
+ page_content=' Schamberg, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
479
+ page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
480
+ page_content=' Cheney, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
481
+ page_content=' Clay, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
482
+ page_content=' Hohmann, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
483
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
484
+ page_content=' Seyfarth, “Call combinations, vocal exchanges and interparty movement in wild bonobos,” Animal Behaviour, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
485
+ page_content=' 122, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
486
+ page_content=' 109– 116, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
487
+ page_content=' [24] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
488
+ page_content=' Picard, “Torch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
489
+ page_content=' manual_seed (3407) is all you need: On the influence of random seeds in deep learning architectures for computer vision,” arXiv preprint arXiv:2109.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
490
+ page_content='08203, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
491
+ page_content=' [25] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
492
+ page_content=' Paszke, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
493
+ page_content=' Gross, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
494
+ page_content=' Massa, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
495
+ page_content=' Lerer, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
496
+ page_content=' Bradbury, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
497
+ page_content=' Chanan, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
498
+ page_content=' Killeen, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
499
+ page_content=' Lin, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
500
+ page_content=' Gimelshein, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
501
+ page_content=' Antiga et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
502
+ page_content=', “Pytorch: An imperative style, high-performance deep learning library,” Advances in neural information processing systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
503
+ page_content=' 32, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9E0T4oBgHgl3EQfSAAQ/content/2301.02214v1.pdf'}
INE0T4oBgHgl3EQfRwA9/content/tmp_files/2301.02211v1.pdf.txt ADDED
@@ -0,0 +1,1107 @@
1
+ Teaching Computer Vision for Ecology
+ Elijah Cole1, Suzanne Stathatos1, Björn Lütjens2, Tarun Sharma1, Justin Kay1,3, Jason Parham4, Benjamin Kellenberger5, Sara Beery1,2
+ 1Caltech, 2MIT, 3Ai.Fish, 4Wild Me, 5Yale University
+ https://cv4ecology.caltech.edu/
+ Abstract
+ Computer vision can accelerate ecology research by automating the analysis of raw imagery from sensors like camera traps, drones, and satellites. However, computer vision is an emerging discipline that is rarely taught to ecologists. This work discusses our experience teaching a diverse group of ecologists to prototype and evaluate computer vision systems in the context of an intensive hands-on summer workshop. We explain the workshop structure, discuss common challenges, and propose best practices. This document is intended for computer scientists who teach computer vision across disciplines, but it may also be useful to ecologists or other domain experts who are learning to use computer vision themselves.
+ 1 Introduction
+ Extracting important information from images and videos normally requires painstaking manual effort from human annotators. Computer vision algorithms can automate this process. This is especially important when manually reviewing the data is not feasible, either because the amount of data is too large (e.g. the >100TB of satellite imagery collected daily) or the number of annotators is too small (e.g. when expertise is required to identify a species in an image). Both of these challenges are common in ecology.
+ Ecology presents a particularly compelling use case for computer vision. Due to the effects of climate change, we need to monitor animal populations, vegetation properties, and other indicators of ecosystem health at a large scale [41]. Ecologists are collecting vast amounts of raw data with camera traps, drones, and satellites, but there are not enough experts to annotate the data. Computer vision algorithms can accelerate the pace of research in ecology by efficiently transforming this raw data into useful knowledge. Encouraging progress is already being made in areas like animal detection [18,25,29], fine-grained species recognition [42], individual re-identification [38], species distribution modeling [19,21], and land cover mapping [32]. These efforts can be viewed in the broader context of computational sustainability [23] and efforts to use machine learning to combat the effects of climate change [33].
+ [Figure 1: dependency graph with nodes including Python, Data Types, Data Structures, Mutability, Namespaces, OOP, Unix, Git, AWS, PyTorch, Deep Learning, Loss Functions, Data Splits, Overfitting, Generalization, Evaluation, Ablations, and Model Development, grouped under Software Engineering and Machine Learning.]
+ Figure 1: A simplified dependency graph depicting some of the skills required to develop a computer vision system. These software engineering and machine learning topics are rarely included in ecology training. See Appendix B for a catalog of key topics and their significance.
+ To build on this progress, we must equip ecologists with the skills they need to understand and apply computer vision methods in their research. While ecologists often have training in statistics and programming, they are rarely exposed to the interconnected web of software engineering and machine learning topics necessary for computer vision. We illustrate a few of these topics in Figure 1.
+ In this work, we discuss the process of teaching computer vision to ecologists in the context of the Resnick Sustainability Institute Summer Workshop on Computer Vision Methods for Ecology (CV4E Workshop), an intensive 3-week workshop held at Caltech in 2022 [4]. We review related work in Section 2 before describing the workshop in Section 3, discussing key take-aways in Section 4, and outlining educational techniques we found useful in Section 5.
+ arXiv:2301.02211v1 [cs.CY] 5 Jan 2023
+ 2 Related Work
+ There is an emerging literature devoted to teaching machine learning [20,34,37], deep learning [30], and computer vision [24,26,31,36]. [35] more narrowly focuses on common errors in machine learning course projects. However, most of these works concern efforts to teach students from computer science or related disciplines. There is prior work discussing the specific challenge of teaching machine learning to cross-disciplinary audiences such as non-CS undergraduates [39], business students [43], artists [22], materials scientists [40], and biologists [28]. Our work is complementary, focusing on the process of teaching computer vision to ecologists (mostly Ph.D. students and postdocs – see Figure 2) who have background knowledge in statistics and programming but little prior experience in machine learning. In addition, we consider an immersive workshop in which researchers build prototypes using their own research data, not a traditional classroom environment.
+ 3 The CV4E Workshop
+ The inaugural CV4E Workshop was held at Caltech from August 1 - 19, 2022. The program was designed to train ecologists to use computer vision in their own research. Here we outline the stages of the workshop.
+ Application. The application had five components: (i) a one-page project proposal, (ii) a one-page personal statement, (iii) a programming example, (iv) one letter of reference, and (v) a CV. The most important element was the project proposal, in which participants described the problem they wanted to solve with computer vision, the potential impact of a working solution, and the available data and labels.
+ Selection process. The CV4E staff recruited application reviewers from the machine learning and ecology communities. Each application received two reviews. Final decisions were made by the CV4E staff. The primary criteria were: (i) goal clarity, (ii) project feasibility, (iii) potential impact, and (iv) candidate preparation. Details about the 2022 cohort can be found in Figure 2 and Appendix A. To maximize accessibility, all participants were funded for travel, room, and board for the duration of the program.
+ [Figure 2: two pie charts. Participant Type: Ph.D. Student 55.6%, Postdoc 27.8%, Gov't / NGO 11.1%, M.S. Student 5.6%. Project Type: Classification 33.3%, Detection 22.2%, Segmentation 16.7%, Re-ID 11.1%, Regression 5.6%, Clustering 5.6%, Super-resolution 5.6%.]
+ Figure 2: Summary of the 2022 CV4E Workshop participant backgrounds (top) and project categories (bottom). Full details can be found in Appendix A.
+ Pre-workshop preparation. All participants were added to a Slack workspace which served as the primary communication channel for the workshop. Each participant was assigned to a working group overseen by a CV4E instructor. During the ~6 months between participant selection and the beginning of the workshop, participants met with their instructors to finalize project plans and address any data or label issues. Participants were also expected to learn Python during this period. Instructors assisted by providing Python resources and holding biweekly office hours.
+ In-person workshop. Figure 3 gives a representative weekly schedule for the CV4E Workshop. Participants received classroom instruction from Lectures and Invited Speakers. Each participant joined a Reading Group on a topic of their choice (see Appendix D), which met twice weekly for a guided discussion of research papers. During the Work Time, participants worked on their projects independently, with CV4E staff and working group peers available for questions. Each working group discussed their progress and obstacles during the Group Updates.
+ Outcomes. All 18 of our participants had trained models for their projects by the end of the workshop. Some of these models were already achieving high performance, while others needed more investigation. In addition, the participants and staff formed a community that has endured beyond the workshop through the Slack workspace and ongoing projects.
+ [Figure 3: a Monday-Friday schedule grid from 9:00 AM to 7:00 PM. Mornings mix Lectures, Invited Speakers, and Work Time; early afternoons mix Invited Speakers, twice-weekly Reading Groups, and Work Time; Group Updates meet twice weekly at 4:00 PM; Lunch, Break, and Dinner slots recur daily.]
+ Figure 3: The weekly schedule for the 2022 CV4E Workshop was roughly evenly split between instructional time (Lectures, Invited Speakers, Reading Groups, Group Updates) and instructor-supervised working time (Work Time). In practice, portions of the Group Updates, Lunch, Break, and Dinner slots were often used by participants as extra work time.
+ 4 Lessons Learned
+ Enforce structured Python preparation. The primary obstacle for most participants was insufficient Python preparation. While participants were not required to know Python before applying, they were asked to learn Python before arriving. To facilitate this process, the staff provided resources for learning Python and hosted office hours in the months leading up to the CV4E Workshop. However, many participants (even capable R programmers) still struggled with Python issues throughout the workshop. In hindsight, we overestimated the extent to which R experience is helpful for quickly learning Python. In the future we will enforce more structured Python preparation before the workshop.
+ Start simple. It is challenging to build a working computer vision system from scratch in 3 weeks. To maximize the probability of success, it is important to start simple. When appropriate, we encouraged participants to use standard well-understood pipelines, e.g. fine-tuning an ImageNet-pretrained ResNet-50.
+ Work in long blocks. Participants made much more progress during long blocks of work time (3+ hours) than during shorter work blocks (1-2 hours).
+ Collect similar projects in working groups. Participants were often eager to help each other, especially when they were deploying similar techniques. Working groups should be constructed to maximize opportunities for such collaborations.
+ Mix experience levels in working groups. Some participants had significant experience with machine learning or programming, enabling them to make swift progress on their projects with minimal assistance from instructors. Experienced participants routinely volunteered to assist less experienced participants, which seemed mutually beneficial. In the future, we plan to ensure that each working group has a mix of experienced and inexperienced participants.
+ Make unambiguous infrastructure recommendations. There are many reasonable ways to set up the infrastructure necessary for computer vision work. For instance, consider the problem of developing code which is meant to be executed on a VM. One approach is to edit the code locally in a text editor and move it to the VM using rsync, handling revision control locally. Another approach is to use a tool like VSCode [17] which allows code on the VM to be edited directly via SSH. In this case, revision control would be handled on the VM. A third approach is to edit code locally, push the code to GitHub, and pull the code to the VM. Revision control is “built in” for this workflow. The instructors had different preferences, and no workflow was clearly superior. Participants did not benefit from being asked to make their own choice about which workflow to use. In the future we will provide unambiguous and unified recommendations for development infrastructure.
+ Avoid deep learning library wrappers. There are many “wrappers” for deep learning libraries which are meant to make deep learning tools easier to use. Some are general-purpose (e.g. PyTorch Lightning [13]) while others are domain-specific (e.g. OpenSoundscape [11], DeepForest [5], TorchGeo [16]). While these wrappers are undoubtedly useful, they are not ideal for our participants for two reasons. First, they conceal too much complexity, which hinders the process of learning about e.g. training loops and data flow. Second, they are more difficult to customize and debug, even with instructor assistance. In the future, we will encourage all participants to work directly with deep learning libraries.
+ Avoid Jupyter Notebooks. Jupyter Notebooks provide capabilities familiar to experienced R users, such as the ability to run sections of code interactively. However, participants who relied on Jupyter Notebooks while learning Python often struggled to transition to more traditional command line workflows when developing their computer vision systems. We now believe that learning to work with Python through the command line provides a better foundation for understanding machine learning workflows.
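A "command line workflow" in this sense means plain Python scripts driven by flags rather than notebook cells. A minimal hypothetical skeleton (the script name `train.py` and flags like `--data-dir` are illustrative, not from any workshop project):

```python
# train.py -- minimal skeleton of a flag-driven training script (hypothetical).
import argparse

def parse_args(argv=None):
    """Parse command-line options; argv=None reads sys.argv when run as a script."""
    parser = argparse.ArgumentParser(description="Train a model from the command line.")
    parser.add_argument("--data-dir", default="./data", help="path to the training data")
    parser.add_argument("--epochs", type=int, default=10)
    parser.add_argument("--lr", type=float, default=1e-3)
    return parser.parse_args(argv)

def main(argv=None):
    args = parse_args(argv)
    print(f"Training on {args.data_dir} for {args.epochs} epochs (lr={args.lr})")
    # ... dataset loading and the training loop would go here ...
```

With a final `if __name__ == "__main__": main()` line, this is invoked as e.g. `python train.py --data-dir ./images --epochs 5`, typically inside a tmux session on the VM.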
+ Make sure GPUs are available. Cloud computing services like AWS and Azure often provide free credits for education and research. However, GPUs may not be available depending on customer demand. It is important to confirm with cloud providers that GPUs will be made available. Alternatively, consider using local computing resources or university clusters.
+ 5 Educational Techniques
+ In this section we describe a few educational techniques we found helpful for the CV4E Workshop.
+ Guided troubleshooting. Troubleshooting and debugging are vital skills in machine learning, and it was important to provide participants with opportunities to hone these abilities. However, due to the tight schedule of the CV4E Workshop, we did not want participants to be stuck on any one problem for too long. To balance these objectives, instructors tried to walk participants through the troubleshooting process by asking leading questions about the problems they were encountering. For unusual problems of limited educational value (i.e. complex configuration or installation issues), instructors intervened to resolve the issue as quickly as possible.
+ Pair pseudocoding. Most of our participants were not comfortable writing Python code at the beginning of the CV4E Workshop, so we wanted to provide frequent opportunities for hands-on coding. Whenever possible, instructors avoided writing code for the participants. To prevent participants from getting stuck on code design issues, we used pair pseudocoding:
+ 1. The instructor asks the participant to explain what they would like to accomplish, discussing until the goal is clear to both parties.
+ 2. The instructor writes pseudocode that solves the problem and walks through it with the participant to help them understand the logic of the solution. The pseudocode can be more specific or vague depending on the participant's needs.
+ 3. The participant writes Python code to solve the problem, while the instructor remains available to answer questions as they arise.
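As a hypothetical illustration of steps 2 and 3, an instructor's pseudocode and a participant's Python translation might look like this for a simple task (tallying labels in an annotation CSV; the task, file layout, and names are invented for this example):

```python
# Instructor's pseudocode (step 2):
#   open the annotation file
#   for each row, read the species label
#   keep a running count for each label
#   return the counts

# Participant's Python translation (step 3):
import csv
from collections import Counter

def count_labels(csv_path, label_column="species"):
    """Tally how many annotations each label has in a CSV with a header row."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row[label_column]] += 1
    return counts
```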
+ Goal statements. During the initial stages of the project, some of our participants felt like progress was not being made because the code didn't “work” yet. To make their progress more salient, some instructors asked participants to make a goal statement at the beginning of each work session, and to check progress towards that goal at the end of each work session. This strategy helped participants to maintain motivation until more tangible results were obtained.
+ Contextualized lectures. Maintaining interest during lectures was not a significant problem for the CV4E Workshop due to the enthusiasm of the participants. However, it is easy for lectures on machine learning topics to become too abstract. We tried to ensure that the lectures remained grounded in applications and examples. Since each participant had their own applied problem in mind, we often paused lectures to ask participants to reflect on how the lecture topic applied to their individual projects. Participants shared their answers with the class, providing concrete examples that illustrated the lecture topic.
+ 6 Conclusion
+ We have described our experience at the inaugural Resnick Sustainability Institute Summer Workshop on Computer Vision Methods for Ecology. We consider the format to be a success, as all of our participants trained models for their projects by the end of the workshop. However, we have also discussed some challenges we encountered and identified opportunities to improve the CV4E Workshop. We hope these observations will be useful for others who teach computer vision across disciplines.
+ 7 Acknowledgements
+ We would like to thank the Resnick Sustainability Institute, Caroline Murphy, Xenia Amashukeli, and Pietro Perona for making the CV4E Workshop possible. Computing credits were provided by Amazon AWS and Microsoft Azure. We also thank the inaugural cohort of the CV4E Workshop: Antón Álvarez, Carly Batist, Peggy Bevan, Catherine Breen, Anna Boser, Tiziana Gelmi Candusso, Melanie Clapham, Rowan Converse, Roni Goldshmid, Natalie Imirzian, Brian Lee, Francesca Ponce, Alixandra Prybyla, Rachel Renne, Felix Rustemeyer, Taiki Sakai, Ethan Shafron, and Casey Youngflesh.
+ References
+ [1] Amazon Web Services (AWS). https://aws.amazon.com/.
+ [2] Audacity. https://www.audacityteam.org/.
+ [3] Computer Vision Annotation Tool (CVAT). https://github.com/opencv/cvat.
+ [4] CV4E Summer Workshop. https://cv4ecology.caltech.edu/.
+ [5] DeepForest. https://deepforest.readthedocs.io/.
+ [6] FFMPEG. https://ffmpeg.org/doxygen/3.0/index.html.
+ [7] ImageMagick. https://imagemagick.org/index.php.
+ [8] ImgLab. https://github.com/NaturalIntelligence/imglab.
+ [9] Microsoft Azure. https://azure.microsoft.com/.
+ [10] OpenCV. https://opencv.org/.
+ [11] OpenSoundscape. http://opensoundscape.org/.
+ [12] PyTorch. https://pytorch.org/.
+ [13] PyTorch Lightning. https://www.pytorchlightning.ai/.
+ [14] TensorBoard. https://www.tensorflow.org/tensorboard.
+ [15] TensorFlow. https://www.tensorflow.org/.
+ [16] TorchGeo. https://torchgeo.readthedocs.io/en/stable/.
+ [17] VSCode. https://code.visualstudio.com/.
+ [18] Sara Beery, Dan Morris, and Siyu Yang. Efficient pipeline for camera trap image review. arXiv preprint arXiv:1907.06772, 2019.
+ [19] Elijah Cole, Benjamin Deneu, Titouan Lorieul, Maximilien Servajean, Christophe Botella, Dan Morris, Nebojsa Jojic, Pierre Bonnet, and Alexis Joly. The GeoLifeCLEF 2020 dataset. arXiv preprint arXiv:2004.04192, 2020.
+ [20] Adrian A de Freitas and Troy B Weingart. I'm going to learn what?!? Teaching artificial intelligence to freshmen in an introductory computer science course. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, pages 198–204, 2021.
+ [21] Benjamin Deneu, Alexis Joly, Pierre Bonnet, Maximilien Servajean, and François Munoz. Very high resolution species distribution modeling based on remote sensing imagery: How to capture fine-grained and large-scale vegetation ecology with convolutional neural networks? Frontiers in Plant Science, 13:839279, 2022.
+ [22] Rebecca Fiebrink. Machine learning education for artists, musicians, and other creative practitioners. ACM Transactions on Computing Education (TOCE), 19(4):1–32, 2019.
+ [23] Carla Gomes, Thomas Dietterich, Christopher Barrett, Jon Conrad, Bistra Dilkina, Stefano Ermon, Fei Fang, Andrew Farnsworth, Alan Fern, Xiaoli Fern, et al. Computational sustainability: Computing for a better world and a sustainable future. Communications of the ACM, 62(9):56–65, 2019.
+ [24] Tal Hassner and Itzik Bayaz. Teaching computer vision: Bringing research benchmarks to the classroom. ACM Transactions on Computing Education (TOCE), 14(4):1–17, 2015.
+ [25] Benjamin Kellenberger, Diego Marcos, and Devis Tuia. Detecting mammals in UAV images: Best practices to address a substantially imbalanced dataset with deep learning. Remote Sensing of Environment, 216:139–153, 2018.
+ [26] Sami Khorbotly. A project-based learning approach to teaching computer vision at the undergraduate level. In 2015 ASEE Annual Conference & Exposition, pages 26–91, 2015.
+ [27] Zachary C Lipton and Jacob Steinhardt. Troubling trends in machine learning scholarship. arXiv preprint arXiv:1807.03341, 2018.
+ [28] Chris S Magnano, Fangzhou Mu, Rosemary S Russ, Milica Cvetkovic, Debora Treu, and Anthony Gitter. An approachable, flexible, and practical machine learning workshop for biologists. bioRxiv, 2022.
+ [29] Jason Parham, Charles Stewart, Jonathan Crall, Daniel Rubenstein, Jason Holmberg, and Tanya Berger-Wolf. An animal detection pipeline for identification. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1075–1083, 2018.
+ [30] Simon J.D. Prince. Understanding Deep Learning. MIT Press, 2022.
+ [31] S.J.D. Prince. Computer Vision: Models, Learning and Inference. Cambridge University Press, 2012.
+ [32] Caleb Robinson, Le Hou, Kolya Malkin, Rachel Soobitsky, Jacob Czawlytko, Bistra Dilkina, and Nebojsa Jojic. Large scale high-resolution land cover mapping with multi-resolution data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12726–12735, 2019.
+ [33] David Rolnick, Priya L Donti, Lynn H Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman-Brown, et al. Tackling climate change with machine learning. ACM Computing Surveys (CSUR), 55(2):1–96, 2022.
+ [34] Omar Shouman, Simon Fuchs, and Holger Wittges. Experiences from teaching practical machine learning courses to master's students with mixed backgrounds. In Proceedings of the Second Teaching Machine Learning and Artificial Intelligence Workshop, pages 62–67. PMLR, 2022.
+ [35] James Skripchuk, Yang Shi, and Thomas Price. Identifying common errors in open-ended machine learning projects. In Proceedings of the 53rd ACM Technical Symposium on Computer Science Education V. 1, pages 216–222, 2022.
+ [36] Scott Spurlock and Shannon Duvall. Making computer vision accessible for undergraduates. Journal of Computing Sciences in Colleges, 33(2):215–221, 2017.
+ [37] Peter Steinbach, Heidi Seibold, and Oliver Guhr. Teaching machine learning in 2020. In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, pages 1–6. PMLR, 2021.
+ [38] Charles V. Stewart, Jason R. Parham, Jason Holmberg, and Tanya Y. Berger-Wolf. The animal ID problem: Continual curation. arXiv preprint arXiv:2106.10377, 2021.
+ [39] Elisabeth Sulmont, Elizabeth Patitsas, and Jeremy R Cooperstock. What is hard about teaching machine learning to non-majors? Insights from classifying instructors' learning goals. ACM Transactions on Computing Education (TOCE), 19(4):1–16, 2019.
+ [40] Shijing Sun, Keith Brown, and A Gilad Kusne. Teaching machine learning to materials scientists: Lessons from hosting tutorials and competitions. Matter, 5(6):1620–1622, 2022.
+ [41] Devis Tuia, Benjamin Kellenberger, Sara Beery, Blair R Costelloe, Silvia Zuffi, Benjamin Risse, Alexander Mathis, Mackenzie W Mathis, Frank van Langevelde, Tilo Burghardt, et al. Perspectives in machine learning for wildlife conservation. Nature Communications, 13(1):1–15, 2022.
+ [42] Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. The iNaturalist species classification and detection dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8769–8778, 2018.
+ [43] Linus Wunderlich, Allen Higgins, and Yossi Lichtenstein. Machine learning for business students: An experiential learning approach. In Proceedings of the 26th ACM Conference on Innovation and Technology in Computer Science Education V. 1, pages 512–518, 2021.
+ A 2022 Cohort
+ A.1 Participant Backgrounds
+ The inaugural 2022 CV4E Workshop had 18 participants. Broken down by current occupation, our cohort consisted of:
+ • 1 Master's student;
+ • 10 Ph.D. students;
+ • 5 post-doctoral researchers; and
+ • 2 researchers from government agencies or non-governmental organizations.
+ Broken down geographically, our cohort consisted of:
+ • 11 participants from 7 different states in the U.S.;
+ • 5 participants from European countries; and
+ • 2 participants from Canada.
+ Participants came from diverse academic backgrounds, including conservation biology, biological anthropology, geography, mechanical engineering, civil engineering, neuroscience, and ecology.
+ A.2 Participant Projects
+ Projects fell into seven main categories.
+ 1. Individual Re-Identification: Associating images of the same animal taken from various cameras, locations, and times. The two relevant projects were: (1) re-identifying bears, (2) re-identifying Iberian Lynx.
+ 2. Regression: Assigning a continuous number to an image or video. The one relevant project was: (1) analyzing wind speed from overhead drone video of trees.
+ 3. Classification: Categorizing or labeling images or parts of images from a fixed collection of categories. The six relevant projects were: (1) determining presence or absence of lemur vocalizations, (2) beaked whale species classification from echolocation clicks, (3) bumblebee species and caste classification from flight sounds, (4) assigning ants to size categories, (5) identifying weather conditions from camera trap images, and (6) species identification in urban camera traps. Note that projects (1), (2), and (3) used computer vision techniques to classify images (spectrograms) that represent audio signals.
+ 4. Object Detection: Locating instances of objects in images or videos. The four relevant projects were detecting: (1) piospheres, (2) woodland draws, (3) flies, and (4) waterfowl. Projects (3) and (4) use detection as an intermediate step towards counting.
+ 5. Segmentation: Classifying pixels based on their semantic characteristics. The three relevant projects were segmenting: (1) walrus groups in the Arctic, (2) permafrost, and (3) trees. All projects were based on remote sensing imagery.
+ 6. Clustering: Grouping objects together according to some notion of similarity. The one relevant project was: (1) determining the species richness of an area using the number of clusters in a collection of camera trap imagery.
+ 7. Super-resolution: Increasing the resolution of an image. The one relevant project was: (1) increasing the resolution of land surface temperature data using satellite imagery.
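Several of the audio projects above classified spectrogram images. A minimal sketch of computing such a spectrogram with scipy, using a synthetic 1 kHz tone in place of real field recordings:

```python
import numpy as np
from scipy import signal

# Synthetic placeholder audio: one second of a 1 kHz tone sampled at 16 kHz.
sample_rate = 16000
t = np.arange(sample_rate) / sample_rate
audio = np.sin(2 * np.pi * 1000 * t)

# Short-time Fourier transform power: the 2D "image" a classifier would see.
freqs, times, spec = signal.spectrogram(audio, fs=sample_rate, nperseg=256)

# Log-scale the power so quieter structure stays visible, as is common practice.
log_spec = 10 * np.log10(spec + 1e-10)
```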
+ B Key Topics
+ In this section we catalog topics that many of our participants learned during the workshop, either through formal instruction or on their own. We emphasize tools and concepts that were initially unfamiliar to most participants. For each topic, we describe the content and explain why it was important for our participants. See also the list of lectures in Appendix C.
+ B.1 Tools
+ B.1.1 Annotation Tools
+ Content: Using annotation tools to label image [3,8] or audio [2] data.
+ Motivation: Labeled data is essential for training and evaluating computer vision algorithms. Since CV4E Workshop participants were using their own data, many of them needed to learn to use some sort of annotation tool. Furthermore, many of these tools can export labels in the standard formats expected by open-source computer vision libraries.
+ B.1.2 Unix Commands
+ Content: Common Unix commands like ls, pwd, rm, mkdir, rmdir, mv, cat, head, tail, etc. Occasionally, less common commands like chmod or grep.
+ Motivation: Facility with Unix commands is crucial for installing packages, working with virtual machines, and using revision control. Understanding Unix commands also helps to build intuition for core concepts like absolute vs. relative paths.
+ B.1.3 Terminal-Based Text Editing
+ Content: Tools like nano for editing text that is stored on a server from the command line.
+ Motivation: When configuring SSH authentication it is often necessary to edit text files on the VM (e.g. ~/.ssh/config).
+ B.1.4 Terminal Multiplexing
+ Content: Tools like tmux or screen for managing terminal sessions.
+ Motivation: Long-running code (e.g. model training in PyTorch) should be executed in a terminal session that is decoupled from the SSH connection, to avoid being terminated when a laptop is closed or an internet connection is lost.
+ B.1.5 SSH
+ Content: The ssh command and SSH keys. Occasionally, SSH tunneling.
+ Motivation: The ssh command is used to create a terminal session connected to a VM. Related topics like SSH keys are also important for e.g. authenticating terminal-based file transfers and enabling GitHub access. SSH tunneling can be necessary for setting up tools like TensorBoard [14].
+ B.1.6 Terminal-Based File Transfers
+ Content: Tools like scp or rsync for transferring files.
+ Motivation: Command-line tools are the most reliable way to move large amounts of data from one place to another. This is useful for local transfers (e.g. from one hard drive to another) and remote transfers (e.g. from a local hard drive to a storage volume attached to a virtual machine).
+ B.1.7 Revision Control
+ Content: Using GitHub for tracking changes made to code.
+ Motivation: Code for computer vision projects tends to quickly grow in complexity, and it is easy to forget what has changed since the last working version. Tools like GitHub allow earlier versions of the code to be revisited easily if a bug was introduced by some change. In addition, GitHub can be used to move code from a local machine (git push) to a virtual machine (git pull), along with allowing users to download (git clone) open-sourced computer vision repositories.
+ B.1.8 Cloud Computing
+ Content: Interacting with the web interfaces of cloud computing providers. Creating a virtual machine with appropriate resources, e.g. GPUs and storage. Estimating and managing cost.
+ Motivation: One of the most common ways to access GPU resources for computer vision work is to use a VM from a cloud computing provider like Amazon Web Services (AWS) [1] or Microsoft Azure [9]. It is important to understand the benefits (scalability, reliability) and drawbacks (cost) of cloud computing.
+ B.1.9 Virtual Environments
+ Content: Creating and managing virtual environments.
+ Motivation: Computer vision projects typically rely on large pre-existing codebases, which may require particular versions of certain packages to be installed. While the user could change their base installations, a better solution is to create a virtual environment (through e.g. conda) in which the dependencies of the codebase can be installed. Virtual environments are also useful if a "clean reinstall" becomes necessary, because they are easy to create and delete.
+ B.1.10 Python
+ Content: Basic syntax, conditionals, loops, string parsing, file I/O, functions, classes, and data structures.
+ Motivation: Facility with Python is crucial for efficiently working with Python-based deep learning libraries, which the computer vision community uses almost exclusively.
+ B.1.11 Python Libraries
+ Content: Common libraries like numpy, pandas, ipdb, sklearn, and matplotlib.
+ Motivation: Python has many stable, high-quality libraries for numerical computing and data analysis. Libraries like ipdb allow for in-line debugging.
+ B.1.12 Deep Learning Libraries
+ Content: Preferably PyTorch [12], and alternatively TensorFlow [15], for building deep learning systems.
+ Motivation: Modern deep learning libraries are indispensable for developing and training computer vision systems.
+ B.1.13 Image Processing Libraries
+ Content: Libraries and command-line tools like OpenCV [10], ImageMagick [7], and FFmpeg [6].
+ Motivation: These tools are often used for efficient data augmentation and visualization.
+ B.2 Computer Science Concepts
+ There are a few core concepts from computer science that came up frequently throughout the program.
+ B.2.1 Object Oriented Programming
+ Content: Classes and objects. Inheritance, encapsulation, polymorphism.
+ Motivation: Many important libraries assume an understanding of object oriented programming concepts. For instance, one common point of confusion for our participants was the difference between the PyTorch dataset class and a dataset object from that class. Understanding object oriented programming also makes it easier to understand data structures.
+ B.2.2 Data Structures
+ Content: Common data structures (e.g. list, tuple, dictionary, NumPy array, PyTorch tensor) and their methods, casting from one data type to another, checking data structures.
+ Motivation: Unexpected behavior differences between e.g. Python lists, NumPy arrays, and PyTorch tensors can cause significant frustration if data structures are not well understood.
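A toy illustration of such a behavior difference (our own example, not from the workshop materials): the `*` operator repeats a plain Python list, whereas on a NumPy array or PyTorch tensor it would multiply elementwise, so casting between types changes what the same code means.

```python
# A plain Python list treats `*` as repetition, not arithmetic.
nums = [1, 2, 3]
print(nums * 2)             # [1, 2, 3, 1, 2, 3]

# A NumPy array (or PyTorch tensor) would instead compute [2, 4, 6];
# with plain lists the elementwise version needs an explicit loop.
doubled_elementwise = [x * 2 for x in nums]
print(doubled_elementwise)  # [2, 4, 6]
```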
+ B.2.3 Data Types
+ Content: Common data types, e.g. int, float, double, string, and bool.
+ Motivation: Understanding data types provides important context when reading code and can significantly impact data storage size.
+ B.2.4 Namespaces
+ Content: The built-in, global, and local namespaces.
+ Motivation: Namespaces are the answer to many common questions, e.g. why variables defined inside a function are not accessible outside the function.
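A minimal sketch (our own example) of how the three namespaces interact:

```python
counter = 10  # defined in the module's global namespace

def increment():
    result = counter + 1  # 'counter' is found in the global namespace
    return result         # 'result' lives in the call's local namespace

value = increment()
print(value)              # 11
# 'result' is not accessible here: the local namespace vanished when the
# call returned, which is exactly the common question noted above.
print(len("abc"))         # 'len' is resolved in the built-in namespace
```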
+ B.2.5 Mutability
+ Content: Mutable and immutable objects. In-place operations.
+ Motivation: Mutable objects can be changed in place while immutable objects cannot. This is the basis for understanding whether changes made to an object inside a function will affect the object outside of the function.
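A short illustration of this (our own example): mutating a list inside a function is visible to the caller, while operations on an immutable string return a new object.

```python
def append_zero(container):
    container.append(0)   # an in-place operation: mutates the caller's object

scores = [1, 2, 3]        # lists are mutable
append_zero(scores)
print(scores)             # [1, 2, 3, 0] -- the change is visible outside

name = "abc"              # strings are immutable
shouted = name.upper()    # so operations return a new object instead
print(name, shouted)      # abc ABC -- the original is unchanged
```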
+ B.3 Machine Learning Concepts
+ Participants learned different practical and conceptual aspects of computer vision and machine learning depending on their project. However, all participants had to engage with a few core concepts.
+ B.3.1 Generalization
+ Content: The concept of generalization, different types of generalization, identifying a type of generalization that reflects the goals of a project.
+ Motivation: In ecology there are many different notions of generalization, and it is important to choose one that reflects the goals of a project. For instance, in camera trap image classification it might be important to generalize to new locations or to future data from the same locations. These different notions of generalization need to be measured in different ways.
+ B.3.2 Data Splits
+ Content: The role of training, validation, and testing data. Designing appropriate splits to measure the chosen type of generalization.
+ Motivation: Training, validation, and testing splits should be designed to capture an appropriate problem-specific notion of generalization. These splits must then be handled appropriately (e.g. no hyperparameter tuning on the test split) to ensure that performance measurements reflect generalization.
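For example, a split intended to measure generalization to new camera locations must separate by location rather than by individual image. A stdlib sketch on hypothetical records (our own example):

```python
import random

# Hypothetical camera-trap records: (image_id, location_id).
records = [(i, f"loc_{i % 5}") for i in range(100)]

# Hold out entire locations for testing, not individual images.
locations = sorted({loc for _, loc in records})
random.seed(0)
random.shuffle(locations)
test_locations = set(locations[:1])

train = [r for r in records if r[1] not in test_locations]
test = [r for r in records if r[1] in test_locations]
```

An image-level random split would leak every location into both sets, inflating the measured performance for this notion of generalization.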
+ B.3.3 Overfitting
+ Content: Defining and recognizing overfitting. Mitigating overfitting using regularization techniques.
+ Motivation: All participants were working with deep learning, for which overfitting is always a significant concern.
+ B.3.4 Evaluation Metrics
+ Content: Common evaluation metrics for different tasks, their strengths and limitations, choosing metrics that reflect high-level goals.
+ Motivation: Appropriate metrics are vital for determining which approaches work best and deciding if a computer vision system is "good enough" to be used for a real application.
+ B.3.5 Deep Learning
+ Content: Neural networks, loss functions, minibatch gradient descent.
+ Motivation: All modern computer vision methods rely on deep learning. Since our participants were building and troubleshooting computer vision systems, they needed to understand deep learning basics as well. Loss functions were a particular focus, since changing the loss is one of the primary ways of adapting an existing method to a new problem.
+ B.3.6 Representations
+ Content: Image embeddings, distances in embedding space, pretraining, transfer learning.
+ Motivation: ImageNet pretraining is ubiquitous in modern computer vision, but many of our participants work in specialized domains for which ImageNet pretraining may not be appropriate. Domain-specific pretraining requires an understanding of representation learning. The concept of image embeddings is also useful for understanding many common computer vision algorithms (e.g. metric learning) and visualization techniques (e.g. t-SNE).
+ B.4 Other Skills
+ B.4.1 Critically Reading Machine Learning Papers
+ Content: Understanding machine learning terminology and paper structure, critically interpreting claims, evaluating complexity vs. performance trade-offs.
+ Motivation: Exploring the literature in a new field is always daunting. This is particularly challenging in machine learning, where papers may be over-enthusiastically written, necessitating extra vigilance from the reader to clearly understand the drawbacks and benefits of a method [27].
+ B.4.2 Selecting "Good" Open Source Libraries
+ Content: Recognizing markers of quality in open source code.
+ Motivation: There is plenty of open-source computer vision code, but not all of it is reliable or well-maintained. Participants must learn to check indicators of code quality, e.g. how many users a library has or how often the developers fix bugs.
+ B.4.3 Digging in to Libraries
+ Content: Reading documentation, finding the code that handles a certain task, understanding how components of a codebase interact.
+ Motivation: Computer vision projects depend on numerous complex but (generally) well-documented libraries. It is important to be able to understand the documentation. Sometimes it also becomes necessary to locate and inspect the piece of code being documented (e.g. a function from some library) to understand how it works in detail.
+ B.4.4 General Troubleshooting
+ Content: Errors vs. warnings, searching for more information about error messages.
+ Motivation: Errors and warnings are common when e.g. installing packages or testing new code. One of the most important skills in any programming activity is the ability to use a search engine to understand an error message. This involves locating the appropriate part of an error message to use as a search term, reading through the results, and choosing an appropriate next step.
+ B.4.5 Debugging Python Code
+ Content: Types of errors, finding the source of an error, print statement debugging.
+ Motivation: For a given line of code, any number of errors could arise. Understanding the different types of Python errors is helpful for pinpointing the root cause. Print statement debugging is also extremely useful for troubleshooting code running on a remote machine.
+ C List of Lectures
+ 1. Intro and Logistics (Sara Beery)
+ 2. Dataset Prototyping and Visualization (Jason Parham)
+ 3. Working on the Cloud (Suzanne Stathatos)
+ 4. Data Splitting and Avoiding Data Poisoning (Sara Beery)
+ 5. Training your Model: Deciding on Configurations, Launching, Monitoring, Checkpointing, and Keeping Runs Organized (Benjamin Kellenberger)
+ 6. Working with Open-Source CV Codebases: Choosing a Baseline Model and Custom Data Loading (Sara Beery)
+ 7. Evaluation Metrics (Elijah Cole)
+ 8. Offline Evaluation and Analysis (Sara Beery)
+ 9. What's next? Rules of Thumb to Improve Results (Benjamin Kellenberger)
+ 10. Data Augmentation (Björn Lütjens)
+ 11. Expanding and Improving Training Datasets with Models: Weak Supervision, Self Supervision, Targeted Relabeling, and Anomaly Detection (Tarun Sharma)
+ 12. Fair Comparisons and Ablation Studies: Understanding What is Important (Elijah Cole)
+ 13. Efficient Models: Speed vs. Accuracy (Justin Kay)
+ 14. Serving, Hosting, and Deploying Models and Quality Control (Jason Parham)
+ D List of Reading Groups
+ 1. Time Series, Spectral Transforms, and Remote Sensing
+ 2. Data Imbalance & Long Tail Distributions
+ 3. Weak Supervision, Unsupervised Learning, Fine-tuning & Transfer Learning
+ 4. Bias & Domain Shift and Generalization
INE0T4oBgHgl3EQfRwA9/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
KtFIT4oBgHgl3EQfaisW/content/tmp_files/2301.11257v1.pdf.txt ADDED
@@ -0,0 +1,1740 @@
+
+ A Benchmark Study using various Machine Learning
+ Models for Predicting Covid-19 Trends
+
+ D. Kamelesun, R. Saranya, P. Kathiravan
+ Department of Computer Science, Central University of Tamil Nadu, India
+
+ Abstract
+ Machine learning and deep learning play vital roles in predicting diseases in the medical field. Machine learning algorithms are widely classified as supervised, unsupervised, and reinforcement learning. This paper contains a detailed description of our experimental research work, in which we used supervised machine-learning algorithms to build our models for outbreaks of the novel coronavirus, which has spread over the whole world and caused many deaths; it is one of the most disastrous pandemics in the history of the world. People suffered physically and economically to survive the lockdown. This work aims to better understand how machine learning, ensemble, and deep learning models work and how they are implemented on a real dataset. In this work, we analyze the current trend or pattern of the coronavirus and then predict the future number of COVID-19 confirmed (new) cases by training models on the past COVID-19 dataset, using machine learning algorithms such as Linear Regression, Polynomial Regression, K-nearest neighbors, Decision Tree, Support Vector Regression, and Random Forest.
+ The Decision Tree and Random Forest algorithms perform better than SVR in this work. The performance of SVR and Lasso Regression is low in all prediction areas, because it is challenging for SVR to separate the data using a hyperplane for this type of problem, so SVR mostly gives lower performance here. Ensemble models (Voting, Bagging, and Stacking) and a deep learning model (ANN) also predict well. After the prediction, we evaluated the models using MAE, MSE, RMSE, and MAPE. This work aims to find the trend/pattern of COVID-19.
+ 1. Introduction
+
+ The first known case of the novel coronavirus was reported in Wuhan, China. The World Health Organisation (WHO) was informed about the case of an unknown virus caused in Wuhan in December 2019. The symptoms of the coronavirus are cough and fever, and coughing spreads this virus (Jignesh Chowdary et al., 2020; Liu, 2020; Su et al., 2022; Tiwari et al., 2020). This virus spread over most countries in the world. In our country, people were also terribly affected by this virus. As per the National Center for Biotechnology Information record, in the last 20 years various outbreaks have been recorded, such as SARS-CoV in 2002-2004, H1N1 Influenza in 2009, and MERS-CoV in Saudi Arabia in 2012, declared pandemics by WHO.
+ In this work, we took the COVID-19 dataset from the Kaggle site. We built machine learning models to predict the current trend/pattern of COVID-19 and the number of confirmed cases, using machine-learning algorithms such as Linear Regression (Mojjada et al., 2021), Polynomial Regression (Shaikh et al., 2021; Yadav et al., 2020), SVR (Rivas-Perea et al., 2013), Lasso Regression (Jolliffe et al., 2003; Mojjada et al., 2021; Rustam et al., 2020), K-NN, Decision Tree (DT), and Random Forest; ensemble techniques such as Voting, Bagging, and Stacking, and deep learning, were also used to build models on the current past COVID-19 dataset. After training, we evaluated the models using the evaluation metrics MAE, MSE, RMSE, and MAPE to find their accuracy (Kanimozhi et al., 2020; Liu, 2020; Verma & Pal, 2020). The following objectives were addressed in this paper.
+ Objective 1: To analyze the current trend of COVID-19 confirmed cases from the past COVID-19 data.
+ Objective 2: To predict the future number of confirmed COVID-19 cases by training the model on the past COVID-19 dataset.
+
+ The related work, the proposed methodology, the model evaluation using various metrics, and the conclusion are in the rest of this paper.
+
+ 2. Related work
+ S.no | Authors | Methodology | Finding
+ 1 | (Gambhir et al., 2020) | SVR, Polynomial Regression | Analyzes current trends/patterns of COVID-19
+ 2 | (Mandayam et al., 2020) | Linear Regression and SVR | Predicts the future number of positive cases
+ 3 | (Rustam et al., 2020) | Linear Regression, LASSO Regression, SVR | No. of newly infected cases, no. of deaths, and no. of recoveries in the next 10 days
+ 4 | (Nikhil et al., 2021) | Polynomial Regression | Predicting the upcoming cases for the next 25 days
+ 5 | (Liu, 2020) | Linear Regression, Logistic Regression, and RNN | Predicts pandemic data of the U.S.
+ 6 | (Shaikh et al., 2021) | Linear Regression and Polynomial Regression | Analysis, prediction, and time series forecasting
+ 7 | (Painuli et al., 2021) | Random Forest and Extra Tree classifiers | One model predicts the chance of being infected; the other forecasts the number of positive cases
+ 8 | (Mojjada et al., 2021) | Linear Regression, Lasso Regression, SVM, Exponential Smoothing | The number of newly infected COVID-19 people, mortality rates, and recovered COVID-19 estimates in the next 10 days
+
+ 3. Proposed methodology
+
+ This research contains two main phases. The first phase is training, and the other is testing. In the first phase, our dataset had many null or missing values, so we used pre-processing to clean the data. We then used feature scaling (MinMax, AbsoluteMax, Normalization, Standardization, Robust Scaling) to modify the data from its original form, because the machine can't be trained well on the data in its original form (Boente & Salibian-Barrera, 2015; Lau & Baldwin, 2016; Lin et al., 2016; Storcheus, Dmitry; Rostamizadeh, Afshin; Kumar, 2015); feature scaling lets us modify the data conveniently for building the model. After feature scaling, we split the data into training and test sets: the training data is used to build the model and the test data to evaluate the trained model, so we used the split function to divide the dataset into 80% training data and 20% test data. For our experimental research work, we took the date and country attributes from the dataset as the independent variables. Both the date and country attribute values are of string type, so we converted the date attribute into numeric data by splitting it into date, month, and year, placed in separate attributes, and then converted the country attribute into numeric data with the help of label encoding, which gives a numeric value for each country (for example India-1, USA-2, UK-3, and so on for all the countries in the dataset). We analyzed the current trend or pattern of the coronavirus and then predicted the further future of COVID-19 confirmed or new cases by training on the past COVID-19 dataset using machine learning algorithms, ensemble models, and deep learning: Linear Regression, Polynomial Regression, K-nearest neighbors, Lasso Regression, Decision Tree, Support Vector Machine, and Random Forest, plus Artificial Neural Networks (ANN), and we improved the model evaluation with the ensemble methods (Voting, Bagging, Stacking).
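The pre-processing steps above can be sketched as follows (an illustrative stdlib example on a few hypothetical rows, not the exact code used in this work):

```python
from datetime import date

# Hypothetical rows of (date, country, daily new cases).
rows = [(date(2020, 3, 1), "India", 3),
        (date(2020, 3, 2), "USA", 25),
        (date(2020, 3, 3), "UK", 12),
        (date(2020, 3, 4), "India", 7),
        (date(2020, 3, 5), "USA", 40)]

# Label-encode the country column: each country gets a numeric code.
codes = {c: i + 1 for i, c in enumerate(dict.fromkeys(r[1] for r in rows))}

# Split the date into numeric day / month / year features.
X = [[r[0].day, r[0].month, r[0].year, codes[r[1]]] for r in rows]
y = [r[2] for r in rows]

# Min-max scale the target into [0, 1].
lo, hi = min(y), max(y)
y_scaled = [(v - lo) / (hi - lo) for v in y]

# 80% / 20% train-test split.
cut = int(0.8 * len(X))
X_train, X_test = X[:cut], X[cut:]
y_train, y_test = y_scaled[:cut], y_scaled[cut:]
```

In practice the same steps are typically done with pandas and scikit-learn (e.g. LabelEncoder, MinMaxScaler, and train_test_split).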
+
+ In the second phase, we first validated the models using the metrics MAE, MSE, RMSE, and MAPE to observe each model's accuracy. The Decision Tree and Random Forest algorithms perform better than the other algorithms. The performance of SVR is low in all prediction areas, because it is challenging for SVR to separate the data using a hyperplane for this type of problem, so SVR mostly gives a low performance here. What is the use of this work? If COVID-19 comes again, we have to know its current trend/pattern, so we need the current past dataset to train the model. After preparing the model, we analyze the current trend of COVID-19 and predict the future number of confirmed coronavirus cases in a day; how this works is shown in the architecture diagram below (Fig. 1).
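The four metrics used for validation can be written out directly; a stdlib sketch of their standard formulas (not the authors' code):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean squared error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(mse(y_true, y_pred))

def mape(y_true, y_pred):
    """Mean absolute percentage error; assumes no true value is zero."""
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [100, 200, 300]
y_pred = [110, 190, 330]
print(mae(y_true, y_pred))   # 16.666...
```

scikit-learn provides equivalents such as mean_absolute_error and mean_squared_error.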
180
+
181
+
182
+
183
+
184
+
185
+
186
+
187
+
188
+
189
+
190
+
191
+
192
+
193
+ Covid-19 Dataset
194
+ Data Pre-Processing Feature Scaling
195
+
196
+
197
+
198
+ Data Splitting
199
+
200
+
201
+
202
+ Machine Learning and Deep Learning Ensemble Model
203
+ Trained Models
204
+
205
+ Data
206
+ Preprocessig
207
+ Label
208
+ Encoding
209
+ remove the
210
+ misssing
211
+ values
212
+ Feature
213
+ Scaling
214
+ MinMax
215
+ AbslouteMax
216
+ Standazation
217
+ Normalization
218
+ Robust
219
+ Linear
220
+ Regression
221
+ Polynomial
222
+ Regression
223
+ K-NN
224
+ SVR
225
+ Random
226
+ Forest
227
+ Decsion Tree
228
+ LASSO
229
+ ANN
230
+ Ensembling
231
+ Voting
232
+ Stacking
233
+ Bagging
234
+
235
+
236
+
237
+
238
+
239
+ Figure 1. Architecture Diagram
240
+
241
+
242
+
243
+
244
+
245
+
246
+
+ 3.1 Dataset Collection
+
+ 1. Dataset Description
+ Dataset-1
+ In this research, the dataset used was taken from the Kaggle website (Gambhir et al., 2020) and contains the COVID-19 details. In this dataset, we have 201 countries' COVID-19 data. Each country has data from 15 February 2020 to 02nd December 2021 in our dataset. The attributes are the date, the country, how many people were affected by COVID-19 daily (daily-new-cases), the total number of affected people up to that date (cumulative-total-cases), how many people are under treatment on that date (active-cases), the total number of death cases (cumulative-total-deaths), and how many people die daily (daily-new-deaths). The entire length of the dataset is 145,220.
+
+ TOP 10 Countries | Cumulative total confirmed cases | TOP 10 Countries | Cumulative total death cases
+ Global | 264440620 | Global | 5249736
+ USA | 49716825 | USA | 806398
+ INDIA | 34615757 | BRAZIL | 615225
+ BRAZIL | 22118782 | INDIA | 470115
+ UK | 10329063 | MEXICO | 294428
+ RUSSIA | 9703107 | RUSSIA | 277640
+ TURKEY | 8839891 | PERU | 201282
+ FRANCE | 7773530 | UK | 145281
+ IRAN | 6125596 | INDONESIA | 143850
+ GERMANY | 6026796 | ITALY | 134003
+ ARGENTINA | 5335310 | IRAN | 129988
+
+ Dataset-2
+ In this research, the dataset used was taken from the Kaggle website (Mandayam et al., 2020) and contains the COVID-19 details. In this dataset, we have 193 countries' COVID-19 data. Each country has data from 16 November 2020 to 12 September 2021 in our dataset. The attributes are date, country, CountryAlphaCode, confirmed cases (maximum of 41 million), death cases (maximum of 660k), recoveries (maximum of 31 million), ECR, GRTStringencyIndex, DaySinceFirstCases, DaySince100thCases, ConfirmedPopPct, DeathPopPct, and RecoveriesPopPct. The total length of the dataset is 2952600.
+ TOP 10 Countries | Cumulative total confirmed cases | TOP 10 Countries | Cumulative total death cases | TOP 10 Countries | Cumulative total recovery cases
+ Global | 3.03868E+13 | Global | 6.18299E+11 | Global | 4.08442E+12
+ US | 2.65162E+13 | US | 5.10454E+11 | Brazil | 2.67528E+12
+ Brazil | 3.68155E+12 | Brazil | 1.01685E+11 | US | 1.34381E+12
+ Canada | 68033025900 | China | 2712603648 | China | 45989925888
+ China | 55937676288 | Canada | 1666076244 | India | 4859387857
+ United Kingdom | 34461082300 | United Kingdom | 1067745075 | Russia | 1128064202
+ India | 6529623206 | India | 87802534 | Turkey | 919259007
+ Russia | 1562989024 | Mexico | 68604089 | Italy | 759237934
+ France | 1511222838 | Peru | 55929784 | Colombia | 735029503
+ Turkey | 1228359396 | Italy | 39566917 | Argentina | 711610324
+ Spain | 1100245487 | France | 34621623 | Germany | 69524824
+
+ We had missing or null values in our dataset, so we eliminated them from the dataset to build a good machine learning model. We used the dropna() function to delete the rows with missing values from the dataset. The graphs below (Figures 2 and 3) show how many null values are in each attribute of the datasets.
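The idea behind dropna() can be illustrated with a small stdlib sketch (hypothetical rows, not our actual data):

```python
rows = [
    {"date": "2020-03-01", "country": "India", "daily_new_cases": 3},
    {"date": "2020-03-02", "country": "India", "daily_new_cases": None},
    {"date": "2020-03-03", "country": "India", "daily_new_cases": 7},
]

# Keep only the rows in which every attribute has a value, which is
# what DataFrame.dropna() does for a pandas DataFrame.
clean = [r for r in rows if all(v is not None for v in r.values())]
print(len(clean))  # 2 rows survive
```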
+
+ Dataset | Before Eliminating Null Values | After Eliminating Null Values
+ Dataset-1 | null counts per attribute (Figure 2) | no null values
+ Dataset-2 | null counts per attribute (Figure 3) | no null values
+ Table 2. Before and after eliminating null values
+ 2. Summary of the dataset
+ The COVID-19 datasets mix integer and real-valued attributes, and they are used to evaluate the machine learning models.
+ Figure 2. Null values in dataset-1
+ Figure 3. Null values of dataset-2
356
+ Figure3. Null values of dataset-2
357
[Figure residue: bar plots and per-column null-count listings. Dataset-1 before cleaning: daily new cases 7491, active_cases 4599, cumulative total deaths 7227, daily new deaths 22791 null values, all other columns 0; after cleaning, every column shows 0. Dataset-2 before cleaning: GRTStringencyIndex 130634, confirmed_PopPct 1200, deaths_PopPct 1200, recoveries_PopPct 1200 null values, all other columns 0; after cleaning, every column shows 0.]
| Attributes | Dataset-1 | Dataset-2 |
|---|---|---|
| Data set characteristics | Covid-19 | Covid-19 |
| Attribute characteristics | Integer and alphabetic values | Integer and alphabetic values |
| No. of attributes | 07 | 16 |
| No. of instances | 145220 | 2952600 |
| Missing values or errors present | yes | yes |

Table 3. Attributes and details of the dataset
3 Data Visualisation
Dataset-1

This map was created from the covid-19 global dataset. It shows the total confirmed cases of each country worldwide, with countries coloured according to their confirmed cases on each date. The plot was created with Plotly Express (import plotly.express as px) (Plotly Python Graphing Library, n.d.; Preacher et al., 2006). The colour scale, from blue to yellow, refers to the range of covid-19 confirmed cases: dark blue indicates zero cases, while yellow indicates between 15 million and 20 million cases.

Figure 4. Confirmed cases all over the world (dataset-1)
[Figure residue: interactive choropleth "Global Spread of Coronavirus" with a cumulative_total_cases colour bar (5M to 20M) and a date slider; the hover tooltip shows, e.g., date=01-01-2021, cumulative total cases=28.427k for Australia.]
This map works like the previous confirmed-cases map. It shows the total death cases of each country worldwide, with each country coloured according to its death cases on each date.

Figure 5. Death cases all over the world (dataset-1)
Dataset-2
This map was created from the covid-19 global dataset. It shows the total confirmed cases of each country worldwide, coloured according to the confirmed cases, and was likewise created with Plotly Express (import plotly.express as px). The colour scale, from blue to yellow, refers to the range of covid-19 confirmed cases: the lightest shade indicates zero cases, while yellow indicates between 1.5 billion and 2 billion cases.

Figure 6. Confirmed cases all over the world (dataset-2)
This map also works like the previous confirmed-cases map, showing the total death cases of each country worldwide, coloured according to the range of death cases in each country.

[Figure residue: choropleths "Global Spread of Coronavirus Death Cases" (cumulative_total_deaths colour bar, 200k to 600k, with a date slider; hover shows date=04-11-2021, country=Japan) and "Confirmed Cases" (colour bar 0.5B to 2B, US at 2.12967B).]

Figure 7. Death cases all over the world (dataset-2)
Global spread of covid-19: top 5 countries in the graph
Dataset-1
We took five countries from this dataset to plot this graph, with confirmed-cases data on the y-axis and the date on the x-axis. From this graph, we can see the confirmed cases on the respective date for these countries.

Figure 8. Five countries' confirmed-cases data shown in a graph for dataset-1
Dataset-2

[Figure residue: line chart "Covid-19 by country" for USA, UK, India, Italy, Russia, and Spain, with daily deaths on the y-axis (up to 400k; e.g., 366.499k on 09-05-2021) and the date on the x-axis, plus a deaths colour bar (5M to 30M).]
From this graph, you can see the confirmed-cases data of the five countries against the respective dates, with confirmed cases on the y-axis and the date on the x-axis. Hovering the mouse over a graph line shows the confirmed cases on the respective date for that country.

Figure 9. Five countries' confirmed-cases data shown in a graph for dataset-2
609
+ 3.2.Data Pre-processing
610
+ 1 Feature Scaling
611
+ a) Absolute Maximum Scaling
612
+ Find the absolute maximum value in the column and divide all the values in the queue
613
+ by that maximum value. Our data will lie between -1 and 1 (Mojjada et al., 2021; Rustam et
614
+ al., 2020).
615
+ Absolute Maximum Scaling= Each value in the column / Maximum value of the column (1)
616
+ In our experimental research work, we used this Feature scaling for the dependent and
617
+ independent variables to enhance the model by changing the values of the dataset into the
618
+ range of -1 to 1 because these data will understand by engines easily. If I do the model
619
+ training with my original data, the model may not be trained well.
b) MinMax Scaling
Subtract each column value by the minimum value of the column, and then divide by the difference between the maximum and minimum values of the column. By this method, the data will lie between 0 and 1. If there are outliers in the dataset, this scaling can compress all the inliers into a narrow range such as [0, 0.005].

MinMax Scaling = (X - Min(X)) / (Max(X) - Min(X))   (2)

We used this feature scaling to enhance the model in our experimental research work by changing the values of the dependent and independent variables into the range 0 to 1 so they are machine-readable (Sklearn.Preprocessing.Minmax_scale — Scikit-Learn 1.1.3 Documentation, n.d.).
[Figure residue: line chart "Covid-19 by country" for US, Brazil, India, Italy, Russia, and Spain, with confirmed cases on the y-axis (peak 29.87613M for the US on Mar 22, 2021) and the date on the x-axis (Mar 2020 to Sep 2021).]
c) Normalization
In this case, we use the mean value of the column instead of the minimum value.

Normalization Scaling = (X - Mean(X)) / (Max(X) - Min(X))   (3)

In our experimental research work, we have columns with different values that are not in sequential order. We used this scaling to enhance the model by changing the values of the dataset into the range of 0 to 1 (Lin et al., 2016; Storcheus, Dmitry; Rostamizadeh, Afshin; Kumar, 2015).
d) Standardization
Each column value has the mean of the column subtracted from it and is then divided by the standard deviation; the result is not restricted to a specific range. In this experimental research work, we used this scaling to enhance the model by standardizing the values of the dataset to zero mean and unit variance (Sklearn.Preprocessing.StandardScaler — Scikit-Learn 1.1.3 Documentation, n.d.).

Standardization Scaling = (X - Mean(X)) / σ   (4)
e) Robust Scaling
Subtract the data points by the median value and divide them by the Inter-Quartile Range (IQR). The IQR is the difference between the dataset's first and third quartiles (Barros & Hirakata, 2003; Boente & Salibian-Barrera, 2015).

Robust Scaling = (X - Median(X)) / IQR   (5)
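The five scalings above can be sketched directly from equations (1) to (5) with NumPy (a toy column stands in for the real data):

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 8.0, 10.0])  # a toy feature column

abs_max = x / np.max(np.abs(x))                    # eq. (1): range [-1, 1]
min_max = (x - x.min()) / (x.max() - x.min())      # eq. (2): range [0, 1]
mean_norm = (x - x.mean()) / (x.max() - x.min())   # eq. (3)
standard = (x - x.mean()) / x.std()                # eq. (4): mean 0, std 1
iqr = np.percentile(x, 75) - np.percentile(x, 25)  # Q3 - Q1
robust = (x - np.median(x)) / iqr                  # eq. (5): median maps to 0

print(min_max)
```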
2 Date Splitting

In our dataset, the date is a string value that cannot be read by the machine, so we changed the date into an understandable format by splitting it into three columns: day, month, and year.
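A minimal pandas sketch of this split (the dd-mm-yyyy format matches the sample rows shown in the extracted table):

```python
import pandas as pd

df = pd.DataFrame({"date": ["25-02-2020", "02-12-2021"]})

# Parse the dd-mm-yyyy strings once, then expose day/month/year as integers.
parsed = pd.to_datetime(df["date"], format="%d-%m-%Y")
df["Dates"] = parsed.dt.day
df["Month"] = parsed.dt.month
df["Year"] = parsed.dt.year

print(df)
```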
3 Label Encoding

[Table residue: sample rows of the pre-processed dataset (119,189 rows) with columns date, country, cumulative_total_cases, daily_new_cases, active_cases, cumulative_total_deaths, daily_new_deaths, Dates, Month, Year, and the encoded Country column (e.g., Afghanistan → 0, Zimbabwe → 200).]
Label encoding converts alphabetic values into numeric form so that a machine-learning algorithm can easily understand them (Sklearn.Preprocessing.LabelEncoder — Scikit-Learn 1.1.3 Documentation, n.d.). Categorical values such as India, USA, and U.K. are transformed into 0, 1, 2, and so on. In our experimental research work, we applied this method to the country column, because the machine learning algorithm cannot read alphabetic values.
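A sketch with scikit-learn's LabelEncoder; note that the classes are sorted alphabetically before being numbered, so the exact codes depend on the country names present:

```python
from sklearn.preprocessing import LabelEncoder

countries = ["India", "USA", "UK", "India", "USA"]

le = LabelEncoder()
codes = le.fit_transform(countries)  # alphabetical: India=0, UK=1, USA=2

print(list(le.classes_))  # ['India', 'UK', 'USA']
print(list(codes))        # [0, 2, 1, 0, 2]
```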
4 Model Building
4.1 Machine learning

a) Linear Regression
This algorithm finds how the dependent variable changes according to the independent variable and models the relationship between the dependent and independent variables (Mojjada et al., 2021; Rustam et al., 2020; Sardinha & Catalán, 2018; Yadav et al., 2020). It fits a straight line that covers most of the data points of the dependent variable. Linear regression makes predictions for continuous/real/numeric variables such as sales, salary, age, and product price.
There are two types of linear regression:
Simple linear regression - predicts the value from a single independent variable.
Multiple linear regression - predicts the value from more than one independent variable.

Y = mX + b   (6)

Y is the dependent variable
X is the independent variable
m is the slope of the line
b is the intercept
In this experimental research work, we used multiple linear regression (Rath et al., 2020; Sardinha & Catalán, 2018). We took the date and country as the independent variables and the confirmed cases as the dependent variable. With this algorithm, we can predict the confirmed cases of covid-19 by giving input values for the independent variables to the trained model.
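A minimal multiple-linear-regression sketch with scikit-learn (toy features stand in for the encoded date and country columns):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data generated from y = 2*x1 + 3*x2 + 1, so the fit is exact.
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3]], dtype=float)
y = 2 * X[:, 0] + 3 * X[:, 1] + 1

model = LinearRegression().fit(X, y)
pred = model.predict([[4.0, 5.0]])  # 2*4 + 3*5 + 1 = 24

print(model.coef_, model.intercept_, pred)
```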
b) Polynomial Regression
Polynomial regression is also known as a special case of multiple linear regression in machine learning, because the multiple linear regression equation is transformed into polynomial regression by adding specific polynomial terms. It is a linear model modified in several ways to improve accuracy, used when the training dataset is non-linear: instead of using linear regression to fit complex, non-linear functions and datasets, the original features are transformed into polynomial features of the required degree (2, 3, ..., n) and then used in a linear model (Nikhil et al., 2021; Shaikh et al., 2021; Yadav et al., 2020).

y = b0 + b1x + b2x^2 + b3x^3 + ... + bnx^n   (7)

This formula works like the linear regression formula, but raising the features to the nth degree plots a line closer to the data points. The algorithm also works like linear regression, but we can change the predicted output by changing the degree of the polynomial regression; with this change, we can produce an optimized predicted output for the given input.
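A sketch of degree-2 polynomial regression with scikit-learn, fitting the non-linear target y = x² exactly:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = X.ravel() ** 2  # a non-linear target a straight line cannot fit

# Expand features to [1, x, x^2], then fit an ordinary linear model on them.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)

print(model.predict([[5.0]]))  # close to 25
```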
c) K-NN

KNN - K-Nearest Neighbour
The K-NN algorithm first finds the similarity between the new case and the existing cases, and then places the new case in the category most similar to the existing categories. After storing all of the previous data, a new data point is categorized with the K-NN algorithm based on the similarity, or distance, of the data points, so the model quickly recognizes new cases and places them in the most related category.
The Euclidean distance between the data points is calculated; the Euclidean distance is the distance between two points, and the data points are classified based on it.

d = √((x2 - x1)² + (y2 - y1)²)   (8)

An example makes the formula concrete:

d = √((age2 - age1)² + (gender2 - gender1)²)   (9)

This algorithm is typically used for classification problems, but we used it here for a regression-type problem; it predicted the confirmed cases but did not perform well.
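A K-NN regression sketch with scikit-learn: the prediction for a query point is the mean target of its k nearest neighbours (Euclidean distance by default):

```python
from sklearn.neighbors import KNeighborsRegressor

X = [[1.0], [2.0], [3.0], [4.0], [5.0]]
y = [10.0, 20.0, 30.0, 40.0, 50.0]

# k=2: the two nearest neighbours of 2.5 are x=2 and x=3.
knn = KNeighborsRegressor(n_neighbors=2).fit(X, y)
pred = knn.predict([[2.5]])

print(pred)  # [25.] -> mean of 20 and 30
```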
d) Decision tree
A decision tree is one of the most popular supervised learning methods for solving classification and regression problems (Deng et al., 2019; Safavian & Landgrebe, 1991). It is a tree-structured classifier in which internal nodes stand for a dataset's features, branches for the decision-making process, and each leaf node for the classification result (Alghamdi & Alfalqi, 2015; Verma & Pal, 2020); it splits the dataset into a tree-like structure for decision-making.
The decision node and the leaf node are the two kinds of decision tree nodes: decision nodes are used to make decisions and have numerous branches, while leaf nodes are the results of decisions and have no further branches. The CART algorithm divides a node into sub-nodes based on the Gini index value. It starts with the training set as a root node and, after successfully splitting it in two, breaks the subsets using the same logic, splitting the sub-subsets recursively until it finds that further splitting will not produce any purer sub-nodes.

We used this algorithm in our experimental research work to build the model and predict the confirmed cases of covid-19; the evaluation metrics proved that it worked well on our experimental dataset.
e) Random Forest
Random Forest is a classifier that works like the decision tree algorithm but uses many decision trees on different subsets of the input dataset; each tree produces a prediction, and the results are averaged (or voted on) to increase the predicted accuracy of the model (Ankit & Saleena, 2018; Hossain et al., 2021; Painuli et al., 2021; Pokharel & Deardon, 2014). This algorithm can improve accuracy and performance over the decision tree model: more trees in the random forest result in increased accuracy and high performance and help avoid the problem of overfitting. In our experimental research work, we used this algorithm to build the model and predict the confirmed cases of covid-19; it performed even better than the decision tree, as proved by the evaluation metrics.
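A side-by-side sketch of the two tree models in scikit-learn (a tiny synthetic regression task stands in for the covid-19 data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X.ravel() + rng.normal(0, 0.5, size=200)  # noisy linear target

tree = DecisionTreeRegressor(random_state=0).fit(X, y)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# The forest averages 100 trees, smoothing out single-tree overfitting.
print(tree.predict([[5.0]]), forest.predict([[5.0]]))
```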
f) SVM
The Support Vector Machine (SVM) is a popular supervised learning algorithm used for classification and regression problems, though it is primarily used in machine learning for classification. The SVM algorithm aims to find the best line or decision boundary for partitioning n-dimensional space so that new data points can easily be placed in the correct category; this best decision boundary is called a hyperplane (Education, 2021; Oumina et al., 2020; Rustam et al., 2020; Suhasini et al., n.d.).
Support Vector Regression (SVR) is a supervised learning algorithm for predicting continuous values (Rivas-Perea et al., 2013). It operates on the same principles as SVMs: SVR's basic concept is to find the best-fit line, which in SVR is the hyperplane covering the most significant number of points. We used this algorithm to create the model and predict the confirmed cases of covid-19. However, SVR performance was low, because it is challenging to construct a hyperplane for all the data points in the dataset.
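A brief SVR sketch with scikit-learn (an RBF kernel on a toy linear target; the hyperparameters here are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.svm import SVR

X = np.arange(0, 10, 0.5).reshape(-1, 1)
y = 2.0 * X.ravel()

# epsilon defines the tube around the fit inside which errors are ignored.
svr = SVR(kernel="rbf", C=100.0, epsilon=0.1).fit(X, y)
pred = svr.predict([[4.0]])

print(pred)  # close to 8
```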
4.2 Ensembling Models

a) Voting
Voting classifiers and regressors are both ensemble methods whose predictions are simply an aggregation of the ensemble's predictions (Ankit & Saleena, 2018; Hammar et al., 2019). An ensemble is a collection of predictors, so these models are made up of multiple predictors, and the model aggregates each of these predictors' predictions into a final one (Agnihotri et al., 2019). In this experimental research work, we used voting for the regression model: we took three machine learning models (the K-NN model, the decision tree model, and the random forest model) and calculated the result by averaging the outputs of these models.
b) Stacking

Stacking is a popular ensemble modeling technique in which the predictions of multiple models are used to build a new model and enhance performance (Kim et al., 2021; Verma & Pal, 2020). We used the stacking technique in our experimental research to improve the model: we combined three machine learning models (the K-NN model, the decision tree model, and the random forest model) to build a new model and improve performance.
c) Bagging

Bagging, also known as bootstrap aggregation, is an ensemble learning technique that helps machine learning algorithms improve their performance and accuracy. It is used to deal with bias-variance trade-offs and reduces a prediction model's variance. Bagging prevents data overfitting and is used in both regression and classification models, particularly with decision tree algorithms (Dong & Qian, 2022).

In this experimental research work, we used bagging for the regression model to enhance the performance of the previously used model: we took the decision tree algorithm and improved its model performance.
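The three ensembling strategies can be sketched with scikit-learn's ready-made meta-estimators (the base models mirror the ones named above; sizes and seeds are illustrative):

```python
import numpy as np
from sklearn.ensemble import (BaggingRegressor, RandomForestRegressor,
                              StackingRegressor, VotingRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(300, 1))
y = 3.0 * X.ravel() + rng.normal(0, 0.3, size=300)

base = [("knn", KNeighborsRegressor()),
        ("tree", DecisionTreeRegressor(random_state=0)),
        ("forest", RandomForestRegressor(n_estimators=50, random_state=0))]

voting = VotingRegressor(base).fit(X, y)            # averages the three outputs
stacking = StackingRegressor(base,                  # meta-learner on top
                             final_estimator=LinearRegression()).fit(X, y)
bagging = BaggingRegressor(DecisionTreeRegressor(random_state=0),
                           n_estimators=50, random_state=0).fit(X, y)

for m in (voting, stacking, bagging):
    print(type(m).__name__, m.predict([[5.0]]))
```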
4.3 Deep Learning

a) ANN

Artificial Neural Networks (ANN) are multi-layer, fully connected neural nets made up of an input layer, several hidden layers, and an output layer, where every node in one layer is linked to every node in the next. Like the human brain, artificial neural networks have neurons that are interconnected across the various layers of the network; these neurons are called nodes (Kathiravan & Saranya, 2021; Nan & Gao, 2018; Oumina et al., 2020; Ozturk et al., 2020).
In this experimental research work, we used this deep learning algorithm to build the model and predict the confirmed cases of covid-19. This algorithm also performed very well on our experimental dataset.
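The text does not name the ANN framework used; as one possible sketch, scikit-learn's MLPRegressor gives a small fully connected network of the kind described (layer sizes, solver, and scaling choices here are illustrative assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(400, 1))
y = 3.0 * X.ravel()

# Scale inputs first (as in Section 3.2), then fit a 2-hidden-layer net.
Xs = MinMaxScaler().fit_transform(X)
ann = MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                   max_iter=2000, random_state=0).fit(Xs, y / 30.0)

pred = ann.predict([[0.5]]) * 30.0  # roughly x=5 after undoing the scaling
print(pred)
```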
5. Evaluation

| Algorithm | Scaling | Dataset-1 MAE | Dataset-1 MSE | Dataset-1 RMSE | Dataset-1 MAPE | Dataset-2 MAE | Dataset-2 MSE | Dataset-2 RMSE | Dataset-2 MAPE |
|---|---|---|---|---|---|---|---|---|---|
| Linear Regression | Absolute Maximum | 0.00773 | 0.00059 | 0.02430 | 0.77380 | 0.00327 | 0.00019 | 0.01381 | 0.32732 |
| Linear Regression | Min Max | 0.00786 | 0.00076 | 0.02759 | 0.78608 | 0.00318 | 0.00018 | 0.01357 | 0.31863 |
| Linear Regression | Normalization | 0.27527 | 0.13680 | 0.36987 | 27.5272 | 0.44087 | 0.22091 | 0.47001 | 44.0871 |
| Linear Regression | Standardization | 0.29441 | 0.95309 | 0.97626 | 29.4419 | 0.24278 | 1.02367 | 1.01176 | 24.2787 |
| Linear Regression | Robust | 3.97267 | 173.280 | 13.1636 | 397.267 | 7.86555 | 1051.98 | 32.4343 | 786.555 |
| Polynomial Regression | Absolute Maximum | 0.00772 | 0.00058 | 0.02426 | 0.77240 | 0.00322 | 0.00018 | 0.01374 | 0.32228 |
| Polynomial Regression | Min Max | 0.00782 | 0.00075 | 0.02755 | 0.78286 | 0.00313 | 0.00018 | 0.01351 | 0.31378 |
| Polynomial Regression | Normalization | 0.27342 | 0.13595 | 0.36872 | 27.3423 | 0.43739 | 0.21898 | 0.46796 | 43.7397 |
| Polynomial Regression | Standardization | 0.29424 | 0.94983 | 0.97459 | 29.4241 | 0.23882 | 1.01550 | 1.00772 | 23.8825 |
| Polynomial Regression | Robust | 3.96498 | 172.826 | 13.1463 | 396.498 | 7.73825 | 1041.97 | 32.2795 | 773.825 |
| K-NN | Absolute Maximum | 0.00779 | 0.00076 | 0.02758 | 0.77987 | 0.00339 | 0.00020 | 0.01441 | 0.33969 |
| K-NN | Min Max | 0.00735 | 0.00065 | 0.02563 | 0.73524 | 0.00308 | 0.00018 | 0.01342 | 0.30884 |
| K-NN | Normalization | 0.20626 | 0.10418 | 0.32277 | 20.6264 | 0.29956 | 0.14464 | 0.38032 | 29.9567 |
| K-NN | Standardization | 0.28259 | 0.95238 | 0.97590 | 28.2595 | 0.23513 | 1.00703 | 1.00351 | 23.5139 |
| K-NN | Robust | 3.83587 | 184.926 | 13.5987 | 383.587 | 7.65189 | 1051.12 | 32.4210 | 765.189 |
| SVR | Absolute Maximum | 0.09660 | 0.00961 | 0.09803 | 9.66083 | 0.09847 | 0.00979 | 0.09894 | 9.84745 |
| SVR | Min Max | 0.09678 | 0.00979 | 0.09895 | 9.6785 | 0.09860 | 0.00979 | 0.09897 | 9.86086 |
| SVR | Normalization | 0.23199 | 0.14200 | 0.37683 | 23.1999 | 0.37507 | 0.28580 | 0.53461 | 37.5072 |
| SVR | Standardization | 0.23072 | 1.03625 | 1.01796 | 23.072 | 0.20443 | 0.85370 | 0.92396 | 20.4438 |
| SVR | Robust | 2.72609 | 199.169 | 14.1127 | 272.60 | 4.75392 | 948.336 | 30.7950 | 475.392 |
| Decision Tree | Absolute Maximum | 0.00122 | 6.42435 | 0.00801 | 0.12245 | 0.00050 | 2.37978 | 0.00487 | 0.05063 |
| Decision Tree | Min Max | 0.00127 | 7.28606 | 0.00853 | 0.12702 | 0.00049 | 2.23365 | 0.00472 | 0.04979 |
| Decision Tree | Normalization | 0.12954 | 0.12982 | 0.36030 | 12.954 | 0.12124 | 0.12215 | 0.34950 | 12.1241 |
| Decision Tree | Standardization | 0.04880 | 0.11049 | 0.33241 | 4.88080 | 0.03811 | 0.17476 | 0.41805 | 3.81192 |
| Decision Tree | Robust | 0.65679 | 16.8198 | 4.10119 | 65.6796 | 1.26035 | 192.319 | 13.8679 | 126.035 |
| Random Forest | Absolute Maximum | 0.00113 | 3.71692 | 0.00609 | 0.11386 | 0.00046 | 1.89923 | 0.00435 | 0.04655 |
| Random Forest | Min Max | 0.00116 | 4.25899 | 0.00652 | 0.11669 | 0.00046 | 2.23436 | 0.00472 | 0.04675 |
| Random Forest | Normalization | 0.13557 | 0.08211 | 0.28656 | 13.5574 | 0.12545 | 0.07804 | 0.27936 | 12.5452 |
| Random Forest | Standardization | 0.04479 | 0.07184 | 0.26804 | 4.47991 | 0.03484 | 0.12288 | 0.35054 | 3.48449 |
| Random Forest | Robust | 0.62770 | 16.1979 | 4.02466 | 62.7707 | 1.13672 | 139.042 | 11.7916 | 113.672 |
| Voting Ensemble | Absolute Maximum | 0.00288 | 9.84888 | 0.00992 | 0.28810 | 0.00127 | 5.05432 | 0.00710 | 0.12745 |
| Voting Ensemble | Min Max | 0.00278 | 9.67740 | 0.00983 | 0.27831 | 0.00116 | 3.73179 | 0.00610 | 0.11633 |
| Voting Ensemble | Normalization | 0.15719 | 0.08449 | 0.29067 | 15.7198 | 0.18187 | 0.08527 | 0.29202 | 18.1877 |
| Voting Ensemble | Standardization | 0.10422 | 0.15918 | 0.39897 | 10.4228 | 0.08832 | 0.17096 | 0.41348 | 8.83253 |
| Voting Ensemble | Robust | 1.45676 | 35.3470 | 5.94533 | 145.676 | 2.82742 | 176.511 | 13.2857 | 282.742 |
| Bagging Ensemble | Absolute Maximum | 0.00115 | 3.58553 | 0.00598 | 0.11504 | 0.00047 | 2.39693 | 0.00489 | 0.04786 |
| Bagging Ensemble | Min Max | 0.00130 | 7.81970 | 0.00884 | 0.13041 | 0.00047 | 2.17509 | 0.00466 | 0.04714 |
| Bagging Ensemble | Normalization | 0.13660 | 0.07791 | 0.27912 | 13.6604 | 0.12664 | 0.07389 | 0.27183 | 12.6646 |
| Bagging Ensemble | Standardization | 0.04498 | 0.07324 | 0.27064 | 4.49816 | 0.03515 | 0.11348 | 0.33686 | 3.51532 |
| Bagging Ensemble | Robust | 0.63517 | 16.8967 | 4.11056 | 63.5177 | 1.07096 | 84.6469 | 9.20037 | 107.096 |
| Stacking Ensemble | Absolute Maximum | 0.00072 | 1.78910 | 0.00422 | 0.07217 | 0.00035 | 6.49338 | 0.00254 | 0.03549 |
| Stacking Ensemble | Min Max | 0.00067 | 1.53106 | 0.00391 | 0.06747 | 0.00023 | 3.76875 | 0.00194 | 0.02359 |
| Stacking Ensemble | Normalization | 0.09456 | 0.02263 | 0.15046 | 9.4561 | 0.07794 | 0.01525 | 0.12349 | 7.79464 |
| Stacking Ensemble | Standardization | 0.01444 | 0.00667 | 0.08172 | 1.44478 | 0.01977 | 0.01457 | 0.12073 | 1.97717 |
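The four evaluation metrics used in the table can be computed directly; a NumPy sketch with toy predictions in place of the model outputs:

```python
import numpy as np

y_true = np.array([10.0, 20.0, 30.0, 40.0])
y_pred = np.array([12.0, 18.0, 33.0, 39.0])

err = y_pred - y_true
mae = np.mean(np.abs(err))                   # Mean Absolute Error
mse = np.mean(err ** 2)                      # Mean Squared Error
rmse = np.sqrt(mse)                          # Root Mean Squared Error
mape = np.mean(np.abs(err / y_true)) * 100   # Mean Absolute Percentage Error

print(mae, mse, rmse, mape)
```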
6. Conclusion

In this experimental research work, we performed predictive analysis by proposing a new model to predict recent covid-19 cases. Before training, we prepared the dataset with pre-processing techniques: the dropna() function to delete the null values, avoiding wrong prediction performance, and splitting the date column into day, month, and year, because string values cannot be loaded into a machine learning model. We then used scaling (Absolute Maximum, Min-Max, Normalization, Standardization, and Robust) to bring the data into a small range such as 0 to 1, which the machine learning model can learn from much more easily than the original values. In the data visualisation phase, we used maps and graphs for an easy understanding of the data used in this experimental research. After that, we built models using different machine learning algorithms, namely Linear Regression (L.R.), Polynomial Regression (P.R.), K-Nearest Neighbour (KNN), Decision Tree (D.T.), Random Forest (R.F.), Lasso Regression, and Support Vector Regression (SVR), and we also used ensembling models (Voting, Stacking, and Bagging) and a deep learning model to improve performance.
The evaluation metrics MAE, MSE, RMSE, and MAPE were used on our prediction models. Of these models, the Decision Tree and Random Forest algorithms performed better than SVR and all the other algorithms. This model helps in predicting trends and patterns of covid-19.
References
Agnihotri, D., Verma, K., Tripathi, P., & Singh, B. K. (2019). Soft voting technique to improve the performance of global filter based feature selection in text corpus. Applied Intelligence, 49(4), 1597–1619. https://doi.org/10.1007/S10489-018-1349-1
Alghamdi, R., & Alfalqi, K. (2015). A Survey of Topic Modeling in Text Mining. International Journal of Advanced Computer Science and Applications, 6(1), 147–153. https://doi.org/10.14569/IJACSA.2015.060121
Ankit, & Saleena, N. (2018). An Ensemble Classification System for Twitter Sentiment
1546
+ Robust
1547
+ 0.23091
1548
+ 1.68392
1549
+ 1.29766
1550
+ 23.0917
1551
+ 0.60178 19.0907 4.36929 60.1785
1552
+ ANN
1553
+ Absolute
1554
+ Maximum
1555
+ 0.00524
1556
+ 0.00067
1557
+ 0.02597
1558
+ 0.52460
1559
+ 0.00222 0.00022 0.01500 0.22281
1560
+ Min Max
1561
+ 0.00521
1562
+ 0.00067
1563
+ 0.02597
1564
+ 0.52182
1565
+ 0.00203 0.00017 0.01328 0.20345
1566
+ Normalization
1567
+ 0.16492
1568
+ 0.16498
1569
+ 0.40618
1570
+ 16.4920
1571
+ 0.34391 0.34419 0.58668 34.3914
1572
+ Standardization
1573
+ 0.19644
1574
+ 1.09775
1575
+ 1.04773
1576
+ 19.644
1577
+ 0.15029 0.87252 0.93408 15.0297
1578
+ Robust
1579
+ 2.76430
1580
+ 214.265
1581
+ 14.6378
1582
+ 276.43
1583
+ 4.74052 948.755 30.8018 474.052
1584
+
1585
+
1586
+
1587
+ Analysis. Procedia Computer Science, 132(Iccids), 937–946.
1588
+ https://doi.org/10.1016/j.procs.2018.05.109
1589
+ Barros, A. J. D., & Hirakata, V. N. (2003). Alternatives for logistic regression in cross-
1590
+ sectional studies: An empirical comparison of models that directly estimate the
1591
+ prevalence ratio. BMC Medical Research Methodology, 3, 1–13.
1592
+ https://doi.org/10.1186/1471-2288-3-21
1593
+ Boente, G., & Salibian-Barrera, M. (2015). S-Estimators for Functional Principal Component
1594
+ Analysis. Journal of the American Statistical Association, 110(511), 1100–1111.
1595
+ https://doi.org/10.1080/01621459.2014.946991
1596
+ Deng, X., Li, Y., Weng, J., & Zhang, J. (2019). Feature selection for text classification: A
1597
+ review. Multimedia Tools and Applications, 78(3), 3797–3816.
1598
+ https://doi.org/10.1007/S11042-018-6083-5
1599
+ Dong, J., & Qian, Q. (2022). A Density-Based Random Forest for Imbalanced Data
1600
+ Classification. Future Internet 2022, Vol. 14, Page 90, 14(3), 90.
1601
+ https://doi.org/10.3390/FI14030090
1602
+ Education, M. (2021). A Hybrid TF-IDF and N-Grams Based Feature Extraction Approach
1603
+ for Accurate Detection of Fake News on Twitter Data. 12(06), 5710–5723.
1604
+ Gambhir, E., Jain, R., Gupta, A., & Tomer, U. (2020). Regression Analysis of COVID-19
1605
+ using Machine Learning Algorithms. Proceedings - International Conference on Smart
1606
+ Electronics and Communication, ICOSEC 2020, 65–71.
1607
+ https://doi.org/10.1109/ICOSEC49089.2020.9215356
1608
+ Hammar, K., Jaradat, S., Dokoohaki, N., & Matskin, M. (2019). Deep Text Mining of
1609
+ Instagram Data without Strong Supervision. Proceedings - 2018 IEEE/WIC/ACM
1610
+ International Conference on Web Intelligence, WI 2018, 158–165.
1611
+ https://doi.org/10.1109/WI.2018.00-94
1612
+ Hossain, M. M., Asadullah, M., Rahaman, A., Miah, M. S., Hasan, M. Z., Paul, T., &
1613
+ Hossain, M. A. (2021). Prediction on Domestic Violence in Bangladesh during the
1614
+ COVID-19 Outbreak Using Machine Learning Methods. Applied System Innovation
1615
+ 2021, Vol. 4, Page 77, 4(4), 77. https://doi.org/10.3390/ASI4040077
1616
+ Jignesh Chowdary, G., Punn, N. S., Sonbhadra, S. K., & Agarwal, S. (2020). Face Mask
1617
+ Detection Using Transfer Learning of InceptionV3. Lecture Notes in Computer Science
1618
+ (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in
1619
+ Bioinformatics), 12581 LNCS, 81–90. https://doi.org/10.1007/978-3-030-66665-1_6
1620
+ Jolliffe, I. T., Trendafilov, N. T., & Uddin, M. (2003). A Modified Principal Component
1621
+ Technique Based on the LASSO. Journal of Computational and Graphical Statistics,
1622
+ 12(3), 531–547. https://doi.org/10.1198/1061860032148
1623
+ Kanimozhi, G., Shanmugavadivu, P., & Rani, M. M. S. (2020). Machine Learning‐Based
1624
+ Recommender System for Breast Cancer Prognosis. Recommender System with Machine
1625
+ Learning and Artificial Intelligence, 121–140.
1626
+ https://doi.org/10.1002/9781119711582.ch7
1627
+ Kathiravan, P. S., & Saranya, R. (2021). EasyChair Preprint Named Entity Recognition (
1628
+ NER ) for Social Media Tamil Posts Using Deep Learning with Singular Value
1629
+ Decomposition.
1630
+
1631
+
1632
+
1633
+ Kim, C., You, S. C., Reps, J. M., Cheong, J. Y., & Park, R. W. (2021). Machine-learning
1634
+ model to predict the cause of death using a stacking ensemble method for observational
1635
+ data. Journal of the American Medical Informatics Association : JAMIA, 28(6), 1098–
1636
+ 1107. https://doi.org/10.1093/jamia/ocaa277
1637
+ Lau, J. H., & Baldwin, T. (2016). An Empirical Evaluation of doc2vec with Practical Insights
1638
+ into Document Embedding Generation. 2014, 78–86. https://doi.org/10.18653/v1/w16-
1639
+ 1609
1640
+ Lin, W. S., Dai, H. J., Jonnagaddala, J., Chang, N. W., Jue, T. R., Iqbal, U., Shao, J. Y. H.,
1641
+ Chiang, I. J., & Li, Y. C. (2016). Utilizing different word representation methods for
1642
+ twitter data in adverse drug reactions extraction. TAAI 2015 - 2015 Conference on
1643
+ Technologies and Applications of Artificial Intelligence, 260–265.
1644
+ https://doi.org/10.1109/TAAI.2015.7407070
1645
+ Liu, T. (2020). U.S. Pandemic prediction using regression and neural network models.
1646
+ Proceedings - 2020 International Conference on Intelligent Computing and Human-
1647
+ Computer Interaction, ICHCI 2020, 351–354.
1648
+ https://doi.org/10.1109/ICHCI51889.2020.00080
1649
+ Mandayam, A. U., Rakshith, A. C., Siddesha, S., & Niranjan, S. K. (2020). Prediction of
1650
+ Covid-19 pandemic based on Regression. Proceedings - 2020 5th International
1651
+ Conference on Research in Computational Intelligence and Communication Networks,
1652
+ ICRCICN 2020, 1–5. https://doi.org/10.1109/ICRCICN50933.2020.9296175
1653
+ Mojjada, R. K., Yadav, A., Prabhu, A. V., & Natarajan, Y. (2021). WITHDRAWN: Machine
1654
+ learning models for covid-19 futureforecasting. Materials Today. Proceedings.
1655
+ https://doi.org/10.1016/J.MATPR.2020.10.962
1656
+ Nan, Y., & Gao, Y. (2018). A machine learning method to monitor China’s AIDS epidemics
1657
+ with data from Baidu trends. PLoS ONE, 13(7).
1658
+ https://doi.org/10.1371/JOURNAL.PONE.0199697
1659
+ Nikhil, Saini, A., Panday, S., & Gupta, N. (2021). Polynomial Based Linear Regression
1660
+ Model to Predict COVID-19 Cases. 2021 6th International Conference on Recent
1661
+ Trends on Electronics, Information, Communication and Technology, RTEICT 2021,
1662
+ 66–69. https://doi.org/10.1109/RTEICT52294.2021.9574032
1663
+ Oumina, A., El Makhfi, N., & Hamdi, M. (2020). Control the COVID-19 Pandemic: Face
1664
+ Mask Detection Using Transfer Learning. 2020 IEEE 2nd International Conference on
1665
+ Electronics, Control, Optimization and Computer Science, ICECOCS 2020.
1666
+ https://doi.org/10.1109/ICECOCS50124.2020.9314511
1667
+ Ozturk, T., Talo, M., Yildirim, E. A., Baloglu, U. B., Yildirim, O., & Rajendra Acharya, U.
1668
+ (2020). Automated detection of COVID-19 cases using deep neural networks with X-ray
1669
+ images. Computers in Biology and Medicine, 121.
1670
+ https://doi.org/10.1016/J.COMPBIOMED.2020.103792
1671
+ Painuli, D., Mishra, D., Bhardwaj, S., & Aggarwal, M. (2021). Forecast and prediction of
1672
+ COVID-19 using machine learning. Data Science for COVID-19, 381.
1673
+ https://doi.org/10.1016/B978-0-12-824536-1.00027-7
1674
+ Plotly Python Graphing Library. (n.d.). Retrieved November 9, 2022, from
1675
+ https://plotly.com/python/
1676
+
1677
+
1678
+
1679
+ Pokharel, G., & Deardon, R. (2014). Supervised learning and prediction of spatial epidemics.
1680
+ Spatial and Spatio-Temporal Epidemiology, 11, 59–77.
1681
+ https://doi.org/10.1016/J.SSTE.2014.08.003
1682
+ Preacher, K. J., Curran, P. J., & Bauer, D. J. (2006). Computational tools for probing
1683
+ interactions in multiple linear regression, multilevel modeling, and latent curve analysis.
1684
+ Journal of Educational and Behavioral Statistics, 31(4), 437–448.
1685
+ https://doi.org/10.3102/10769986031004437
1686
+ Rath, S., Tripathy, A., & Tripathy, A. R. (2020). Prediction of new active cases of
1687
+ coronavirus disease (COVID-19) pandemic using multiple linear regression model.
1688
+ Diabetes and Metabolic Syndrome: Clinical Research and Reviews, 14(5), 1467–1474.
1689
+ https://doi.org/10.1016/J.DSX.2020.07.045
1690
+ Rivas-Perea, P., Cota-Ruiz, J., Chaparro, D. G., Venzor, J. A. P., Carreón, A. Q., & Rosiles,
1691
+ J. G. (2013). Support Vector Machines for Regression: A Succinct Review of Large-
1692
+ Scale and Linear Programming Formulations. International Journal of Intelligence
1693
+ Science, 03(01), 5–14. https://doi.org/10.4236/IJIS.2013.31002
1694
+ Rustam, F., Reshi, A. A., Mehmood, A., Ullah, S., On, B. W., Aslam, W., & Choi, G. S.
1695
+ (2020). COVID-19 Future Forecasting Using Supervised Machine Learning Models.
1696
+ IEEE Access, 8, 101489–101499. https://doi.org/10.1109/ACCESS.2020.2997311
1697
+ Safavian, S. R., & Landgrebe, D. (1991). A Survey of Decision Tree Classifier Methodology.
1698
+ IEEE Transactions on Systems, Man and Cybernetics, 21(3), 660–674.
1699
+ https://doi.org/10.1109/21.97458
1700
+ Sardinha, L. M., & Catalán, H. E. N. (2018). Attitudes towards domestic violence in 49 low-
1701
+ and middle-income countries: A gendered analysis of prevalence and countrylevel
1702
+ correlates. PLoS ONE, 13(10), 1–18. https://doi.org/10.1371/journal.pone.0206101
1703
+ Shaikh, S., Gala, J., Jain, A., Advani, S., Jaidhara, S., & Edinburgh, M. R. (2021). Analysis
1704
+ and Prediction of COVID-19 using Regression Models and Time Series Forecasting.
1705
+ Proceedings of the Confluence 2021: 11th International Conference on Cloud
1706
+ Computing, Data Science and Engineering, 989–995.
1707
+ https://doi.org/10.1109/CONFLUENCE51648.2021.9377137
1708
+ sklearn.preprocessing.LabelEncoder — scikit-learn 1.1.3 documentation. (n.d.). Retrieved
1709
+ November 9, 2022, from https://scikit-
1710
+ learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html
1711
+ sklearn.preprocessing.minmax_scale — scikit-learn 1.1.3 documentation. (n.d.). Retrieved
1712
+ November 9, 2022, from https://scikit-
1713
+ learn.org/stable/modules/generated/sklearn.preprocessing.minmax_scale.html
1714
+ sklearn.preprocessing.StandardScaler — scikit-learn 1.1.3 documentation. (n.d.). Retrieved
1715
+ November 9, 2022, from https://scikit-
1716
+ learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
1717
+ Storcheus, Dmitry; Rostamizadeh, Afshin; Kumar, S. (2015). A Survey of Modern Questions
1718
+ and Challenges in Feature Extraction. The 1st InternationalWorkshop “Feature
1719
+ Extraction: Modern Questions and Challenges,” 44, 1–18.
1720
+ Su, X., Gao, M., Ren, J., Li, Y., Dong, M., & Liu, X. (2022). Face mask detection and
1721
+ classification via deep transfer learning. Multimedia Tools and Applications, 81(3),
1722
+
1723
+
1724
+
1725
+ 4475–4494. https://doi.org/10.1007/S11042-021-11772-5/TABLES/5
1726
+ Suhasini, V., Mathematics, N. V.-T. J. of C. and, & 2021, undefined. (n.d.). A Hybrid TF-
1727
+ IDF and N-Grams Based Feature Extraction Approach for Accurate Detection of Fake
1728
+ News on Twitter Data. Turcomat.Org. Retrieved August 14, 2022, from
1729
+ https://turcomat.org/index.php/turkbilmat/article/download/10885/8124
1730
+ Tiwari, S., Kumar, S., & Guleria, K. (2020). Outbreak Trends of Coronavirus Disease-2019
1731
+ in India: A Prediction. Disaster Medicine and Public Health Preparedness, 14(5), e33–
1732
+ e38. https://doi.org/10.1017/DMP.2020.115
1733
+ Verma, A. K., & Pal, S. (2020). Prediction of Skin Disease with Three Different Feature
1734
+ Selection Techniques Using Stacking Ensemble Method. Applied Biochemistry and
1735
+ Biotechnology, 191(2), 637–656. https://doi.org/10.1007/s12010-019-03222-8
1736
+ Yadav, M., Perumal, M., & Srinivas, M. (2020). Analysis on novel coronavirus (COVID-19)
1737
+ using machine learning methods. Chaos, Solitons and Fractals, 139.
1738
+ https://doi.org/10.1016/J.CHAOS.2020.110050
1739
+
1740
+
L9E0T4oBgHgl3EQf0AI4/content/tmp_files/2301.02679v1.pdf.txt ADDED
@@ -0,0 +1,1084 @@
Grokking modular arithmetic

Andrey Gromov
Meta AI, Meta Platforms, Inc., Menlo Park, California 94025
& Department of Physics, Condensed Matter Theory Center, University of Maryland, College Park, Maryland 20740

Abstract

We present a simple neural network that can learn modular arithmetic tasks and exhibits a sudden jump in generalization known as "grokking". Concretely, we present (i) fully-connected two-layer networks that exhibit grokking on various modular arithmetic tasks under vanilla gradient descent with the MSE loss function in the absence of any regularization; (ii) evidence that grokking modular arithmetic corresponds to learning specific feature maps whose structure is determined by the task; (iii) analytic expressions for the weights, and thus for the feature maps, that solve a large class of modular arithmetic tasks; and (iv) evidence that these feature maps are also found by vanilla gradient descent as well as AdamW, thereby establishing complete interpretability of the representations learnt by the network.

1 Introduction and overview of literature

Grokking is an effect discovered empirically in [11]. Its phenomenology is characterized by a steep and delayed rise in generalization from 0% to a fixed value, as depicted in Fig. 1b. Beyond that observation, however, there are no clear characteristics of grokking that are reproduced across different works. Here, we start with a lightning review of various claims made in the literature.

In the original work [11], the authors studied how a shallow transformer learns data distributions that are generated by simple deterministic rules (termed 'algorithmic datasets'). Examples of such datasets include modular arithmetic, finite groups, bit operations and more. Specifically, in [11] the data took the form of a string "a ◦ b = c", where c was masked and had to be predicted by a two-layer decoder-only transformer. In that study, the following empirical facts were observed:

• Generalization occurs long after training accuracy has reached 100%. The jump in generalization is quite rapid and occurs after a large number of epochs (cf. Fig. 1).
• There is a minimal amount of data (dependent on the task) that needs to be included in the training set in order for generalization to occur (cf. Fig. 4b).
• Various forms of regularization improve how quickly grokking happens. The weight decay included in the AdamW optimizer proved particularly effective (cf. Fig. 4b).

In subsequent work [7], the authors simplified the architecture to a single linear learnable encoder followed by a multilayer perceptron (MLP) decoder and showed that, even if the task is recast as a classification problem, grokking persists. They also interpret grokking as a competition between encoder and decoder, and developed a toy model of grokking as dynamics of the embeddings only.

Preprint. Under review.
arXiv:2301.02679v1 [cs.LG] 6 Jan 2023
Figure 1: Dynamics under GD for the minimal model (4) with MSE loss and α = 0.49. (a) Train and test loss. Train loss generally decays monotonically, while test loss reaches its maximum right before the onset of grokking. (b) Norms of weight matrices during training. We do not observe a large increase in weight norms as in [14], but we do see that weight norms start growing at the onset of grokking. (c) Train and test accuracy showing the delayed and sudden onset of generalization. (d) Norms of gradient vectors. The dynamics accelerates until the test loss maximum is reached and then slowly decelerates.
This model indeed leads to some quantitative predictions, such as the critical amount of data needed for grokking to happen relatively fast.

In more recent work [14], it was argued that if the Adam optimizer is used, then in order for grokking to happen without regularization, the training dynamics must undergo a slingshot (a sudden explosion in the training loss) which is followed by the rise of generalization. It was further shown that these slingshots and grokking can be turned on and off by tuning the ϵ parameter of the Adam optimizer.

In a blogpost [9], it was argued that the algorithm for modular addition learnt by a single-layer transformer can be reverse-engineered and is human-interpretable. It was further argued that (i) regularization is required for grokking and (ii) there should be no grokking in the infinite-data regime. Furthermore, many other algorithmic datasets were considered.

On the theoretical front, the authors of [1] studied online learning of the (k, n) sparse parity problem, where the network is asked to compute the parity of k bits in a length-n string of random bits. In particular, they observed grokking both in under- and over-parametrized regimes. For large minibatch sizes, generalization was attributed to amplification of the information already present in the initial gradient (called the Fourier gap) rather than to a diffusive search by stochastic gradient descent, and the scaling of grokking time with n and k was derived to be n^{O(k)}.

Finally, [8] studied grokking for non-algorithmic datasets and its dependence on the initialization, while [16] developed a solvable model of grokking in the teacher-student setup.

To summarize, the available results, although undoubtedly inspiring, leave grokking on algorithmic datasets as a somewhat mysterious effect. Furthermore, the empirical results suggest that grokking provides a fascinating platform for quantitatively studying many fundamental questions of deep learning in a controlled setting. These include: (i) the precise role of regularization in deep nonlinear neural networks; (ii) feature learning; (iii) the role of training data distributions in optimization dynamics and generalization performance of the network; (iv) data-, parameter- and compute-efficiency of training; (v) interpretability of learnt features; and (vi) expressivity of architectures and complexity of tasks.
Figure 2: Preactivations. First row: preactivation h^{(2)}_6(n, m). Second row: Fourier image of the preactivation h^{(2)}_6(n, m). Third row: preactivation h^{(1)}_6(n, m) or h^{(1)}_{30}(n, m). First column: at initialization. Second column: found by vanilla GD. The Fourier image shows a single series of peaks corresponding to m + n = 6 mod 97. Third column: evaluated using the analytic solution (6)-(7). The Fourier image shows the same peak as found by GD, but also weak peaks corresponding to 2m = 6 mod 97, 2n = 6 mod 97 and m − n = 6 mod 97 that were suppressed by the choice of phases via (12).
This motivates the present study, which proposes and analyzes a minimal yet realistic model and optimization process that lead to grokking on modular arithmetic tasks.

2 Set up and overview of results

In this Section we describe a very simple, solvable setting where grokking takes place and the learnt features can be understood analytically. We consider a two-layer MLP network without biases, given by

h^{(1)}_k(x) = \sqrt{\frac{1}{D}} \sum_{j=1}^{D} W^{(1)}_{kj} x_j , \qquad z^{(1)}_i(x) = \phi\big(h^{(1)}_i(x)\big) ,   (1)

h^{(2)}_q(x) = \frac{1}{N} \sum_{k=1}^{N} W^{(2)}_{qk} z^{(1)}_k(x) ,   (2)

where N is the width of the hidden layer, D is the input dimension, and φ is an activation function. At initialization the weights are sampled from the standard normal distribution, W^{(1)}, W^{(2)} ∼ N(0, 1). In Eqs. (1)–(2) we have chosen to follow the mean-field parametrization [13]: this parametrization ensures that the analytic solution presented in the next Section remains finite in the large-N limit¹.

Given this architecture, we then set up modular arithmetic tasks as classification problems. To this end, we fix an integer p (which does not have to be prime) and consider functions over Z_p. Each input integer is encoded as a one-hot vector. The output integer is also encoded as a one-hot vector. For the task of learning bivariate functions over Z_p the input dimension is 2p, the output dimension is p, and the total number of points in the dataset is p², while the model (1)–(2) has 3Np parameters. Finally, we

¹In the limit of infinite width, the mean-field parametrization allows for feature learning.
split the dataset D into train D_train and test D_test subsets, and furnish this setup with the MSE loss function.²

Under this minimal setting grokking occurs consistently for many modular functions, provided enough epochs of training have taken place and the fraction of data used for training,

\alpha \equiv \frac{|\mathcal{D}_{\rm train}|}{|\mathcal{D}|} ,   (3)

is sufficiently large (if α is too small, generalization is not possible even after long training time). By adjusting the width N at fixed α, we can tune between underparametrized and overparametrized regimes. The 'simplest' optimizer that leads to grokking is full-batch gradient descent. No explicit regularization is necessary for grokking to occur. We have tried other optimizers and regularization methods such as AdamW, GD with weight decay and momentum, SGD with BatchNorm, and GD with Dropout. Generally, regularization and the use of adaptive optimizers produce two effects: (i) grokking happens after a smaller number of epochs and (ii) grokking happens at smaller α. See Fig. 4.
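To make this setup concrete, the following sketch (our own illustrative NumPy code, with p = 7 and α = 0.5 chosen arbitrarily; the variable names are not from the paper) builds the p² one-hot encoded input pairs and labels for modular addition and splits them into train and test subsets:

```python
import numpy as np

p = 7                                   # modulus (small illustrative choice)
alpha = 0.5                             # training fraction, Eq. (3)
rng = np.random.default_rng(0)

# Each input (n, m) is a pair of one-hot vectors stacked into a 2p-vector;
# the label (n + m) mod p is a one-hot p-vector.
X = np.zeros((p * p, 2 * p))
Y = np.zeros((p * p, p))
for i, (n, m) in enumerate((n, m) for n in range(p) for m in range(p)):
    X[i, n] = X[i, p + m] = 1.0
    Y[i, (n + m) % p] = 1.0

# Random split into D_train and D_test with |D_train| = alpha * |D|.
idx = rng.permutation(p * p)
n_train = int(alpha * p * p)
X_train, Y_train = X[idx[:n_train]], Y[idx[:n_train]]
X_test, Y_test = X[idx[n_train:]], Y[idx[n_train:]]
print(X_train.shape, X_test.shape)
```

Any other bivariate modular function can be substituted for (n + m) % p when building the labels.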
In passing, we note that, in the case of quadratic activation, the full network function takes an even simpler form

f(x) = \frac{1}{DN} W^{(2)} \left( W^{(1)} x \right)^2 .   (4)

This function is cubic in its parameters and quadratic in its inputs. Eq. (4) is the simplest possible nonlinear generalization of the 'u-v' model studied in [6]. The exact results are derived for this particular choice (and can be generalized to other monomials if wished), while the empirical results are only mildly sensitive to the choice of activation function.
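As a sketch of Eq. (4), the forward pass below (our own minimal NumPy code; the function name mlp and the small sizes are illustrative assumptions, not from the paper) implements the two-layer network of Eqs. (1)–(2) with quadratic activation and mean-field normalization:

```python
import numpy as np

def mlp(x, W1, W2):
    """Eqs. (1)-(2) with quadratic activation: f(x) = W2 (W1 x)^2 / (D N)."""
    D, N = W1.shape[1], W2.shape[1]
    h1 = (W1 @ x) / np.sqrt(D)          # first-layer preactivation, Eq. (1)
    z1 = h1 ** 2                        # phi(h) = h^2
    return (W2 @ z1) / N                # second-layer preactivation, Eq. (2)

p, N = 5, 16
rng = np.random.default_rng(0)
W1 = rng.standard_normal((N, 2 * p))    # W^(1), W^(2) ~ N(0, 1) at init
W2 = rng.standard_normal((p, N))
x = np.zeros(2 * p)
x[2] = x[p + 3] = 1.0                   # one-hot encoded input pair (2, 3)
out = mlp(x, W1, W2)
print(out.shape)                        # one logit per residue class in Z_p
```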
Whether grokking happens or not depends on the modular function at hand, assuming the architecture and optimizer are fixed. We show that for any function of the form f(n, m) = f_1(n) + f_2(m) mod p, as well as \tilde{f}(n, m) = F(f_1(n) + f_2(m)) mod p, one can present an analytic solution for the weights that yields 100% accuracy, and these weights are approximately found by various optimizers with and without regularization. Functions of the form g(n, m) = g_1(n) · g_2(m) mod p can also be grokked; however, we have failed to find an analytic expression for the weights. Functions of the form f(n, m) + g(n, m) mod p are more difficult to grok: they require more epochs and larger α.

In summary, our setup is simple enough to be analytically tractable but complex enough to exhibit representation learning and, consequently, grokking.

3 Interpretability: analytic expression for the weights

3.1 Modular addition

In this Section we will exhibit the analytic expression for the weights that solve the modular addition task. Namely, the network supplied with these weights implements the following modular function

f(n, m) = n + m \bmod p .   (5)

This solution is approximate and can be made increasingly more accurate (meaning the test loss can be made arbitrarily close to 0) by increasing the width N. To simplify the presentation, we will discuss modular addition at length and then generalize the solution to a broad class of modular functions. In the next Section we will provide evidence that GD and AdamW find the same solution.
Claim I. If the network function has the form (4), then the weights W^{(1)}_{kn} and W^{(2)}_{qk} solving the modular addition problem are given by

W^{(1)}_{kn} = \left( \cos\left( 2\pi \frac{k}{p} n_1 + \varphi^{(1)}_k \right) \;\; \cos\left( 2\pi \frac{k}{p} n_2 + \varphi^{(2)}_k \right) \right)^{T} , \qquad n = (n_1, n_2) ,   (6)

W^{(2)}_{qk} = \cos\left( -2\pi \frac{k}{p} q - \varphi^{(3)}_k \right) ,   (7)

where we represent W^{(1)}_{kn} as a row of two N × p matrices and n_1, n_2 = 0, 1, . . . , p − 1. The full size of W^{(1)}_{kn} is N × 2p. The phases \varphi^{(1)}_k, \varphi^{(2)}_k and \varphi^{(3)}_k are random, sampled from a uniform distribution, and satisfy the constraint (12).

² CSE loss can be used, if desired.
Reasoning. Here we explain why and how the solution (6)-(7) works. There are two important ingredients in (6)-(7). The first ingredient is the periodicity of the weights with respect to the indices n_1, n_2, q. The set of frequencies is determined by the base of Z_p. The full set of independent frequencies is obtained by varying k from 0 to (p−1)/2 if p is odd and to p/2 if p is even. The second ingredient is the set of phases \varphi^{(1)}_k, \varphi^{(2)}_k, \varphi^{(3)}_k. Indeed, Eqs. (6)-(7) solve modular addition only after these phases are chosen appropriately. We will discuss the choice shortly.

To show that (6)-(7) solve modular addition we will perform the inference step analytically. Consider a general input (n, m) represented as a pair of one-hot vectors stacked into a single vector of size 2p × 1.

The preactivations in the first layer are given by (we drop the normalization factors)

h^{(1)}_k(n, m) = \cos\left( 2\pi \frac{k}{p} n + \varphi^{(1)}_k \right) + \cos\left( 2\pi \frac{k}{p} m + \varphi^{(2)}_k \right) .   (8)

The activations in the first layer are given by

z^{(1)}_k(n, m) = \left( \cos\left( 2\pi \frac{k}{p} n + \varphi^{(1)}_k \right) + \cos\left( 2\pi \frac{k}{p} m + \varphi^{(2)}_k \right) \right)^2 ,   (9)

which, after some trigonometry, becomes

z^{(1)}_k(n, m) = 1 + \frac{1}{2}\left[ \cos\left( 2\pi \frac{k}{p} 2n + 2\varphi^{(1)}_k \right) + \cos\left( 2\pi \frac{k}{p} 2m + 2\varphi^{(2)}_k \right) \right] + \cos\left( 2\pi \frac{k}{p}(n+m) + \varphi^{(1)}_k + \varphi^{(2)}_k \right) + \cos\left( 2\pi \frac{k}{p}(n-m) + \varphi^{(1)}_k - \varphi^{(2)}_k \right) .   (10)

Finally, the preactivations in the second layer take the form

h^{(2)}_q(n, m) = \frac{1}{4} \sum_{k=1}^{N} \left[ \cos\left( 2\pi \frac{k}{p}(2n-q) + 2\varphi^{(1)}_k - \varphi^{(3)}_k \right) + \cos\left( 2\pi \frac{k}{p}(2n+q) + 2\varphi^{(1)}_k + \varphi^{(3)}_k \right) \right]
+ \frac{1}{4} \sum_{k=1}^{N} \left[ \cos\left( 2\pi \frac{k}{p}(2m-q) + 2\varphi^{(2)}_k - \varphi^{(3)}_k \right) + \cos\left( 2\pi \frac{k}{p}(2m+q) + 2\varphi^{(2)}_k + \varphi^{(3)}_k \right) \right]
+ \frac{1}{2} \sum_{k=1}^{N} \cos\left( 2\pi \frac{k}{p}(n+m-q) + \varphi^{(1)}_k + \varphi^{(2)}_k - \varphi^{(3)}_k \right)
+ \frac{1}{2} \sum_{k=1}^{N} \cos\left( 2\pi \frac{k}{p}(n+m+q) + \varphi^{(1)}_k + \varphi^{(2)}_k + \varphi^{(3)}_k \right)
+ \frac{1}{2} \sum_{k=1}^{N} \cos\left( 2\pi \frac{k}{p}(n-m-q) + \varphi^{(1)}_k - \varphi^{(2)}_k - \varphi^{(3)}_k \right)
+ \frac{1}{2} \sum_{k=1}^{N} \cos\left( 2\pi \frac{k}{p}(n-m+q) + \varphi^{(1)}_k - \varphi^{(2)}_k + \varphi^{(3)}_k \right)
+ \sum_{k=1}^{N} \cos\left( 2\pi \frac{k}{p} q + \varphi^{(3)}_k \right) .   (11)

Expression (11) does not yet perform modular addition. Observe that each term in (11) is a sum of waves with different phases, but systematically ordered frequencies. We are going to choose the phases \varphi^{(1)}_k, \varphi^{(2)}_k, \varphi^{(3)}_k to ensure constructive interference in the third line of (11). The simplest choice is to take

\varphi^{(1)}_k + \varphi^{(2)}_k = \varphi^{(3)}_k .   (12)
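The product-to-sum expansion taking (9) to (10) can be verified numerically; the short check below (our own code, with an arbitrary frequency k and random phases) confirms the identity for every pair (n, m):

```python
import numpy as np

p, k = 97, 5                           # modulus and an arbitrary frequency
rng = np.random.default_rng(0)
phi1, phi2 = rng.uniform(0, 2 * np.pi, 2)

for n in range(p):
    for m in range(p):
        # Eq. (9): squared first-layer preactivation
        lhs = (np.cos(2 * np.pi * k * n / p + phi1)
               + np.cos(2 * np.pi * k * m / p + phi2)) ** 2
        # Eq. (10): its expansion into four cosine terms
        rhs = (1.0
               + 0.5 * (np.cos(2 * np.pi * k * 2 * n / p + 2 * phi1)
                        + np.cos(2 * np.pi * k * 2 * m / p + 2 * phi2))
               + np.cos(2 * np.pi * k * (n + m) / p + phi1 + phi2)
               + np.cos(2 * np.pi * k * (n - m) / p + phi1 - phi2))
        assert abs(lhs - rhs) < 1e-9
print("Eq. (10) verified")
```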
+ Then the term in the third line of (11) takes form
481
+ 1
482
+ 2
483
+ N
484
+
485
+ k=1
486
+ cos
487
+
488
+ 2π k
489
+ p(n + m − q)
490
+
491
+ = N
492
+ 2 δ(n + m − q) ,
493
+ (13)
494
+ where δ(n+m−q) is the modular version of the δ-function. It is equal to 1 when n+m−q = 0 mod p
495
+ and is equal to 0 otherwise. This concludes the constructive part of the interference.
Next, we need to ensure that all other waves (i.e. all terms but the third term in (11)) interfere destructively. Fortunately, this can be accomplished by observing that the constraint (12) leaves some phases in every single term in (11) apart from the third one. We will spare the reader the explicit expression. Every remaining term takes the form

(1/2) Σ_{k=1}^{N} cos( (2πk/p) s + ϕ_k ) ,   (14)
where s is an integer and ϕ_k is a linear combination of ϕ^(1)_k and ϕ^(2)_k. We now assume that ϕ^(1)_k and ϕ^(2)_k are uniformly distributed random numbers. Then so are ϕ_k. For any appreciable N (see Fig. 4b) we have

Σ_{k=1}^{N} cos( (2πk/p) s + ϕ_k ) ≪ N ,   (15)
which implies that every such term in (11) can be neglected compared to the third term. Thus, for reasonable values of N (and restoring normalisation) the network function h^(2)_q(n, m) takes the form

h^(2)_q(n, m) ≈ (1/D) · (1/2) Σ_{k=1}^{N} cos( (2πk/p)(n + m − q) ) = (N/2D) δ(n + m − q) .   (16)
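The destructive-interference estimate (15) underlying this approximation is easy to probe numerically; a quick sketch (sizes, seed, and variable names are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p, N, s = 97, 100_000, 3                 # modulus, width, any fixed integer s

phi = rng.uniform(0, 2 * np.pi, size=N)  # iid uniform random phases phi_k
k = np.arange(1, N + 1)
incoherent = np.sum(np.cos(2 * np.pi * k * s / p + phi))

# A sum of N randomly-phased waves performs a random walk and grows like
# sqrt(N), not N, so the ratio below is tiny for appreciable N.
ratio = abs(incoherent) / N
```

The typical magnitude of the sum is of order √N, which is the quantitative content of the "≪ N" statement in (15).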
In the limit of large N the approximation becomes increasingly accurate. Note that h^(2)_q(n, m) is finite in the infinite width limit.

The test accuracy of the solution (6)-(7) increases with width. For larger N the interference is stronger, leading to a better approximation of the δ-function and, ultimately, to better accuracy. In this example we clearly see that larger width does not imply a larger number of relevant features. Instead, it introduces redundancy: each frequency appears several times with different random phases, ultimately leading to better wave interference.

We emphasize that the weights (6)-(7) are not iid. At fixed k the weights W^(1)_kn, W^(2)_qk are strongly correlated with each other. This provides a non-trivial yet analytically tractable example of a correlated, non-linear network far away from the Gaussian limit.

The weights (6)-(7) also work for other activation functions, including ReLU; however, 100% accuracy is achieved at higher width compared to the quadratic activation function (more details in Appendix B).
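As a sanity check, the cosine weights with random phases obeying the constraint (12) and a quadratic activation can be implemented in a few lines; the sketch below uses our own variable names, and `p` and `N` are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
p, N = 7, 512                      # modulus and width (arbitrary)

k = np.arange(1, N + 1)[:, None]   # neuron index, shape (N, 1)
n = np.arange(p)[None, :]          # input symbols, shape (1, p)

phi1 = rng.uniform(0, 2 * np.pi, size=(N, 1))
phi2 = rng.uniform(0, 2 * np.pi, size=(N, 1))
phi3 = phi1 + phi2                 # the phase constraint (12)

# First-layer weights act on the two one-hot blocks of the input;
# readout column q carries cos(-2*pi*k*q/p - phi3_k).
W1a = np.cos(2 * np.pi * k * n / p + phi1)
W1b = np.cos(2 * np.pi * k * n / p + phi2)
W2 = np.cos(-2 * np.pi * k * n / p - phi3)

correct = 0
for a in range(p):
    for b in range(p):
        z = (W1a[:, a] + W1b[:, b]) ** 2     # quadratic activation
        h = W2.T @ z                         # h_q for q = 0..p-1
        correct += int(np.argmax(h) == (a + b) % p)
accuracy = correct / p**2
```

The coherent term at q = (n + m) mod p has amplitude N/2, while all competing terms are O(√N) random-phase sums, so even this modest width yields essentially perfect accuracy over all p² input pairs.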
3.2 General modular functions and complexity
The solution (6)-(7) can be easily generalized to represent a general modular function of the form

f(n, m) = f1(n) + f2(m) mod p ,   (17)

where f1, f2 are arbitrary modular functions of a single variable. The generalization becomes obvious once we observe that the proof presented in Section 3 holds verbatim upon replacing n → f1(n) and m → f2(m), leading to a δ-function supported on f1(n) + f2(m) − q = 0 mod p. These solutions are also found by the optimizer, just like in the case of modular addition. More precisely, we claim

Claim II. If the network function has the form (4), then the weights W^(1)_kn and W^(2)_qk solving the modular task f(n, m) = f1(n) + f2(m) mod p are given by

W^(1)_kn = ( cos( (2πk/p) f1(n1) + ϕ^(1)_k ) , cos( (2πk/p) f2(n2) + ϕ^(2)_k ) ) ,   n = (n1, n2)   (18)
[Figure 3 panels, labeled ReLU, Quadratic, GD, AdamW.]
Figure 3: Solutions found by the optimizer. In all cases the distribution of ϕ^(1)_k + ϕ^(2)_k − ϕ^(3)_k is strongly peaked around 0. The solutions found by AdamW are closer to the analytic ones because the phases are more strongly peaked around 0. Note that for solutions found by the optimizer the phases are not iid, which leads to better accuracy.
and Eq. (7). The weights depend on the modular arithmetic task at hand. Furthermore, for this class of tasks the weights in the readout layer are unchanged. A simple example is f(n, m) = n² + m². The activations for this task are presented in Appendix C.
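A sketch of Claim II for f(n, m) = n² + m² mod p, reusing the modular-addition construction with f1, f2 substituted inside the first-layer cosines (variable names and sizes are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
p, N = 7, 512
f1 = lambda n: (n * n) % p          # f1, f2 from Eq. (17): here n^2 and m^2
f2 = lambda m: (m * m) % p

k = np.arange(1, N + 1)[:, None]
n = np.arange(p)[None, :]
phi1 = rng.uniform(0, 2 * np.pi, size=(N, 1))
phi2 = rng.uniform(0, 2 * np.pi, size=(N, 1))
phi3 = phi1 + phi2

W1a = np.cos(2 * np.pi * k * f1(n) / p + phi1)  # Eq. (18), first block
W1b = np.cos(2 * np.pi * k * f2(n) / p + phi2)  # Eq. (18), second block
W2 = np.cos(-2 * np.pi * k * n / p - phi3)      # readout layer unchanged

correct = sum(
    int(np.argmax(W2.T @ (W1a[:, a] + W1b[:, b]) ** 2) == (a * a + b * b) % p)
    for a in range(p) for b in range(p)
)
accuracy = correct / p**2
```

Only the arguments of the first-layer cosines change relative to plain modular addition, which is exactly the content of Claim II.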
Corollary. Given Claim II, a more general modular task f̃(n, m) = F(f1(n) + f2(m)) mod p can be solved, assuming that F is invertible. This is accomplished by modifying the readout layer weights as follows:

W^(2)_qk = cos( −(2πk/p) F⁻¹(q) − ϕ^(3)_k ) .   (19)
This solution approximates δ(f1(n) + f2(m) − F⁻¹(q)), which is equivalent to the δ-function supported on the claimed modular task, δ(F(f1(n) + f2(m)) − q), assuming F⁻¹ is single-valued. Note that the application of F⁻¹ must follow modular arithmetic rules. If F⁻¹ is not single-valued, then the accuracy will be approximately 100%/b, where b is the number of branches. A simple example is f(n, m) = (n + m)². The activations for this task are presented in the Appendix. The analytic solution has accuracy ≈ 50% since F⁻¹(x) = x^{1/2} mod p, which has two branches.
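The branch counting for the modular square root is easy to reproduce; a small pure-Python sketch (the helper name is ours):

```python
p = 7

def mod_sqrt_branches(q, p):
    """All modular square roots r with r^2 = q (mod p)."""
    return [r for r in range(p) if (r * r) % p == q]

branches = {q: mod_sqrt_branches(q, p) for q in range(p)}
# Nonzero quadratic residues have exactly two roots (the two "branches"),
# non-residues have none, and q = 0 has the single root r = 0.
two_branch = [q for q in range(p) if len(branches[q]) == 2]
```

For an odd prime p there are (p − 1)/2 nonzero residues with two branches each, which is why an analytic solution committed to one branch lands at roughly 50% accuracy.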
The architecture (4) can also learn modular multiplication; however, we do not possess an analytic solution for that case.

Broadly speaking, a bivariate modular function is a p × p table where each entry can take values between 0 and p − 1. There are p^{p²} such tables. Clearly, grokking is not possible on the overwhelming majority of such functions, because this set includes placing random integers in each entry of the table. Some modular functions, namely the ones that involve both addition and multiplication and are not of the form f̃, are substantially harder to learn. They require more data and more time, and do not always yield 100% test accuracy after grokking. One particularly interesting example was found by [11]: f(n, m) = n³ + nm² + m, which does not generalize even for α > 0.9, both for transformer and MLP architectures. Some examples are discussed in the Appendix. It is not clear how to predict, given an architecture, which functions will generalize and which will not.
4 Properties of solutions found by gradient descent

4.1 General properties
In this Section we show that optimization of the network (1)-(2) yields a solution that is very close to the one we proposed in the previous Section.

Figure 4: Scaling with width and data. (a) Grokking time vs. the amount of training data for various optimizers. The abrupt change in grokking time is observed at different α. Momentum appears to play a major role both in reducing grokking time and α. (b) Test accuracy as a function of width for the solution found by GD, AdamW, and for the analytic solution (6)-(7). The optimizer can tune phases better than the random uniform distribution in order to ensure better cancellations. The shape of the curves also depends on the amount of data used for training and the number of epochs. Here we took α = 0.5 and trained longer for GD.
As can be seen in Fig. 1, during the optimization the network first overfits the train data. The periodic structure in weights and activations does not form at that point. Train loss slowly gets smaller until either (i) it saturates, leading to a memorizing solution without grokking the problem, or (ii) after a period of slow decrease, it slightly accelerates. It is during that time that grokking and feature formation take place. The test loss is non-monotonic and reaches a local maximum right before grokking happens. In the memorizing phase the test loss never leaves this local maximum. This general behaviour appears to be insensitive to the optimizer used, the loss function, or the modular function (i.e. dataset) in question.
We then show empirically that, independently of the optimizer and the loss function, the features found by optimization in the grokking phase are indeed periodic functions with frequencies 2πk/p, where k = 0, . . . , p − 1. If the width is larger than (p − 1)/2, then multiple copies of these functions are found with different phases. The phases are approximately random and satisfy the constraint (12) approximately, as we show in Fig. 3. Given the simplicity of the setup, the basic explanation for grokking must be quite banal: at some point in training, the only way to decrease training loss is to start learning the "right" features.
4.2 Scaling
Scaling with width and dataset size is presented in Fig. 4. The accuracy of the solution (6)-(7) scales favorably with width. This stems from the simple fact that the destructive interference condition (15) becomes increasingly more accurate with larger N.

[Figure 4 legend entries: GD; GD, momentum 0.9; GD, wd; GD, momentum 1.0; GD, momentum 1.0 + wd; Adam; AdamW; GD; AdamW; Analytic.]

The test accuracy of the trained network also increases with the width, reaching perfect accuracy before the analytic solution does, which is not surprising because the optimizer can tune the individual phases to ensure better performance.
The grokking time scales with the amount of data. Both for GD and AdamW there is a critical amount of data αc such that grokking is possible. The precise value of αc is hard to determine because of the long time scales needed for grokking close to αc. This is clearly seen in Fig. 4. AdamW appears to be more data-efficient than GD; however, it is difficult to rule out the possibility that for α ≈ 0.2 GD requires extremely long time scales to show grokking. The value of αc also depends on how the training set is sampled. One can imagine random sampling or a guided algorithmic choice of training examples. The latter will lead to a smaller αc.
4.3 Dynamics
In this Section we introduce an empirical measure that quantifies feature learning for the modular addition task. To define such a measure we turn to the exact solution (6)-(7). We will utilize the fact that periodic weights are peaked in Fourier space, while random weights are not.

To define the measure of feature learning, we first transform the weights W^(1)_nk to Fourier space with respect to the index n. Denote the transformed weights W̃^(1)_νk. If the weights are periodic, then the Fourier-transformed weights are localized in ν, i.e. for most values of ν we have W̃^(1)_νk ≈ 0, except for a few values determined by the frequency (2π/p)k. At initialization, when the weights are random, the Fourier-transformed weights are delocalized, i.e. they take roughly equal values for any ν.
We introduce a measure of localization known as the inverse participation ratio (IPR). It is routinely used in localization physics [3] as well as in network theory [10]. We define the IPR in terms of the normalized Fourier-transformed weights:

IPR_r(k) = Σ_{ν=1}^{D} |w̃^(1)_νk|^{2r} ,   where   w̃^(1)_νk = W̃^(1)_νk / ( Σ_{ν=1}^{D} (W̃^(1)_νk)² )^{1/2} ,   (20)
and r is a parameter traditionally taken to be 2. It follows from the definition that IPR_1(k) = 1 for any k. Unfortunately, IPR_r(k) is defined per neuron. We would like a single measure for all of the weights in a given layer. Thus, we introduce the average IPR

IPR_r = (1/N) Σ_{k=1}^{N} IPR_r(k) .   (21)
Larger values of IPR_r indicate that the weights are more periodic, while smaller values indicate that the weights are more random.
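This contrast is easy to see numerically; a minimal sketch of the average IPR_2 of (20)-(21) for periodic versus random first-layer weights (sizes, seed, and names are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p, N = 97, 128                     # Fourier index nu runs over D = p values

def avg_ipr2(W):
    """Average IPR_2 of Eqs. (20)-(21) over the rows W[k, :]."""
    Wf = np.abs(np.fft.fft(W, axis=1))                  # |W~_{nu k}|
    w = Wf / np.sqrt(np.sum(Wf**2, axis=1, keepdims=True))
    return float(np.mean(np.sum(w**4, axis=1)))         # r = 2

k = np.arange(1, N + 1)[:, None]
n = np.arange(p)[None, :]
periodic = np.cos(2 * np.pi * k * n / p + rng.uniform(0, 2 * np.pi, (N, 1)))
random_w = rng.normal(size=(N, p))

ipr_periodic = avg_ipr2(periodic)  # localized: two Fourier spikes per row
ipr_random = avg_ipr2(random_w)    # delocalized: power spread over all nu
```

A single-frequency cosine row puts half of its Fourier power into each of two conjugate modes, giving IPR_2 ≈ 0.5, while a random row spreads power over all D modes, giving IPR_2 of order 1/D.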
We plot IPR_2 as a function of time in Fig. 5. It is clear that there is an upward trend from the very beginning of training. The onset of grokking is correlated with a sharp increase in the rate of IPR growth.
5 Conclusions and discussions

5.1 Conclusions
We have presented a simple architecture that exhibits grokking on a variety of modular arithmetic problems. The architecture (4) is simple enough to determine the weights and features that solve modular addition problems analytically, leading to complete interpretability of what was learnt by the model: the network is learning a δ-function represented by a complete set of trigonometric functions with frequencies determined by the base of modular addition; the phases are chosen to ensure that waves concentrated on m + n = q mod p interfere constructively.

As suggested in some literature before, we reiterate that grokking is likely to be intimately connected to feature learning. In particular, random feature models such as infinitely-wide neural networks (in the NTK regime) do not exhibit grokking, at least on tasks that involve modular functions. In addition, Ref. [7] argued that grokking is due to the competition between encoder and decoder. While that is certainly true in their model, in the present case there is no learnable encoder but grokking is still present. In our minimal setup, the simplest explanation for grokking is that once the training loss reaches a certain value, the only way to further minimize it is to start learning the right features.
Figure 5: Inverse participation ratio. IPR plotted against the dynamics (under AdamW) of train and test accuracy. Empirically, we see 4 regimes: (i) early training, when IPR grows linearly and slowly; (ii) a transition from slow linear growth to fast linear growth; this transition period coincides with grokking; (iii) fast linear growth, which starts after 100% test accuracy is reached; (iv) saturation, once the weights have become periodic. The dashed line indicates IPR_2 for the exact solution (6)-(7). The gap between the two indicates that even in the final solution there is quite a bit of noise, leading to some mild delocalization in Fourier space. More training and more data help to reduce the gap.
5.2 Discussions
We close with a discussion of open problems and directions.

Different modular functions clearly fit into different complexity classes: (i) functions that can be learnt easily; (ii) functions that can be learnt with a lot of data and training time; and (iii) functions that cannot be learnt at all (at least within the class of architectures we and [11] have considered). It would be interesting to (1) define the notion of complexity rigorously as a computable quantity and (2) construct architectures/optimizers that can learn more complex modular functions (or argue that it cannot be done).

A neural network can learn a smooth approximation to complicated modular operations, such as modular square root and modular logarithm. It would be interesting to determine whether these approximations provide any practical gain over known algorithms that perform these operations, as well as to enable the networks to operate over large numbers.

The critical amount of data needed for generalization, αc, is likely to be computable as well, and is a measure of the complexity of a modular function. We would like to have an expression for the absolute minimal value of αc (i.e. minimized over all possible ML methods). This value is also an implicit function of the modulus p, and modular functions with larger modulus are likely simpler, since we find empirically that αc is a decreasing function of p. The value of αc further depends on how the training set is sampled from the entire dataset; an appropriate choice of sampling method may thus improve data efficiency.

While grokking happens in the very simple setting described here, adaptive methods and regularization improve both speed and data efficiency. It might be possible to characterize these improvements quantitatively.

Modular functions of many variables can be grokked as well and, in some cases, the corresponding analytic solution can be constructed. It is possible that the analytic solution can inform the type of architecture one should be using, e.g., in applications of deep learning to cryptography.
[Figure 5 legend: IPR; IPR exact; Train accuracy; Test accuracy.]
The presented solutions only work for a single-hidden-layer neural network. To quantify the role of depth,
we would like to have examples of algorithmic tasks that require a deeper architecture. For instance, it is possible that deep convolutional architectures, given an appropriate algorithmic dataset with hierarchical structure, would admit a solution in terms of wavelets rather than Fourier components [2].

In real-world datasets and tasks that require feature learning, it is possible that grokking is happening but the jumps in generalization after learning a new feature may be so small that we perceive a continuous learning curve. To elucidate this point further, it is important to construct a realistic model of datasets and tasks with a controllable amount of hierarchical structure. More broadly, it would be very interesting to characterize grokking in terms that are not specific to a particular problem or a particular model, and to establish whether it occurs in more traditional ML settings.
Given the simplicity of our model (4), loss function (MSE), and optimization algorithm (vanilla GD), it is plausible that some aspects of the training dynamics, not just the solution at the end of training, can be treated analytically. As the training and test losses show peculiar dynamics, it would be interesting to understand the structure of the loss landscape to explain the dynamics, in particular what happens at the onset of generalization and why it is so abrupt. Perhaps the methods described in [12], where the feature kernel and the neural tangent kernel can be computed analytically throughout training, will take a particularly simple form in this setting.

There are certainly many other directions that the reader may be interested in exploring.
Acknowledgments and Disclosure of Funding

Discussions with N. Ardalani, Y. Bahri, M. Barkeshli, L. Bottou, T. Can, F. Charton, D. Doshi, S. Ganguli, P. Glorioso, B. Hanin, T. He, I. Molybog, M. Paul, D. Roberts, A. Saxe, D. Schwab and S. Yaida are acknowledged. I am particularly grateful to S. Yaida, D. Roberts, and B. Hanin for encouraging, detailed and insightful feedback on the manuscript. A.G.'s work at the University of Maryland was supported in part by NSF CAREER Award DMR-2045181, the Sloan Foundation and the Laboratory for Physical Sciences through the Condensed Matter Theory Center.
References

[1] Boaz Barak, Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Eran Malach, and Cyril Zhang. Hidden progress in deep learning: SGD learns parities near the computational limit. arXiv preprint arXiv:2207.08799, 2022.
[2] Sihao Cheng and Brice Ménard. How to quantify fields or textures? A guide to the scattering transform. arXiv preprint arXiv:2112.01288, 2021.
[3] Steven M. Girvin and Kun Yang. Modern Condensed Matter Physics. Cambridge University Press, 2019.
[4] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31, 2018.
[5] Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. Advances in Neural Information Processing Systems, 32, 2019.
[6] Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Sohl-Dickstein, and Guy Gur-Ari. The large learning rate phase of deep learning: the catapult mechanism. arXiv preprint arXiv:2003.02218, 2020.
[7] Ziming Liu, Ouail Kitouni, Niklas Nolte, Eric J. Michaud, Max Tegmark, and Mike Williams. Towards understanding grokking: An effective theory of representation learning. arXiv preprint arXiv:2205.10343, 2022.
[8] Ziming Liu, Eric J. Michaud, and Max Tegmark. Omnigrok: Grokking beyond algorithmic data. arXiv preprint arXiv:2210.01117, 2022.
[9] Neel Nanda and Tom Lieberum. A mechanistic interpretability analysis of grokking. Alignment Forum, Aug 2022. URL https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking.
[10] Romualdo Pastor-Satorras and Claudio Castellano. Distinct types of eigenvector localization in networks. Scientific Reports, 6(1):1–9, 2016.
[11] Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin, and Vedant Misra. Grokking: Generalization beyond overfitting on small algorithmic datasets. arXiv preprint arXiv:2201.02177, 2022.
[12] Daniel A. Roberts, Sho Yaida, and Boris Hanin. The Principles of Deep Learning Theory. arXiv preprint arXiv:2106.10165, 2021.
[13] Mei Song, Andrea Montanari, and P. Nguyen. A mean field view of the landscape of two-layers neural networks. Proceedings of the National Academy of Sciences, 115(33):E7665–E7671, 2018.
[14] Vimal Thilak, Etai Littwin, Shuangfei Zhai, Omid Saremi, Roni Paiss, and Joshua Susskind. The slingshot mechanism: An empirical study of adaptive optimizers and the grokking phenomenon. arXiv preprint arXiv:2206.04817, 2022.
[15] Greg Yang and Edward J. Hu. Feature learning in infinite-width neural networks. arXiv preprint arXiv:2011.14522, 2020.
[16] Bojan Žunkovič and Enej Ilievski. Grokking phase transitions in learning local rules with gradient descent. arXiv preprint arXiv:2210.15435, 2022.
A Complex network
A simpler network that solves the modular addition problem can be phrased using complex weights. This structure may also be more familiar to physicists. The complex solution takes the form

W^(1)_kn = ( e^{ 2πi(k/p) n1 + iϕ^(1)_k } , e^{ 2πi(k/p) n2 + iϕ^(2)_k } ) ,   n = (n1, n2)   (22)

W^(2)_qk = e^{ −2πi(k/p) q − iϕ^(3)_k } .   (23)

We can take a quadratic activation function that simply squares the preactivations. The first preactivation and activation are given by

h^(1)_k(n, m) = e^{ 2πi(k/p) n + iϕ^(1)_k } + e^{ 2πi(k/p) m + iϕ^(2)_k } ,   (24)

z^(1)_k(n, m) = e^{ 2πi(k/p) 2n + 2iϕ^(1)_k } + e^{ 2πi(k/p) 2m + 2iϕ^(2)_k } + 2 e^{ 2πi(k/p)(n + m) + i(ϕ^(1)_k + ϕ^(2)_k) } .   (25)

The final activation is given by

h^(2)_q(n, m) = Σ_{k=1}^{N} [ e^{ 2πi(k/p)(2n − q) + i(2ϕ^(1)_k − ϕ^(3)_k) } + e^{ 2πi(k/p)(2m − q) + i(2ϕ^(2)_k − ϕ^(3)_k) } + 2 e^{ 2πi(k/p)(n + m − q) + i(ϕ^(1)_k + ϕ^(2)_k − ϕ^(3)_k) } ] .   (26)-(27)

Similarly, setting

ϕ^(1)_k + ϕ^(2)_k − ϕ^(3)_k = 0   (28)

yields constructive interference for the output supported on (n + m − q) = 0 mod p.
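The complex construction can be verified numerically just like the real one; a sketch with our own variable names, squaring the complex preactivation and reading out the real part of h^(2):

```python
import numpy as np

rng = np.random.default_rng(0)
p, N = 7, 256

k = np.arange(1, N + 1)[:, None]
n = np.arange(p)[None, :]
phi1 = rng.uniform(0, 2 * np.pi, size=(N, 1))
phi2 = rng.uniform(0, 2 * np.pi, size=(N, 1))
phi3 = phi1 + phi2                   # the constraint (28)

U1a = np.exp(2j * np.pi * k * n / p + 1j * phi1)   # Eq. (22), n-block
U1b = np.exp(2j * np.pi * k * n / p + 1j * phi2)   # Eq. (22), m-block
U2 = np.exp(-2j * np.pi * k * n / p - 1j * phi3)   # Eq. (23)

correct = 0
for a in range(p):
    for b in range(p):
        z = (U1a[:, a] + U1b[:, b]) ** 2           # square the preactivation
        h = (U2.T @ z).real                        # Eqs. (26)-(27)
        correct += int(np.argmax(h) == (a + b) % p)
accuracy = correct / p**2
```

The coherent cross term contributes 2N (purely real) at q = (n + m) mod p, while the remaining terms perform O(√N) random walks in the complex plane.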
B Other activations
Remarkably, the weights (6)-(7) also solve the modular addition problem for networks (1)-(2) with other activation functions. That is, the function

f(x) = (1/D) Σ^{N} W^(2) φ( W^(1) x )   (29)

approximates the δ-function concentrated on the modular addition problem. This also holds for the generalizations discussed in the main text. We do not have an analytic proof of this fact, so we provide evidence in Fig. 6.
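One can probe this by freezing the weights (6)-(7) and swapping only the activation; a sketch (our own code; per Fig. 6, non-quadratic activations need larger width, so N is taken fairly large here):

```python
import numpy as np

rng = np.random.default_rng(0)
p, N = 7, 4096                       # ReLU needs a larger width than quadratic

k = np.arange(1, N + 1)[:, None]
n = np.arange(p)[None, :]
phi1 = rng.uniform(0, 2 * np.pi, size=(N, 1))
phi2 = rng.uniform(0, 2 * np.pi, size=(N, 1))
phi3 = phi1 + phi2

W1a = np.cos(2 * np.pi * k * n / p + phi1)
W1b = np.cos(2 * np.pi * k * n / p + phi2)
W2 = np.cos(-2 * np.pi * k * n / p - phi3)

def accuracy(act):
    """Test accuracy of the frozen weights with activation `act`."""
    hits = sum(
        int(np.argmax(W2.T @ act(W1a[:, a] + W1b[:, b])) == (a + b) % p)
        for a in range(p) for b in range(p)
    )
    return hits / p**2

acc_quadratic = accuracy(lambda u: u**2)
acc_relu = accuracy(lambda u: np.maximum(u, 0.0))
```

Intuitively, the even part of ReLU still contains a cos(a + b) harmonic (with a smaller coefficient than the quadratic case), so the same phase constraint produces the coherent peak, just with a weaker signal-to-noise ratio.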
C Some other modular functions
We show a few examples of the modular functions for which the exact solutions discussed in the main text apply.

• f(n, m) = n² + m² mod p. A full solution is available and gives 100% accuracy. The first-layer weights are given by

W^(1)_kn = ( cos( (2πk/p) n1² + ϕ^(1)_k ) , cos( (2πk/p) n2² + ϕ^(2)_k ) ) ,   n = (n1, n2) ,   (30)

while the second-layer weights remain unmodified.

• f(n, m) = (n + m)² mod p. The weights in the first layer are unmodified, while the weights in the second layer are given by

W^(2)_qk = cos( −(2πk/p) q^{1/2} − ϕ^(3)_k ) .   (31)

Note that q^{1/2} must be understood in the modular sense, that is, r = q^{1/2} is a solution to r² = q mod p.
1022
+
1023
+ Figure 6: Accuracy for various activation functions. Test accuracy vs. width for different activation
1024
+ functions for f(n, m) = n+m mod p. The weights are given by (6)-(7). GELU activation eventually
1025
+ reaches 100% accuracy, but at very large width.
1026
+ • f(n, m) = nm. We do not have an analytic solution. The activations are presented in Fig. 9
1027
+ • f(n, m) = n2 + m2 + nm mod p. We do not have an analytic solution. This generalization
1028
+ on this function never reaches 100% unless most of the data is utilized, α > 0.95. See the
1029
+ learning curve in Fig. 10. Note that although generalization accuracy is very high: ≈ 97%,
1030
+ there is a large gap between train and test loss. This is to be contrasted with Fig. 2, where
1031
+ the gap disappears over time.
1032
+ • f(n, m) = n3 + nm2 + m. We do not have an analytic solution. The generalization never
1033
+ rises above 1%. See the learning curve in Fig. 10.
1034
+ We show the corresponding activations on Fig. 7 - Fig. 9
1035
+ 14
1036
+
1037
+ Quadratic
1038
+ Quartic
1039
+ Absolute value
1040
+ ReLU
1041
+ GELUFigure 7: Top: Preactivations h(1)
1042
+ k
1043
+ and h(2)
1044
+ q
1045
+ found by the AdamW for f(n, m) = (n + m)2 mod p.
1046
+ Note that h(1)
1047
+ k
1048
+ is the same as for f(n, m) = (n + m) mod p as expected. Bottom: Analytic solution
1049
+ for the same function. Note that since square root is not invertible – because it has two branches
1050
+ – the accuracy of analytic solution is ≈ 51%. It can be clearly seen in the form of h(2)
1051
+ q : there are
1052
+ 4 activation lines in the top plots and only 2 in the bottom. Each pair corresponds to a branch of
1053
+ square root. The noisy preactivations h(2)
1054
+ q
1055
+ correspond to the values of q that cannot be represented as
1056
+ (n + m)2 mod p.
Figure 8: Top: Preactivations h^(1)_k and h^(2)_q found by AdamW for f(n, m) = n² + m² mod p. Bottom: Analytic solution for the same function. Both solutions have 100% accuracy.
Figure 9: Preactivations h^(1)_k and h^(2)_q found by AdamW for f(n, m) = nm mod p.
Figure 10: The learning curves for f(n, m) = n² + m² + nm mod p and f(n, m) = n³ + nm² + m mod p at α = 0.73 and α = 0.9, respectively. Note the gap between train and test loss in the former case. Although test accuracy is almost 100%, it is clear that the network did not grok all the right features.
L9E0T4oBgHgl3EQf0AI4/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
MtAzT4oBgHgl3EQfV_zH/content/tmp_files/2301.01295v1.pdf.txt ADDED
@@ -0,0 +1,2891 @@
arXiv:2301.01295v1 [math.CV] 3 Jan 2023

NEVANLINNA THEORY ON COMPLETE KÄHLER MANIFOLDS WITH NON-NEGATIVE RICCI CURVATURE

XIANJING DONG

Abstract. This paper considers the equidistribution of meromorphic mappings from a complete Kähler manifold with non-negative Ricci curvature into a complex projective manifold. When the domain manifold is of maximal volume growth, we establish a second main theorem in Nevanlinna theory with a refined error term. As an important result, we prove a sharp defect relation. Furthermore, we apply our main theorems to problems on the propagation of algebraic dependence, and finally we obtain some unicity results for dominant meromorphic mappings.

2010 Mathematics Subject Classification. 32H30; 32H04.
Key words and phrases. Nevanlinna theory; second main theorem; defect relation; algebraic dependence; unicity theorem.

Contents

1. Introduction
   1.1. Motivation
   1.2. Main results
2. Preliminaries
   2.1. Curvatures in differential geometry
   2.2. Comparison theorems on Riemannian manifolds
   2.3. Existence of positive global Green functions
   2.4. Positivity, ampleness and bigness for Q-line bundles
   2.5. Poincaré-Lelong formula and Jensen-Dynkin formula
3. Nevanlinna theory on complete Kähler manifolds
   3.1. Nevanlinna's functions
   3.2. Calculus lemma and logarithmic derivative lemma
   3.3. Second main theorem and defect relation
   3.4. The case for singular divisors
4. Application to algebraic dependence problems
   4.1. Consequences of Theorems 3.1 and 3.10
   4.2. Propagation of algebraic dependence
   4.3. Unicity theorems
   4.4. Targets are compact Riemann surfaces and $\mathbb P^n(\mathbb C)$
References
1. Introduction

1.1. Motivation.

In 1972, Carlson-Griffiths [9] devised an equidistribution theory of holomorphic mappings from $\mathbb C^m$ into a complex projective manifold. This theory is undoubtedly a great advance in the study of Nevanlinna theory [37], and it was further generalized by Griffiths-King [22] to affine algebraic varieties. Many famous scholars made efforts towards an extension of the Carlson-Griffiths theory. For example, W. Stoll [45, 46] extended it to parabolic manifolds, and Lang-Cherry [33] carried it over to a covering space of $\mathbb C^m$. For more extensions and developments we refer to J. Noguchi [34, 35], M. Ru [38], B. Shiffman [39], F. Sakai [40, 41], Wong-Stoll [49], Wong-Wong [50] and Yang-Ru [51], etc.

The study of Nevanlinna theory on Kähler manifolds via Brownian motion (initiated by T. K. Carne [12]) can be traced back to the work of A. Atsuji [1] in 1995. Later, Atsuji wrote a series of papers (cf. [2, 3, 4, 5]) studying the value distribution of meromorphic functions on a complete Kähler manifold. His outstanding contribution to Nevanlinna theory is the second main theorem for meromorphic functions on a complete Kähler manifold with non-positive sectional curvature. Along the line of Atsuji, X. J. Dong [17] further extended the notion of Nevanlinna's functions and generalized the Carlson-Griffiths theory from $\mathbb C^m$ to complete Kähler manifolds with non-positive sectional curvature. For more details on Nevanlinna theory via Brownian motion we refer to Dong-He-Ru [16], Dong-Yang [18] and X. J. Dong [19], etc.

Although Nevanlinna theory on Kähler manifolds has been studied for a long time, little is known about this theory when the domain is not non-positively curved, in particular when the domain has non-negative Ricci curvature. In fact, this is a long-standing problem. Referring to [42], we see that such manifolds exist in abundance. Motivated by this, we shall focus on the Carlson-Griffiths theory on complete Kähler manifolds with non-negative Ricci curvature, and we would like to derive a sharp defect relation in Nevanlinna theory.

Another motivation of this paper is the propagation problem of algebraic dependence of meromorphic mappings, first studied by L. Smiley [43] in 1979. This problem has aroused widespread interest among scholars; we refer to Y. Aihara [6, 7, 8], Dulock-Ru [14], S. Drouilhet [15], H. Fujimoto [20], S. Ji [24, 25] and W. Stoll [47], etc. So far, the best result on propagation theorems may be due to Y. Aihara [8], who obtained a beautiful criterion for the propagation of algebraic dependence of dominant meromorphic mappings from an analytic finite covering space of $\mathbb C^m$ into a complex projective manifold. Aihara's criterion improved that of W. Stoll [47]; in fact, he used the estimate for the ramification term obtained by J. Noguchi [34]. In Aihara's criterion, however, there still remain some unnatural restrictions, such as "fibers separated by mappings", etc.

In this paper, we apply our second main theorem to these propagation problems. The purpose is to prove propagation theorems for algebraic dependence of dominant meromorphic mappings on a complete Kähler manifold. We not only generalize Aihara's criterion to complete Kähler manifolds, but also remove restrictions such as the "ramification estimate" and "fibers separated by mappings" appearing in Aihara's criterion.
1.2. Main results.

In Atsuji [5] and Dong [17], the Nevanlinna functions (i.e., the characteristic function, proximity function and counting function) are defined on a geodesic ball of a complete Kähler manifold. Thus, in order to establish a second main theorem, one must estimate the local Green functions of the geodesic balls. A reasonable estimate was obtained under a non-positive sectional curvature condition, but that estimate is not accurate enough; this is why a growth condition had to be assumed in their defect relations. Moreover, their methods fail for complete Kähler manifolds with non-negative Ricci curvature, because one does not seem able to obtain a suitable estimate (in terms of integral forms) of the local Green functions of the geodesic balls. This is the main difficulty in the study of Nevanlinna theory on a complete Kähler manifold with non-negative Ricci curvature. To overcome this difficulty, our strategy is to use the global Green function instead of local Green functions. Roughly speaking, we use the global Green function to define relatively compact r-domains which exhaust the manifold, and we then estimate the local Green functions of those domains and the harmonic measures of their boundaries. Defining the Nevanlinna functions on r-domains and using the approach of this paper, we establish the Carlson-Griffiths theory on this class of manifolds.

Let M be a non-compact complete Kähler manifold with maximal volume growth and non-negative Ricci curvature, and let X be a complex projective manifold of complex dimension not greater than that of M. A meromorphic mapping $f: M \to X$ is said to be differentiably non-degenerate if the rank of the differential $df$ equals $\dim_{\mathbb C} X$ at some point of $M \setminus I(f)$, where $I(f)$ is the indeterminacy set of f. In this case, we also call f a dominant meromorphic mapping. Next, let us state the main results of this paper; some notations will be introduced later.

We obtain the following second main theorem:

Theorem I. Let $D \in |L|$ be a divisor of simple normal crossing type, where L is a positive line bundle over X. Let $f: M \to X$ be a differentiably non-degenerate meromorphic mapping. Then for any $\delta > 0$, there exists a subset $E_\delta \subseteq (0, \infty)$ of finite Lebesgue measure such that
$$T_f(r, L) + T_f(r, K_X) + T(r, \mathscr R) \le \overline N_f(r, D) + O\big(\log^+ T_f(r, L) + \delta \log r\big)$$
holds for all $r > 0$ outside $E_\delta$.

Later, we will note that $T_f(r, L) \ge O(\log r)$ as $r \to \infty$ for any nonconstant meromorphic mapping, which means that the error term $\delta \log r$ in Theorem I is refined. This yields a defect relation:
$$\bar\delta_f(D) \le \left[\frac{c_1(K_X^*)}{c_1(L)}\right] - \liminf_{r \to \infty} \frac{T(r, \mathscr R)}{T_f(r, L)} \le \left[\frac{c_1(K_X^*)}{c_1(L)}\right].$$
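As a quick arithmetic illustration of the right-hand bound (an example added here, not from the paper): for the model target $X = \mathbb P^n(\mathbb C)$ with $L = \mathcal O_{\mathbb P^n}(d)$, one has $K_X^* = \mathcal O(n+1)$, so $[c_1(K_X^*)/c_1(L)] = (n+1)/d$; for $n = d = 1$ this recovers Nevanlinna's classical defect bound 2.

```python
from fractions import Fraction

def defect_bound(n: int, d: int) -> Fraction:
    # [c1(K_X^*)/c1(L)] for X = P^n(C) and L = O(d), using K_X^* = O(n + 1)
    return Fraction(n + 1, d)

print(defect_bound(1, 1))  # 2: the classical defect bound on P^1
print(defect_bound(2, 3))  # 1: degree-3 divisors in P^2
```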
We give an application of Theorem I to the propagation problems of algebraic dependence. Let $S = S_1 \cup \cdots \cup S_q$, where $S_1, \cdots, S_q$ are hypersurfaces of M such that $\dim_{\mathbb C} S_i \cap S_j \le \dim_{\mathbb C} M - 2$ for $i \ne j$. Let $D = D_1 + \cdots + D_q$, where $D_1, \cdots, D_q \in |L|$ are such that D has simple normal crossings; here L is an ample line bundle over X. Now, let $f_1, \cdots, f_l : M \to X$ be l dominant meromorphic mappings. Assume that there are integers $k_1, \cdots, k_q$ (possibly $+\infty$) such that
$$S_j = \mathrm{Supp}_{k_j} f_i^* D_j, \qquad 1 \le i \le l, \quad 1 \le j \le q.$$
Set $k_0 = \max\{k_1, \cdots, k_q\}$. Let $\Sigma$ be an indecomposable hypersurface of $X_1 \times \cdots \times X_l$. Moreover, define a Q-line bundle $L_0 \in \mathrm{Pic}(X) \otimes \mathbb Q$ by
$$L_0 = \Big(\sum_{j=1}^q \frac{k_j}{k_j + 1}\Big) L \otimes \Big(-\frac{\tilde\gamma\, l\, k_0}{k_0 + 1} F_0\Big),$$
where $F_0$ is some big line bundle over X and $\tilde\gamma$ is a positive rational number depending only on $\Sigma$ and $F_0$.
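To make the Q-coefficients in $L_0$ concrete, here is a small sketch (the data $k_j$, $l$, $\tilde\gamma$ below are hypothetical, chosen only for illustration; `None` encodes $k_j = +\infty$, for which $k_j/(k_j+1)$ is read as 1):

```python
from fractions import Fraction

def coeff(k):
    # k/(k + 1) as a rational number; k = None stands for k = +infinity
    return Fraction(1) if k is None else Fraction(k, k + 1)

# hypothetical data: q = 3 truncation levels, l = 2 mappings, gamma = 1/3
ks = [1, 2, 3]
l, gamma = 2, Fraction(1, 3)
k0 = max(ks)

cL = sum(coeff(k) for k in ks)   # coefficient of L in L_0
cF = -gamma * l * coeff(k0)      # coefficient of F_0 in L_0
print(cL, cF)  # 23/12 -1/2
```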
Employing Theorem I, we obtain several propagation theorems for algebraic dependence in this paper. For example, we show:

Theorem II. Let $f_1, \cdots, f_l : M \to X$ be dominant meromorphic mappings given as above. Assume that $f_1, \cdots, f_l$ are $\Sigma$-related on S. If $L_0 \otimes K_X$ is big, then $f_1, \cdots, f_l$ are $\Sigma$-related on M.

The propagation theorems for algebraic dependence apply to unicity problems. For example, Theorem II yields a unicity theorem:

Theorem III. Let $f_1, f_2 : M \to \mathbb P^1(\mathbb C)$ be nonconstant holomorphic mappings, and let $a_1, \cdots, a_q$ be distinct points in $\mathbb P^1(\mathbb C)$. We have:
(a) Assume that $\mathrm{Supp} f_1^* a_j = \mathrm{Supp} f_2^* a_j$ for all j. If $q \ge 5$, then $f_1 \equiv f_2$;
(b) Assume that $\mathrm{Supp}_1 f_1^* a_j = \mathrm{Supp}_1 f_2^* a_j$ for all j. If $q \ge 7$, then $f_1 \equiv f_2$.
2. Preliminaries

2.1. Curvatures in differential geometry.

In order to simplify formulas, the Einstein summation convention is used.

2.1.1. Curvatures of Riemannian manifolds.

Let (M, g) be a Riemannian manifold with Laplace-Beltrami operator $\Delta$, and let $\nabla$ be the Levi-Civita connection of g on M. Recall that the Riemannian curvature tensor is defined by
$$R = R_{ijkl} \, dx^i \otimes dx^j \otimes dx^k \otimes dx^l,$$
where $\partial_j = \partial/\partial x^j$ and $R_{ijkl} = g_{lp} R^p_{ijk}$ with
$$\Gamma^k_{ij} = \frac12 g^{kl}\left(\partial_j g_{il} + \partial_i g_{jl} - \partial_l g_{ij}\right), \qquad R^l_{ijk} = \partial_k \Gamma^l_{ji} - \partial_j \Gamma^l_{ki} + \Gamma^l_{kp}\Gamma^p_{ji} - \Gamma^l_{jp}\Gamma^p_{ki}.$$
For any point $x \in M$ and every tangent 2-plane $\sigma$ to M at x, the sectional curvature of $\sigma$ is defined by
$$K(X, Y) = \frac{R(X, Y, Y, X)}{\|X \wedge Y\|^2},$$
where $X, Y \in T_x M$ is a basis of $\sigma$. Define the Ricci curvature tensor by
$$\mathrm{Ric}(X, Y) = R(X, e_j, e_j, Y),$$
where $\{e_1, \cdots, e_n\}$ is an orthonormal basis of $T_x M$. We can write Ric as
$$\mathrm{Ric} = R_{ij} \, dx^i \otimes dx^j, \qquad R_{ij} = R^p_{ipj}.$$
For $0 \ne X \in T_x M$, the Ricci curvature at x in the direction X is defined by $\mathrm{Ric}(X, X)/\|X\|^2$. Moreover, the scalar curvature of M is defined by
$$s = g^{ij} R_{ij},$$
which is the trace of the Ricci curvature tensor.
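The formulas above can be checked numerically on a concrete surface. The sketch below (an illustration added here, not part of the paper) computes $\Gamma^k_{ij}$ and $R^l_{ijk}$ by central finite differences for the round unit 2-sphere $g = d\theta^2 + \sin^2\theta\, d\varphi^2$ and recovers its constant sectional curvature 1 with the index conventions stated above:

```python
import math

# Round unit 2-sphere in coordinates x = (theta, phi): g = diag(1, sin^2 theta).
def metric(x):
    th, ph = x
    return [[1.0, 0.0], [0.0, math.sin(th) ** 2]]

def inverse2(g):
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return [[g[1][1] / det, -g[0][1] / det],
            [-g[1][0] / det, g[0][0] / det]]

def dmetric(x, k, h=1e-6):
    # elementwise central difference d g_ij / d x^k
    xp, xm = list(x), list(x)
    xp[k] += h; xm[k] -= h
    gp, gm = metric(xp), metric(xm)
    return [[(gp[i][j] - gm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

def christoffel(x):
    # Gamma[k][i][j] = (1/2) g^{kl} (d_j g_il + d_i g_jl - d_l g_ij)
    ginv = inverse2(metric(x))
    dg = [dmetric(x, k) for k in range(2)]
    return [[[0.5 * sum(ginv[k][l] * (dg[j][i][l] + dg[i][j][l] - dg[l][i][j])
                        for l in range(2))
              for j in range(2)] for i in range(2)] for k in range(2)]

def dchristoffel(x, k, h=1e-4):
    xp, xm = list(x), list(x)
    xp[k] += h; xm[k] -= h
    Gp, Gm = christoffel(xp), christoffel(xm)
    return [[[(Gp[a][b][c] - Gm[a][b][c]) / (2 * h) for c in range(2)]
             for b in range(2)] for a in range(2)]

def riemann_up(x):
    # R^l_{ijk} = d_k Gamma^l_{ji} - d_j Gamma^l_{ki}
    #           + Gamma^l_{kp} Gamma^p_{ji} - Gamma^l_{jp} Gamma^p_{ki}
    G = christoffel(x)
    dG = [dchristoffel(x, k) for k in range(2)]
    return [[[[dG[k][l][j][i] - dG[j][l][k][i]
               + sum(G[l][k][p] * G[p][j][i] - G[l][j][p] * G[p][k][i]
                     for p in range(2))
               for k in range(2)] for j in range(2)] for i in range(2)]
            for l in range(2)]

x = (0.7, 0.3)
g = metric(x)
R = riemann_up(x)
# Sectional curvature of the coordinate 2-plane: K = g_{1l} R^l_{221} / det(g)
num = sum(g[0][l] * R[l][1][1][0] for l in range(2))
K = num / (g[0][0] * g[1][1] - g[0][1] ** 2)
print(K)
```

For the unit sphere the printed value is approximately 1, as expected.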
2.1.2. Curvatures of Kähler manifolds.

Now we turn to Hermitian manifolds. Let (M, h) be a Hermitian manifold with Hermitian connection $\tilde\nabla$. Note that M can be regarded as a Riemannian manifold with Riemannian metric $g = \Re h$, where g is extended linearly over $\mathbb C$. We have the Levi-Civita connection $\nabla$ on M as a Riemannian manifold. Also, we extend $\nabla$ linearly over $\mathbb C$ to $T_{\mathbb C}M = TM \otimes \mathbb C$. In general $\tilde\nabla \ne \nabla$, since the torsion tensor of $\tilde\nabla$ may not vanish for a general Hermitian manifold. Therefore, the Laplacian $\tilde\Delta$ of $\tilde\nabla$ does not coincide with the Laplace-Beltrami operator $\Delta$ of $\nabla$. However, $\tilde\nabla = \nabla$ holds when M is a Kähler manifold. Consequently,
$$\Delta = \tilde\Delta = 2 h^{i\bar j} \frac{\partial^2}{\partial z^i \partial \bar z^j}$$
acting on $C^2$-class functions when M is Kählerian, where $(h^{i\bar j})$ is the inverse of the metric matrix $(h_{i\bar j})$ of h.

Suppose that M is a Kähler manifold with Kähler metric h. The Kählerness of h means that $\langle JX, JY\rangle = \langle X, Y\rangle$ and $\nabla_X(JY) = J\nabla_X Y$ for $X, Y \in TM$, where J is the canonical complex structure of M. Extend J and R linearly over $\mathbb C$ to $T_{\mathbb C}M = T^{1,0}_M \oplus T^{0,1}_M$. Let $X, Y \in T^{1,0}_{M,x}$. The holomorphic bisectional curvature $H(X, Y)$ of M at x in the holomorphic directions X, Y is defined by
$$H(X, Y) = \frac{R(X, \overline X, Y, \overline Y)}{\|X \wedge Y\|^2}.$$
When $X = Y$, we call $H(X) = H(X, X)$ the holomorphic sectional curvature of M at x in the holomorphic direction X. The Ricci curvature tensor $\mathrm{Ric}^{\mathbb C}$ of M is defined by
$$\mathrm{Ric}^{\mathbb C}(X, Y) = R(X, \overline Y, e_j, \overline e_j),$$
where $\{e_1, \cdots, e_m\}$ is an orthonormal basis of $T^{1,0}_{M,x}$. Using the Kählerness of M, $\mathrm{Ric}^{\mathbb C}$ can be computed as $\mathrm{Ric}^{\mathbb C} = R_{i\bar j}\, dz^i \otimes d\bar z^j$, where
$$R_{i\bar j} = -\frac{\partial^2}{\partial z^i \partial \bar z^j} \log \det(h_{s\bar t}).$$
Naturally, this defines the Ricci form
$$\mathscr R = -dd^c \log \det(h_{s\bar t}) = \frac{\sqrt{-1}}{2\pi} R_{i\bar j}\, dz^i \wedge d\bar z^j,$$
where
$$d = \partial + \bar\partial, \qquad d^c = \frac{\sqrt{-1}}{4\pi}(\bar\partial - \partial), \quad \text{so that} \quad dd^c = \frac{\sqrt{-1}}{2\pi}\partial\bar\partial.$$
A well-known theorem of S. S. Chern asserts that $\mathscr R$ defines a cohomology class in the de Rham cohomology group $H^2_{DR}(M, \mathbb R)$. We can define the Ricci curvature at x in the holomorphic direction X by $\mathrm{Ric}^{\mathbb C}(X, X)/\|X\|^2$. Moreover, we have the scalar curvature $s_{\mathbb C}$ of M defined by
$$s_{\mathbb C} = h^{i\bar j} R_{i\bar j} = -\frac12 \Delta \log \det(h_{s\bar t}). \tag{1}$$
A comparison gives (cf. e.g. [52])
$$s = 2 s_{\mathbb C}. \tag{2}$$
2.2. Comparison theorems on Riemannian manifolds.

Let M be a Riemannian manifold. Fix a point $o \in M$ and define
$$\rho(x) = \mathrm{dist}(o, x), \qquad \forall x \in M.$$
It is well known that $\rho$ is Lipschitz continuous and thus differentiable almost everywhere. Let $\mathrm{Cut}(o)$ be the cut locus of o; then $\mathrm{Cut}(o)$ has measure zero and $\rho$ is smooth on $M \setminus \mathrm{Cut}(o)$.

A space form is a simply connected complete Riemannian manifold of constant sectional curvature. Let $\tilde M$ be a space form and $\tilde\rho(\tilde x)$ be the distance function of $\tilde x \in \tilde M$ from a fixed point $\tilde o \in \tilde M$. Let $\tilde\Delta$ be the Laplace-Beltrami operator on $\tilde M$. The Laplacian comparison theorem (cf. e.g. [44]) states:

Theorem 2.1 (Laplacian comparison theorem). Let M be an n-dimensional complete Riemannian manifold with $\mathrm{Ric} \ge -(n-1)K$, where K is a non-negative constant. Let $\tilde M$ be an n-dimensional space form of sectional curvature $-K$. Assume that $x \in M$ and $\tilde x \in \tilde M$ are such that $\rho(x) = \tilde\rho(\tilde x)$. If x is not a cut point of o, then
$$\Delta\rho(x) \le \tilde\Delta\tilde\rho(\tilde x).$$

By computing the Jacobi field along a minimal geodesic, one can evaluate $\tilde\Delta\tilde\rho$. It follows from Theorem 2.1 that:

Corollary 2.2. Let M be an n-dimensional complete Riemannian manifold with $\mathrm{Ric} \ge -(n-1)K$, where K is a non-negative constant. Then
$$\Delta\rho \le (n-1)\Big(\sqrt K + \frac1\rho\Big)$$
on $M \setminus \mathrm{Cut}(o)$. In particular, if $\mathrm{Ric}(M) \ge 0$, then
$$\Delta\rho \le \frac{n-1}{\rho}$$
in the sense of distributions on M.
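As a sanity check (an illustration added here, not from the paper): Euclidean $\mathbb R^n$ has $\mathrm{Ric} = 0$ and attains equality, $\Delta\rho = (n-1)/\rho$. A quick finite-difference verification in $\mathbb R^3$:

```python
import math

def rho(p):
    # distance from the origin o
    return math.sqrt(sum(c * c for c in p))

def laplacian(f, p, h=1e-5):
    # second-order central-difference Laplacian at p
    total = 0.0
    for k in range(len(p)):
        pp, pm = list(p), list(p)
        pp[k] += h; pm[k] -= h
        total += (f(pp) - 2.0 * f(p) + f(pm)) / (h * h)
    return total

p = [1.0, 2.0, 2.0]                         # rho(p) = 3 in R^3, so n = 3
print(laplacian(rho, p), (3 - 1) / rho(p))  # both ≈ 0.6667
```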
We also state the following Bishop volume comparison theorem (cf. e.g. [44]):

Theorem 2.3 (Volume comparison theorem). Let M be an n-dimensional complete Riemannian manifold with $\mathrm{Ric} \ge (n-1)K$, where K is a constant. Let $\tilde M$ be an n-dimensional space form of sectional curvature K. Then for $r > 0$, $V_x(r)/V(K, r)$ is non-increasing in r and
$$V_x(r) \le V(K, r),$$
where $V_x(r)$ is the volume of the geodesic ball centered at x with radius r in M, and $V(K, r)$ is the volume of the geodesic ball of radius r in $\tilde M$.

2.3. Existence of positive global Green functions.

Let M be an n-dimensional non-compact complete Riemannian manifold with Laplace-Beltrami operator $\Delta$. Set $\rho_x(y) = \mathrm{dist}(x, y)$. In this paper, a positive global Green function (if it exists) for M means a Green function $G(x, y)$ of $\Delta/2$ on $M \times M$, i.e., a smooth function on $M \times M \setminus \mathrm{diag}(M \times M)$ satisfying (cf. e.g. [44]):
1° $G(x, y) \ge 0$;
2° $G(x, y) = G(y, x)$;
3° $\Delta_y G(x, y) = 0$ for $y \ne x$;
4° for any fixed x, as $y \to x$,
$$G(x, y) \sim C_2 \log \rho_x^{-1}(y), \quad n = 2; \qquad G(x, y) \sim C_n \rho_x^{2-n}(y), \quad n > 2,$$
where $C_n$ ($n \ge 2$) is a positive constant such that
$$-\frac12 \Delta_y G(x, y) = \delta_x(y)$$
in the sense of distributions. Note that $G(x, y)$ with $n = 2$ may change sign, so in this case the positivity of $G(x, y)$ should be understood to hold outside a compact subset of M.

It is known that a global Green function always exists on any non-compact complete Riemannian manifold M; this fact was first confirmed by M. Malgrange [32], while a constructive proof, best suited for applications, was presented by Li-Tam [28]. Moreover, if M is non-parabolic (i.e., it admits a positive global Green function), Li-Tam's construction produces a unique minimal positive global Green function for M. In the case that M has a nonconstant positive harmonic function, Schoen-Yau [44] confirmed that M admits a positive global Green function. In particular, they proved:

Theorem 2.4 (Schoen-Yau). Let M be a non-compact complete Riemannian manifold with Ricci curvature bounded from below. Then M admits a positive global Green function, and hence there exists a unique minimal positive global Green function for M.

There are necessary conditions for the existence of a positive global Green function for M. Cheng-Yau [13] gave the first result in this direction, which involves only the volume growth of M. The first major result on the sufficiency was due to Li-Yau [31] and N. Varopoulos [48]. Utilizing estimates of the heat kernel, Li-Yau [31] proved the following theorem:

Theorem 2.5 (Li-Yau). Let M be a non-compact complete Riemannian manifold with non-negative Ricci curvature. If
$$\int_0^\infty \frac{t}{V_x(t)}\, dt < \infty,$$
then there exists a positive global Green function for M. Furthermore, the unique minimal positive global Green function $G(x, y)$ for M satisfies
$$C^{-1} \int_{\rho_x(y)}^\infty \frac{t}{V_x(t)}\, dt \le G(x, y) \le C \int_{\rho_x(y)}^\infty \frac{t}{V_x(t)}\, dt$$
for $x \ne y$, where $V_x(t)$ is the volume of the geodesic ball centered at x with radius t, and C is a positive constant depending only on the dimension of M.
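For orientation (an illustrative computation added here, not from the paper): in Euclidean $\mathbb R^n$ with $n > 2$ one has $V_x(t) = \omega_n t^n$, and $\int_\rho^\infty t/V_x(t)\, dt = \rho^{2-n}/((n-2)\omega_n)$, which reproduces the decay $G \sim \rho^{2-n}$ of property 4°. A numeric check for $n = 4$:

```python
import math

n = 4                                # dimension, n > 2
omega = math.pi ** 2 / 2             # volume of the unit ball in R^4
V = lambda t: omega * t ** n         # Euclidean volume growth

def green_integral(r, T=1000.0, N=50000):
    # trapezoid rule for \int_r^T t / V(t) dt; the tail beyond T is negligible
    h = (T - r) / N
    s = 0.5 * (r / V(r) + T / V(T))
    for i in range(1, N):
        t = r + i * h
        s += t / V(t)
    return s * h

r = 2.0
closed = r ** (2 - n) / ((n - 2) * omega)
print(green_integral(r), closed)  # both ≈ 0.02533
```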
2.4. Positivity, ampleness and bigness for Q-line bundles.

Let M be a compact complex manifold. Recall that a holomorphic line bundle L over M is said to be positive if there exists a Hermitian metric h on L such that the Chern form $c_1(L, h) > 0$; and L is said to be very ample if L provides a holomorphic embedding of M into a complex projective space, namely, there exist holomorphic sections $e_0, \cdots, e_N \in H^0(M, L)$ (the space of all holomorphic sections of L over M) such that the mapping
$$[e_0 : \cdots : e_N] : M \hookrightarrow \mathbb P^N(\mathbb C)$$
is a holomorphic embedding. One says that L is ample if $L^{\otimes\nu}$ (written $\nu L$ for short) is very ample whenever $\nu$ is a sufficiently large positive integer. A known fact asserts that L is ample if and only if L is positive. A holomorphic line bundle F over M is said to be big if
$$\dim H^0(M, \nu F) \ge C \nu^{\dim M}$$
for all sufficiently large integers $\nu > 0$ and some constant $C > 0$.

We introduce two well-known theorems of K. Kodaira (cf. e.g. [21, 27]).

Theorem 2.6 (Kodaira). Let F be a holomorphic line bundle over M. Then F is big if and only if there exists a singular metric $e^{-\psi}$ on F such that
$$dd^c[\psi] \ge \delta \omega_L$$
in the sense of currents for an arbitrary ample line bundle L over M, where $\omega_L$ is the Chern form of a Hermitian metric on L and $\delta$ is a sufficiently small positive number depending on $\omega_L$.

As a matter of convenience, instead of $dd^c[\psi] \ge \delta\omega_L$ in Theorem 2.6, one writes $F \ge \delta L$ for short. Similarly, we sometimes write $F \otimes L = F + L$ for two Q-line bundles F, L.

Corollary 2.7. Let L be an ample line bundle over M. If F is a big line bundle over M, then $F - \delta L$ is big for any sufficiently small number $\delta > 0$.

Theorem 2.8 (Kodaira). Let F be a big line bundle over M, and let L be a holomorphic line bundle over M. Then there exists a positive integer $\mu$ such that $\mu F \otimes L$ is big, and
$$H^0(M, \mu F \otimes L) \ne 0$$
for any sufficiently large positive integer $\mu$.

In general, a Q-line bundle is defined as an element of $\mathrm{Pic}(M) \otimes \mathbb Q$, where $\mathrm{Pic}(M)$ is the Picard group of M. Let $F \in \mathrm{Pic}(M) \otimes \mathbb Q$ be a Q-line bundle. Then F is said to be positive (resp. ample, big) if $\nu F \in \mathrm{Pic}(M)$ is positive (resp. ample, big) for some integer $\nu > 0$.

By Corollary 2.7 and Theorem 2.8, it is easy to see:

Corollary 2.9. Let L be an ample Q-line bundle over M. If F is a big Q-line bundle over M, then $F - \delta L$ is big for any sufficiently small positive number $\delta$.

Corollary 2.10. Let F be a big Q-line bundle, and let L be a Q-line bundle over M. Then there exist positive integers $\mu, \nu$ such that $\mu F, \nu L \in \mathrm{Pic}(M)$ and $\mu F \otimes \nu L$ is big, and
$$H^0(M, \mu F \otimes \nu L) \ne 0$$
for some sufficiently large positive integers $\mu, \nu$.
2.5. Poincaré-Lelong formula and Jensen-Dynkin formula.

Let M be an m-dimensional Kähler manifold with Laplace-Beltrami operator $\Delta$ associated with the Kähler metric h. We introduce two important formulas as follows.

2.5.1. Poincaré-Lelong formula.

Let $(L, h_L)$ be a Hermitian holomorphic line bundle over M. We define the Chern form of L associated to $h_L$ by
$$c_1(L, h_L) = -dd^c \log h_L.$$
If s is the canonical section of L over M with zero divisor D, then s defines a positive (1,1)-current of integration [D] over D by
$$[D](\phi) = \int_D \phi$$
for any $(m-1, m-1)$-form $\phi$ with compact support on M.

Let u be a plurisubharmonic function on M. If u is of $C^2$-class, then
$$dd^c u = \frac{\partial^2 u}{\partial z^i \partial \bar z^j} \frac{\sqrt{-1}}{2\pi} dz^i \wedge d\bar z^j \ge 0,$$
by which we mean that the matrix of all second-order partial derivatives of u is positive semi-definite (cf. e.g. [36], Section 2.1), i.e.,
$$\frac{\partial^2 u}{\partial z^i \partial \bar z^j}\, \xi^i \bar\xi^j \ge 0$$
for every complex vector $(\xi^1, \cdots, \xi^m)$. If u is not of $C^2$-class, one can use the notation $\partial^2[u]/\partial z^i \partial \bar z^j$ in the sense of (Schwartz) distributions, which defines a positive Radon measure, so that
$$dd^c[u] = \frac{\partial^2 [u]}{\partial z^i \partial \bar z^j} \frac{\sqrt{-1}}{2\pi} dz^i \wedge d\bar z^j \ge 0$$
is a positive (1,1)-current:
$$dd^c[u](\phi) = \int_M dd^c[u] \wedge \phi$$
for any $(m-1, m-1)$-form $\phi$ with compact support on M. Using the Kähler property of M, we see that
$$\Delta u = 2 h^{i\bar j} \frac{\partial^2 [u]}{\partial z^i \partial \bar z^j} \ge 0$$
in the sense of distributions. Thus u is subharmonic on M regarded as a Riemannian manifold. In particular, if f is a nonzero meromorphic function on M, then $\log|f|^2$ is a plurisubharmonic function and it defines a positive (1,1)-current $dd^c[\log|f|^2]$.

We state the Poincaré-Lelong formula (cf. e.g. [9]) as follows:

Theorem 2.11 (Poincaré-Lelong formula). Let f be a nonzero meromorphic function on M. Then
$$dd^c\big[\log|f|^2\big] = [(f = 0)] - [(f = \infty)].$$

Corollary 2.12. Let $(L, h_L)$ be a Hermitian holomorphic line bundle over M, and let s be the canonical section of L over M with zero divisor D. Then
$$dd^c\big[\log \|s\|^2_{h_L}\big] = [D] - c_1(L, h_L).$$

We remark that the Kähler condition on M is not necessary in the two theorems above; in fact, we only need to require that M is a complex manifold.
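In the one-variable model $M = \mathbb C$ with $f(z) = z$, the content of Theorem 2.11 is the classical distributional identity $\Delta \log|z|^2 = 4\pi\delta_0$ (equivalently $dd^c[\log|z|^2] = [z = 0]$). The sketch below (an illustration added here, not from the paper) tests it against the Gaussian $\varphi(z) = e^{-|z|^2}$, for which $\Delta\varphi = (4r^2 - 4)e^{-r^2}$ in polar form, so $\int_{\mathbb C} \log|z|^2 \,\Delta\varphi \, dA$ should equal $4\pi\varphi(0) = 4\pi$:

```python
import math

def lap_phi(r):
    # Laplacian of phi(z) = exp(-|z|^2), written radially
    return (4 * r * r - 4) * math.exp(-r * r)

def pairing(N=200000, R=12.0):
    # \int_C log|z|^2 * (Delta phi) dA in polar coordinates:
    #   \int_0^R 2 log(r) * lap_phi(r) * 2 pi r dr   (integrand -> 0 as r -> 0)
    h = R / N
    return sum(2 * math.log(i * h) * lap_phi(i * h) * 2 * math.pi * (i * h)
               for i in range(1, N + 1)) * h

print(pairing(), 4 * math.pi)  # both ≈ 12.566
```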
2.5.2. Jensen-Dynkin formula.

Let $\Omega \subsetneq M$ be a relatively compact domain with piecewise smooth boundary $\partial\Omega$. Fix a point $o \in \Omega$. Let $g_\Omega(o, x)$ denote the positive Green function of $\Delta/2$ for $\Omega$ with a pole at o, satisfying the Dirichlet boundary condition. Note that $g_\Omega(o, x)$ defines a unique harmonic measure $\pi_{\partial\Omega}$ on $\partial\Omega$ with respect to o, namely
$$d\pi_{\partial\Omega}(x) = -\frac12 \frac{\partial g_\Omega(o, x)}{\partial \vec\nu}\, d\sigma_{\partial\Omega}(x), \qquad \forall x \in \partial\Omega,$$
where $\partial/\partial\vec\nu$ is the outward normal derivative on $\partial\Omega$ and $d\sigma_{\partial\Omega}$ is the Riemannian area element of $\partial\Omega$.

A $\delta$-subharmonic function is defined as the difference of two subharmonic functions. We introduce the following Jensen-Dynkin formula, which can be viewed as a generalization of the Green-Jensen formula. The formula plays an important role in the study of Nevanlinna theory on complex manifolds.

The Jensen-Dynkin formula (cf. e.g. [17, 18]) reads:

Theorem 2.13 (Jensen-Dynkin formula). Let u be a $\delta$-subharmonic function on M with $u(o) \ne \infty$. Then
$$\int_{\partial\Omega} u(x)\, d\pi_{\partial\Omega}(x) - u(o) = \frac12 \int_\Omega g_\Omega(o, x) \Delta u(x)\, dv(x),$$
where dv is the Riemannian volume element of M.

We remark that Theorem 2.13 remains true when M is merely a Riemannian manifold. Recall that the Kähler metric form of M is defined by
$$\alpha = \frac{\sqrt{-1}}{\pi} h_{i\bar j}\, dz^i \wedge d\bar z^j.$$
By Wirtinger's formula, $dv = \pi^m \alpha^m/m!$. Let u be a plurisubharmonic function on M. By the Kähler property of h, we see that u is subharmonic and
$$\Delta u = 4m\, \frac{dd^c[u] \wedge \alpha^{m-1}}{\alpha^m}$$
in the sense of distributions (or currents). The following consequence follows from the above arguments and Theorem 2.13.

Corollary 2.14. Let u be a plurisubharmonic function on M with $u(o) \ne \infty$. Then
$$\int_{\partial\Omega} u\, d\pi_{\partial\Omega} - u(o) = \frac{2\pi^m}{(m-1)!} \int_\Omega g_\Omega(o, x)\, dd^c[u] \wedge \alpha^{m-1}.$$
619
+ 3. Nevanlinna theory on complete K¨ahler manifolds
620
+ 3.1. Nevanlinna’s functions.
621
+ Let M be a non-compact complete K¨ahler manifold of complex dimension
622
+ m, with Laplace-Beltrami operator ∆ associated to K¨ahler metric h. Assume
623
+ that M has non-negative Ricci curvature as a Riemannian manifold. Indeed,
624
+ assume that M is of maximal volume growth, i.e.,
625
+ (3)
626
+ lim inf
627
+ r→∞ r−2mV (r) > 0,
628
+ where V (r) is the volume of geodesic ball B(r) centered at a reference point
629
+ o with radius r.
630
+
631
+ 14
632
+ X.-J. DONG
633
+ If m ≥ 2, using Theorem 2.4 or Theorem 2.5, then we see that there exists
634
+ the unique minimal positive global Green function G(x, y) of ∆/2 for M. For
635
+ r > 0, define the r-domain ∆(r) by
636
+ ∆(r) =
637
+
638
+ x ∈ M : G−1(o, x) < r
639
+
640
+ .
641
+ Note that {∆(rn)}∞
642
+ 1 exhausts M compactly for a strictly increasing sequence
643
+ {rn}∞
644
+ 1 with rn → ∞ as n → ∞. Set
645
+ gr(o, x) = G(o, x) − r−1.
646
+ Evidently, gr(o, x) defines the positive Green function of ∆/2 for ∆(r), with a
647
+ pole at o satisfying Dirichlet boundary condition. Denote by πr the harmonic
648
+ measure for ∂∆(r) with respect to o, which is defined by gr(o, x) as follows:
649
+ (4)
650
+ dπr(x) = −1
651
+ 2
652
+ ∂gr(o, x)
653
+ ∂⃗ν
654
+ dσr(x),
655
+ ∀x ∈ ∂∆(r),
656
+ where ∂/∂⃗ν is the inward normal derivative on ∂∆(r), dσr is the Riemannian
657
+ area element of ∂∆(r).
658
+ If m = 1, the curvature assumption implies that M is a parabolic Riemann
659
+ surface and each global Green function for M must change sign. Again, since
660
+ M is of maximal volume growth, we see that M is conformally equivalent to
661
+ C. Equip a conformal metric ds2 = λds2
662
+ 0 on M, where λ is a positive smooth
663
+ function and ds2
664
+ 0 is the standard Euclidean metric on C. It is noted that the
665
+ Laplacian ∆ in real dimension 2 is conformally invariant, thus it produces a
666
+ global Green function G(x, y) of ∆/2 for M:
667
+ (5)
668
+ G(x, y) = 1
669
+ π log d(x, y),
670
+ where d(x, y) is the Riemannian distance between x, y. In this case, we define
671
+ the r-domain ∆(r) by
672
+ ∆(r) = {x ∈ M : d(o, x) < r} .
673
+ It is clear that
674
+ (6)
675
+ gr(o, x) = 1
676
+ π log
677
+ r
678
+ d(o, x)
679
+ gives the Green function of ∆/2 for ∆(r), with a pole at o satisfying Dirichlet
680
+ boundary condition. By (4), the harmonic measure πr for ∂∆(r) with respect
681
+ to o is
682
+ dπr(x) =
683
+ 1
684
+ 2πrdσr(x),
685
+ ∀x ∈ ∂∆(r).
686
NEVANLINNA THEORY AND ALGEBRAIC DEPENDENCE

Let X be a complex projective manifold equipped with an ample Hermitian line bundle (L, h_L) whose Chern form satisfies c₁(L, h_L) > 0. Let f : M → X be a meromorphic mapping, by which we mean that f is defined by a holomorphic mapping f₀ : M \ I → X such that the closure of the graph G(f₀) of f₀ is an analytic subset of M × X and the natural projection π : cl(G(f₀)) → M is a proper mapping, where I is an analytic subset of M (called the indeterminacy set of f) satisfying dim_C I ≤ m − 2. Assume that o ∉ I. Set

e_f = 2m (f*c₁(L, h_L) ∧ α^{m−1})/α^m = −(1/2) ∆ log(h_L ∘ f),

where

α = (√−1/π) Σ_{i,j=1}^m h_{i j̄} dz_i ∧ dz̄_j

is the Kähler form of M. For any analytic hypersurface A of M, we put

N(r, A) = (π^m/(m − 1)!) ∫_{A ∩ ∆(r)} g_r(o, x) α^{m−1}.

The Nevanlinna functions (characteristic function, proximity function and counting function) are defined respectively by

T_f(r, L) = (1/2) ∫_{∆(r)} g_r(o, x) e_f dv,
m_f(r, D) = ∫_{∂∆(r)} log(1/∥s_D ∘ f∥) dπ_r,
N_f(r, D) = N(r, f*D),

where s_D is the canonical section of L with zero divisor D. Moreover, define the simple counting function by

N̄_f(r, D) = N(r, Supp f*D).

Using the Jensen-Dynkin formula and the Poincaré-Lelong formula (cf. Theorems 2.11 and 2.13), we obtain the first main theorem (cf. [17] also):

Theorem 3.1. Let f : M → X be a meromorphic mapping such that f(o) ∉ Supp D. Then

m_f(r, D) + N_f(r, D) = T_f(r, L) + O(1).
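A sketch of how Theorem 3.1 follows from the two cited formulas, using only the definitions of this subsection (the intermediate identities are a reconstruction, not quoted from the text):

```latex
% Apply the Jensen--Dynkin formula to u = \log\|s_D\circ f\|^2.
% By the Poincar\'e--Lelong formula,
%   dd^c\big[\log\|s_D\circ f\|^2\big] = [f^*D] - f^*c_1(L,h_L)
% as currents, so integrating against the Green function g_r gives
\frac14\int_{\Delta(r)} g_r(o,x)\,\Delta\log\|s_D\circ f\|^2\,dv
  \;=\; N_f(r,D) - T_f(r,L) + O(1),
% while the Jensen--Dynkin formula evaluates the same quantity as a
% boundary term:
\frac12\int_{\partial\Delta(r)}\log\|s_D\circ f\|^2\,d\pi_r + O(1)
  \;=\; -\,m_f(r,D) + O(1).
% Equating the two expressions yields
m_f(r,D) + N_f(r,D) \;=\; T_f(r,L) + O(1).
```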
737
3.2. Calculus lemma and logarithmic derivative lemma.

3.2.1. Estimate on harmonic measures.

Let M be an m-dimensional complete Hermitian manifold with non-negative Ricci curvature, where m ≥ 2. Fix a point o ∈ M. For convenience, denote by ρ(x) the Riemannian distance of x from o, and by V(r), A(r) the volume and area of B(r), ∂B(r), respectively, where B(r) is the geodesic ball centered at o with radius r. Assume that M is of maximal volume growth. Set

θ(r) = (2m)^{−1} r^{1−2m} A(r).

X.-J. DONG

Bishop's volume comparison theorem (cf. Theorem 2.3) says that there exists a number θ > 0, independent of o, such that (cf. [29] also)

(7)  θ(r) ↘ θ,  r^{−2m} V(r) ↘ θ

as r → ∞. By the Laplacian comparison theorem (cf. Corollary 2.2), one obtains

∆ρ(x) ≤ (2m − 1)/ρ(x),  ∆ρ^{1−2m}(x) ≥ 0

in the sense of distributions. Recall that G(o, x) denotes the minimal positive global Green function of ∆/2 for M with a pole at o. By the estimates of Li-Yau (cf. Theorem 2.5) and (7), there exists a constant A(m, θ) > 0 depending only on m, θ such that

A^{−1}(m, θ) ρ^{2−2m}(x) ≤ G(o, x) ≤ A(m, θ) ρ^{2−2m}(x).

Furthermore, Colding-Minicozzi II [11] (cf. Li-Tam-Wang [29] also) obtained the asymptotic behavior of G(o, x). In fact, they showed:

Lemma 3.2. Let M be an m-dimensional (m ≥ 2) Hermitian manifold with non-negative Ricci curvature. If M has maximal volume growth, then G(o, x) satisfies the asymptotic behavior

(8)  lim_{x→∞} 2m(m − 1)θ G(o, x)/ρ^{2−2m}(x) = 1.

If m = 1, it was shown by Li-Tam [30] that the analogous asymptotic behavior

lim_{x→∞} θ G(o, x)/log ρ(x) = 1

also holds.
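As a consistency check of the normalizing constant in (8), not part of the proof: on flat C^m one has A(r) = 2π^m r^{2m−1}/(m−1)!, hence θ = π^m/m!, and the Green function of ∆/2 on R^{2m} is G(o, x) = ((m−2)!/(2π^m)) ρ^{2−2m}; the factor 2m(m−1)θ then makes the limit exactly 1. (The flat-space formulas here are standard facts assumed for this illustration.)

```python
import math

# Flat C^m (real dimension 2m): area of the unit sphere S^{2m-1} is
# 2*pi^m/(m-1)!, so A(r) = 2*pi^m*r^(2m-1)/(m-1)! and
# theta = (2m)^(-1) * r^(1-2m) * A(r) = pi^m / m!.
# The Green function of Delta/2 on R^{2m} (m >= 2) is
# G(o, x) = (m-2)!/(2*pi^m) * rho^(2-2m).
# Lemma 3.2 predicts 2m(m-1)*theta*G(o,x) / rho^(2-2m) = 1.

ratios = []
for m in range(2, 7):
    theta = math.pi ** m / math.factorial(m)
    green_coeff = math.factorial(m - 2) / (2 * math.pi ** m)
    ratios.append(2 * m * (m - 1) * theta * green_coeff)  # should be 1 exactly

print(ratios)
```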
781
Next, we give a gradient estimate on G(o, x).

Theorem 3.3. We have:
(a) If m = 1, then

∥∇G(o, x)∥ = π^{−1} r^{−1},  x ∈ ∂∆(r);

(b) If m ≥ 2, then there exists a constant C_m > 0 such that

max_{x∈∂∆(r)} ∥∇G(o, x)∥ ≤ C_m r^{−(2m−1)/(2m−2)}.

Proof. If m = 1, then ∆(r) is just the geodesic disc centered at o with radius r, so ρ(x) = r for x ∈ ∂∆(r). Hence, it follows from (5) that

∥∇G(o, x)∥ = ∂G(o, x)/∂ν = π^{−1} r^{−1},  x ∈ ∂∆(r),

where ∂/∂ν is the inward normal derivative on ∂∆(r). Thus, (a) holds.

Next, we show that (b) holds. Since G(o, x) → 0 and ρ^{2−2m}(x) → 0 as x → ∞, applying L'Hôpital's rule to (8) yields, for x ∈ ∂∆(r) (without loss of generality, we may assume that x is not a cut point of o),

lim_{r→∞} [2m(m−1)θ ∂G(o, x)/∂ν] / [∂ρ^{2−2m}(x)/∂ν]
 = lim_{r→∞} [−2m(m−1)θ ∥∇G(o, x)∥] / [(2 − 2m) ρ^{1−2m}(x) ∂ρ(x)/∂ν] = 1,

where ∂/∂ν again stands for the inward normal derivative on ∂∆(r). On the other hand, since G(o, x) = 1/r on ∂∆(r), (8) gives

lim_{r→∞} 2m(m−1)θ / (r ρ^{2−2m}(x)) = 1,  x ∈ ∂∆(r),

or equivalently,

lim_{r→∞} (2m(m−1)θ)^{(2m−1)/(2m−2)} r^{−(2m−1)/(2m−2)} / ρ^{1−2m}(x) = 1,  x ∈ ∂∆(r).

Hence ∆(r) looks more and more like a geodesic ball centered at o as r → ∞. This implies that

lim_{r→∞} ∂ρ(x)/∂ν = 1,  x ∈ ∂∆(r).

Combining the above, we get

lim_{r→∞} 2m(m−1)θ ∥∇G(o, x)∥ / [(2m−2) ρ^{1−2m}(x)]
 = lim_{r→∞} 2m(m−1)θ ∥∇G(o, x)∥ / [(2m−2)(2m(m−1)θ)^{(2m−1)/(2m−2)} r^{−(2m−1)/(2m−2)}] = 1

for x ∈ ∂∆(r). Therefore, there exists a constant C_m > 0 such that

max_{x∈∂∆(r)} ∥∇G(o, x)∥ ≤ C_m r^{−(2m−1)/(2m−2)}.

This proves the theorem.
855
Corollary 3.4. We have:
(a) If m = 1, then

dπ_r = (1/(2πr)) dσ_r;

(b) If m ≥ 2, then

dπ_r ≤ (C_m/2) r^{−(2m−1)/(2m−2)} dσ_r.

In the above, dσ_r is the Riemannian area element of ∂∆(r).

Proof. The corollary follows immediately from the relation

dπ_r(x) = (1/2) ∥∇G(o, x)∥ dσ_r(x)

together with Theorem 3.3. The proof is completed.
872
+ 3.2.2. Calculus lemma.
873
+ To establish the desired calculus lemma which plays a key role in deriving
874
+ the second main theorem, we still need a lower bound of gr(o, x) in terms of
875
+ integral forms. Set
876
+ ρr =
877
+ max
878
+ x∈∂∆(r) ρ(x).
879
+ Lemma 3.5. We have
880
+ (a) If m = 1, then
881
+ gr(o, x) = 1
882
+ π
883
+ � r
884
+ ρ(x)
885
+ t−1dt;
886
+ (b) If m ≥ 2, then for any ǫ > 0, there exists ρ0 > 0 such that for all
887
+ sufficiently large r > 0 and x ∈ ∆(r) with ρ(x) > ρ0, we have
888
+ gr(o, x) ≥
889
+ � 1
890
+ mθ − ǫ
891
+ � � ρr
892
+ ρ(x)
893
+ t1−2mdt.
894
Proof. When m = 1, (a) follows immediately from (6). In what follows, we show (b). In view of (8), we obtain

G(o, x) = ρ^{2−2m}(x)/(2m(m−1)θ) + o(ρ^{2−2m}(x)/(2m(m−1)θ))

as x → ∞. Since ∂∆(r) is a compact set on which G(o, x) = 1/r, the above equality implies that

1/r = ρ_r^{2−2m}/(2m(m−1)θ) + o(ρ_r^{2−2m}/(2m(m−1)θ))

as r → ∞. Thus, for any ε₀ > 0 there exists ρ₀ > 0 such that for sufficiently large r > 0 and x ∈ ∆(r) with ρ(x) > ρ₀,

g_r(o, x) = G(o, x) − r^{−1}
 ≥ (1/(2m(m−1)θ) − ε₀)(ρ^{2−2m}(x) − ρ_r^{2−2m})
 ≥ (1/(mθ) − (2m−2)ε₀) ∫_{ρ(x)}^{ρ_r} t^{1−2m} dt
 = (1/(mθ) − ε) ∫_{ρ(x)}^{ρ_r} t^{1−2m} dt,

where ε = (2m−2)ε₀. This completes the proof.
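The step from the difference ρ^{2−2m}(x) − ρ_r^{2−2m} to the integral of t^{1−2m} rests on the elementary identity ∫_a^b t^{1−2m} dt = (a^{2−2m} − b^{2−2m})/(2m−2), which also explains the factor 2m−2 relating ε to ε₀. A quick numeric confirmation (the sample values of m, a, b are arbitrary test inputs, not quantities from the paper):

```python
# Verify  int_a^b t^(1-2m) dt = (a^(2-2m) - b^(2-2m)) / (2m-2)
# by comparing a trapezoid-rule quadrature with the closed form.

def trapezoid(f, a, b, n=200000):
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

m = 3
a, b = 1.5, 4.0
numeric = trapezoid(lambda t: t ** (1 - 2 * m), a, b)
closed = (a ** (2 - 2 * m) - b ** (2 - 2 * m)) / (2 * m - 2)
print(numeric, closed)
```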
939
+
940
Now we prove the following so-called calculus lemma:

Theorem 3.6. Let k ≥ 0 be a locally integrable function on M which is locally bounded at o. Then for any δ > 0, there exist a positive constant C and a subset E_δ ⊆ (0, ∞) of finite Lebesgue measure such that:
(a) If m = 1, then

∫_{∂∆(r)} k(x) dπ_r(x) ≤ C r^δ (∫_{∆(r)} g_r(o, x) k(x) dv(x))^{(1+δ)²}

holds for all r > 0 outside E_δ;
(b) If m ≥ 2, then

∫_{∂∆(r)} k(x) dπ_r(x) ≤ C r^{(2m−1)δ/(2m−2)} (∫_{∆(r)} g_r(o, x) k(x) dv(x))^{(1+δ)²}

holds for all r > 0 outside E_δ.

Proof. If m = 1, then ∆(r) is the geodesic disc centered at o with radius r. It follows from (6) that

∫_{∆(r)} g_r(o, x) k(x) dv(x) = ∫_0^r dt ∫_{∂∆(t)} g_r(o, x) k(x) dσ_t(x)
 = (1/π) ∫_0^r (∫_t^r s^{−1} ds) (∫_{∂∆(t)} k(x) dσ_t(x)) dt.

Set

Γ(r) = ∫_0^r (∫_t^r s^{−1} ds) (∫_{∂∆(t)} k(x) dσ_t(x)) dt.

Then

dΓ(r)/dr = r^{−1} ∫_0^r (∫_{∂∆(t)} k(x) dσ_t(x)) dt,

which, by Corollary 3.4, yields

(9)  d/dr (rΓ′(r)) = ∫_{∂∆(r)} k(x) dσ_r(x) = 2πr ∫_{∂∆(r)} k(x) dπ_r(x).

On the other hand, applying the Borel lemma to (rΓ′)′ twice, for any δ > 0 there exists a subset E_δ ⊆ (0, ∞) of finite Lebesgue measure such that

(10)  d/dr (rΓ′(r)) ≤ r^{1+δ} Γ^{(1+δ)²}(r)

holds for all r > 0 outside E_δ. Combining (9) with (10), we get

∫_{∂∆(r)} k(x) dπ_r(x) ≤ (1/(2πr)) r^{1+δ} Γ^{(1+δ)²}(r).

By the definition of Γ, assertion (a) holds.
1042
+
1043
+ 20
1044
+ X.-J. DONG
1045
+ Next, we show that (b) holds. By Lemma 3.5 and (8), for any ǫ > 0, there
1046
+ is a sufficiently large number r0 > 0 such that
1047
+ gr(o, x) ≥
1048
+ � 1
1049
+ mθ − ǫ
1050
+ � � ρr
1051
+ ρt
1052
+ s1−2mds
1053
+ holds for all x ∈ ∂∆(t) with r0 < t ≤ r. Thus,
1054
+
1055
+ ∆(r)
1056
+ gr(o, x)k(x)dv(x)
1057
+ (11)
1058
+ =
1059
+ � r
1060
+ r0
1061
+ dt
1062
+
1063
+ ∂∆(t)
1064
+ gr(o, x)k(x)dσt(x) + O(1)
1065
+ ≥ C(m, ǫ)
1066
+ � r
1067
+ r0
1068
+ � � ρr
1069
+ ρt
1070
+ s1−2mds
1071
+
1072
+ ∂∆(t)
1073
+ k(x)dσt(x)
1074
+
1075
+ dt,
1076
+ where
1077
+ C(m, ǫ) =
1078
+ 1
1079
+ mθ − ǫ.
1080
+ Set
1081
+ Λ(r) =
1082
+ � r
1083
+ r0
1084
+ � � ρr
1085
+ ρt
1086
+ s1−2mds
1087
+
1088
+ ∂∆(t)
1089
+ k(x)dσt(x)
1090
+
1091
+ dt.
1092
+ A direct computation leads to
1093
+ dΛ(r)
1094
+ dr
1095
+ = ρ1−2m
1096
+ r
1097
+ dρr
1098
+ dr
1099
+ � r
1100
+ r0
1101
+ � �
1102
+ ∂∆(t)
1103
+ k(x)dσt(x)
1104
+
1105
+ dt.
1106
+ In further, we have
1107
+ (12)
1108
+ d
1109
+ dr
1110
+ �ρ2m−1
1111
+ r
1112
+ Λ′(r)
1113
+ ρ′r
1114
+
1115
+ =
1116
+
1117
+ ∂∆(r)
1118
+ k(x)dσr(x).
1119
+ Since
1120
+ dσr ≥ 2C−1
1121
+ m r
1122
+ 2m−1
1123
+ 2m−2 dπr
1124
+ due to Corollary 3.4, then it follows from (12) that
1125
+ (13)
1126
+
1127
+ ∂∆(r)
1128
+ k(x)dπr(x) ≤ Cm2−1r− 2m−1
1129
+ 2m−2 d
1130
+ dr
1131
+ �ρ2m−1
1132
+ r
1133
+ Λ′(r)
1134
+ ρ′r
1135
+
1136
+ .
1137
+ Using Borel lemma twice, for any δ > 0, we have
1138
+ (14)
1139
+ d
1140
+ dr
1141
+ �ρ2m−1
1142
+ r
1143
+ Λ′(r)
1144
+ ρ′r
1145
+
1146
+ ≤ ρ(2m−1)(1+δ)
1147
+ r
1148
+ (ρ′r)1+δ
1149
+ Λ(1+δ)2(r)
1150
+ holds for all r > 0 outside a subset Fδ ⊆ (0, ∞) with finite Lebesque measure.
1151
+ Combining (13) with (14), we get
1152
+
1153
+ ∂∆(r)
1154
+ k(x)dπr(x) ≤ Cm2−1r− 2m−1
1155
+ 2m−2 ρ(2m−1)(1+δ)
1156
+ r
1157
+ (ρ′r)1+δ
1158
+ Λ(1+δ)2(r).
1159
+
1160
+ NEVANLINNA THEORY AND ALGEBRAIC DEPENDENCE
1161
+ 21
1162
+ Since (8) implies that
1163
+ ρr ∼ (2m(m − 1)θ)−
1164
+ 1
1165
+ 2m−2 r
1166
+ 1
1167
+ 2m−2 ,
1168
+ ρ′
1169
+ r ∼ 1
1170
+ as r → ∞, then for r0 > 0 sufficiently large, we have (0 < δ < 1)
1171
+
1172
+ ∂∆(r)
1173
+ k(x)dπr(x)
1174
+
1175
+ Cm2−1r− 2m−1
1176
+ 2m−2
1177
+
1178
+ 2 (2m(m − 1)θ)−
1179
+ 1
1180
+ 2m−2 r
1181
+ 1
1182
+ 2m−2
1183
+ �(2m−1)(1+δ)
1184
+ (2−1)2
1185
+ Λ(1+δ)2(r)
1186
+ ≤ Cm24m−1 (2m(m − 1)θ)− 2m−1
1187
+ 2m−2 r
1188
+ 2m−1
1189
+ 2m−2 δΛ(1+δ)2(r)
1190
+ holds for all r > r0. Hence, by this with (11)
1191
+
1192
+ ∂∆(r)
1193
+ k(x)dπr(x)
1194
+ ≤ Cm24m−1 (2m(m − 1)θ)− 2m−1
1195
+ 2m−2 r
1196
+ 2m−1
1197
+ 2m−2 δ
1198
+ C(1+δ)2(m, ǫ)
1199
+ ��
1200
+ ∆(r)
1201
+ gr(o, x)k(x)dv(x)
1202
+ �(1+δ)2
1203
+ ≤ Cm24m−1 (2m(m − 1)θ)− 2m−1
1204
+ 2m−2 r
1205
+ 2m−1
1206
+ 2m−2 δ
1207
+ (2−1m−1θ−1)4
1208
+ ��
1209
+ ∆(r)
1210
+ gr(o, x)k(x)dv(x)
1211
+ �(1+δ)2
1212
+ = Cr
1213
+ 2m−1
1214
+ 2m−2 δ
1215
+ ��
1216
+ ∆(r)
1217
+ gr(o, x)k(x)dv(x)
1218
+ �(1+δ)2
1219
+ holds for all r > 0 outside Eδ = Fδ ∪ (0, r0], where
1220
+ C = Cm24m+3m4θ4 (2m(m − 1)θ)− 2m−1
1221
+ 2m−2 .
1222
+ This shows that (b) holds. The proof is completed.
1223
+
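The Borel-type lemma invoked twice above asserts, roughly, that a nondecreasing function Γ ≥ 1 satisfies Γ′(r) ≤ Γ(r)^{1+δ} for all r outside a set of finite Lebesgue measure. A numeric illustration with one concrete choice of Γ (this choice is an assumption of the sketch, not a function from the paper):

```python
import math

# Borel-type lemma illustration: for Gamma(r) = exp(r) (nondecreasing, >= 1),
# Gamma'(r) = Gamma(r) <= Gamma(r)^(1+delta) holds for ALL r >= 0, so the
# exceptional set is empty here; in general the lemma only guarantees an
# exceptional set of finite Lebesgue measure.

delta = 0.1
bad = []  # radii where the inequality fails
for i in range(1, 2001):
    r = i * 0.01  # sample r in (0, 20]
    gamma = math.exp(r)
    gamma_prime = math.exp(r)
    if gamma_prime > gamma ** (1 + delta):
        bad.append(r)

exceptional_measure = 0.01 * len(bad)
print(exceptional_measure)
```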
1224
+ 3.2.3. Logarithmic derivative lemma.
1225
+ To establish the second main theorem, we still need to prove a logarithmic
1226
+ derivative lemma. Let ∇ be the gradient operator on M associated with the
1227
+ K¨ahler metric h. Let ψ be a meromorphic function on M. The norm of the
1228
+ gradient of ψ is defined by
1229
+ ∥∇ψ∥2 = 2
1230
+ m
1231
+
1232
+ i,j=1
1233
+ hij ∂ψ
1234
+ ∂zi
1235
+ ∂ψ
1236
+ ∂zj ,
1237
+ where (hij) is the inverse of (hij). Define
1238
+ T(r, ψ) = m(r, ψ) + N(r, ψ),
1239
+
1240
+ 22
1241
+ X.-J. DONG
1242
+ where
1243
+ m(r, ψ) =
1244
+
1245
+ ∂∆(r)
1246
+ log+ |ψ|dπr,
1247
+ N(r, ψ) =
1248
+ πm
1249
+ (m − 1)!
1250
+
1251
+ ψ∗∞∩∆(r)
1252
+ gr(o, x)αm−1.
1253
+ It is trivial to show that
1254
+ T
1255
+
1256
+ r,
1257
+ 1
1258
+ ψ − ζ
1259
+
1260
+ = T(r, ψ) + O(1).
1261
+ On P1(C), take a singular metric
1262
+ Ψ =
1263
+ 1
1264
+ |ζ|2(1 + log2 |ζ|)
1265
+ √−1
1266
+ 4π2 dζ ∧ d¯ζ,
1267
+ which gives that
1268
+
1269
+ P1(C)
1270
+ Ψ = 1.
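The normalization ∫_{P¹(C)} Ψ = 1 can be seen in polar coordinates ζ = ρe^{iθ}: the substitution t = log ρ reduces the total mass to (1/π) ∫_R dt/(1+t²) = 1. A numeric confirmation of this reduction (the truncation window and grid size are choices made only for this check):

```python
import math

# Total mass of the singular metric Psi on P^1(C):
# with zeta = rho * e^{i*theta}, one has
# (sqrt(-1)/(4*pi^2)) dzeta ^ d(conj zeta) = (1/(2*pi^2)) dx dy,
# and integrating out theta and substituting t = log(rho) gives
# (1/pi) * integral over R of dt / (1 + t^2) = 1.

def trapezoid(f, a, b, n=200000):
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

# integrate (1/pi) / (1 + t^2) over a large symmetric window
mass = trapezoid(lambda t: 1.0 / (math.pi * (1 + t * t)), -1e3, 1e3)
print(mass)  # close to 1, up to truncation of the tails
```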
1271
Lemma 3.7. We have

(1/(4π)) ∫_{∆(r)} g_r(o, x) [∥∇ψ∥²/(|ψ|²(1 + log²|ψ|))] dv ≤ T(r, ψ) + O(1).

Proof. Since

∥∇ψ∥²/(|ψ|²(1 + log²|ψ|)) = 4mπ (ψ*Ψ ∧ α^{m−1})/α^m,

it follows from Fubini's theorem that

(1/(4π)) ∫_{∆(r)} g_r(o, x) [∥∇ψ∥²/(|ψ|²(1 + log²|ψ|))] dv
 = m ∫_{∆(r)} g_r(o, x) [(ψ*Ψ ∧ α^{m−1})/α^m] dv
 = (π^m/(m−1)!) ∫_{P¹(C)} Ψ(ζ) ∫_{ψ^{−1}(ζ) ∩ ∆(r)} g_r(o, x) α^{m−1}
 = ∫_{P¹(C)} N(r, 1/(ψ − ζ)) Ψ(ζ)
 ≤ ∫_{P¹(C)} (T(r, ψ) + O(1)) Ψ
 = T(r, ψ) + O(1).
1325
Lemma 3.8. Assume that ψ ≢ 0. Then for any δ > 0, there exists a subset E_δ ⊆ (0, ∞) of finite Lebesgue measure such that:
(a) If m = 1, then

∫_{∂∆(r)} log⁺[∥∇ψ∥²/(|ψ|²(1 + log²|ψ|))] dπ_r ≤ (1+δ)² log⁺ T(r, ψ) + δ log r

holds for all r > 0 outside E_δ;
(b) If m ≥ 2, then

∫_{∂∆(r)} log⁺[∥∇ψ∥²/(|ψ|²(1 + log²|ψ|))] dπ_r ≤ (1+δ)² log⁺ T(r, ψ) + ((2m−1)/(2m−2)) δ log r

holds for all r > 0 outside E_δ.

Proof. Using the concavity of log, we have

∫_{∂∆(r)} log⁺[∥∇ψ∥²/(|ψ|²(1 + log²|ψ|))] dπ_r
 ≤ log ∫_{∂∆(r)} (1 + ∥∇ψ∥²/(|ψ|²(1 + log²|ψ|))) dπ_r
 ≤ log⁺ ∫_{∂∆(r)} [∥∇ψ∥²/(|ψ|²(1 + log²|ψ|))] dπ_r + O(1).

By Theorem 3.6 and Lemma 3.7, for m = 1,

log⁺ ∫_{∂∆(r)} [∥∇ψ∥²/(|ψ|²(1 + log²|ψ|))] dπ_r
 ≤ (1+δ)² log⁺ ∫_{∆(r)} g_r(o, x) [∥∇ψ∥²/(|ψ|²(1 + log²|ψ|))] dv + δ log r + O(1)
 ≤ (1+δ)² log⁺ T(r, ψ) + δ log r + O(1)

for all r > 0 outside E_δ. Similarly, for m ≥ 2,

log⁺ ∫_{∂∆(r)} [∥∇ψ∥²/(|ψ|²(1 + log²|ψ|))] dπ_r
 ≤ (1+δ)² log⁺ ∫_{∆(r)} g_r(o, x) [∥∇ψ∥²/(|ψ|²(1 + log²|ψ|))] dv + ((2m−1)/(2m−2)) δ log r + O(1)
 ≤ (1+δ)² log⁺ T(r, ψ) + ((2m−1)/(2m−2)) δ log r + O(1)

for all r > 0 outside E_δ. Adjust E_δ so that the O(1) terms are absorbed.
1394
Define

m(r, ∥∇ψ∥/|ψ|) = ∫_{∂∆(r)} log⁺(∥∇ψ∥/|ψ|) dπ_r.
1405
+
1406
+ 24
1407
+ X.-J. DONG
1408
+ Theorem 3.9. Let ψ be a nonconstant meromorphic function on M. Then
1409
+ for any δ > 0, there exist a subset Eδ ⊆ (0, ∞) with finite Lebesgue measure
1410
+ such that
1411
+ (a) If m = 1, then
1412
+ m
1413
+
1414
+ r, ∥∇ψ∥
1415
+ |ψ|
1416
+
1417
+ ≤ 2 + (1 + δ)2
1418
+ 2
1419
+ log+ T(r, ψ) + δ
1420
+ 2 log r
1421
+ holds for all r > 0 outside Eδ;
1422
+ (b) If m ≥ 2, then
1423
+ m
1424
+
1425
+ r, ∥∇ψ∥
1426
+ |ψ|
1427
+
1428
+ ≤ 2 + (1 + δ)2
1429
+ 2
1430
+ log+ T(r, ψ) + 2m − 1
1431
+ 4m − 4δ log r
1432
+ holds for all r > 0 outside Eδ,
1433
+ Proof. Note that
1434
+ m
1435
+
1436
+ r, ∥∇ψ∥
1437
+ |ψ|
1438
+
1439
+ ≤ 1
1440
+ 2
1441
+
1442
+ ∂∆(r)
1443
+ log+
1444
+ ∥∇ψ∥2
1445
+ |ψ|2(1 + log2 |ψ|)dπr
1446
+ +1
1447
+ 2
1448
+
1449
+ ∂∆(r)
1450
+ log
1451
+
1452
+ 1 + log2 |ψ|
1453
+
1454
+ dπr
1455
+ = 1
1456
+ 2
1457
+
1458
+ ∂∆(r)
1459
+ log+
1460
+ ∥∇ψ∥2
1461
+ |ψ|2(1 + log2 |ψ|)dπr
1462
+ +1
1463
+ 2
1464
+
1465
+ ∂∆(r)
1466
+ log
1467
+
1468
+ 1 +
1469
+
1470
+ log+ |ψ| + log+ 1
1471
+ |ψ|
1472
+ �2�
1473
+ dπr
1474
+ ≤ 1
1475
+ 2
1476
+
1477
+ ∂∆(r)
1478
+ log+
1479
+ ∥∇ψ∥2
1480
+ |ψ|2(1 + log2 |ψ|)dπr
1481
+ + log
1482
+
1483
+ ∂∆(r)
1484
+
1485
+ log+ |ψ| + log+ 1
1486
+ |ψ|
1487
+
1488
+ dπr + O(1)
1489
+ ≤ 1
1490
+ 2
1491
+
1492
+ ∂∆(r)
1493
+ log+
1494
+ ∥∇ψ∥2
1495
+ |ψ|2(1 + log2 |ψ|)dπr + log+ T(r, ψ) + O(1).
1496
+ By Lemma 3.8, for any δ > 0, there exists a subset Eδ ⊆ (0, ∞) with finite
1497
+ Lebesgue measure such that, for m = 1
1498
+ m
1499
+
1500
+ r, ∥∇ψ∥
1501
+ |ψ|
1502
+
1503
+ ≤ 2 + (1 + δ)2
1504
+ 2
1505
+ log+ T(r, ψ) + δ
1506
+ 2 log r + O(1)
1507
+ holds for all r > 0 outside Eδ; and for m ≥ 2
1508
+ m
1509
+
1510
+ r, ∥∇ψ∥
1511
+ |ψ|
1512
+
1513
+ ≤ 2 + (1 + δ)2
1514
+ 2
1515
+ log+ T(r, ψ) + 2m − 1
1516
+ 4m − 4δ log r + O(1)
1517
+ holds for all r > 0 outside Eδ. Adjust Eδ so that O(1) is absorbed.
1518
+
1519
+
1520
3.3. Second main theorem and defect relation.

Let D = D₁ + ··· + D_q be a reduced divisor on X. We say D is of simple normal crossing type if every D_j is smooth and, at every point x ∈ X, there exist a local holomorphic coordinate neighborhood U(z₁, ···, z_m) and a non-negative integer k with 0 ≤ k ≤ m such that

U ∩ D = {z₁ ··· z_k = 0}

whenever U ∩ D ≠ ∅.

We establish the following second main theorem:

Theorem 3.10. Let M be a non-compact complete Kähler manifold with maximal volume growth and non-negative Ricci curvature. Let X be a complex projective manifold of complex dimension not greater than that of M. Let D ∈ |L| be a divisor of simple normal crossing type, where L is a positive line bundle over X. Let f : M → X be a differentiably non-degenerate meromorphic mapping. Then for any δ > 0, there exists a subset E_δ ⊆ (0, ∞) with finite Lebesgue measure such that

T_f(r, L) + T_f(r, K_X) + T(r, R) ≤ N̄_f(r, D) + O(log⁺ T_f(r, L) + δ log r)

holds for all r > 0 outside E_δ.

Proof. Equip every holomorphic line bundle O(D_j) with a Hermitian metric h_j; this induces a Hermitian metric h_L = h₁ ⊗ ··· ⊗ h_q on L with c₁(L, h_L) > 0. Then Ω = c₁ⁿ(L, h_L) is a volume form on X. Pick s_j ∈ H⁰(X, O(D_j)) such that (s_j) = D_j and ∥s_j∥ < 1. On X, define a singular volume form

Φ = Ω / Π_{j=1}^q ∥s_j∥².

Set

f*Φ ∧ α^{m−n} = ξ α^m.

Note that

α^m = m! det(h_{i j̄}) Π_{j=1}^m (√−1/π) dz_j ∧ dz̄_j.

It is clear that

dd^c[log ξ] ≥ f*c₁(L, h_L) − f*Ric(Ω) + R − [Supp f*D]

in the sense of currents, where

R = −dd^c log det(h_{i j̄})

is the Ricci form of M. It yields that

(15)  (1/4) ∫_{∆(r)} g_r(o, x) ∆ log ξ dv ≥ T_f(r, L) + T_f(r, K_X) + T(r, R) − N̄_f(r, D).
1576
+
1577
+ 26
1578
+ X.-J. DONG
1579
+ Next, we need to estimate the upper bound of the first term in (15). Since
1580
+ D is of simple normal crossing type, then there exist a finite open covering
1581
+ {Uλ} of X and finitely many rational functions wλ1, · · · , wλn on X such that
1582
+ wλ1, · · · , wλn are holomorphic on Uλ satisfying that
1583
+ dwλ1 ∧ · · · ∧ dwλn(x) ̸= 0,
1584
+ ∀x ∈ Uλ;
1585
+ D ∩ Uλ =
1586
+
1587
+ wλ1 · · · wλhλ = 0
1588
+
1589
+ ,
1590
+ ∃hλ ≤ n.
1591
+ Moreover, we can require that O(Dj)|Uλ ∼= Uλ × C for λ, j. On Uλ, write
1592
+ Φ =
1593
+
1594
+ |wλ1|2 · · · |wλhλ|2
1595
+ n�
1596
+ k=1
1597
+ √−1
1598
+ π
1599
+ dwλk ∧ d ¯wλk,
1600
+ where eλ is a positive smooth function on Uλ. Set
1601
+ Φλ =
1602
+ φλeλ
1603
+ |wλ1|2 · · · |wλhλ|2
1604
+ n�
1605
+ k=1
1606
+ √−1
1607
+ π
1608
+ dwλk ∧ d ¯wλk,
1609
+ where {φλ} is a partition of unity subordinate to {Uλ}. Set fλk = wλk ◦ f.
1610
+ On f −1(Uλ), we have
1611
+ f ∗Φλ =
1612
+ φλ ◦ f · eλ ◦ f
1613
+ |fλ1|2 · · · |fλhλ|2
1614
+ n�
1615
+ k=1
1616
+ √−1
1617
+ π
1618
+ dfλk ∧ d ¯fλk
1619
+ = φλ ◦ f · eλ ◦ f
1620
+
1621
+ 1≤i1̸=···̸=in≤m
1622
+ ���∂fλ1
1623
+ ∂zi1
1624
+ ���
1625
+ 2
1626
+ |fλ1|2 · · ·
1627
+ ���
1628
+ ∂fλhλ
1629
+ ∂z
1630
+ ihλ
1631
+ ���
1632
+ 2
1633
+ |fλhλ|2
1634
+ ����
1635
+ ∂fλ(hλ+1)
1636
+ ∂zihλ+1
1637
+ ����
1638
+ 2
1639
+ · · ·
1640
+ ����
1641
+ ∂fλn
1642
+ ∂zin
1643
+ ����
1644
+ 2
1645
+ ·
1646
+ �√−1
1647
+ π
1648
+ �n
1649
+ dzi1 ∧ d¯zi1 ∧ · · · ∧ dzin ∧ d¯zin.
1650
+ Fix any x0 ∈ M, we take local holomorphic coordinates z1, · · · , zm near
1651
+ x0 and local holomorphic coordinates ζ1, · · · , ζn near f(x0) such that
1652
+ α|x0 =
1653
+ √−1
1654
+ π
1655
+ m
1656
+
1657
+ j=1
1658
+ dzj ∧ d¯zj,
1659
+ c1(L, hL)|f(x0) =
1660
+ √−1
1661
+ π
1662
+ n
1663
+
1664
+ j=1
1665
+ dζj ∧ d¯ζj.
1666
+ Put f ∗Φλ ∧ αm−n = ξλαm. Clearly, we have ξ = � ξλ and
1667
+ ξλ = φλ ◦ f · eλ ◦ f
1668
+
1669
+ 1≤i1̸=···̸=in≤m
1670
+ ��� ∂fλ1
1671
+ ∂zi1
1672
+ ���
1673
+ 2
1674
+ |fλ1|2 · · ·
1675
+ ���
1676
+ ∂fλhλ
1677
+ ∂z
1678
+ ihλ
1679
+ ���
1680
+ 2
1681
+ |fλhλ|2
1682
+ ����
1683
+ ∂fλ(hλ+1)
1684
+ ∂zihλ+1
1685
+ ����
1686
+ 2
1687
+ · · ·
1688
+ ����
1689
+ ∂fλn
1690
+ ∂zin
1691
+ ����
1692
+ 2
1693
+ ≤ φλ ◦ f · eλ ◦ f
1694
+
1695
+ 1≤i1̸=···̸=in≤m
1696
+ ��∇fλ1
1697
+ ��2
1698
+ |fλ1|2
1699
+ · · ·
1700
+ ��∇fλhλ
1701
+ ��2
1702
+ |fλhλ|2
1703
+ ·
1704
+ ��∇fλ(hλ+1)
1705
+ ��2 · · ·
1706
+ ��∇fλn
1707
+ ��2
1708
+
1709
at x₀. Define a non-negative function ϱ on M by

(16)  f*c₁(L, h_L) ∧ α^{m−1} = ϱ α^m.

Set f_j = ζ_j ∘ f for 1 ≤ j ≤ n. Then, at x₀,

f*c₁(L, h_L) ∧ α^{m−1} = ((m−1)!/2) Σ_{j=1}^n ∥∇f_j∥² α^m.

That is,

ϱ = (m−1)! Σ_{i=1}^n Σ_{j=1}^m |∂f_i/∂z_j|² = ((m−1)!/2) Σ_{j=1}^n ∥∇f_j∥²

at x₀. Combining the above, we are led to

ξ_λ ≤ [φ_λ∘f · e_λ∘f · (2ϱ)^{n−h_λ}/((m−1)!)^{n−h_λ}] Σ_{1≤i₁≠···≠i_n≤m} [∥∇f_{λ1}∥²/|f_{λ1}|²] ··· [∥∇f_{λh_λ}∥²/|f_{λh_λ}|²]

on f^{−1}(U_λ). Since φ_λ∘f · e_λ∘f is bounded on M, it follows from log⁺ ξ ≤ Σ_λ log⁺ ξ_λ + O(1) that

(17)  log⁺ ξ ≤ O(log⁺ ϱ + Σ_{k,λ} log⁺(∥∇f_{λk}∥/|f_{λk}|)) + O(1)

on M. By the Jensen-Dynkin formula (cf. Theorem 2.13),

(18)  (1/4) ∫_{∆(r)} g_r(o, x) ∆ log ξ dv = (1/2) ∫_{∂∆(r)} log ξ dπ_r + O(1).

Combining (17) with (18) and using Theorem 3.9,

(1/4) ∫_{∆(r)} g_r(o, x) ∆ log ξ dv
 ≤ O(Σ_{k,λ} m(r, ∥∇f_{λk}∥/|f_{λk}|) + log⁺ ∫_{∂∆(r)} ϱ dπ_r) + O(1)
 ≤ O(Σ_{k,λ} log⁺ T(r, f_{λk}) + log⁺ ∫_{∂∆(r)} ϱ dπ_r) + O(1)
 ≤ O(log⁺ T_f(r, L) + log⁺ ∫_{∂∆(r)} ϱ dπ_r) + O(1).

Using Theorem 3.6 and (16), for any δ > 0, there exists a subset E_δ ⊆ (0, ∞) with finite Lebesgue measure such that

log⁺ ∫_{∂∆(r)} ϱ dπ_r ≤ O(log⁺ T_f(r, L) + δ log r)

holds for all r > 0 outside E_δ. Thus, we conclude that

(19)  (1/4) ∫_{∆(r)} g_r(o, x) ∆ log ξ dv ≤ O(log⁺ T_f(r, L) + δ log r) + O(1)

for all r > 0 outside E_δ. Combining (15) with (19), we prove the theorem.
1838
+
1839
In view of (1) and (2), T(r, R) has the alternative expressions

T(r, R) = (1/2) ∫_{∆(r)} g_r(o, x) s_C dv = (1/4) ∫_{∆(r)} g_r(o, x) s dv.

The curvature condition implies that T(r, R) ≥ 0, whence we deduce:

Corollary 3.11. Assume the same conditions as in Theorem 3.10. Then for any δ > 0, there exists a subset E_δ ⊆ (0, ∞) with finite Lebesgue measure such that

T_f(r, L) + T_f(r, K_X) ≤ N̄_f(r, D) + O(log⁺ T_f(r, L) + δ log r)

holds for all r > 0 outside E_δ.

We continue by considering a defect relation. Define the defect δ_f(D) and the simple defect δ̄_f(D) of f with respect to D, respectively, by

δ_f(D) = 1 − limsup_{r→∞} N_f(r, D)/T_f(r, L),
δ̄_f(D) = 1 − limsup_{r→∞} N̄_f(r, D)/T_f(r, L).

By the first main theorem, we have 0 ≤ δ_f(D) ≤ δ̄_f(D) ≤ 1.

For any two holomorphic line bundles L₁, L₂ over X, define

[c₁(L₂)/c₁(L₁)] = inf{t ∈ R : ω₂ < tω₁; ∃ω₁ ∈ c₁(L₁), ∃ω₂ ∈ c₁(L₂)}.

Invoking the volume growth condition (3) and the Green function estimate (8) (cf. Lemma 3.5 also) for M, one can deduce that T_f(r, L) ≥ O(log r) for a nonconstant meromorphic mapping f. Hence, we obtain a defect relation:

Theorem 3.12. Assume the same conditions as in Theorem 3.10. Then

δ̄_f(D) ≤ [c₁(K*_X)/c₁(L)] − liminf_{r→∞} T(r, R)/T_f(r, L) ≤ [c₁(K*_X)/c₁(L)].

Note that the above defect relation in Theorem 3.12 is sharp.

Corollary 3.13. There exist no differentiably non-degenerate meromorphic mappings f : M → X satisfying the growth condition

liminf_{r→∞} T(r, R)/T_f(r, L) > [c₁(K*_X)/c₁(L)].
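To make the bound of Theorem 3.12 concrete in the best-known special case (a standard computation, not taken from the text): for X = P^n with L = O(d), the canonical bundle is K_{P^n} = O(−n−1), which yields the classical Carlson-Griffiths-type defect bound.

```latex
% Example (standard): X = \mathbb{P}^n, L = \mathcal{O}_{\mathbb{P}^n}(d).
% Since K_{\mathbb{P}^n} = \mathcal{O}_{\mathbb{P}^n}(-n-1), we have
% K_{\mathbb{P}^n}^* = \mathcal{O}_{\mathbb{P}^n}(n+1), hence
\Big[\frac{c_1(K_X^*)}{c_1(L)}\Big] \;=\; \frac{n+1}{d},
% so for a divisor D \in |\mathcal{O}(d)| of simple normal crossing type,
% Theorem 3.12 gives
\bar{\delta}_f(D) \;\le\; \frac{n+1}{d}.
```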
1908
+
1909
3.4. The case for singular divisors.

We consider a general hypersurface D of X. Let S be the set of points of D at which D has a non-normal-crossing singularity. Hironaka's resolution of singularities (cf. [23]) provides a proper modification

τ : X̃ → X

such that X̃ \ S̃ is biholomorphic to X \ S through τ, and D̃ has only normal crossing singularities, where S̃ = τ^{−1}(S) and D̃ = τ^{−1}(D). Write D̂ = D̃ \ S̃ and denote by S̃_j the irreducible components of S̃. Put

(20)  τ*D = D̂ + Σ p_j S̃_j = D̃ + Σ (p_j − 1) S̃_j,  R_τ = Σ q_j S̃_j,

where R_τ is the ramification divisor of τ, and p_j, q_j are positive integers. Set

(21)  S* = Σ ς_j S̃_j,  ς_j = max{p_j − q_j − 1, 0}.

Endow O(S*) with a Hermitian metric ∥·∥ and take a holomorphic section σ of O(S*) such that the zero divisor (σ) = S* and ∥σ∥ < 1.

Let f : M → X be a meromorphic mapping such that f(M) ⊄ D. Define the proximity function of f with respect to the singularities of D by

m_f(r, Sing(D)) = ∫_{∂∆(r)} log(1/∥σ ∘ τ^{−1} ∘ f∥) dπ_r.

Consider the lift f̃ : M → X̃ defined via τ ∘ f̃ = f. Then f̃ is a holomorphic mapping on M \ Ĩ, where Ĩ = I ∪ f^{−1}(S) and I is the indeterminacy set of f. One can similarly define the Nevanlinna functions of f̃ through the lift of f via τ. It is not difficult to verify that

(22)  m_f(r, Sing(D)) = m_{f̃}(r, S*) = Σ ς_j m_{f̃}(r, S̃_j).

The following second main theorem is an extension of Theorem 3.10.

Theorem 3.14. Let M be a non-compact complete Kähler manifold with maximal volume growth and non-negative Ricci curvature. Let X be a complex projective manifold of complex dimension not greater than that of M. Let D ∈ |L| be a hypersurface of X, where L is a positive line bundle over X. Let f : M → X be a differentiably non-degenerate meromorphic mapping. Then for any δ > 0, there exists a subset E_δ ⊆ (0, ∞) with finite Lebesgue measure such that

T_f(r, L) + T_f(r, K_X) + T(r, R) ≤ N̄_f(r, D) + m_f(r, Sing(D)) + O(log⁺ T_f(r, L) + δ log r)

holds for all r > 0 outside E_δ.
1970
+
1971
+ 30
1972
+ X.-J. DONG
1973
+ Proof. We first show the case where D is the union of smooth hypersurfaces,
1974
+ namely, no irreducible component of ˜D crosses itself. Let E be the union of
1975
+ generic hyperplane sections of X so that the set A = ˜D ∪E has only normal
1976
+ crossing singularities. By (20) and K ˜
1977
+ X = τ ∗KX + O(Rτ)
1978
+ K ˜
1979
+ X + O( ˜D) = τ ∗KX + τ ∗O(D) + (1 − pj + qj)O( ˜Sj)
1980
+ (23)
1981
+ = τ ∗KX + τ ∗L + (1 − pj + qj)O( ˜Sj).
1982
+ Applying Theorem 3.10 to ˜f, we obtain
1983
+ T ˜f(r, O(A)) + T ˜f(r, K ˜
1984
+ X) + T(r, R)
1985
+ ≤ N ˜f(r, A) + O
1986
+
1987
+ log+ T ˜f(r, τ ∗O(A)) + δ log r
1988
+
1989
+ .
1990
+ The First Main Theorem gives that
1991
+ T ˜f(r, O(A)) = m ˜f(r, A) + N ˜f(r, A) + O(1)
1992
+ = m ˜f(r, ˜D) + m ˜f(r, E) + N ˜f(r, A) + O(1)
1993
+ ≥ m ˜f(r, ˜D) + N ˜f(r, A) + O(1)
1994
+ = T ˜f(r, O( ˜D)) − N ˜f(r, ˜D) + N ˜f(r, A) + O(1),
1995
+ which yields
1996
+ T ˜f(r, O(A)) − N ˜f(r, A) ≥ T ˜f(r, O( ˜D)) − N ˜f(r, ˜D) + O(1).
1997
+ Since T ˜f(r, τ ∗L) = Tf(r, L) and N ˜f(r, ˜D) = N f(r, D), then
1998
+ T ˜f(r, O( ˜D)) + T ˜f(r, K ˜
1999
+ X) + T(r, R)
2000
+ (24)
2001
+ ≤ N ˜f(r, ˜D) + O
2002
+
2003
+ log Tf(r, L) + δ log r
2004
+
2005
+ .
2006
+ It follows from (23) that
2007
+ T ˜f(r, O( ˜D)) + T ˜f(r, K ˜
2008
+ X)
2009
+ (25)
2010
+ = T ˜f(r, τ ∗L) + T ˜f(r, τ ∗KX) +
2011
+
2012
+ (1 − pj + qj)T ˜f(r, O( ˜Sj))
2013
+ = Tf(r, L) + Tf(r, KX) +
2014
+
2015
+ (1 − pj + qj)T ˜f(r, O( ˜Sj)).
2016
+ Note that N ˜f(r, ˜S) = 0, thus it yields from (21) and (22) that
2017
+
2018
+ (1 − pj + qj)T ˜f(r, O( ˜Sj)) =
2019
+
2020
+ (1 − pj + qj)m ˜f(r, ˜Sj) + O(1)
2021
+ (26)
2022
+
2023
+
2024
+ ςjm ˜f(r, ˜Sj) + O(1)
2025
+ = mf(r, Sing(D)) + O(1).
2026
+ Combining (24) and (25) with (26), we have the theorem proved in this case.
2027
+ For the general case, according to those arguments stated above, it suffices
2028
+ to prove the case that a hypersurface D is of normal crossing type. Using the
2029
+ arguments from [[39], pp. 175], there exists a proper modification τ : ˜X → X
2030
+
2031
+ NEVANLINNA THEORY AND ALGEBRAIC DEPENDENCE
2032
+ 31
2033
+ such that ˜D = τ −1(D) is the union of a collection of hypersurfaces of simple
2034
+ normal crossing type. It means that mf(r, Sing(D)) = 0. Invoking Theorem
2035
+ 3.10, we can prove the theorem in general.
2036
+
2037
+ Corollary 3.15. Assume the same conditions as in Theorem 3.14. Then
2038
+ ¯δf(D) ≤
2039
+ �c1(K∗
2040
+ X)
2041
+ c1(L)
2042
+
2043
+ + lim sup
2044
+ r→∞
2045
+ mf(r, Sing(D)) − T(r, R)
2046
+ Tf(r, L)
2047
+ .
2048
+ 4. Application to algebraic dependence problems
2049
+ 4.1. Consequences of Theorems 3.1 and 3.10.
2050
+ The purpose of this section is to apply the Nevanlinna theory established
2051
+ in Section 3 to the study of algebraic dependence problems on the dominant
2052
+ meromorphic mappings. We shall point out that some arguments essentially
2053
+ follow Y. Aihara [8].
2054
+ Let X be a complex projective manifold. Let L be an ample line bundle
2055
+ over X. For any holomorphic line bundle F over X, define
2056
+ �F
2057
+ L
2058
+
2059
+ = inf
2060
+
2061
+ γ ∈ Q : γL ⊗ F −1 is big
2062
+
2063
+ .
2064
+ According to Corollary 2.9, we see that [F/L] < 0 if and only if F −1 is big.
2065
+ Let f : M → X be a meromorphic mapping, where M is a K¨ahler manifold.
2066
+ For F ∈ Pic(X) ⊗ Q, we define
2067
+ Tf(r, F) = 1
2068
+ ν Tf(r, νF),
2069
+ where ν is a positive integer such that νF ∈ Pic(X). Evidently, this is well
2070
+ defined. Next, we would like to consider another defect relation. The second
2071
+ main theorem (i.e., Theorem 3.10) yields that
2072
Theorem 4.1. Let M be a non-compact complete Kähler manifold with maximal volume growth and non-negative Ricci curvature. Let X be a complex projective manifold of complex dimension not greater than that of M. Let D ∈ |L| be a divisor of simple normal crossing type, where L is an ample line bundle over X. Let f : M → X be a dominant meromorphic mapping. Then
¯δ_f(D) ≤ [K_X^{-1}/L].
Proof. It follows from the definition of [K_X^{-1}/L] that ([K_X^{-1}/L] + ǫ)L ⊗ K_X is big for any rational number ǫ > 0. Using Corollary 2.9, we obtain
([K_X^{-1}/L] + ǫ)L ⊗ K_X ≥ δL
for a sufficiently small rational number δ > 0. This implies that
T_f(r, K_X^{-1}) ≤ ([K_X^{-1}/L] − δ + ǫ) T_f(r, L) + O(1).
Invoking Theorem 3.10, we conclude that ¯δ_f(D) ≤ [K_X^{-1}/L]. □
The first main theorem (i.e., Theorem 3.1) also yields the following:

Theorem 4.2. Let M be a Kähler manifold and X a complex projective manifold. Let f : M → X be a dominant meromorphic mapping. Assume that µF ⊗ L^{-1} is big for some positive integer µ, where F is a big line bundle over X. Then
T_f(r, L) ≤ µT_f(r, F) + O(1).

Proof. By Theorem 2.8, the bigness of µF ⊗ L^{-1} implies that there exists a nonzero holomorphic section s ∈ H^0(X, ν(µF ⊗ L^{-1})) for a sufficiently large positive integer ν. Note that Theorem 3.1 remains true for a general Kähler manifold M, whence
N_f(r, (s)) ≤ T_f(r, ν(µF ⊗ L^{-1})) + O(1) = µνT_f(r, F) − νT_f(r, L) + O(1).
Since N_f(r, (s)) ≥ 0, this leads to the desired inequality. □
4.2. Propagation of algebraic dependence.
Let M be a non-compact complete Kähler manifold with maximal volume growth and non-negative Ricci curvature, and let X be a complex projective manifold of complex dimension not greater than that of M. Fix an integer l ≥ 2. A proper algebraic subset Σ of X^l is said to be decomposable if there exist positive integers l_1, · · · , l_s with l = l_1 + · · · + l_s for some integer s ≤ l, and algebraic subsets Σ_j ⊆ X^{l_j} for 1 ≤ j ≤ s, such that Σ = Σ_1 × · · · × Σ_s. If Σ is not decomposable, we say that Σ is indecomposable. For l meromorphic mappings f_1, · · · , f_l : M → X, there is a meromorphic mapping f_1 × · · · × f_l : M → X^l defined by
(f_1 × · · · × f_l)(x) = (f_1(x), · · · , f_l(x)), ∀x ∈ M \ ∪_{j=1}^l I(f_j),
where I(f_j) denotes the indeterminacy set of f_j for 1 ≤ j ≤ l. For convenience, set
˜f = f_1 × · · · × f_l.
Definition 4.3. Let S be an analytic subset of M. The nonconstant meromorphic mappings f_1, · · · , f_l : M → X are said to be algebraically dependent on S if there exists a proper indecomposable algebraic subset Σ of X^l such that ˜f(S) ⊆ Σ. In this case, we say that f_1, · · · , f_l are Σ-related on S.
Let L be an ample line bundle over X, and let D_1, · · · , D_q ∈ |L| be such that D_1 + · · · + D_q has only simple normal crossings. Set
G = {f : M → X is a dominant meromorphic mapping}.
Let S_1, · · · , S_q be hypersurfaces of M such that dim_C S_i ∩ S_j ≤ m − 2 for all i ≠ j. Fix q positive integers k_1, · · · , k_q, which may be +∞. Denote by
(27) F = F(f ∈ G; {k_j}; (M, {S_j}); (X, {D_j}))
the set of all f ∈ G such that
S_j = Supp_{k_j} f^*D_j, 1 ≤ j ≤ q.
Let ˜L be a big line bundle over X^l. In general, we have
˜L ∉ π_1^* Pic(X) ⊕ · · · ⊕ π_l^* Pic(X),
where π_k : X^l → X is the natural projection onto the k-th factor for 1 ≤ k ≤ l. Let F_1, · · · , F_l be big line bundles over X. They define a line bundle over X^l by
˜F = π_1^* F_1 ⊗ · · · ⊗ π_l^* F_l.
If ˜L ≠ ˜F, we assume that there is a rational number ˜γ > 0 such that ˜γ ˜F ⊗ ˜L^{-1} is big. If ˜L = ˜F, we take ˜γ = 1. Further, assume that there is a line bundle F_0 ∈ {F_1, · · · , F_l} such that F_0 ⊗ F_j^{-1} is either big or trivial for 1 ≤ j ≤ l.
Let H be the set of all indecomposable hypersurfaces Σ in X^l satisfying Σ = Supp ˜D for some ˜D ∈ |˜L|. Set
S = S_1 ∪ · · · ∪ S_q, k_0 = max{k_1, · · · , k_q}.
Lemma 4.4. Let f_1, · · · , f_l be meromorphic mappings in F. Assume that ˜f(S) ⊆ Σ and ˜f(M) ⊄ Σ for some Σ ∈ H. Then
N(r, S) ≤ ˜γ ∑_{j=1}^l T_{f_j}(r, F_j) + O(1) ≤ ˜γ ∑_{j=1}^l T_{f_j}(r, F_0) + O(1).
Proof. Take ˜D ∈ |˜L| such that Σ = Supp ˜D. As mentioned earlier, ˜γ ˜F ⊗ ˜L^{-1} is big for ˜γ ≠ 1 and trivial for ˜γ = 1. Then, by the hypotheses together with Theorem 3.1 and Theorem 4.2, we conclude that
N(r, S) ≤ T_{˜f}(r, ˜L) + O(1) ≤ ˜γ T_{˜f}(r, ˜F) + O(1) ≤ ˜γ ∑_{j=1}^l T_{f_j}(r, F_j) + O(1) ≤ ˜γ ∑_{j=1}^l T_{f_j}(r, F_0) + O(1).
The proof is completed. □
Lemma 4.5. Let f be a meromorphic mapping in F. Then
qT_f(r, L) + T_f(r, K_X) ≤ (k_0/(k_0 + 1)) N(r, S) + ∑_{j=1}^q (1/(k_j + 1)) N_f(r, D_j) + o(T_f(r, L)).

Proof. By S_j = Supp_{k_j} f^*D_j, we have
N̄_f(r, D_j) ≤ N(r, S_j) ≤ (k_j/(k_j + 1)) N(r, S_j) + (1/(k_j + 1)) N_f(r, D_j) ≤ (k_0/(k_0 + 1)) N(r, S_j) + (1/(k_j + 1)) N_f(r, D_j).
Combining this with the second main theorem (cf. Theorem 3.10), one can easily prove the lemma. □
Define a Q-line bundle L_0 ∈ Pic(X) ⊗ Q by
(28) L_0 = (∑_{j=1}^q k_j/(k_j + 1)) L ⊗ (−(˜γ l k_0/(k_0 + 1)) F_0).
Again, for an arbitrary Q-line bundle H ∈ Pic(X) ⊗ Q, define
T(r, H) = ∑_{j=1}^l T_{f_j}(r, H).
In what follows, we show a theorem on the propagation of algebraic dependence.
Theorem 4.6. Let f_1, · · · , f_l be meromorphic mappings in F. Assume that f_1, · · · , f_l are Σ-related on S for some Σ ∈ H. If L_0 ⊗ K_X is big, then f_1, · · · , f_l are Σ-related on M.
Proof. It suffices to prove ˜f(M) ⊆ Σ. Otherwise, we assume that ˜f(M) ⊄ Σ. According to Theorem 3.1 and Lemma 4.5, for i = 1, · · · , l,
qT_{f_i}(r, L) + T_{f_i}(r, K_X) ≤ (k_0/(k_0 + 1)) N(r, S) + ∑_{j=1}^q (1/(k_j + 1)) T_{f_i}(r, L) + o(T_{f_i}(r, L)),
which yields from Lemma 4.4 that
∑_{j=1}^q (k_j/(k_j + 1)) T_{f_i}(r, L) + T_{f_i}(r, K_X) ≤ (k_0/(k_0 + 1)) N(r, S) + o(T_{f_i}(r, L)) ≤ (˜γ k_0/(k_0 + 1)) ∑_{j=1}^l T_{f_j}(r, F_0) + o(T_{f_i}(r, L)) = (˜γ k_0/(k_0 + 1)) T(r, F_0) + o(T_{f_i}(r, L)).
Summing over i, we get
∑_{j=1}^q (k_j/(k_j + 1)) T(r, L) + T(r, K_X) ≤ (˜γ l k_0/(k_0 + 1)) T(r, F_0) + o(T(r, L)).
It follows that
(29) T(r, L_0) + T(r, K_X) ≤ o(T(r, L)).
On the other hand, the bigness of L_0 ⊗ K_X implies that there exists a positive integer µ such that µ(L_0 ⊗ K_X) ⊗ L^{-1} is a big line bundle. By Theorem 4.2,
T(r, L) ≤ µ(T(r, L_0) + T(r, K_X)) + O(1),
which contradicts (29). We conclude that ˜f(M) ⊆ Σ. This completes the proof. □
Set
γ_0 = [L_0^{-1} ⊗ K_X^{-1}/L].
Then L_0 ⊗ K_X is big if and only if γ_0 < 0. Whence, we obtain:

Corollary 4.7. Let f_1, · · · , f_l be meromorphic mappings in F. Assume that f_1, · · · , f_l are Σ-related on S for some Σ ∈ H. If γ_0 < 0, then f_1, · · · , f_l are Σ-related on M.
Remark 4.8. In the case where γ_0 ≥ 0, we cannot conclude the propagation of algebraic dependence in general. In particular, for γ_0 > 0, the propagation of algebraic dependence does not occur even if one assumes the existence of Picard's deficient divisors. For instance, consider the situation where M = C, X = P^1(C) and l = 2. Set L = F_1 = F_2 = O(1), where O(1) stands for the hyperplane line bundle over P^1(C); this means that F_0 = O(1). Then one can take ˜L = ˜F = π_1^*O(1) ⊗ π_2^*O(1), which implies that ˜γ = 1. Let
f_1(z) = e^z, f_2(z) = e^{−z}; D_1 = 0, D_2 = ∞, D_3 = −1, D_4 = 1.
Then D_1, D_2 are Picard's deficient values of f_1. Put S_j = Supp_1 f_1^* D_j, which means that k_j = 1 for 1 ≤ j ≤ 4. Let S = S_1 ∪ · · · ∪ S_4. It is straightforward to check that f_2 ∈ F. A direct computation leads to
L_0 = (∑_{j=1}^4 1/(1 + 1)) O(1) ⊗ (−(2/(1 + 1)) O(1)) = O(1).
It yields that
γ_0 = [L_0^{-1} ⊗ K_{P^1(C)}^{-1}/L] = [(−O(1)) ⊗ (2O(1))/O(1)] = 1 > 0.
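As a numerical aside (our own sketch, not part of the paper), one can check that f_1 and f_2 agree precisely on S: here S = S_3 ∪ S_4 = {kπi : k ∈ Z}, since e^z omits the values 0 and ∞, and takes the values ±1 exactly on πiZ, where e^{kπi} = (−1)^k = e^{−kπi}:

```python
import cmath

f1 = lambda z: cmath.exp(z)    # f_1(z) = e^z
f2 = lambda z: cmath.exp(-z)   # f_2(z) = e^{-z}

# On S = {k*pi*i : k in Z} the two mappings agree: e^{k*pi*i} = (-1)^k = e^{-k*pi*i}.
for k in range(-5, 6):
    z = k * cmath.pi * 1j
    assert abs(f1(z) - f2(z)) < 1e-9

# Off S they differ, e.g. at z = 1.
assert abs(f1(1.0) - f2(1.0)) > 1.0
```

This only illustrates the two inclusions used below: the pair (f_1, f_2) lands on the diagonal of P^1(C) × P^1(C) over S, but not over all of C.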
Taking Σ to be the diagonal of P^1(C) × P^1(C), we have Σ ∈ H. It is clear that (f_1 × f_2)(S) ⊆ Σ and (f_1 × f_2)(C) ⊄ Σ. Therefore, one cannot conclude the propagation of algebraic dependence if γ_0 > 0. However, for γ_0 = 0, the propagation of algebraic dependence may occur under a defect condition. In fact, we have the following theorem:
Theorem 4.9. Let f_1, · · · , f_l be meromorphic mappings in F. Assume that f_1, · · · , f_l are Σ-related on S for some Σ ∈ H. If γ_0 = 0 and δ_{f_i}(D_j) > 0 for some i ∈ {1, · · · , l} and some j ∈ {1, · · · , q}, then f_1, · · · , f_l are Σ-related on M.

To prove Theorem 4.9, we need a lemma:

Lemma 4.10. Let f_1, · · · , f_l be meromorphic mappings in F. Assume that f_1, · · · , f_l are Σ-related on S for some Σ ∈ H. If γ_0 = 0 and ˜f(M) ⊄ Σ, then there exist positive constants C_1, C_2 such that
C_1 ≤ T_{f_j}(r, L)/T_{f_1}(r, L) ≤ C_2, j = 1, · · · , l,
holds for all sufficiently large r ∉ I, where I = ∪_{j=1}^l I(f_j).
Proof. Let ν be a rational number with 0 < ν < 1. We claim that
(30) ˜γν T(r, F_0) ≤ N(r, S) + O(1)
for all sufficiently large r ∉ I. Otherwise, there exists a monotone increasing sequence {r_n}_{n=1}^∞ outside I with r_n → ∞ as n → ∞, such that
˜γν T(r_n, F_0) > N(r_n, S) + O(1).
Using Lemmas 4.4 and 4.5,
∑_{j=1}^q (k_j/(k_j + 1)) T(r_n, L) + T(r_n, K_X) ≤ (˜γν l k_0/(k_0 + 1)) T(r_n, F_0) + o(T(r_n, L)).
This yields that
(31) T(r_n, L_0 ⊗ K_X) + λT(r_n, F_0) ≤ o(T(r_n, L)),
where
λ = ˜γ l k_0(1 − ν)/(k_0 + 1) > 0.
Since γ_0 = 0, the bigness of F_0 implies the bigness of L_0 ⊗ K_X ⊗ λF_0. Hence, L_0 ⊗ K_X ⊗ λF_0 ≥ δL for a sufficiently small rational number δ > 0. Thus,
δT(r_n, L) ≤ T(r_n, L_0) + T(r_n, K_X) + λT(r_n, F_0) + O(1).
Combining this with (31), we get δT(r_n, L) ≤ o(T(r_n, L)), which is a contradiction; this confirms (30). By Theorem 3.1,
N(r, S) = ∑_{j=1}^q N(r, S_j) ≤ ∑_{j=1}^q N_{f_1}(r, D_j) ≤ qT_{f_1}(r, L) + O(1).
Combining this with (30), we get
˜γν T(r, F_0) ≤ qT_{f_1}(r, L) + O(1).
Since F_0 is big, for a sufficiently small rational number ǫ > 0,
ǫ˜γν T_{f_j}(r, L) ≤ ǫ˜γν T(r, L) ≤ ˜γν T(r, F_0) ≤ qT_{f_1}(r, L) + O(1).
This proves the lemma. □
Proof of Theorem 4.9.
It suffices to show that ˜f(M) ⊆ Σ. Otherwise, we assume that ˜f(M) ⊄ Σ. From the definition of the defect, for any ǫ > 0 there is a subset E_ǫ ⊆ (0, ∞) with finite Lebesgue measure such that
N_{f_s}(r, D_j) < (1 − δ_{f_s}(D_j) + ǫ) T_{f_s}(r, L)
for all r > 0 outside E_ǫ. By (28), Lemma 4.4 and Lemma 4.5,
T(r, L_0) + T(r, K_X) ≤ −∑_{s=1}^l ∑_{j=1}^q (1/(k_j + 1))(δ_{f_s}(D_j) − ǫ) T_{f_s}(r, L) + o(T(r, L)) ≤ qǫT(r, L) − (1/(k_j + 1)) δ_{f_i}(D_j) T_{f_i}(r, L) + o(T(r, L)),
which yields that
(1/(k_j + 1)) δ_{f_i}(D_j) T_{f_i}(r, L) + T(r, L_0) + T(r, K_X) − qǫT(r, L) ≤ o(T(r, L)).
Since [L_0^{-1} ⊗ K_X^{-1}/L] = 0, we have L_0 ⊗ K_X ≥ −γL for an arbitrary rational number γ > 0. It implies that
−γT(r, L) ≤ T(r, L_0) + T(r, K_X) + O(1).
Whence,
(1/(k_j + 1)) δ_{f_i}(D_j) T_{f_i}(r, L) − (γ + qǫ)T(r, L) ≤ o(T(r, L)).
Note that δ_{f_i}(D_j) > 0 and that ǫ, γ can be taken arbitrarily small. However, Lemma 4.10 implies that the above inequality cannot hold if ǫ and γ are small enough. This is a contradiction. Thus, we have ˜f(M) ⊆ Σ. The proof is completed. □
4.3. Unicity theorems.
We apply the propagation theorems of algebraic dependence to the unicity problems for meromorphic mappings from a complete Kähler manifold into a complex projective manifold.
Since X is projective, there is a holomorphic embedding Φ : X ֒→ P^N(C). Let O(1) be the hyperplane line bundle over P^N(C). Now take l = 2 and F_1 = F_2 = Φ^*O(1), which gives F_0 = Φ^*O(1) and
˜F = π_1^*(Φ^*O(1)) ⊗ π_2^*(Φ^*O(1)).
Again, take ˜L = ˜F; then ˜γ = 1. In view of (28), we have
(32) L_0 = (∑_{j=1}^q k_j/(k_j + 1)) L ⊗ (−(2k_0/(k_0 + 1)) Φ^*O(1)).
Suppose that D_1, · · · , D_q ∈ |L| are such that D_1 + · · · + D_q has only simple normal crossings. For f_0 ∈ F, one says that the set {D_j}_{j=1}^q is generic with respect to f_0 and Φ if
M̂_s = f_0(M − I(f_0)) ∩ Supp D_s ∩ {x ∈ X : rank(dΦ(x)) = dim_C X} ≠ ∅
for at least one s ∈ {1, · · · , q}. Next, assume that the set {D_j}_{j=1}^q is generic with respect to f_0 and Φ. Denote by F_0 the set of all meromorphic mappings f ∈ F such that f = f_0 on the hypersurface S.
Theorem 4.11. If L_0 ⊗ K_X is big, then F_0 contains exactly one element.

Proof. Let f ∈ F_0 be an arbitrary meromorphic mapping; it suffices to show that f ≡ f_0. Recall that Φ : X ֒→ P^N(C) is a holomorphic embedding. Since f = f_0 on S, we have Φ ◦ f = Φ ◦ f_0 on S. Now we assert that Φ ◦ f ≡ Φ ◦ f_0. Otherwise, we may assume that Φ ◦ f ≢ Φ ◦ f_0. Let ∆ denote the diagonal of P^N(C) × P^N(C). Put ˜Φ = Φ × Φ and ˜f = f × f_0. Then this gives a meromorphic mapping
φ = ˜Φ ◦ ˜f : M → P^N(C) × P^N(C).
Again, set ˜O(1) = π_1^*O(1) ⊗ π_2^*O(1), which is a holomorphic line bundle over P^N(C) × P^N(C), where O(1) is the hyperplane line bundle over P^N(C). It is easy to see that ˜L = ˜Φ^* ˜O(1). Since Φ ◦ f ≢ Φ ◦ f_0, there exists a holomorphic global section ˜σ of ˜O(1) which satisfies φ^*˜σ ≠ 0 and ∆ ⊆ Supp(˜σ). Take Σ = Supp ˜Φ^*(˜σ); it yields that ˜f(S) ⊆ Σ and ˜f(M) ⊄ Σ. On the other hand, with the aid of Theorem 4.6, the bigness of L_0 ⊗ K_X implies that ˜f(M) ⊆ Σ, which is a contradiction. Thus, we obtain Φ ◦ f ≡ Φ ◦ f_0. By the assumption, M̂_s ≠ ∅ for some s ∈ {1, · · · , q}. Take a point P ∈ M̂_s; then there is an open neighborhood U_P of P such that Φ|_{U_P} : U_P → Φ(U_P) is biholomorphic. Let V = U_P. Since Φ ◦ f = Φ ◦ f_0 and f = f_0 on S, we deduce that f = f_0 on V. By the uniqueness theorem for analytic functions, we see that f ≡ f_0. □
Using arguments similar to those in the proofs of Theorems 4.11 and 4.9, we can also prove the following theorem:

Theorem 4.12. Assume that [L_0^{-1} ⊗ K_X^{-1}/L] = 0. If δ_{f_0}(D_s) > 0 for some s ∈ {1, · · · , q}, then F_0 contains exactly one element.

4.4. Targets are compact Riemann surfaces and P^n(C).
In this section, we consider some consequences of Theorems 4.11 and 4.12 (and also of Theorems 4.6 and 4.9) in the case where X is a compact Riemann surface or a complex projective space.
4.4.1. X is a compact Riemann surface.
Let R be a compact Riemann surface of genus g. Note that R is viewed as P^1(C) for g = 0, and as a torus for g = 1. Let f : M → R be a nonconstant holomorphic mapping. Let a_1, · · · , a_q be distinct points in R. Theorem 3.12 gives a defect relation:
(33) ∑_{j=1}^q ¯δ_f(a_j) ≤ 2 − 2g.
A1. The cases g = 0 and g ≥ 2.
In the case g = 0, we have:

Theorem 4.13. Let f_1, f_2 : M → P^1(C) be nonconstant holomorphic mappings. Let a_1, · · · , a_q be distinct points in P^1(C). We have:
(a) Assume that Supp f_1^* a_j = Supp f_2^* a_j for all j. If q ≥ 5, then f_1 ≡ f_2;
(b) Assume that Supp_1 f_1^* a_j = Supp_1 f_2^* a_j for all j. If q ≥ 7, then f_1 ≡ f_2.

Proof. In Theorem 4.11, we put X = P^1(C) and L = O(1), as well as k_j = ∞ for all j. Note that K_{P^1(C)} = −2O(1). It yields from (32) that
L_0 ⊗ K_{P^1(C)} = qO(1) ⊗ (−2O(1)) ⊗ (−2O(1)) = (q − 4)O(1).
Hence, L_0 ⊗ K_{P^1(C)} is big provided that q ≥ 5. Using Theorem 4.11, (a) holds. For (b), we just need to put k_j = 1 for all j. In this case,
L_0 ⊗ K_{P^1(C)} = (q/2)O(1) ⊗ (−O(1)) ⊗ (−2O(1)) = ((q − 6)/2) O(1).
Clearly, L_0 ⊗ K_{P^1(C)} is big provided that q ≥ 7. This completes the proof. □

In the case g ≥ 2, it follows from (33) that each holomorphic mapping f : M → R is a constant mapping. Therefore, this case is actually trivial.
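The line-bundle arithmetic in the proof above can be sanity-checked mechanically. The helper below (our own sketch, not code from the paper) evaluates deg(L_0 ⊗ K_{P^n(C)}) from formula (28) under the simplifying assumptions that all k_j are equal, L = dO(1) and F_0 = d_0O(1), and recovers the thresholds q ≥ 5 and q ≥ 7 of Theorem 4.13:

```python
from fractions import Fraction

def deg_L0_plus_K(q, k, d=1, d0=1, n=1, l=2, gamma=1):
    # deg(L_0 ⊗ K_{P^n}) from (28) with all k_j = k (so k_0 = k), L = d·O(1),
    # F_0 = d_0·O(1), and deg K_{P^n} = -(n + 1).  Our own sanity check.
    frac = Fraction(1) if k == float("inf") else Fraction(k, k + 1)  # k/(k+1)
    return q * frac * d - gamma * l * frac * d0 - (n + 1)

# Theorem 4.13(a): k = ∞ on P^1 gives degree q - 4, first positive at q = 5.
assert deg_L0_plus_K(4, float("inf")) == 0
assert deg_L0_plus_K(5, float("inf")) > 0

# Theorem 4.13(b): k = 1 gives degree (q - 6)/2, first positive at q = 7.
assert deg_L0_plus_K(6, 1) == 0
assert deg_L0_plus_K(7, 1) > 0
```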
A2. The case g = 1.
By (33), each nonconstant holomorphic mapping f : M → R is surjective. Here, we only consider the case where the torus R is a smooth elliptic curve, denoted by E. When M is a finite analytic ramified covering of C^m, Aihara proved a unicity theorem (cf. [8], Theorem 5.1). In particular, when M = C^m, he showed (cf. [8], Theorem 5.2):

Theorem 4.14 (Aihara). Let f_1, f_2 : C^m → E be nonconstant holomorphic mappings. Let D_1 = {a_1, · · · , a_q} be a set of points in E. Set D_2 = φ(D_1) with #(D_2) = p, where φ ∈ End(E). Assume that Supp_1 f_1^*D_1 = Supp_1 f_2^*D_2. If pq > (p + q)(deg φ + 1), then f_2 = φ(f_1).

Using the theorems on the propagation of algebraic dependence obtained in Section 4.2, we see that the arguments in the proof of Theorem 4.14 carry over to our situation, where the domain is M. Hence, there is no need to repeat the details. We give a generalization of Theorem 4.14 as follows:

Theorem 4.15. Let f_1, f_2 : M → E be nonconstant holomorphic mappings. Let D_1 = {a_1, · · · , a_q} be a set of points in E. Set D_2 = φ(D_1) with #(D_2) = p, where φ ∈ End(E). Assume that Supp_1 f_1^*D_1 = Supp_1 f_2^*D_2. If pq > (p + q)(deg φ + 1), then f_2 = φ(f_1).

In particular, when φ = Id, we have deg φ = 1. Thus, it yields:

Corollary 4.16. Let f_1, f_2 : M → E be nonconstant holomorphic mappings. Let a_1, · · · , a_q be distinct points in E. Assume that Supp_1 f_1^* a_j = Supp_1 f_2^* a_j for all j. If q ≥ 5, then f_1 ≡ f_2.
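The bound q ≥ 5 in Corollary 4.16 can be checked directly: with φ = Id one has p = q and deg φ = 1, so the condition pq > (p + q)(deg φ + 1) of Theorem 4.15 reads q^2 > 4q. A one-line verification (our own sketch):

```python
# With p = q and deg φ = 1, the condition pq > (p + q)(deg φ + 1) becomes q^2 > 4q.
holds = [q for q in range(1, 20) if q * q > (q + q) * (1 + 1)]
assert holds == list(range(5, 20))  # satisfied exactly for q >= 5
```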
4.4.2. X is a complex projective space.
We consider the case X = P^n(C). For a Q-line bundle F ∈ Pic(P^n(C)) ⊗ Q, note that F is big if and only if F is ample. Let F_1, F_2 be ample line bundles over P^n(C). Since Pic(P^n(C)) ≅ Z, there are positive integers d_1, d_2 such that F_1 = d_1O(1) and F_2 = d_2O(1). Again, define a holomorphic line bundle ˜F over P^n(C) × P^n(C) by ˜F = π_1^*F_1 ⊗ π_2^*F_2. A well-known fact says that
Pic(P^n(C) × P^n(C)) = π_1^* Pic(P^n(C)) ⊕ π_2^* Pic(P^n(C)).
Thus, we may assume that ˜L = ˜F. Let σ be a holomorphic section of ˜L over P^n(C) × P^n(C). Note that σ can be identified with a homogeneous polynomial P(ξ, ζ) of degree d_1 in ξ = [ξ_0 : · · · : ξ_n] and degree d_2 in ζ = [ζ_0 : · · · : ζ_n]. Set d_0 = max{d_1, d_2}. Let L be an ample line bundle over P^n(C) with L = dO(1). Let S = S_1 ∪ · · · ∪ S_q, where S_1, · · · , S_q are q hypersurfaces of M such that dim_C S_i ∩ S_j ≤ m − 2 for i ≠ j. Let D_1, · · · , D_q ∈ |L| be such that D_1 + · · · + D_q has simple normal crossings.
Based on the above notations, we give the following theorem:

Theorem 4.17. Let f_1, f_2 : M → P^n(C) be dominant meromorphic mappings. Assume that Supp_k f_1^*D_j = Supp_k f_2^*D_j = S_j with 1 ≤ k ≤ +∞ for all j. Suppose, in addition, that P(f_1, f_2) = 0 on S. If
q > (2d_0 + (1 + n)(1 + k^{-1}))/d,
then P(f_1, f_2) ≡ 0.

Proof. Recall that F_0 is defined as an element of {F_1, F_2} such that F_0 ⊗ F_j^{-1} is either big or trivial for j = 1, 2. Hence, we have F_0 = d_0O(1). Since ˜L = ˜F, we have ˜γ = 1. We also note that l = 2 and k_0 = k_j = k. Thus, it follows from (28) that
L_0 = (∑_{j=1}^q k/(k + 1)) dO(1) ⊗ (−(2kd_0/(k + 1)) O(1)) = (k(dq − 2d_0)/(k + 1)) O(1).
By K_{P^n(C)} = −(n + 1)O(1), we get
L_0 ⊗ K_{P^n(C)} = ((k(dq − 2d_0) − (k + 1)(n + 1))/(k + 1)) O(1).
Hence, L_0 ⊗ K_{P^n(C)} is big if k(dq − 2d_0) − (k + 1)(n + 1) > 0, i.e.,
q > (2d_0 + (1 + n)(1 + k^{-1}))/d.
Invoking Theorem 4.6, we have the theorem proved. □
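The two expressions for deg(L_0 ⊗ K_{P^n(C)}) used in this proof, namely the degree assembled from (28) and the closed form (k(dq − 2d_0) − (k + 1)(n + 1))/(k + 1), can be cross-checked mechanically (our own sketch, assuming ˜γ = 1, l = 2 and all k_j = k as in the proof):

```python
from fractions import Fraction

def deg_from_28(q, k, d, d0, n):
    # Degree assembled from (28): q·(k/(k+1))·d − 2·(k/(k+1))·d_0 − (n + 1).
    return q * Fraction(k, k + 1) * d - 2 * Fraction(k, k + 1) * d0 - (n + 1)

def deg_from_proof(q, k, d, d0, n):
    # The closed form computed in the proof of Theorem 4.17.
    return Fraction(k * (d * q - 2 * d0) - (k + 1) * (n + 1), k + 1)

# The two formulas agree on a grid of parameters.
for q in range(1, 8):
    for k in range(1, 5):
        for d in (1, 2, 3):
            for d0 in (1, 2):
                for n in (1, 2, 3):
                    assert deg_from_28(q, k, d, d0, n) == deg_from_proof(q, k, d, d0, n)
```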
Take L = F_1 = F_2 = O(1); then d = d_1 = d_2 = 1. In this case, D_1, · · · , D_q reduce to q hyperplanes H_1, · · · , H_q. Using Theorem 4.17, it yields:

Corollary 4.18. Let f_1, f_2 : M → P^n(C) be dominant meromorphic mappings. Assume that Supp_k f_1^*H_j = Supp_k f_2^*H_j = S_j with 1 ≤ k ≤ +∞ for all j. Suppose, in addition, that P(f_1, f_2) = 0 on S. If
q > 2 + (1 + n)(1 + k^{-1}),
then P(f_1, f_2) ≡ 0.

Keeping the same notations as in Theorem 4.17, we get:

Theorem 4.19. Let f_1, f_2 : M → P^n(C) be dominant meromorphic mappings. Assume that Supp_k f_1^*D_j = Supp_k f_2^*D_j = S_j with 1 ≤ k ≤ +∞ for all j. Suppose, in addition, that P(f_1, f_2) = 0 on S. If δ_{f_1}(D_j) > 0 for some j ∈ {1, · · · , q} and
q = (2d_0 + (1 + n)(1 + k^{-1}))/d ∈ Z,
then P(f_1, f_2) ≡ 0.

Proof. The given relation between q, d and d_0 implies that L_0 ⊗ K_{P^n(C)} is trivial. Invoking Theorem 4.9, the theorem can be proved. □
Similarly, take L = F_1 = F_2 = O(1). Theorem 4.19 yields:

Corollary 4.20. Let f_1, f_2 : M → P^n(C) be dominant meromorphic mappings. We have:
(a) Assume that Supp f_1^*D_j = Supp f_2^*D_j = S_j for all j, and assume that P(f_1, f_2) = 0 on S. If δ_{f_1}(D_j) > 0 for some j ∈ {1, · · · , q} with q = n + 3, then P(f_1, f_2) ≡ 0;
(b) Assume that Supp_1 f_1^*D_j = Supp_1 f_2^*D_j = S_j for all j, and assume that P(f_1, f_2) = 0 on S. If δ_{f_1}(D_j) > 0 for some j ∈ {1, · · · , q} with q = 2n + 4, then P(f_1, f_2) ≡ 0.
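The equality cases q = n + 3 and q = 2n + 4 in Corollary 4.20 come from the threshold of Theorem 4.17 with d = d_0 = 1. A small sanity check (our own sketch; 1/k is read as 0 when k = ∞):

```python
from fractions import Fraction

def q_threshold(n, k, d=1, d0=1):
    # Right-hand side of the bound in Theorem 4.17: (2·d_0 + (1 + n)(1 + 1/k))/d.
    inv_k = Fraction(0) if k == float("inf") else Fraction(1, k)
    return (2 * d0 + (1 + n) * (1 + inv_k)) / d

# For d = d_0 = 1 the equality cases are q = n + 3 (k = ∞) and q = 2n + 4 (k = 1).
for n in range(1, 6):
    assert q_threshold(n, float("inf")) == n + 3
    assert q_threshold(n, 1) == 2 * n + 4
```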
References
[1] A. Atsuji, Nevanlinna theory via stochastic calculus, J. Func. Anal. 132 (1995), 473-510.
[2] A. Atsuji, A second main theorem of Nevanlinna theory for meromorphic functions on complete Kähler manifolds, J. Math. Soc. Japan 60 (2008), 471-493.
[3] A. Atsuji, Estimates on the number of the omitted values by meromorphic functions, Advanced Studies in Pure Math. 57 (2010), 49-59.
[4] A. Atsuji, On the number of omitted values by a meromorphic function of finite energy and heat diffusions, J. Geom. Anal. 20 (2010), 1008-1025.
[5] A. Atsuji, Nevanlinna-type theorems for meromorphic functions on non-positively curved Kähler manifolds, Forum Math. 30 (2018), 171-189.
[6] Y. Aihara, A unicity theorem of meromorphic mappings into compactified locally symmetric spaces, Kodai Math. J. 14 (1991), 392-405.
[7] Y. Aihara, Unicity theorems for meromorphic mappings with deficiencies, Complex Variables 42 (2000), 259-268.
[8] Y. Aihara, Algebraic dependence of meromorphic mappings in value distribution theory, Nagoya Math. J. 169 (2003), 145-178.
[9] J. Carlson and P. Griffiths, A defect relation for equidimensional holomorphic mappings between algebraic varieties, Ann. Math. 95 (1972), 557-584.
[10] T. K. Carne, Brownian motion and Nevanlinna theory, Proc. London Math. Soc. (3) 52 (1986), 349-368.
[11] T. H. Colding and W. P. Minicozzi II, Large scale behavior of kernels of Schrödinger operators, Amer. J. Math. (6) 119 (1997), 1355-1398.
[12] T. K. Carne, Brownian motion and Nevanlinna theory, Proc. London Math. Soc. (3) 52 (1986), 349-368.
[13] S. Y. Cheng and S. T. Yau, Differential equations on Riemannian manifolds and their geometric applications, Comm. Pure Appl. Math. 28 (1975), 333-354.
[14] M. Dulock and M. Ru, Uniqueness of holomorphic curves into Abelian varieties, Trans. Amer. Math. Soc. 363 (2010), 131-142.
[15] S. J. Drouilhet, Criteria for algebraic dependence of meromorphic mappings into algebraic varieties, Illinois J. Math. 26 (1982), 492-502.
[16] X. J. Dong, Y. He and M. Ru, Nevanlinna theory through the Brownian motion, Sci. China Math. 62 (2019), 2131-2154.
[17] X. J. Dong, Carlson-Griffiths theory for complete Kähler manifolds, J. Inst. Math. Jussieu, (2022), 1-29.
[18] X. J. Dong and S. S. Yang, Nevanlinna theory via holomorphic forms, Pacific J. Math. (1) 319 (2022), 55-74.
[19] X. J. Dong, Nevanlinna-type theory based on heat diffusion, Asian J. Math., to appear, (2023), arXiv: 2006.04572.
[20] H. Fujimoto, Uniqueness problem with truncated multiplicities in value distribution theory, Nagoya Math. J. I, 152 (1998), 131-152; II, 155 (1999), 161-188.
[21] P. Griffiths and J. Harris, Principles of Algebraic Geometry, Wiley, New York, 2nd edn., (2011).
[22] P. Griffiths and J. King, Nevanlinna theory and holomorphic mappings between algebraic varieties, Acta Math. 130 (1973), 146-220.
[23] H. Hironaka, Resolution of singularities of an algebraic variety over a field of characteristic zero, Ann. Math. 79 (1964), I, 109-203; II, 205-326.
[24] S. Ji, A uniqueness theorem for meromorphic mappings between algebraic varieties, Trans. Amer. Math. Soc. 265 (1981), 349-358.
[25] S. Ji, Uniqueness theorems without multiplicities in value distribution theory, Pacific J. Math. 135 (1988), 323-348.
[26] K. Kodaira, On holomorphic mappings of polydiscs into compact complex manifolds, J. Diff. Geometry 6 (1971), 31-46.
[27] S. Kobayashi and T. Ochiai, Meromorphic mappings into compact complex manifolds with negative first Chern class, J. Math. Soc. Japan 23 (1971), 137-148.
[28] P. Li and L. Tam, Symmetric Green's functions on complete manifolds, Amer. J. Math. (6) 109 (1987), 1129-1154.
[29] P. Li, L. Tam and J. Wang, Sharp bounds for Green's functions and the heat kernel, Math. Res. Let. 4 (1997), 589-602.
[30] P. Li and L. Tam, Complete surfaces with finite total curvature, J. Diff. Geom. 33 (1991), 139-168.
[31] P. Li and S. T. Yau, On the parabolic kernel of the Schrödinger operator, Acta Math. 156 (1986), 153-201.
[32] M. Malgrange, Existence et approximation des solutions des équations aux dérivées partielles et des équations de convolution, Ann. Inst. Fourier 6 (1955), 271-355.
[33] S. Lang and W. Cherry, Topics in Nevanlinna Theory, Springer, (2014).
[34] J. Noguchi, Meromorphic mappings of a covering space over C^m into a projective variety and defect relations, Hiroshima Math. J. 6 (1976), 265-280.
[35] J. Noguchi, On the value distribution of meromorphic mappings of covering spaces over C^m into algebraic varieties, J. Math. Soc. Japan 37 (1985), 295-313.
[36] J. Noguchi and J. Winkelmann, Nevanlinna Theory in Several Complex Variables and Diophantine Approximation, A Series of Comprehensive Studies in Mathematics, Springer, (2014).
[37] R. Nevanlinna, Zur Theorie der meromorphen Funktionen, Acta Math. 46 (1925), 1-99.
[38] M. Ru, Nevanlinna Theory and Its Relation to Diophantine Approximation, 1st edn., World Scientific Publishing, (2001).
[39] B. Shiffman, Nevanlinna defect relations for singular divisors, Invent. Math. 31 (1975), 155-182.
[40] F. Sakai, Degeneracy of holomorphic maps with ramification, Invent. Math. 26 (1974), 213-229.
[41] F. Sakai, Defect relations and ramifications, Proc. Japan Acad. 50 (1974), 723-728.
[42] J. P. Sha and D. G. Yang, Examples of manifolds of positive Ricci curvature, J. Diff. Geom. 29 (1989), 95-103.
[43] L. Smiley, Dependence theorems for meromorphic maps, Ph.D. Thesis, Notre Dame University, (1979).
[44] R. Schoen and S. T. Yau, Lectures on Differential Geometry, International Press, (2010).
[45] W. Stoll, Value distribution on parabolic spaces, Lecture Notes in Mathematics, Springer, 600 (1977).
[46] W. Stoll, Value Distribution Theory for Meromorphic Maps, Vieweg-Teubner Verlag, (1985).
[47] W. Stoll, Propagation of dependence, Pacific J. Math. 139 (1989), 311-336.
[48] N. Varopoulos, Green's function on positively curved manifolds, J. Funct. Anal. 45 (1982), 109-118.
[49] P. M. Wong and W. Stoll, Second main theorem of Nevanlinna theory for nonequidimensional meromorphic maps, Amer. J. Math. 116 (1994), 1031-1071.
[50] P. M. Wong and P. W. Wong, The second main theorem on generalized parabolic manifolds, in Some Topics on Value Distribution and Differentiability in Complex and p-adic Analysis, Mathematical Monographs Series, Science Press, Beijing, China, 11 (2008), 3-41.
[51] Y. Liu and M. Ru, A defect relation for meromorphic maps on parabolic manifolds intersecting hypersurfaces, Illinois J. Math. 49 (2005), 237-257.
[52] F. Y. Zheng, Complex Differential Geometry, AMS/IP, Studies in Advanced Mathematics, (2002).
X.-J. Dong
School of Mathematical Sciences, Qufu Normal University, Qufu, Jining, Shandong, 273165, P. R. China
Email address: [email protected]
MtAzT4oBgHgl3EQfV_zH/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
TNAzT4oBgHgl3EQfXfzv/content/tmp_files/2301.01321v1.pdf.txt ADDED
@@ -0,0 +1,1732 @@
 
+ arXiv:2301.01321v1 [cs.CR] 3 Jan 2023
+ Cheesecloth: Zero-Knowledge Proofs of Real-World Vulnerabilities
+ Santiago Cuéllar∗, Galois, Inc.
+ Bill Harris, Galois, Inc.
+ James Parker, Galois, Inc.
+ Stuart Pernsteiner, Galois, Inc.
+ Eran Tromer, Columbia University
+ Abstract
+ Currently, when a security analyst discovers a vulnerability in a critical software system, they must navigate a fraught dilemma: immediately disclosing the vulnerability to the public could harm the system’s users; whereas disclosing the vulnerability only to the software’s vendor lets the vendor disregard or deprioritize the security risk, to the detriment of unwittingly-affected users.
+ A compelling recent line of work aims to resolve this by using Zero Knowledge (ZK) protocols that let analysts prove that they know a vulnerability in a program, without revealing the details of the vulnerability or the inputs that exploit it. In principle, this could be achieved by generic ZK techniques. In practice, ZK vulnerability proofs to date have been restricted in scope and expressibility, due to challenges related to generating proof statements that model real-world software at scale and to directly formulating violated properties.
+ This paper presents CHEESECLOTH, a novel proof-statement compiler, which proves practical vulnerabilities in ZK by soundly-but-aggressively preprocessing programs on public inputs, selectively revealing information about executed control segments, and formalizing information leakage using a novel storage-labeling scheme. CHEESECLOTH’s practicality is demonstrated by generating ZK proofs of well-known vulnerabilities in (previous versions of) critical software, including the Heartbleed information leakage in OpenSSL and a memory vulnerability in the FFmpeg graphics framework.
+ 1 Introduction
+ Ideally, programs that process sensitive information would always execute safely and securely. With this ideal remaining difficult to achieve for the foreseeable future, it is critical that when programs are found to be vulnerable, the program’s affected users are alerted quickly and safely. This requirement presents a challenge: convincingly disclosing a vulnerability appears to require sharing the vulnerability’s details (such as an exploit that triggers it), thereby placing users at greater risk in the short term.
+ ∗Authors listed alphabetically.
+ A promising approach to disclosing vulnerabilities convincingly yet safely is to leverage Zero-Knowledge (ZK) proofs: protocols in which one party—designated as the prover—convinces another party—designated as the verifier—of the validity of a claim without revealing any additional information about the claim’s evidence.
+ Such a use of ZK proofs has arguably been a conceptual possibility ever since the initial fundamental results establishing that they exist for all problems in NP [26]. It has become more realistic with improvements to underlying ZK protocols and with the emergence of schemes for encoding knowledge of executions of programs written in convenient languages (starting with [10,24] and discussed further below).
+ In order to prove vulnerabilities in ZK about practical software, several open problems remain to be addressed. First, proof frameworks must scale to compile proofs of vulnerabilities that require considerably more steps of execution and space. TinyRAM [10–12] is sufficiently flexible to validate the executions of applications, but it is expensive, in part due to the fact that it simulates every instruction in the modeled CPU’s ISA in each step. TinyRAM’s performance is surpassed by those of Pantry [17] and Buffet [52], but both frameworks require loops to be unrolled to a public bound: publicly revealing these bounds leaks information about the underlying vulnerability.
+ A second open problem is to efficiently compile statements from an understandable form. One immediate approach is to execute a program under a dynamic safety monitor for well-understood safety properties, such as those implemented in Valgrind [41]; however, directly encoding the additional monitoring would induce prohibitively large overhead. Approaches for verifying low-level exploits in ZK [27] rely on being able to efficiently compile directly-understandable properties into statements of control-location reachability.
+ To address these problems, we present CHEESECLOTH, an optimizing ZK proof-statement generator that efficiently encodes vulnerabilities in practical software. The contributions behind CHEESECLOTH’s design include:
+ 1. Optimizations of ZK statements that verify the executions of programs, taking advantage of program structure but without revealing additional information about the execution. Specifically, Public-PC segments construct execution traces from segments with public program counters, thus enabling aggressive constant folding, without leaking information about the overall execution trace. Similarly, instructions which are publicly determined to be executed infrequently are sparsely supported (i.e., cannot be executed at every step), making the statement smaller.
+ 2. Novel, efficient ZK encodings of memory errors prevalent in practical software, specifically out-of-bound access, use-after-free, free-after-free, and uninitialized access. Previous related work focused primarily on proving knowledge of a valid execution without proving existence of a vulnerability [10,11] or encoded proofs of vulnerability using a less efficient memory model [29].
+ 3. A novel, efficient encoding of statements that a program always leaks data (when given an exploit as a secret input). Our scheme enables proofs of program properties that are related to, but critically distinct from, existing program monitors and type systems that prove that a program may leak data [20,39], optionally in ZK [21].
+ We implemented these optimizations and encodings in CHEESECLOTH, a full compilation toolchain for encoding vulnerabilities of real-world programs into efficient ZK proofs. The toolchain extends previous approaches based on TinyRAM, and includes a full definition of a novel TinyRAM extension (named MicroRAM) and a compiler to MicroRAM from the LLVM intermediate language, enabling proofs of vulnerabilities in programs provided in C, C++, or Rust.
+ We evaluated our implementation by proving in ZK the existence of three vulnerabilities in practical systems software. Specifically, we proved that previous versions of the GRIT and FFmpeg [2] graphics processing libraries contained buffer-overflow vulnerabilities, and that the OpenSSL cryptography toolkit [3] was vulnerable to the notorious Heartbleed vulnerability [5]. CHEESECLOTH takes the software’s C/C++ source code and a flag denoting a vulnerability class; it combines these with an emulation of the runtime environment (operating system and libraries), and applies the aforementioned techniques, to derive a statement directly provable in ZK. The ZK proof can then be given, as a witness, the concrete exploit used to demonstrate the original attack. CHEESECLOTH contains implementations of powerful program analyses that, when combined with manual program partitioning in some cases, dramatically increase the scale of programs that it can process, compared to a more naive compiler.
+ The remainder of this paper is organized as follows: Sec. 2 reviews the background that this work builds upon. Sec. 3 presents the implementation details of our CHEESECLOTH compilation pipeline; Sec. 4 covers the critical and aggressive optimizations we make to verify the ZK execution of a program; Sec. 5 describes our ZK encodings to efficiently detect memory and information leakage vulnerabilities; Sec. 6 describes our practical experience using CHEESECLOTH to prove vulnerabilities; Sec. 7 compares our approach to related work; and Sec. 8 concludes.
+ 2 Background
+ In this section, we review the prior work that our contribution builds upon, specifically Zero-Knowledge (ZK) proofs of program executions (Sec. 2.1), information leakage by programs (Sec. 2.2), and partial program evaluation (Sec. 2.3).
+ 2.1 Zero-Knowledge Proofs
+ Zero-knowledge proofs enable a prover party to prove to a verifier party that the prover knows the correctness of a computational statement (e.g., that a given Boolean circuit is satisfiable), without revealing information about their evidence for the claim (e.g., the witness that satisfied the circuit). There exist ZK protocols for proving knowledge of solutions to all problems in NP [26], and in recent years, numerous efficient protocols have been developed and implemented for ZK proofs of general statements (e.g., [8, 10, 12, 14–16, 22, 24, 28, 29, 31, 32, 38, 43, 51]).
+ Some of these works specifically address statements about correct execution of programs running on a general-purpose architecture that includes Random Access Memory (RAM), where the program is expressed in low-level machine code or a high-level language [9, 10, 12, 14, 16, 17, 22, 27, 29, 31, 32, 38, 51, 52].
+ Our compiler uses a hybrid of step-by-step CPU emulation, similar to TinyRAM [10–12], a MIPS-like CPU that can simulate programs in C and similar low-level languages that access RAM. The TinyRAM encoder, given a public TinyRAM program, a bound on the number of steps of execution to simulate, and a private program input, generates an R1CS that is satisfied by encodings of the input. The constraint system consists of (1) a family of constraint systems that validate computations purely over registers in each step and (2) a novel memory-checking sub-circuit that verifies the correctness of RAM operations using a permutation network. This CPU-unrolling technique is excellent for supporting language features such as data-dependent loops, control flow, and self-modifying code. The technique can also naturally leverage existing tools such as compiler front-ends and libraries.
+ We combine TinyRAM-style emulation with direct compilation of program blocks into circuit gates [17, 24, 52] (Sec. 4). The compiler’s output is a circuit whose satisfiability is equivalent to the existence of a vulnerability in the source program, and whose structure does not reveal the vulnerability or how it may be triggered.
+ In our evaluation, the underlying ZK protocol is the Mac’n’Cheese [9] protocol for proving circuit satisfiability, as implemented by the Swanky [23] library. This is an interactive protocol, where the prover and verifier engage in multiple rounds of communication to evaluate the circuit, at the end of which the verifier learns that the circuit accepted the secret witness provided by the prover (and nothing else).
+ 2.2 Information Flow
+ One core contribution of our work is a practical scheme for proving in zero knowledge that a program leaks data, which we have applied to prove that previous versions of OpenSSL leak private data, as triggered by the Heartbleed vulnerability (described in Sec. 5.2 and Sec. 6.3). The scheme’s design requires a formal treatment of information flow: specifically, a treatment sufficiently formal that we could generate logical circuits that would be satisfied only by witnesses to leakage. In the interest of space and clarity, we will omit a definition of information flow and leakage for a full programming language, but we will describe ours in sufficient detail to communicate the key challenges and approaches.
+ A labeling L is a subset of a program’s input variables I designated as the private inputs, and a subset of its output variables O designated as the public outputs. Program P satisfies noninterference with respect to L if each pair of inputs that are distinct only at private inputs results in values that are the same at all public outputs; P leaks with respect to L if it does not satisfy noninterference with respect to L. It follows from the above definition that a leak is witnessed by a pair of executions that differ only at L-labeled inputs and produce distinct L-labeled outputs.
+ Noninterference has a precise but accessible formal definition that can capture the flow requirements of some critical software [20], but its shortcomings in practice are well known [39, 40, 45]: the complete information flow specifications of practical programs often are not noninterference properties, intuitively because programs that take sensitive inputs typically do need to reveal some partial information about them; and even when desired flow properties are noninterference properties, proving that a program satisfies the property in general can involve careful reasoning about unbounded data and control. A rich body of prior work [13, 18, 19, 25] has considered generalizations of noninterference involving equivalences over observable events, along with rich programming languages and type systems that attempt to prove their satisfaction. However, noninterference properties still constitute aspects of a program’s complete information flow requirements that unfortunately are both critical and violated in practice (Heartbleed being a prominent example). This pattern justifies the current work’s primary focus on proving noninterference violations.
+ 2.2.1 Labeled Programs and Executions
+ In their most general form, information flow and leakage are defined over pairs of executions. Practical program monitors [20,49] and type systems [39] prove facts about all execution pairs by labeling the program’s data and control structures with metadata which is tracked through the execution. These approaches can be carried out by a programmer or automated analysis that directly annotates the program or execution. However, the requisite guarantees are different in our proof-of-vulnerability context compared to their usual applications, as seen next.
+ At a high level, the guarantees provided by dynamic information flow monitors are as follows. A labeling of a program execution over n steps is an assignment from each program variable and step 0 ≤ i < n to a sensitivity label. A labeling over-approximates information flow if, from any two executions starting from states that only differ at high inputs, the program produces results that differ only at high-labeled timestamped storage cells (static analyses and type systems lift this property to be defined over all pairs of executions that differ only at sensitive inputs).
+ Such over-approximation allows for “false positives” in identifying information flows. For example, in a context where only input secret is sensitive and the return value is public, the following function always_true does not leak any information about its input secret because it returns true for each input value:
+ bool always_true(bool secret) {
+     if (secret) return secret;
+     else return !secret;
+ }
+ However, many natural taint analyses would label the returned value as sensitive because it is computed from secret.
+ Over-approximation of potential leaks is often still valuable for aiding programmers to ensure that their program does not leak: falsely determining that a secure program may leak may constitute a nuisance, and may need to be mitigated to ensure practicality, but can to some degree be tolerated. However, in our setting of proving a leak in ZK, it is unacceptable for the verifier to learn only that a program may leak. The whole point is to prove that it does leak (given the purported exploit). We will thus create a labeling which is an under-approximation, i.e., when the labels say so, a leakage is present. It will then remain to empirically show that the labeling indeed detects leakage for the vulnerabilities of interest.
+ 2.3 Partial Evaluation
+ In many practical contexts, a program may receive different subsets of its input at different times after it has been written and compiled: e.g., after being installed, a configuration file may be included that remains the same over all executions on distinct inputs subsequently received from a network. A natural objective is, given a program and a subset of its inputs that can be fixed, to generate a specialized program that processes the remaining inputs with improved performance.
+ Stated more precisely, for program P(X,Y) with input variables X and Y, a partial evaluation of P on an assignment A : X → Words from X to data values Words is a program PA such that P(A,B) is the same as PA(B) for each assignment B : Y → Words.
+ Partially evaluating programs in a practical language brings several complexities [35]; the underlying technique amounts to: (1) evaluate the program under a symbolic state, in which registers and memory addresses may be mapped either to concrete data values or to terms defined over symbolic variables that denote unknown values; (2) using computed symbolic states that describe all possible states at each control point, simplify the control structure at each point. Variations of this technique may be viewed as aggressive extensions and generalizations of the constant propagation analysis and constant folding transformation implemented in conventional optimizing compilers [7].
+ 3 CHEESECLOTH Implementation
+ CHEESECLOTH produces ZK proofs of real-world vulnerabilities. It takes as input a public LLVM program (typically compiled from C, C++, or Rust) and, when run as the prover, a secret exploit that triggers a vulnerability in the program. CHEESECLOTH outputs a ZK circuit that verifies the execution trace of the program and checks whether or not a vulnerability occurred during that execution. The pipeline enables a prover to demonstrate to a verifier that there is a vulnerability in a program while keeping the vulnerability and triggering exploit secret.
+ CHEESECLOTH produces ZK circuits in multiple standard representations, including Rank-1 Constraint Systems (R1CS) [10] and SIEVE IR [6]. Because the circuits are serialized in standardized formats, CHEESECLOTH is agnostic to the ZK protocol applied. When run as the prover, CHEESECLOTH outputs the accompanying witness for the circuit.
+ CHEESECLOTH can be extended to check different properties about a program’s execution. Users can selectively enable which extensions to run by providing different input flags to the compilation pipeline. These extensions are how the memory and information leakage vulnerability detection checks described in Sec. 5 are implemented. This section covers the baseline design of the CHEESECLOTH compilation pipeline, which includes (1) the MicroRAM assembly language, (2) the MicroRAM Compiler, and (3) the Witness Checker Generator. Sec. 4 describes optimizations for this design that enable it to scale to real-world vulnerabilities.
+ 3.1 MicroRAM
+ The MicroRAM assembly language is critical to CHEESECLOTH. It is the core IR language that CHEESECLOTH operates on and is the language that the MicroRAM Compiler compiles LLVM programs to. The Witness Checker Generator produces ZK circuits that verify program executions according to MicroRAM’s architecture.
+ MicroRAM is heavily inspired by TinyRAM [10, 11], which is a practical and efficient assembly language with a simple transition function that is ideal for ZK execution verification. We describe MicroRAM and its architecture below, and we precisely describe how its design diverges from TinyRAM in Sec. 3.1.1.
+ MicroRAM is a random-access machine designed to efficiently detect vulnerabilities in program executions. It is a reduced instruction set computer (RISC) with a Harvard architecture and byte-addressable random-access memory.
+ MicroRAM instructions are relatively simple and include 4 boolean operations, 8 arithmetic operations for signed and unsigned integers, 2 shift operations, 5 compare operations, 2 move operations, 3 jump instructions, 2 operations for reading and writing to memory, and 1 answer operation that returns and halts the execution. Floating-point and vector arithmetic are not directly supported in the MicroRAM machine and must be implemented in software. Instructions take two registers and one operand (either a register or an immediate) as arguments. As an example, instruction xor ri rj 255 writes to register ri the exclusive-or of register rj and the immediate 255. CHEESECLOTH extensions like those described in Sec. 5 can introduce additional instructions as needed.
+ The state of the MicroRAM machine consists of the program counter (pc), k 64-bit registers, a memory of 2^64 64-bit words, a flag indicating whether or not the execution so far is valid (inv_flag), and a flag tracking whether a vulnerability has occurred (vuln_flag). CHEESECLOTH extensions can extend the state of the MicroRAM machine as well.
+ To demonstrate the existence of a vulnerability in a program, a prover must present a secret input that results in a valid execution trace that triggers a vulnerability. Formally, given a MicroRAM program, P, and an initial memory, m0, P(m0) demonstrates a vulnerability in T steps if inv_flag is false and vuln_flag is true in the final MicroRAM state of the program’s execution trace. inv_flag is set to true if any of the checks validating the program’s execution fails. The extensions implementing the vulnerability detection checks set vuln_flag to true if they observe a vulnerability during the program’s execution.
+ 3.1.1 Beyond TinyRAM
+ As mentioned above, our MicroRAM machine is inspired by TinyRAM. Here we report on how MicroRAM’s design departs from the TinyRAM model.
+ • MicroRAM’s memory model is byte-addressable while TinyRAM’s is word-addressable. Byte-addressable memory is necessary to support functionality like string manipulations and packed structs, without adding subroutines to access bytes within full words.
+ • TinyRAM receives input via input tapes. In MicroRAM, input is passed directly in memory, which saves many cycles that TinyRAM spends copying input to memory. A MicroRAM program can request non-deterministic advice in several ways; however, the prover does not have to commit to the advice ahead of time on a tape; instead they provide the advice upon request. This approach is better suited to support backends that exploit parallelism or streaming, and it results in smaller circuits.
+ • TinyRAM uses a 1-bit condition flag for branching while MicroRAM does not. This is advantageous since MicroRAM targets a variety of backends, including non-boolean arithmetic circuits where the flag is more expensive than a regular register¹. In addition, the semantics without a flag are much simpler, so the compiler, interpreter, and circuit generator are simpler as well. We found that even when targeting boolean circuits, the benefits of having a condition flag are outweighed by the extra complexity.
+ • We have not yet explored using a von Neumann architecture [12] for MicroRAM because, despite the asymptotic benefits, the instruction fetching circuit is not yet a limiting factor in our ZK statements.
+ 3.2 MicroRAM Compiler
+ The MicroRAM Compiler is implemented as an LLVM backend that takes LLVM IR programs as input and produces MicroRAM assembly as output. We currently support C, C++, and Rust programs by compiling them to LLVM IR with the Clang and rustc compiler frontends. Support for other languages such as C#, Haskell, or Scala can be added in the future by connecting their appropriate LLVM frontends and writing the appropriate standard libraries.
+ Our compiler backend supports a large subset of the LLVM IR language. The compiler supports all boolean and arithmetic operations for integers of different sizes, bitwise operations, all non-concurrent memory operations including pointer arithmetic with getelementptr, conversion operations, function calls, variable arguments, comparisons, and phi nodes. Complex operations like floating-point operations are implemented in software via an LLVM compiler pass.
+ ¹If full words fit in a field element, then the flag is the same size as a register, but requires special circuitry and has more restrictions.
+ Exceptions, and all exception handling instructions, are not supported; but we can still tolerate programs with exceptions as long as the prover is willing to disclose that the execution of interest, which triggers a vulnerability, does not throw any exceptions. This is because the MicroRAM Compiler translates all exception handling instructions to traps that mark the trace as invalid by setting the inv_flag flag. By inserting traps, the MicroRAM Compiler can process programs with any number of unsupported features, as long as the prover is willing to reveal that those features are not involved in the vulnerable execution. With this simple trick, users can compile real-world programs without having to manually remove unsupported features. When enabling traps, provers must take care not to reveal too much information about the underlying vulnerability. Sec. 3.2.3 presents a more detailed discussion about the security implications of how proof statements can reveal information about their witnesses.
+ 3.2.1 Standard library
+ MicroRAM supports a significant portion of the C standard library and POSIX system calls, using Picolibc [4]: a library that offers the C standard library APIs and was originally designed for small embedded systems with limited RAM. Picolibc supports multiple widely deployed target architectures, including ARM, RISC-V, and x86-64.
+ We implemented MicroRAM as a target architecture for Picolibc. This enables the MicroRAM compiler to support most of the C standard library and POSIX system calls. It is also convenient as it allows the prover to publicly customize the behavior of system calls. For example, in our case study of OpenSSL, the victim server receives the malicious request from the attacker over the network. We customized the behavior of read when compiled natively to intercept and record all data received over the network. When compiled for MicroRAM, read returns the previously recorded exploit request, which is loaded from secret memory. We also customize the implementations of malloc and free to efficiently detect memory vulnerabilities (Sec. 5.1.1).
3.2.2   Generating advice

As we will see in later sections, CHEESECLOTH requires nondeterministic advice to efficiently generate a ZK circuit that verifies the consistency of memory in an execution (Sec. 3.3) and the presence of a vulnerability (Sec. 5.1). To aid the prover in producing that advice, the MicroRAM compiler runs two interpreter passes. The first pass executes the program without any advice and records the necessary advice. The second execution runs the nondeterministic semantics and records the trace, which is passed to the Witness Checker Generator to produce the witness for the prover.
3.2.3   Security

MicroRAM produces zero-knowledge proofs, which ensure that no additional information is revealed about the witness. However, the proof statement itself can reveal information about the secret input. For example, in MicroRAM and TinyRAM the circuit reveals a time bound T on the execution length. In Pantry/Buffet, the circuit discloses an upper bound Ti on every loop (and recursive function) in the execution. In vRAM [53], every instruction run during the execution is revealed to the verifier. We argue that a formalization of this information leakage is necessary; interesting and important future work will be to define a formal framework to analyze how secure these encodings are.
3.2.4   Preprocessing public inputs

One opportunity for aggressive optimization is to publicly evaluate logic that is determined by the program's public inputs. Many practical programs collect inputs from multiple sources, some of which are not secret (i.e., irrelevant to the vulnerability). If the prover and verifier agree when defining a proof statement that only some inputs are sensitive secrets (e.g., data packets received from a network connection) while others are not (e.g., straightforward configuration options), then the resulting proof statement can be immediately optimized by partially evaluating the generated circuit on the input wires corresponding to non-sensitive inputs.
CHEESECLOTH supports such cases with a compiler pass that determines the largest program prefix in which no operation depends on secret inputs. The MicroRAM compiler then separates the public prefix from the remaining program suffix and compiles them separately. When the interpreter is executed by both the prover and verifier, it executes the prefix and defines a public snapshot of the resulting state, including both registers and memory. When executed by the prover, the interpreter then executes the remaining suffix using both the snapshot and the prover's private input to generate the statement's witness. In practice, this simple optimization has significant impact, reducing the number of execution steps in OpenSSL's ZK proof statement from 25M to 1.3M (Sec. 6.3).
The compiler optimization implements a relatively restricted form of partial evaluation and constant folding (Sec. 2.3). Our initial experience indicates that further extensions could improve CHEESECLOTH's performance drastically: a key technical challenge is that while programs may perform much processing of public data over the course of the entire execution, the processing is often interleaved with computation over sensitive inputs. Evaluating each of the interleaved phases of public computation is sound in principle, but can only be automated by ensuring that regions of storage used by public and secret phases are disjoint. Such automation could potentially be achieved by applying points-to and shape analyses [46–48], including separation logic [44].
3.3   Witness Checker Generator

The Witness Checker Generator takes as input a MicroRAM program and generates a ZK circuit, serialized in standardized formats including R1CS and SIEVE IR. It also accepts nondeterministic advice as input and, when run as the prover, outputs the secret witness to the circuit.

The Witness Checker Generator builds arithmetic circuits over the prime field of order 2^128 − 159. As an optimization, it automatically constant-folds gates that are independent of secret inputs. To scale to large circuits and avoid running out of memory, it streams the circuit serialization to a file. This streaming is independent of secret witnesses, so the same circuit is generated for the prover and verifier.
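The gate-level constant folding mentioned above can be sketched as follows. This is our illustrative model (a toy 31-bit modulus and a hypothetical Wire type), not the Witness Checker Generator's actual representation.

```c
#include <stdint.h>
#include <stdbool.h>

/* A toy arithmetic gate model: a wire carries a field value, and
 * `known` marks wires whose values are independent of the secret
 * witness and can therefore be folded at circuit-generation time. */
typedef struct { uint64_t val; bool known; } Wire;

#define P 2147483647ULL /* toy prime modulus (2^31 - 1) */

/* Fold an addition gate: if both inputs are public, the output is
 * a public constant and no gate needs to be emitted. */
Wire fold_add(Wire a, Wire b) {
    Wire out;
    out.known = a.known && b.known;
    out.val = out.known ? (a.val + b.val) % P : 0;
    return out;
}

/* Multiplication also folds when either input is a known zero:
 * 0 * x = 0 regardless of the secret input on the other wire. */
Wire fold_mul(Wire a, Wire b) {
    Wire out;
    if ((a.known && a.val == 0) || (b.known && b.val == 0)) {
        out.known = true;
        out.val = 0;
        return out;
    }
    out.known = a.known && b.known;
    out.val = out.known ? (a.val * b.val) % P : 0;
    return out;
}
```

Because `known` never depends on a secret value, the prover and verifier fold identically and derive the same circuit.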
The nondeterministic advice the Witness Checker Generator accepts provides a description of a program's execution together with the advice necessary to run it. Concretely, the advice for an execution of T steps contains the initial program memory, the T MicroRAM states making up the execution trace, and a mapping from step number to additional advice given at each step. This additional advice includes memory ports recording what is read or written to memory, and stutters that indicate the execution should pause for the current step.
The Witness Checker Generator produces a ZK circuit that verifies that the witness describes a valid execution trace for the program and that a vulnerability occurs. The circuit is split into four key pieces: (1) the transition function circuit, (2) the memory consistency circuit, (3) a state transition network, and (4) public-pc segments. We describe the first two here, which follow a similar structure to the circuit construction for TinyRAM [10]. The other two are described later in Sec. 4.1.
Transition function circuit. The transition function circuit checks a single step of execution. These checks are chained together to validate the entire execution trace. Fig. 1 shows pseudocode for the transition function circuit. It takes as input the circuit's wire representation of the current MicroRAM state, the next state, and any additional advice needed for the current step. The circuit then fetches the instruction to execute based on the program counter and pulls out the instruction's argument values by indexing into machine registers. It calculates the expected result of the step by multiplexing over the instruction. Finally, the circuit ensures that the calculated expected state matches the next state provided as advice.
    fn transition_func(circuit, current_st, next_st) {
        let expected_st = current_st.clone();
        let instr = fetch_instr(circuit, current_st.pc);
        let arg1 = index(circuit, current_st.regs, instr.op1);
        let arg2 = index(circuit, current_st.regs, instr.op2);

        let result = circuit.mux(instr.opcode == XOR,
            xor(circuit, arg1, arg2), ...);
        expected_st.pc = circuit.mux(
            is_jump(circuit, instr.opcode),
            result,
            circuit.add(current_st.pc, 1));
        write_index(circuit, expected_st.regs, instr.dest,
            result);

        circuit.assert(expected_st == next_st); }

Figure 1: Pseudocode for the transition function circuit that validates a single MicroRAM step.

Memory consistency circuit. The memory consistency circuit is similar to TinyRAM's, except addresses are byte-addressable instead of word-addressable. Each step may have corresponding memory port advice that states the address and what was read or written to memory. The transition function circuit verifies that the execution trace matches the memory port advice. All of the memory ports are sorted by address and step number. The memory consistency circuit linearly scans the memory ports to ensure that all reads and writes to a given address are consistent with the previous memory operation. For example, a read should return the same value that was previously written to an address. Finally, the memory consistency circuit checks that the sorted memory ports are a permutation of the memory ports used by the transition function circuit. Sec. 5.1 describes how these checks are enhanced to efficiently detect memory vulnerabilities.
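The linear scan can be sketched as follows. The MemPort layout here is an illustrative assumption rather than CHEESECLOTH's wire format, and the ports are assumed already sorted by (address, step).

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* One memory port: the address touched at a given step, whether
 * the operation was a write, and the value read or written. */
typedef struct {
    uint64_t addr;
    uint64_t step;
    uint64_t value;
    bool is_write;
} MemPort;

/* Linear scan over ports sorted by (addr, step): every read must
 * return the value of the most recent operation on the same
 * address. Returns false if any read is inconsistent. */
bool check_mem_consistency(const MemPort *ports, size_t n) {
    for (size_t i = 1; i < n; i++) {
        bool same_addr = ports[i].addr == ports[i - 1].addr;
        if (same_addr && !ports[i].is_write &&
            ports[i].value != ports[i - 1].value)
            return false;
    }
    return true;
}
```

In the circuit the same comparison is performed by adjacent-pair checks over the sorted port list, with a separate permutation argument tying the sorted list back to the execution order.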
4   Optimizations

This section describes two of CHEESECLOTH's key optimizations: constructing executions with public program counters (Sec. 4.1) and tuning steps based on instruction sparsity (Sec. 4.2). Sec. 6.4 contains an empirical evaluation of the optimizations' effectiveness.
4.1   Public-PC segments

The MicroRAM machine is designed to minimize the size of the transition function circuit. However, even with MicroRAM's small instruction set, the transition function circuit is still large, because the circuit for every step must support every operation of the machine. What if we could remove all the unused functionality? This is the approach of vRAM [53], where the circuit is tuned to check the instruction that is executed at each step. The resulting circuit is much smaller, but unfortunately the trace of executed instructions is revealed. The values in memory and registers would still be kept secret; however, a verifier could easily discover where the vulnerable code is in the program. In this section, we present public-pc segments, which generate much smaller circuits without revealing the trace.
A public-pc segment is a sequence of transition circuits with a hardcoded, public program counter. Because the program counter is public, all the instruction fetches of a public-pc segment are known, and the unused functionality of every step can be removed by constant folding. For example, with a public program counter, the fetch_instr and subsequent mux operations over the instruction in Fig. 1 can be constant-folded away. We generate public-pc segments for all straightline code segments in a program, and we implemented a compiler pass that uses a naive control-flow analysis to estimate how many times each segment will be used. The analysis takes a global bound specifying how many times to unroll loops, and estimates how many times a function will be called by counting the number of call sites for that function in the program.
To preserve the secrecy of the trace, the cycle counter of segments is kept private. In addition, we introduce a state routing network so that the end state of a segment can be the initial state of any other segment in the circuit. Just like the memory routing network, the routing information for the state routing network is given by the prover and kept secret. As a further optimization, we avoid using the state routing network when possible. For example, when a public-pc segment branches to two statically known locations, we directly connect the end state of that segment to the segments representing those two locations.
It is possible that the bound for unrolling loops is not large enough to support certain executions, so the pipeline would not generate enough public-pc segments for a section of code. As a backup, the pipeline also produces private-pc segments, which are just like public-pc segments except that the program counter is not revealed. Private-pc segments look similar to a much smaller TinyRAM circuit, with the difference that their start and end states come from the state routing network. The circuits for these segments are significantly larger, but can execute any part of the program at any point during execution.
4.2   Sparsity

With the naive CPU unrolling described in Sec. 3.3, every transition function must contain a memory port, which causes the memory consistency network to grow at a rate of O(T log T), where T is the number of steps executed. Unfortunately, most of those gates are wasted on execution steps that do not access memory. CHEESECLOTH mitigates this excess by removing some of the unused memory ports, thereby reducing the size of the memory consistency circuit.

The key observation behind this optimization is that memory operations are rarely contiguous. Even when a program performs a memory-intensive operation, other instructions are often interleaved between memory instructions. For example, when adding the values in a buffer, it takes some steps to increment the pointer and add the values between memory reads. This enables us to share one memory port among s contiguous steps, shrinking the memory consistency network by a factor of s.
We define the memory sparsity, s, as the number of steps that share a single memory port. CHEESECLOTH chooses s based on a static analysis of the code. The analysis determines the minimum distance between two memory operations in any possible execution. Across statically unknown jumps (e.g., calling a function through a pointer), the analysis naively considers all the instructions the control flow can possibly jump to. This memory sparsity s is then used by the MicroRAM Compiler and Witness Checker Generator to generate the optimized circuit.
+ If s is larger than the actual sparsity displayed by a trace,
761
+ then (if unlucky) multiple memory accesses can fall into
762
+ the same group of steps, which has a single memory port.
763
+ CHEESECLOTH handles this situation by inserting stutter
764
+ instructions that delay memory operations until they are
765
+ pushed into the next group with separate memory ports. In-
766
+ serting stutter instructions can be expensive, but reducing the
767
+ size of the memory consistency circuit is more beneficial
768
+ (Sec. 6.4). In future work, we will explore swapping program
769
+ instructions to reduce stutter instructions and determine the
770
+ optimal s parameter for most programs.
771
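The cost of choosing s too large can be sketched by counting the stutters needed to separate colliding accesses. This is our simplified scheduling model of the mechanism described above (it ignores the fact that delaying one access also shifts later steps), intended only to illustrate the trade-off.

```c
#include <stddef.h>
#include <stdint.h>

/* Given the steps at which a trace accesses memory (sorted
 * ascending) and a sparsity factor s (one memory port per group
 * of s consecutive steps), count the stutter steps needed so
 * that no group contains two memory accesses. An access whose
 * group's port is already taken is delayed to the next free
 * group, paying one stutter per skipped step. */
size_t count_stutters(const uint64_t *access_steps, size_t n,
                      uint64_t s) {
    size_t stutters = 0;
    uint64_t next_free_group = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t group = access_steps[i] / s;
        if (group < next_free_group) {
            /* Port taken: delay this access into the next free
             * group, inserting stutters for the skipped steps. */
            stutters += (next_free_group * s) - access_steps[i];
            group = next_free_group;
        }
        next_free_group = group + 1;
    }
    return stutters;
}
```

When s matches the true minimum distance between memory operations, no accesses collide and the count is zero.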
5   Encoding Vulnerabilities

This section describes how CHEESECLOTH encodes two prevalent and critical classes of software vulnerabilities: memory unsafety (Sec. 5.1) and data leakage (Sec. 5.2).
5.1   Memory unsafety

We now describe how CHEESECLOTH efficiently models memory and represents memory vulnerabilities. In CHEESECLOTH, memory is an array of 2^64 bytes, with half reserved for the heap and the rest for global variables and the stack. Our approach is to keep track of valid memory (e.g., allocated arrays) and report a vulnerability (i.e., set bug_flag) when the program accesses non-valid memory. At the start of the execution, the only valid memory is where the global variables are stored; during execution, malloc makes allocated regions valid and free makes them invalid again. With this technique we can catch the following memory errors:
• Uninitialized access. All uninitialized memory is invalid, so any use triggers a bug.
• Use-after-free. When a region is freed it becomes invalid, so any use triggers a bug.
• Free-after-free. The implementation of free starts by reading a word from the region to be freed; if the region is not valid, this triggers a bug.
• Out-of-bounds access. If the program accesses an address out of bounds, that new location might (see below) not be valid, and this triggers a bug.

    void* malloc(size_t size) {
        // Get pointer from advice
        char* addr = __cc_malloc(size);
        /* Compute and validate the size of the
         * allocation provided by the prover. */
        size_t region_size =
            1ull << (((uintptr_t)addr >> 58) & 63);
        /* The allocated region must have space for
         * `size` bytes, plus an additional word for
         * metadata. */
        __cc_valid_if(
            region_size >= size + sizeof(uintptr_t),
            "allocated region size is too small");
        /* `region_size` is always a power of two and
         * is at least the word size, so the address
         * must be a multiple of the word size. */
        __cc_valid_if((uintptr_t)addr % region_size == 0,
            "allocated address is misaligned for "
            "its region size");
        /* Write 1 (allocated) to the metadata field,
         * and poison it to prevent tampering,
         * invalidating the trace if the metadata word
         * is already poisoned (this happens if the
         * prover tries to return the same region for
         * two separate allocations). */
        uintptr_t* metadata = (uintptr_t*)
            (addr + region_size - sizeof(uintptr_t));
        __cc_write_and_poison(metadata, 1);

        // further computation...
        return (void*)addr; }

Figure 2: Implementation of non-deterministic malloc.
It is clear that a normal execution with such bounds checking might miss out-of-bounds access bugs, when the access happens to fall on another valid region, and free-after-free/use-after-free bugs, if an intermediate malloc makes the region valid before the bug is triggered. However, we only need to show that the bug exists in one execution, so we implement a malloc guided by nondeterministic advice; this lets the prover choose the allocation layout to ensure the bug is triggered.

While the techniques described here are specific to heap memory bugs, the same ideas can be applied to the stack.
5.1.1   Encoding dynamic memory allocation

An implementation of malloc with nondeterminism poses its own challenges. If left unchecked, the prover could manufacture an execution that triggers a false bug. For example, the prover could malloc overlapping regions such that, if one is freed and the other one is accessed, a false bug is triggered. Thus, our implementation of malloc and free (Fig. 2) focuses on verifying that the nondeterministic choices are legal. If foul play is detected, the execution is flagged as invalid with inv_flag and will not be accepted by the verifier.
To ensure that malloc never returns overlapping regions, we predetermine aligned, non-overlapping regions of different sizes for malloc to choose from. Concretely, we divide memory into 2^6 pools of size 2^58, then subdivide pool i into regions of size 2^i. malloc rounds up the requested size to the next power of two, then returns the start of an unallocated region of that size. For example, malloc(15) must return a region in the 4th pool and be 16-byte aligned. In fact, we can easily verify that malloc has allocated a correct region just by looking at the pointer returned: the first 6 bits determine the pool and the rest the alignment.
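The checks malloc performs on the advised pointer reduce to simple arithmetic on the address. The sketch below follows the pool layout described above (the top 6 address bits select the pool, and pool i holds regions of size 2^i), though the helper names are ours, not CHEESECLOTH's.

```c
#include <stdint.h>
#include <stdbool.h>

/* Pool i holds regions of size 2^i; the top 6 bits of an
 * address (bits 58..63) name the pool it falls in. */
static uint64_t region_size_of(uint64_t addr) {
    return 1ull << ((addr >> 58) & 63);
}

/* Validate an advised allocation, mirroring Fig. 2: the region
 * must fit the request plus one metadata word, and the address
 * must be aligned to the region size. */
bool valid_alloc(uint64_t addr, uint64_t size) {
    uint64_t region_size = region_size_of(addr);
    if (region_size < size + sizeof(uintptr_t))
        return false;               /* region too small */
    return addr % region_size == 0; /* reject misaligned addr */
}
```

Because the pool and alignment are recoverable from the pointer alone, the circuit can validate an allocation without any auxiliary bookkeeping.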
Finally, malloc must not return the same pointer twice without it being freed in between. To enforce this, we add to each region one word reserved for metadata that is marked and made invalid when the region is allocated. If the region were allocated again, the already-invalid metadata word would be poisoned a second time, which makes the trace invalid by setting inv_flag.
Furthermore, an implementation of malloc/free that tracks the validity of all memory locations would be quite inefficient. Luckily, the prover knows exactly where the bug will happen, and thus the malloc/free implementation only needs to track the status of that location. At the beginning of the execution, the prover commits to a secret location stored in the global variable __cc_memory_error_address, and then malloc/free only track the validity of that location. In particular, if an allocated/freed region does not contain __cc_memory_error_address, then malloc/free do not check for errors, and run in constant time.
5.2   Data leakage

A straightforward approach to proving leakage would be to directly encode the definition of noninterference in the ZK circuit. This could be accomplished by verifying two program executions whose inputs differ only in sensitive values but whose public outputs are distinct. However, such an approach would result in a statement of twice the size required for validating a single execution. Instead, we might hope to prove a leak using a single execution in which storage is annotated with labels (Sec. 2.2). However, such systems traditionally have only been designed to prove that a program may leak information, which is unacceptable for definitively proving a leak without providing a violating execution directly (Sec. 2.2.1).
Specifying leakage. To identify sensitive sources and sinks, the instructions source and sink are added to the MicroRAM instruction set, and are directly wrapped by the user-level functions taintSource and taintSink, respectively. source annotates that a given byte of data carries sensitive data; sink annotates that a given byte is output to a channel. Instantiating the general definitions of information flow and leakage (Sec. 2.2) for this extended ISA, a MicroRAM program leaks if it has two executions whose inputs only differ at addresses given to source, but result in different values at an address given in calls to sink. Leakage is established by the prover and verifier collaborating to extend the subject program with calls to taintSource and taintSink that annotate sensitive sources and sinks.
Proving leakage. To soundly and precisely prove leakage, we propose a novel labeling system that tracks what program storage may and must hold secrets. There are four labels, denoted and partially ordered as

    ⊥ ⊑ ℓ0, ℓ1 ⊑ ⊤

with a least-upper-bound (i.e., join) operation denoted ⊔. Labels ℓ0 and ℓ1 annotate data that must belong to one of two principals; ⊤ denotes that the data's sensitivity is unknown; ⊥ denotes data that must not be influenced by a principal. With this labeling scheme, leakage of ℓ-labeled data written to an ℓc-labeled sink must occur when ℓ ≠ ⊤ ∧ ℓ ⋢ ℓc.
MicroRAM state is extended so that every register and byte of memory is associated with a label, similar to previous leakage monitors [20, 42, 49, 50]. Two additional labels model the effects of instructions other than register arithmetic. The control context label γ is maintained to be ⊥ if the program execution has not branched on secret data, and ⊤ otherwise; similarly, the storage context label σ is maintained to be ⊥ if the program has not stored to a secret address, and ⊤ otherwise.
Each assignment x := e sets the label of x to L(e) ⊔ γ ⊔ σ (where L(e) is the label of e, defined below); thus, if the program has branched on secret data or written to a secret address, the label of x is set to ⊤. If e is an arithmetic/logical operation f(y), then L(e) is ⊥ when L(y) is ⊥ and ⊤ otherwise; L(e) = L(y) if f is a bijection: our current implementation conservatively only labels single-register expressions (i.e., copy sources) as L(y). If e is a load *p, then L(e) is L(*p). Conditional branches update γ and memory stores update σ according to the labels' descriptions; we omit formal descriptions here, due to space constraints.
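The label algebra can be made concrete with a small sketch. The enum encoding and helper names below are ours, not the paper's formal notation; they implement the four-point lattice and the definite-leak condition stated above.

```c
#include <stdbool.h>

/* The four-point lattice: BOT ⊑ L0, L1 ⊑ TOP. */
typedef enum { BOT, L0, L1, TOP } Label;

/* Least upper bound (⊔): BOT is the identity, and joining the
 * two distinct principals' labels (or anything with TOP)
 * yields TOP. */
Label join(Label a, Label b) {
    if (a == b) return a;
    if (a == BOT) return b;
    if (b == BOT) return a;
    return TOP;
}

/* The partial order ℓ ⊑ ℓc. */
bool flows_to(Label l, Label lc) {
    return l == lc || l == BOT || lc == TOP;
}

/* Leakage of ℓ-labeled data into an ℓc-labeled sink is
 * definite when ℓ ≠ TOP and ℓ does not flow to ℓc. */
bool definite_leak(Label l, Label lc) {
    return l != TOP && !flows_to(l, lc);
}
```

Under this encoding, an assignment's label L(e) ⊔ γ ⊔ σ is just two applications of join, and TOP-labeled data never witnesses a definite leak, which is exactly what makes the system sound.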
Plenty of natural programs leak but cannot be proved to do so by this labeling system, e.g., because the leakage happens after branching or storing to a ⊤-labeled value, or because a secret value is propagated by an operation not recognized as a bijection. Such cases restrict the situations in which the labeling scheme can be applied to prove leakage, but do not threaten its validity when it claims that a given program leaks. They might be addressed in future work that refines instruction interpretations using valid logical axioms (e.g., the fact that for each value x, x + 0 = x). Such threats and mitigations are dual to the threats to precision, and possible refinements, when proving that a program execution may leak.
6   Evaluation

We evaluate CHEESECLOTH with three case studies that demonstrate ZK proofs of real-world software vulnerabilities. The vulnerabilities scale in code size and execution trace length to showcase the capabilities of CHEESECLOTH. We also benchmark the optimizations (Sec. 4) to evaluate their effectiveness.
Tab. 1 presents the results of using CHEESECLOTH to produce ZK proofs for our case studies, which include GRIT, FFmpeg, and OpenSSL. For each case study, we report the size of the program in terms of the number of MicroRAM instructions, the number of execution steps required to demonstrate the vulnerability, and the number of multiplication gates in the resulting ZK circuit. We prove satisfiability of the ZK circuit using the Mac'n'Cheese [9] interactive ZK protocol, as implemented by the Swanky [23] library. We record the protocol running time and communication cost between the prover and verifier. All measurements were performed on a 128-core Intel Xeon E7-8867 CPU with 2 TB of RAM running Debian 11, although our implementation typically uses considerably less memory (Tab. 1).
6.1   Memory unsafety in GRIT

The GBA Raster Image Transmogrifier (GRIT) [34] converts bitmap image files to a graphics format that is readable by the Game Boy Advance. A bitmap image includes headers, a palette array indicating the colors in the image, and the pixels for the image. For 24bpp images, GRIT's parser assumes the palette size is zero and allocates a buffer without space for the palette. When populating the buffer, it checks the image header for the number of palette entries without checking that this matches the assumed palette size that was used during allocation. As a result, a malformed 24bpp image can write an arbitrary amount of data (up to the length of the file) past the allocated buffer.

To demonstrate this memory error, we construct a 24bpp exploit image with 0x3000 bytes of pixel data and 12 bytes of palette data. On Linux, the 12-byte overflow overwrites heap metadata and triggers an assertion failure in the memory allocator. When run through CHEESECLOTH, we generate a ZK proof that a memory error is triggered within six thousand steps of GRIT's execution, without revealing the triggering image or where the error occurred in the code.
6.2   Memory unsafety in FFmpeg

FFmpeg is a tool for recording, converting, and streaming audio and video [2], and is used in popular software projects such as Chrome, Firefox, iTunes, VLC, and YouTube. FFmpeg is written in C and has been plagued by vulnerabilities that compromise memory safety, enabling attackers to execute code and share local files over the network. Versions of FFmpeg prior to v1.1.2 contained a vulnerability [1] caused by a memory error in the function gif_copy_img_rect, which copies the frame of a GIF file between buffers. Previous versions of gif_copy_img_rect insecurely calculated a pointer to the end of a memory buffer by directly using the input image's height. This calculation allowed an attacker to provide a carefully crafted GIF which causes FFmpeg to write to memory outside of an array's bounds.

To prove memory unsafety of FFmpeg in ZK, we manually crafted a GIF image that exploits the described memory vulnerability. We passed this image, and a program module that invokes FFmpeg's video decoder, to CHEESECLOTH, which generated a proof of an out-of-bounds access. The only facts revealed about the exploit GIF are those implied by the fact that it triggers an out-of-bounds access within 76K steps of execution.
Preprocessing FFmpeg on public inputs. There was potential to aggressively optimize FFmpeg's proof statement, which was ultimately achieved by applying CHEESECLOTH's constant folding transformation pass after manual program partitioning. The need for partitioning arose due to the interleaving of public and secret computation in the GIF module, which executes by: (1) demultiplexing a given secret GIF file into a sequence of data packets; (2) initializing the state of the decoder, using public configuration settings; (3) executing the codec that contains the vulnerability.

Although phase (2) computes entirely over public data, it would not be optimized by CHEESECLOTH's constant folding pass, because the pass halts upon detecting computation that uses secret data, and thus would not optimize any program segment after phase (1). To address this issue, we manually partitioned the program by phase, applied CHEESECLOTH's constant folding pass to each, and linked the resulting optimized MicroRAM code. In general, our case study of FFmpeg motivates the further study and design of more aggressive constant folding passes, which might apply more sophisticated static program analyses (Secs. 2.3 and 3.2.4).
6.3   Leakage in OpenSSL

OpenSSL [3] is a widely deployed open-source cryptographic library that contains implementations of the SSL and TLS protocols. OpenSSL versions 1.0.1 to 1.0.1f contained a devastating vulnerability dubbed Heartbleed, discovered in 2014 [5], that could be exploited by a remote attacker to completely leak information stored over the protocol's execution, including other clients' sensitive information and private keys.
Comprehensive descriptions of SSL and OpenSSL are beyond the scope of this paper; for the purposes of our work, it suffices to note that SSL parties support multiple requests, including both requests to store data from the other party and requests to reply to a heartbeat signal: a signal sent only to obtain a response, to ensure that the other party is still responsive.

Program    Code size (K instrs)   Execution steps (K)   Mult gates (M)   Protocol time   Protocol memory
GRIT       3                      5                     26.7             3m 40s          845 MB
FFmpeg     24                     79                    672.7            1h 22m          19 GB
OpenSSL    340                    1,300                 17,049.5         36h 45m         460 GB

Table 1: Results for generating and running a ZK proof of software vulnerability for each case study.

    void process_heartbeat(SSLRequest *req) {
        unsigned int len = parse_heartbeat_len(req);
        unsigned char *heartbeat = get_heartbeat(req);
        unsigned char *response = malloc(len);
        memcpy(response, heartbeat, len);
        write(response, len); }

Figure 3: Pseudocode depicting the Heartbleed vulnerability.
The heartbeat request and response are central to the operation of the Heartbleed vulnerability. A well-formed request consists of a data buffer d and a length field n < |d|. A correct response to such a request returns the first n bytes contained in d. However, a party could potentially transmit an ill-formed request, in which n > |d|. The correct response to such an ill-formed request is to reject it.
The implementation of OpenSSL (illustrated by the pseudocode function process_heartbeat in Fig. 3) crucially failed to implement this aspect of the protocol and instead returned the n bytes of memory contiguous with the input buffer. process_heartbeat takes a heartbeat request from a client and echoes the provided heartbeat string back. It does so by first parsing the length of the heartbeat string from the client's request. The function then gets a pointer to the heartbeat string in the request. Next, it allocates a response buffer and copies len bytes from the heartbeat string into the response buffer, which is subsequently sent back to the client. Since process_heartbeat does not check the provided heartbeat length against the actual length of the heartbeat string, if the claimed length is larger than the actual length of the provided heartbeat string, memory beyond the client's request is sent back to the client. This is practically exploitable, and has been demonstrated to reveal sensitive in-memory data such as cryptographic keys and passwords.
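The missing validation amounts to a single comparison between the claimed length and the number of bytes actually received. A minimal hardened sketch follows; the `Heartbeat` struct and `safe_heartbeat` are illustrative stand-ins, not OpenSSL's real types or the actual patch.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative request record: both the length field parsed from the
 * request and the number of payload bytes actually present. */
typedef struct {
    unsigned int claimed_len;        /* length field from the request */
    unsigned int actual_len;         /* bytes actually received */
    const unsigned char *payload;
} Heartbeat;

/* Returns a malloc'd echo of the payload, or NULL for an ill-formed
 * request; *out_len receives the response length. */
unsigned char *safe_heartbeat(const Heartbeat *hb, unsigned int *out_len) {
    if (hb->claimed_len > hb->actual_len)
        return NULL;                 /* the check Heartbleed lacked */
    unsigned char *resp = malloc(hb->claimed_len);
    if (resp == NULL)
        return NULL;
    memcpy(resp, hb->payload, hb->claimed_len);  /* copies only real bytes */
    *out_len = hb->claimed_len;
    return resp;
}
```

With the check in place, an ill-formed request (claimed length larger than the payload) is rejected instead of causing a read past the client's buffer.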
Using CHEESECLOTH, we proved in ZK that OpenSSL version 1.0.1f leaks arbitrary user information in 1.3M steps of execution, propagating data purely over register copies, loads, and stores; while the statement reveals a bound on the amount of computation required to perform the leak and information about the types of instructions used to perform the leak (described below), it gives no direct indication of what validation is missing in the function for processing heartbeat requests, or that heartbeat requests are involved in the leak at all. We describe the statement proved, along with technical challenges and solutions, in more detail below.
int login_handler(
    SSLConn *c, char *password, int len) {
  ...
  label l = getLabel(c);
  for (size_t i = 0; i < len; i++)
    taintSource(password + i, l);
  ... }

int ssl3_write(
    SSLConn *c, char *buf, int len) {
  ...
  label l = getLabel(c);
  for (size_t i = 0; i < len; i++)
    taintSink(buf + i, l);
  ... }

Figure 4: Versions of the OpenSSL functions login_handler and ssl3_write that we augmented with operations that specify information sources and sinks. Passwords are tainted with the label of the current connection, and leaks are detected if data written to the network has a label from a different connection.
Specifying OpenSSL's leakage. A primary challenge of our work was to provide a scheme for identifying sensitive sources and sinks such that:

1. A verifier with only an understanding of the data that a subject program handles should be able to inspect the modified program and definitively conclude that it correctly defines information sources and sinks.

2. Any modifications to the program to enable the definition of sources and sinks between which information is leaked must not reveal additional information about the leak's triggering input.
Our mechanism for defining the sensitivity of sources and sinks consists of the designated functions taintSource and taintSink (Sec. 5.2). We found that such a library served well for specifying information flow in OpenSSL; pseudocode of the C functions modified in the OpenSSL codebase to label sources and sinks is given in Fig. 4. The function login_handler, given an SSL connection c and a buffer password presumed to contain len bytes of sensitive information to be transmitted over c, labels len addresses beginning with password with the label of c. The function ssl3_write, given an SSL connection c and buffer buf presumed to output len bytes, denotes sinks at the output channel with the label of c for len addresses beginning with buf.

The modifications to login_handler and ssl3_write illustrate the utility of first-order labels that can be operationally collected and set, as opposed to operations that set addresses as only high sources or low sinks, even in a setting in which the information belonging to only one principal is of interest. By using first-order labels, we were able to write small specification functions that only unified the labels between a network connection and a given buffer, and then succinctly modified the original program logic in contexts that readily provided a connection and related buffer.

Program   Mult gates without public-pc segments (M)   Mult gates without sparsity (M)
GRIT                 42.3 (37%)                                27.9 (4%)
FFmpeg              716.9 (6%)                                709.9 (5%)

Table 2: The number of multiplication gates in the circuit with each optimization disabled; parentheses give the percentage reduction in gate count that the optimization yields relative to these counts (baseline numbers in Tab. 1).
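The label discipline described above can be sketched as a tiny shadow-memory model: sources stamp a connection's label onto addresses, copies propagate labels, and a sink flags a leak when it observes data labeled for a different connection. The shadow array, its size, and the function names below are illustrative assumptions, not CHEESECLOTH's implementation.

```c
/* Miniature shadow memory for first-order labels: one label per
 * address; 0 means "unlabeled".  All sizes are illustrative. */
enum { MEM = 16, NO_LABEL = 0 };

static int shadow[MEM];      /* label attached to each address */
static int leak_detected;    /* set when a cross-connection flow is seen */

/* Stamp a connection's label onto an address (cf. taintSource). */
void taint_source(int addr, int label) { shadow[addr] = label; }

/* At a sink labeled for one connection, data labeled for another
 * connection constitutes a leak (cf. taintSink). */
void taint_sink(int addr, int label) {
    if (shadow[addr] != NO_LABEL && shadow[addr] != label)
        leak_detected = 1;
}

/* Copies propagate labels, mirroring register/load/store flows. */
void tracked_copy(int dst, int src) { shadow[dst] = shadow[src]; }
```

In this sketch, writing a buffer labeled for connection 7 to a sink labeled for connection 8 sets the leak flag, while echoing data back to its own connection does not.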
Proving OpenSSL's leakage. Once OpenSSL has been suitably modified to call the taintSource and taintSink functions, its leakage can be proved by generating a statement whose solution corresponds to an execution of a server running OpenSSL that leaks sensitive data from one connection to another connection. We have generated such a statement where the server first responds to a public login request where the password is marked as sensitive. The server then handles a secret malicious heartbeat request that returns the password from the previous request's connection.

Using CHEESECLOTH, we prove OpenSSL's leakage by validating the previously described execution, which is derived from one of its originally disclosed exploits. The leakage is detected through the source and sink annotations according to our proposed must-leak labeling scheme (Sec. 5.2), and the verifier only learns an upper bound on the length of the malicious request. We found that the labeling scheme enabled leakage to be proved much more efficiently, reducing the overall circuit size by 30.6% over the two-trace approach. CHEESECLOTH proved the vulnerability of OpenSSL in approximately 37 hours, using 460 GB of protocol communication.
6.4 Optimizations

Tab. 2 contains the improvements yielded by our key optimizations (Sec. 4). We ran the GRIT and FFmpeg case studies with each optimization disabled and report the number of multiplication gates in the resulting ZK circuit. In addition, we provide the percentage improvement that each optimization yields relative to the numbers with it disabled. The public-pc optimization reduced gate size by 37% in the shorter GRIT execution and 6% for FFmpeg. While this is an improvement, these results indicate there is still room for improvement in our analysis that determines the number of public segments to generate for longer executions. The sparsity optimization with s = 2 offers modest improvements of 4%–5% in gate size.
7 Related Work
Recent work has provided the first, exciting steps toward proofs of vulnerability in ZK. BubbleRAM [29] is an efficient framework for proving vulnerability, leveraging novel protocols for converting between computations in arithmetic and Boolean fields, efficiently handling both read-only and read-write memory, and the Stack protocol [30] for proving satisfaction of circuits with explicit disjunctions. Although our current statement compiler partially overlaps with BubbleRAM because it implements an older scheme for modeling RAM computations [10], most of our paper's key contributions, namely simplifying unrolled computations using partial evaluation and the novel scheme for generating statements of application leakage, are largely independent of the contributions of [29, 30], and we believe that the approaches could be composed. In particular, Stack was evaluated on code snippets representative of a practical CVE of up to 50 LoC; due to its efficient support of disjunctions, it could scale to prove that one of many more such snippets is vulnerable, but it would likely benefit strongly from CHEESECLOTH's program optimizations if any particular code segment increased in size.
Reverie [27] is a framework for proving exploits in microprocessor code, consisting of a circuit generator that compiles a given program to an arithmetic circuit and an instantiation [36] of the "MPC in the Head" protocol [33]. The compiler generates statements from exploits that have been formalized as executing a designated instruction that signals an error condition (i.e., violations of reachability properties, formalized directly in the program's control flow); the evaluation of Reverie demonstrates that it can be used to prove Capture the Flag (CTF) exploits that require up to 51K cycles on an MSP430 microprocessor. The core contributions of Reverie are largely complementary to those of CHEESECLOTH, which could potentially be adapted to efficiently compile vulnerability statements about programs in intermediate languages to control reachability properties.
Recent work on static program analysis in ZK [21] has presented techniques for proving over-approximations of all program executions without revealing further details of the program, and instantiates the framework on an abstract domain for information flow based on taint tracking. The static analysis itself is designed to prove that a program may leak information: thus, it cannot yield results that directly imply that a program must leak, although in many cases it could provide evidence that could strongly inform an analyst's belief that a program does in fact leak.
Our MicroRAM machine is inspired by TinyRAM [10] but departs from its design in several important ways discussed in Sec. 3.1. There are also some key differences in scope and capabilities. TinyRAM is designed to express correctness of any nondeterministic computation, while MicroRAM focuses on vulnerable programs. For example, the SNARKs for C approach [10] cannot encode proofs of memory-safety vulnerability in ZK directly. Instead, it encodes knowledge of the existence of a complete, concrete vulnerability trace, which includes copies of exact values in all local variables and the values in memory at each point in the trace, and the bug must be evident in the execution's return value. Our approach encodes memory vulnerabilities directly, resulting in significantly more succinct witnesses to vulnerability. In particular, we can disregard the trace after the bug is found, and we do not rely on the program's return value.
Furthermore, the TinyRAM approach does not scale to proofs of vulnerabilities in practical programs and has only been evaluated on programs with fewer than 1,200 low-level instructions [10]. In contrast, the optimizations proposed in this work enable us to support programs with more than 340,000 lines of low-level code (Tab. 1). Beyond scalability, MicroRAM supports a much broader subset of the C language, including most of the standard C library.
Pantry and Buffet [17, 52] represent computation as arithmetic constraints; a solution to the constraints is a valid trace of the computation. After implementing the memory consistency approach of TinyRAM, they report results orders of magnitude better than TinyRAM. Buffet supports all features in the C language, with the exceptions of goto statements and function pointers. To translate computation into a constraint system, Pantry and Buffet must unroll loops to a publicly revealed bound (although the original work does not explicitly discuss encoding recursive functions, we hypothesize that they would be encoded similarly, using bounded function inlining). The constraint system must include every branch of conditionals and every iteration of every loop (multiplicatively with nested loops), which could lead to blowups in the constraint system; however, the authors suggest that this would only happen in degenerate cases and would not be common in practice. A variant of Pantry/Buffet uses zero-knowledge techniques to keep the state private with the same efficiency benefits. When presenting our approach, we compare the facts about private inputs that it reveals to those revealed by public loop bounds (Sec. 3.2.3).
vRAM [53] has achieved further efficiency with an ingenious universal preprocessing that allows the parties to use a smaller circuit tailored to verifying the specific program on the chosen inputs. Unfortunately, such tailored circuits can reveal significant information about the input provided. Our public-pc optimization (Sec. 4.1) attempts to balance the gains of a tailored circuit and the privacy requirements of the prover.
8 Conclusion
Due to sustained successes in the development of ZK protocols, recent techniques have reached the cusp of proving knowledge of realistic vulnerabilities and proving subtle exploits in low-level code. This paper describes how a host of core techniques from compiler design, namely conservative instruction profiling and under-approximating information-flow tainting, can be implemented in an optimizing proof-statement generator to produce proofs of vulnerability in commodity software that can be triggered only by using a considerable amount of time and space.

Our practical experience has produced a zero-knowledge proof of memory unsafety in FFmpeg and a proof of leakage in OpenSSL that directly used the Heartbleed exploit as a witness, and demonstrates that zero-knowledge proofs of vulnerability in critical application software are now practical.
Availability and Ethical Considerations

We are in the process of open-sourcing the implementation of CHEESECLOTH for publication and artifact evaluation. CHEESECLOTH aids in responsible disclosure by producing zero-knowledge proofs of the existence of vulnerabilities while keeping the vulnerabilities and exploits secret. All vulnerabilities used in our evaluation have been previously disclosed publicly, and fixes are widely deployed. Thus, the work presented in this paper does not constitute an unethical disclosure of potentially harmful information.
Acknowledgments

This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0085. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited.
References

[1] CVE-2013-0864. https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-0864. Accessed: 2022-10-10.

[2] FFmpeg. https://ffmpeg.org/. Accessed: 2022-09-01.

[3] OpenSSL: Cryptography and SSL/TLS toolkit. https://openssl.org/. Accessed: 2022-09-05.

[4] Picolibc: C libraries for smaller embedded systems. https://keithp.com/picolibc/. Accessed: 2022-10-10.

[5] The Heartbleed Bug. https://heartbleed.com/. Accessed: 2022-09-05.

[6] zkInterface: SIEVE intermediate representation (IR) proposal. https://hackmd.io/@danib31/BkP9HBp2L. Accessed: 2022-10-10.

[7] Alfred V. Aho, Monica S. Lam, Ravi Sethi, and Jeffrey D. Ullman. Compilers: Principles, Techniques, & Tools. Pearson Education India, 2007.

[8] Scott Ames, Carmit Hazay, Yuval Ishai, and Muthuramakrishnan Venkitasubramaniam. Ligero: Lightweight sublinear arguments without a trusted setup. In Bhavani M. Thuraisingham, David Evans, Tal Malkin, and Dongyan Xu, editors, ACM CCS 2017, pages 2087–2104, Dallas, TX, USA, October 31 – November 2, 2017. ACM Press.

[9] Carsten Baum, Alex J. Malozemoff, Marc B. Rosen, and Peter Scholl. Mac'n'cheese: Zero-knowledge proofs for Boolean and arithmetic circuits with nested disjunctions. In Malkin and Peikert [37], pages 92–122.

[10] Eli Ben-Sasson, Alessandro Chiesa, Daniel Genkin, Eran Tromer, and Madars Virza. SNARKs for C: Verifying program executions succinctly and in zero knowledge. In Ran Canetti and Juan A. Garay, editors, CRYPTO 2013, Part II, volume 8043 of LNCS, pages 90–108, Santa Barbara, CA, USA, August 18–22, 2013. Springer, Heidelberg, Germany.

[11] Eli Ben-Sasson, Alessandro Chiesa, Daniel Genkin, Eran Tromer, and Madars Virza. TinyRAM architecture specification, v0.991. https://www.scipr-lab.org/doc/TinyRAM-spec-0.991.pdf, 2013.

[12] Eli Ben-Sasson, Alessandro Chiesa, Eran Tromer, and Madars Virza. Succinct non-interactive zero knowledge for a von Neumann architecture. In 23rd USENIX Security Symposium (USENIX Security 14), pages 781–796, 2014.

[13] Nick Benton. Simple relational correctness proofs for static analyses and program transformations. ACM SIGPLAN Notices, 39(1):14–25, 2004.

[14] Alexander R. Block, Justin Holmgren, Alon Rosen, Ron D. Rothblum, and Pratik Soni. Public-coin zero-knowledge arguments with (almost) minimal time and space overheads. In Rafael Pass and Krzysztof Pietrzak, editors, TCC 2020, Part II, volume 12551 of LNCS, pages 168–197, Durham, NC, USA, November 16–19, 2020. Springer, Heidelberg, Germany.

[15] Alexander R. Block, Justin Holmgren, Alon Rosen, Ron D. Rothblum, and Pratik Soni. Time- and space-efficient arguments from groups of unknown order. In Malkin and Peikert [37], pages 123–152.

[16] Jonathan Bootle, Andrea Cerulli, Jens Groth, Sune K. Jakobsen, and Mary Maller. Arya: Nearly linear-time zero-knowledge proofs for correct program execution. In Thomas Peyrin and Steven Galbraith, editors, ASIACRYPT 2018, Part I, volume 11272 of LNCS, pages 595–626, Brisbane, Queensland, Australia, December 2–6, 2018. Springer, Heidelberg, Germany.

[17] Benjamin Braun, Ariel J. Feldman, Zuocheng Ren, Srinath Setty, Andrew J. Blumberg, and Michael Walfish. Verifying computations with state. In Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, pages 341–357, 2013.

[18] Michael R. Clarkson and Fred B. Schneider. Hyperproperties. Journal of Computer Security, 18(6):1157–1210, 2010.

[19] Dorothy E. Denning. A lattice model of secure information flow. Communications of the ACM, 19(5):236–243, 1976.

[20] William Enck, Peter Gilbert, Seungyeop Han, Vasant Tendulkar, Byung-Gon Chun, Landon P. Cox, Jaeyeon Jung, Patrick McDaniel, and Anmol N. Sheth. TaintDroid: An information-flow tracking system for realtime privacy monitoring on smartphones. ACM Transactions on Computer Systems (TOCS), 32(2):1–29, 2014.

[21] Zhiyong Fang, David Darais, Joseph P. Near, and Yupeng Zhang. Zero knowledge static program analysis. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, pages 2951–2967, 2021.

[22] Nicholas Franzese, Jonathan Katz, Steve Lu, Rafail Ostrovsky, Xiao Wang, and Chenkai Weng. Constant-overhead zero-knowledge for RAM programs. In Giovanni Vigna and Elaine Shi, editors, ACM CCS 2021, pages 178–191, Virtual Event, Republic of Korea, November 15–19, 2021. ACM Press.

[23] Galois, Inc. swanky: A suite of rust libraries for secure computation. https://github.com/GaloisInc/swanky, 2019.

[24] Rosario Gennaro, Craig Gentry, Bryan Parno, and Mariana Raykova. Quadratic span programs and succinct NIZKs without PCPs. In Thomas Johansson and Phong Q. Nguyen, editors, EUROCRYPT 2013, volume 7881 of LNCS, pages 626–645, Athens, Greece, May 26–30, 2013. Springer, Heidelberg, Germany.

[25] Joseph A. Goguen and José Meseguer. Security policies and security models. In 1982 IEEE Symposium on Security and Privacy, pages 11–11. IEEE, 1982.

[26] Oded Goldreich, Silvio Micali, and Avi Wigderson. Proofs that yield nothing but their validity or all languages in NP have zero-knowledge proof systems. Journal of the ACM (JACM), 38(3):690–728, 1991.

[27] Matthew Green, Mathias Hall-Andersen, Eric Hennenfent, Gabriel Kaptchuk, Benjamin Perez, and Gijs Van Laer. Efficient proofs of software exploitability for real-world processors. Cryptology ePrint Archive, 2022.

[28] Jens Groth. On the size of pairing-based non-interactive arguments. In Marc Fischlin and Jean-Sébastien Coron, editors, EUROCRYPT 2016, Part II, volume 9666 of LNCS, pages 305–326, Vienna, Austria, May 8–12, 2016. Springer, Heidelberg, Germany.

[29] David Heath and Vladimir Kolesnikov. A 2.1 KHz zero-knowledge processor with BubbleRAM. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, pages 2055–2074, 2020.

[30] David Heath and Vladimir Kolesnikov. Stacked garbling for disjunctive zero-knowledge proofs. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 569–598. Springer, 2020.

[31] David Heath, Yibin Yang, David Devecsery, and Vladimir Kolesnikov. Zero knowledge for everything and everyone: Fast ZK processor with cached ORAM for ANSI C programs. In 2021 IEEE Symposium on Security and Privacy, pages 1538–1556, San Francisco, CA, USA, May 24–27, 2021. IEEE Computer Society Press.

[32] Zhangxiang Hu, Payman Mohassel, and Mike Rosulek. Efficient zero-knowledge proofs of non-algebraic statements with sublinear amortized cost. In Rosario Gennaro and Matthew J. B. Robshaw, editors, CRYPTO 2015, Part II, volume 9216 of LNCS, pages 150–169, Santa Barbara, CA, USA, August 16–20, 2015. Springer, Heidelberg, Germany.

[33] Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky, and Amit Sahai. Zero-knowledge from secure multiparty computation. In Proceedings of the thirty-ninth annual ACM Symposium on Theory of Computing, pages 21–30, 2007.

[34] Jasper Vijn. GRIT: GBA raster image transmogrifier. https://github.com/devkitPro/grit, 2022.

[35] Neil D. Jones, Carsten K. Gomard, and Peter Sestoft. Partial Evaluation and Automatic Program Generation. Peter Sestoft, 1993.

[36] Jonathan Katz, Vladimir Kolesnikov, and Xiao Wang. Improved non-interactive zero knowledge with applications to post-quantum signatures. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 525–537, 2018.

[37] Tal Malkin and Chris Peikert, editors. CRYPTO 2021, Part IV, volume 12828 of LNCS, Virtual Event, August 16–20, 2021. Springer, Heidelberg, Germany.

[38] Payman Mohassel, Mike Rosulek, and Alessandra Scafuro. Sublinear zero-knowledge arguments for RAM programs. In Jean-Sébastien Coron and Jesper Buus Nielsen, editors, EUROCRYPT 2017, Part I, volume 10210 of LNCS, pages 501–531, Paris, France, April 30 – May 4, 2017. Springer, Heidelberg, Germany.

[39] Andrew C. Myers. JFlow: Practical mostly-static information flow control. In Proceedings of the 26th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 228–241, 1999.

[40] Andrew C. Myers and Barbara Liskov. A decentralized model for information flow control. ACM SIGOPS Operating Systems Review, 31(5):129–142, 1997.

[41] Nicholas Nethercote and Julian Seward. Valgrind: A framework for heavyweight dynamic binary instrumentation. In Proceedings of the 28th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI '07, pages 89–100, New York, NY, USA, 2007. Association for Computing Machinery.

[42] James Parker, Niki Vazou, and Michael Hicks. LWeb: Information flow security for multi-tier web applications. Proc. ACM Program. Lang., 3(POPL), January 2019.

[43] Bryan Parno, Jon Howell, Craig Gentry, and Mariana Raykova. Pinocchio: Nearly practical verifiable computation. In 2013 IEEE Symposium on Security and Privacy, pages 238–252, Berkeley, CA, USA, May 19–22, 2013. IEEE Computer Society Press.

[44] John C. Reynolds. Separation logic: A logic for shared mutable data structures. In Proceedings 17th Annual IEEE Symposium on Logic in Computer Science, pages 55–74. IEEE, 2002.

[45] Andrei Sabelfeld and Andrew C. Myers. Language-based information-flow security. IEEE Journal on Selected Areas in Communications, 21(1):5–19, 2003.

[46] Mooly Sagiv, Thomas Reps, and Reinhard Wilhelm. Parametric shape analysis via 3-valued logic. ACM Transactions on Programming Languages and Systems (TOPLAS), 24(3):217–298, 2002.

[47] Marc Shapiro and Susan Horwitz. Fast and accurate flow-insensitive points-to analysis. In Proceedings of the 24th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 1–14, 1997.

[48] Bjarne Steensgaard. Points-to analysis in almost linear time. In Proceedings of the 23rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 32–41, 1996.

[49] Deian Stefan, Alejandro Russo, John C. Mitchell, and David Mazières. Flexible dynamic information flow control in Haskell. In Proceedings of the 4th ACM Symposium on Haskell, pages 95–106, 2011.

[50] G. Edward Suh, Jae W. Lee, David Zhang, and Srinivas Devadas. Secure program execution via dynamic information flow tracking. ACM SIGPLAN Notices, 39(11):85–96, 2004.

[51] Riad S. Wahby, Srinath T. V. Setty, Zuocheng Ren, Andrew J. Blumberg, and Michael Walfish. Efficient RAM and control flow in verifiable outsourced computation. In NDSS 2015, San Diego, CA, USA, February 8–11, 2015. The Internet Society.

[52] Riad S. Wahby, Srinath T. V. Setty, Zuocheng Ren, Andrew J. Blumberg, and Michael Walfish. Efficient RAM and control flow in verifiable outsourced computation. In NDSS, 2015.

[53] Yupeng Zhang, Daniel Genkin, Jonathan Katz, Dimitrios Papadopoulos, and Charalampos Papamanthou. vRAM: Faster verifiable RAM with program-independent preprocessing. In 2018 IEEE Symposium on Security and Privacy (SP), pages 908–925. IEEE, 2018.
UNFAT4oBgHgl3EQf2x7k/content/tmp_files/2301.08717v1.pdf.txt ADDED
@@ -0,0 +1,2145 @@
+ ACCELERATED PATHS AND UNRUH EFFECT II: FINITE TIME DETECTOR RESPONSE IN (ANTI) DE SITTER SPACETIME AND HUYGENS' PRINCIPLE
+ A PREPRINT
+ Shahnewaz Ahmed^{a,b,c}, Mir Mehedi Faruk^{d,e,f}, and Muktadir Rahman^{g}
+ a School of Data and Sciences, BRAC University, 66 Mohakhali, Dhaka 1212, Bangladesh
+ b Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5, Canada
+ c Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
+ d Department of Physics, McGill University, Montreal, Quebec, H3A 2T8, Canada
+ e Institute for Theoretical Physics, University of Amsterdam, Science Park 904, 1090 GL Amsterdam, The Netherlands
+ f Delta Institute for Theoretical Physics, Science Park 904, PO Box 94485, 1090 GL Amsterdam, The Netherlands
+ g Department of Physics, University of Nevada, Reno, 1664 N Virginia St, Reno, NV 89557
+ January 23, 2023
+ ABSTRACT
+ We study the finite time response of an Unruh-DeWitt particle detector described by a qubit (two-level system) moving with uniform constant acceleration in maximally symmetric spacetimes.
+ The D dimensional massless fermionic response function in de Sitter (dS) background is found to be identical to that of a detector linearly coupled to a massless scalar field in 2D dimensional dS background.
+ Furthermore, we revisit the status of Huygens' principle in the Unruh radiation observed by the detector.
+ A uniformly accelerating observer of constant acceleration a moving in Minkowski or maximally symmetric spacetime sees the vacuum of an inertial observer as a thermal state of temperature T = ω/(2π). Here, ω is given by [1, 2, 3, 4, 5, 6],
+ ω = √(a² + k²)   (dS, Λ > 0),
+ ω = √(a² − k²)   (AdS, Λ < 0),
+ ω = a            (Minkowski, Λ = 0).   (1)
+ Here the cosmological constant Λ is related to the curvature k of the D-dimensional spacetime through |Λ| = (k²/2)(D − 2)(D − 3).
+ In recent times, Unruh radiation and its close analogue, Hawking radiation, have been
+ extensively studied with the tool of the Unruh-DeWitt (UDW) particle detector. The UDW detector has applications connecting other branches of physics, including but not limited to entanglement harvesting [7, 8, 9, 10], QCD [11], complexity [12] and cosmology [13, 14, 15], as well as application-oriented research directions such as condensed matter systems [16, 17] like anyons [18] and the construction of heat engines [19, 20] using UDW detectors.
+ The accelerated UDW detector shows a response when coupled to a matter field. One of the pioneering works on detector physics was by Takagi [21], where interesting features of the detector response function were elaborated.
+ A complete story of the fermionic response function of accelerated detectors in flat spacetime was developed recently in [22]. It was noted in [22] that in D-dimensional Minkowski spacetime, the
+ arXiv:2301.08717v1 [hep-th] 20 Jan 2023
+ A PREPRINT - JANUARY 23, 2023
+ response of the accelerated UDW detector coupled to a massless Dirac field is proportional to that of a detector linearly coupled to a massless scalar field in 2D dimensions. This observation was quite interesting, as it helped us measure the Unruh radiation observed by the detector when coupled to a fermionic matter field.
+ It is then natural to ask whether a similar conclusion also arises when the fermionic field is coupled to curved spacetime. In our previous article [23] we explained how the same mechanism works in an AdS background.
+ Another significant observation, made by several authors [24, 25], is the apparent statistics inversion of the Unruh radiation in odd-dimensional spacetime: there, the thermal radiation measured by a linear UDW particle detector coupled to a scalar field can maintain an anti-periodic relation. Our previous article explained how non-linearity affects the statistics inversion in AdS spacetime.
+ Of course, when the curvature of the spacetime approaches zero, we reproduce the known results for the detector response in flat spacetime [6, 23], but the same setup in the dS background still needs to be discussed. In this article, we analyze the effects of non-linearity and develop the calculation of the fermionic response function in the dS background.
+ In our earlier work, and in many other studies of the UDW detector, the response function was investigated with the detector "turned on" for an infinite amount of time [6, 21, 23]. In practice it is impossible to turn on the detector for an infinite time [20]; therefore, the finite time response has been rigorously investigated in the context of flat spacetime [26].
+ We start our manuscript by analysing the finite time response of an accelerated UDW detector in AdS spacetime coupled to real scalar fields in a non-linear way. In section 2, we first elaborate on the dS case for real scalar fields and then finish the calculation for fermionic fields. We prove here that the response function of a uniformly accelerated UDW detector coupled to a massless Dirac field in D-dimensional dS spacetime is equivalent to the response function of a detector linearly coupled to a massless scalar field in 2D dimensional dS spacetime.
+ We generalised this result to the non-trivial gravitational background of AdS spacetime in part I [23] and of dS spacetime in this article. Finally, we summarise the cases in which Huygens' principle is maintained or violated by the Unruh radiation observed by accelerated detectors in maximally symmetric spacetime.
+ Throughout the whole article we choose ℏ = 1, c = 1 and Boltzmann constant k_B = 1 in our calculations.
+ 1 Finite time response of UDW detector: Scalar field
+ We first consider a real scalar field Φ in D-dimensional (A)dS spacetime which is conformally coupled to the gravitational background. We consider the AdS metric in Poincaré coordinates, but we could of course choose any other coordinate system, such as global coordinates. Similarly, we will follow the flat slicing for the de Sitter background. The AdS metric in Poincaré coordinates is,
+ ds² = (1/(k²z²)) ( dt² − dx_1² − dx_2² − ... − dx_{D−2}² − dz² ).   (2)
+ The dS metric in flat slicing is written as,
+ ds² = (1/(k²η²)) ( dη² − dx_1² − dx_2² − ... − dx_{D−2}² − dx_{D−1}² ).   (3)
+ Depending on which gravitational background we would like, AdS or dS, we choose eq. (2) or eq. (3). The total action of our system of interest is,
+ S = S_0 + S_int + S_detector.   (4)
+ The matter field action is simply,
+ S_0 = (1/2) ∫ d^D x √|g| [ g^{µν} ∇_µΦ ∇_νΦ + ζ R Φ² ].   (5)
+ When scalars are conformally coupled to gravity, one can specify [27],
+ ζ = (D − 2) / ( 4(D − 1) ).   (6)
+ The interaction part of the Hamiltonian is simply,
+ H_I(τ) = λ χ_T(τ) ( σ̂₋(τ) + σ̂₊(τ) ) Φ(z(τ)),   (7)
+ where λ is the coupling strength and χ_T is a switching function which controls the duration of the interaction with the field. T represents how long the detector is on; we will refer to it as the switching time.
+ It is well known that any sudden jump in the switching function may cause divergences [20] for finite-time interaction. Therefore we choose a Lorentzian switching function (a smooth function),
+ χ_T(τ) = (T/2)² / ( τ² + (T/2)² ).   (8)
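As a quick numerical illustration (not from the paper), the Lorentzian switching function of eq. (8) can be sketched as follows; it equals 1 at τ = 0 and falls off smoothly as a power law, so it has no sudden jumps:

```python
# Lorentzian switching function of eq. (8); T is the switching time.
# chi(0, T) = 1 and chi falls off smoothly, avoiding the switching
# divergences mentioned in the text.
def chi(tau, T):
    return (T / 2) ** 2 / (tau ** 2 + (T / 2) ** 2)

print(chi(0.0, 3.0))    # peak value at tau = 0
print(chi(3.0, 3.0))    # already suppressed one switching time away
```

Setting χ_T = 1 (i.e. T → ∞) recovers the infinite-time interaction discussed next.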
+ One can simply set T → ∞, i.e. χ_T = 1, in order to obtain the usual detector response (where it is assumed that the detector can interact with the matter fields for an infinite time). The detector is modeled as a two-level quantum system defined along a worldline x(τ). The detector Hamiltonian is,
+ Ĥ_D = (Ω/2) ( σ̂₊σ̂₋ − σ̂₋σ̂₊ ).   (9)
+ We are treating the UDW detector as a two-level system: there are two states, |g⟩ and |e⟩, and Ω is the energy gap between them. Here σ̂₊ and σ̂₋ are the well-known SU(2) ladder operators,
+ |e⟩ = σ̂₊ |g⟩,   (10)
+ σ̂₋ |g⟩ = 0.   (11)
+ From this Hamiltonian we can easily see that the ground and excited states of the detector are |g⟩ and |e⟩, respectively:
+ Ĥ_D |e⟩ = (Ω/2) |e⟩,   (12)
+ Ĥ_D |g⟩ = −(Ω/2) |g⟩.   (13)
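The two-level algebra of eqs. (9)-(13) can be checked directly with 2×2 matrices (a minimal sketch; the basis ordering (|e⟩, |g⟩) and the value of Ω are our illustrative choices):

```python
import numpy as np

# Two-level detector algebra of eqs. (9)-(13); basis ordering (|e>, |g>).
e = np.array([1.0, 0.0])
g = np.array([0.0, 1.0])
sigma_p = np.outer(e, g)                  # sigma_+ = |e><g|
sigma_m = np.outer(g, e)                  # sigma_- = |g><e|
Omega = 2.0
HD = (Omega / 2) * (sigma_p @ sigma_m - sigma_m @ sigma_p)   # eq. (9)

assert np.allclose(sigma_p @ g, e)                  # eq. (10)
assert np.allclose(sigma_m @ g, np.zeros(2))        # eq. (11)
assert np.allclose(HD @ e, (Omega / 2) * e)         # eq. (12)
assert np.allclose(HD @ g, -(Omega / 2) * g)        # eq. (13)
print(HD)
```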
+ Note that Ĥ_D generates time translations with respect to the detector's proper time τ. We assume that the detector follows a timelike trajectory x(τ), parametrized by the proper time τ, in D-dimensional spacetime. Any real scalar quantum field Φ(x) can be written as a mode expansion,
+ Φ(x) = ∫ d^l k ( f_k(x) b̂_k + f*_k(x) b̂†_k ),   (14)
+ where {f_k(x)} is assumed to be a normalized basis of solutions to the Klein-Gordon equation, whose functional form is fixed by the gravitational background. The annihilation and creation operators, obeying the usual commutation relations, are b̂_k and b̂†_k, respectively. In principle the creation/annihilation operators can be used to construct a Hilbert space representation for the
+ quantum field, defined in terms of the vacuum state |0⟩. The most interesting quantity for us is the probability amplitude for the transition from the initial state |g, 0⟩ to a state |e, ϕ⟩, where |ϕ⟩ denotes an arbitrary final state of the field. The amplitude can be found following [28],
+ A_{g→e}(ϕ) = ⟨e, ϕ| U_I |g, 0⟩ = Σ_{n=0}^{∞} ⟨e, ϕ| U_I^{(n)} |g, 0⟩   (15)
+ = Σ_{n odd} ( λ^n (−i)^n / n! ) ∫ dτ_1 ··· dτ_n χ(τ_1) ··· χ(τ_n) ⟨ϕ| T[ Φ̂(τ_1) ··· Φ̂(τ_n) ] |0⟩ e^{iΩ(τ_1 − τ_2 + ··· + τ_n)},
+ where the time evolution operator U_I is given in the usual way,
+ U_I = T exp( −i ∫ dτ H_I(τ) ) = Σ_{n=0}^{∞} ( (−i)^n / n! ) ∫ dτ_1 ··· dτ_n T[ H_I(τ_1) ··· H_I(τ_n) ] ≡ Σ_{n=0}^{∞} U_I^{(n)}.   (16)
+ One can determine the transition probability to arbitrary order by tracing over the final field states |ϕ⟩,
+ P_{g→e} = ∫ Dϕ |A_{g→e}(ϕ)|²
+ = Σ_{n,m odd} ( λ^{n+m} (−i)^{n−m} / (n! m!) ) ∫ dτ'_1 ... dτ'_m dτ_1 ··· dτ_n χ(τ'_1) ··· χ(τ'_m) χ(τ_1) ··· χ(τ_n)
+ × ⟨0| T[ Φ(τ'_1) ··· Φ(τ'_m) ]† T[ Φ(τ_1) ··· Φ(τ_n) ] |0⟩ e^{−iΩ(τ'_1 − ··· + τ'_m)} e^{iΩ(τ_1 − ··· + τ_n)}.   (17)
+ In this series the lowest-order term is of second order in the coupling constant λ. It is expressed as,
+ P^{(2)}_{g→e} = λ² ∫ dτ dτ' χ(τ) χ(τ') ⟨0| Φ(τ) Φ(τ') |0⟩ e^{iΩ(τ−τ')} = λ² ∫ dτ dτ' χ(τ) χ(τ') W^{(2)}_D(x(τ), x'(τ')) e^{iΩ(τ−τ')}.   (18)
+ Here, W^{(2)}_D(x(τ), x'(τ')) is the D-dimensional two-point correlator (Wightman function). The exact functional form of the Wightman function again depends upon the background gravity.
+ We can consider a more general interaction Hamiltonian,
+ H_I = λ χ_T(τ) m(τ) O_Φ[x(τ)],   (19)
+ where m(τ) is the monopole operator,
+ m(τ) = e^{iΩτ} |e⟩⟨g| + e^{−iΩτ} |g⟩⟨e| = [ 0, e^{+iΩτ} ; e^{−iΩτ}, 0 ].   (20)
+ This takes the ground state of the detector to the excited state and vice versa; in other words, we can physically describe the process as "a click" in response to the presence of the field. The operator O_Φ specifies how the matter field is coupled to the detector. Instead of the usual linear coupling we take a more general coupling¹, with n any positive integer,
+ O_Φ[x(τ)] = Φ^n[x(τ)].   (21)
+ ¹ Normal ordering is assumed.
+ If we use the interaction Hamiltonian (21), then we re-express eq. (18) as,
+ P^{(2)}_{g→e} = λ² ∫ dτ dτ' χ(τ) χ(τ') W^{(2n)}_D(x(τ), x'(τ')) e^{iΩ(τ−τ')}.   (22)
+ Here, W^{(2n)}_D(τ − τ') = ⟨0| :Φ^n(x(τ)): :Φ^n(x(τ')): |0⟩ is the 2n-point correlator. The detector response function of the UDW detector is directly proportional to the probability for the detector to transition from the ground state to the excited state. Using the Lorentzian switching function, eq. (8), the response function F^{(n)}(Ω, T) becomes,
+ F^{(n)}(Ω, T) = (πT³/4) ∫_{−∞}^{∞} d(Δτ) [ W^{(2n)}_D(Δτ) / (Δτ² + T²) ] e^{−iΩΔτ}.   (23)
+ The 2n-point function W^{(2n)}_D(x, x') is related to the Wightman function in the following simple way by Wick's theorem [29],
+ W^{(2n)}_D(x, x') = n! [ W^{(2)}_D(x, x') ]^n.   (24)
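Eq. (24) can be spot-checked numerically for n = 2 with a pair of correlated Gaussian variables standing in for the field at the two points (the covariance value 0.6 is purely illustrative):

```python
import numpy as np

# Monte Carlo spot-check of Wick's relation (24) for n = 2: for jointly
# Gaussian variables, <:phi^2::phi'^2:> = 2! * <phi phi'>^2, where
# :phi^2: = phi^2 - <phi^2> (normal ordering subtracts the self-contraction).
rng = np.random.default_rng(0)
c = 0.6                                        # stand-in for W^(2)(x, x')
cov = np.array([[1.0, c], [c, 1.0]])
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=4_000_000).T
lhs = np.mean((x**2 - 1.0) * (y**2 - 1.0))     # 2n-point correlator, n = 2
rhs = 2 * c**2                                 # n! * [W^(2)]^n
print(lhs, rhs)
```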
+ 1.1 Finite time response of scalar fields: AdS spacetime
+ For conformally coupled scalars, the Wightman function in D > 2 dimensional AdS spacetime can be obtained in the following form with a suitable boundary condition [6],
+ W^{(2)}_{AdS_D}(x, x') = ⟨0| Φ(x(τ)) Φ(x(τ')) |0⟩ = C_D [ 1/(v − 1)^{D/2−1} − 1/(v + 1)^{D/2−1} ].   (25)
+ Here, v is the conformal invariant defined as,
+ v = ( z² + z'² + (x − x')² − (t − t' − iϵ)² ) / (2 z z'),   (26)
+ and C_D is a constant,
+ C_D = k^{D−2} Γ(D/2 − 1) / ( 2 (2π)^{D/2} ).   (27)
+ In this article we mainly focus on the Unruh effect, a widely studied phenomenon which states that an accelerating observer (with constant linear acceleration a) will observe a thermal bath of temperature T. If the observer were in flat spacetime, the temperature would be given by T = ℏa/(2πck_B). In the usual literature [30] (studies mostly done in flat spacetime), the path chosen for the accelerating observer has uniform linear acceleration a. An accelerating observer (detector) moving on a circular path with constant velocity v also ends up having constant acceleration a; of course, the resultant radiation (detector response) due to this type of non-linear motion will not be quite thermal, as the correlators will not obey the KMS relation [31]. We are interested in those accelerated paths whose Wightman functions maintain a valid KMS relation.
+ 1.1.1 Supercritical accelerated paths in AdS
+ We are considering the supercritical paths (a > k), as only these paths result in a non-zero response function for the detectors [6] in uniform linear acceleration. In our recent article [23] we showed, using the GEMS (Global Embedding Minkowski Spacetimes) approach, that one can construct a path with constant acceleration by considering the path as the intersection of a flat plane of dimension M (with M < D + 1) and the D-dimensional AdS hypersurface embedded in (D + 1)-dimensional flat spacetime. We also proved that all uniform linear supercritical trajectories have the same conformal invariant v as a function of proper time,
+ v(τ, τ') = a²/ω² − (k²/ω²) cosh( ω(τ − τ') − iϵ ).   (28)
+ Here ω = √(a² − k²). An example of a supercritical path (with constant linear acceleration) in the z−t plane is given in ref. [23],
+ t(τ) = (a/ω) e^{ωτ},  z(τ) = e^{ωτ},  x_1 = x_2 = x_3 = ... = x_{D−2} = 0.   (29)
+ Another supercritical path with constant acceleration a, in the x_1−t plane, is given as follows [23],
+ z(τ) = z_0,  x_1(τ) = (z_0 k/ω) cosh(ωτ),  t(τ) = (z_0 k/ω) sinh(ωτ),  x_2 = x_3 = ... = x_{D−2} = 0.   (30)
+ Here z_0 is a constant and τ is the proper time. We could also define the path (30) in any x_i−t plane. However, we have already shown in ref. [23] how all uniformly accelerating paths with constant acceleration are related by AdS isometries. Any supercritical path will result in eq. (28). Therefore, following eq. (25), the two-point function for uniform acceleration (on any supercritical path) becomes,
+ G_{AdS_D}(Δτ) = [ ω^{D−2} Γ(D/2 − 1) / (4π)^{D/2} ] { 1 / [ i^{D−2} sinh^{D−2}( ωΔτ/2 − iϵ ) ] − 1 / [ sinh^{D/2−1}( A + (ωΔτ/2 − iϵ) ) sinh^{D/2−1}( A − (ωΔτ/2 − iϵ) ) ] }.   (31)
+ Here, sinh A = ω/k.
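One can verify numerically that the trajectory (30) indeed yields the universal conformal invariant (28). A quick sketch, with illustrative parameter values:

```python
import numpy as np

# Check that the supercritical path of eq. (30) reproduces the conformal
# invariant of eq. (28): v = a^2/w^2 - (k^2/w^2) cosh(w (tau - tau')).
a, k = 4.0, 1.0                      # supercritical: a > k
w = np.sqrt(a**2 - k**2)
z0 = 1.0

def path(tau):                       # eq. (30), with x_2 = ... = 0
    z = z0
    x1 = (z0 * k / w) * np.cosh(w * tau)
    t = (z0 * k / w) * np.sinh(w * tau)
    return t, x1, z

def v_from_metric(tau1, tau2):       # conformal invariant, eq. (26)
    t1, x1, z1 = path(tau1)
    t2, x2, z2 = path(tau2)
    return (z1**2 + z2**2 + (x1 - x2)**2 - (t1 - t2)**2) / (2 * z1 * z2)

tau1, tau2 = 0.7, -0.3
v_direct = v_from_metric(tau1, tau2)
v_formula = a**2 / w**2 - (k**2 / w**2) * np.cosh(w * (tau1 - tau2))  # eq. (28)
print(v_direct, v_formula)
```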
+ Figure 1: Contour for evaluating I_{D,n,α}.
+ In our previous article we demonstrated the following relation for the 2n-point correlator [23],
+ W^{(2n)}_{AdS_D}(Δτ + 2πi/ω) = (−1)^{nD} W^{(2n)}_{AdS_D}(Δτ).   (32)
+ Figure 2: Plot of the finite-time response F^{(n)}_{AdS_4}(Ω, T) of the UDW particle detector in AdS spacetime against energy Ω. (From left to right) the 1st, 2nd and 3rd columns show the plots for different values of n, with n = 1, n = 2 and n = 3. Each row (from top to bottom) shows the variation of the response function with changing a (a = 2, ..., 7; fixed k = 1, T = 3), changing k (k = 0, 0.5, ..., 3; fixed a = 4, T = 3) and changing T (T = 1.5, 2, ..., 3.5; fixed a = 4, k = 1), respectively.
+ Therefore ω = √(a² − k²) sets the temperature [23]. We rewrite the 2n-point function G^{(n)}_{AdS_D} in the following manner [23],
+ G^{(n)}_{AdS_D}(Δτ) = n! C_D^n ( ω/(√2 k) )^{n(D−2)} Σ_{α=0}^{n} (n choose α) ( (−1)^α / i^p ) G_{D,n,α}(ρ),   (33)
+ where,
+ G_{D,n,α}(ρ) = ( sinh(ρ − iϵ) )^{−p} ( sinh(A + (ρ − iϵ)) )^{−q} ( sinh(A − (ρ − iϵ)) )^{−q},   (34)
+ ρ = ωΔτ/2,   (35)
+ p = 2(n − α)(D/2 − 1),   (36)
+ q = α(D/2 − 1).   (37)
+ So, finally, the expression for the finite-time response function in AdS spacetime for our interaction Hamiltonian becomes,
+ F^{(n)}(Ω, T) = ( π n! T³ C_D^n / 4 ) ( ω/(√2 k) )^{n(D−2)} ∫_{−∞}^{∞} d(Δτ) [ e^{−iΩΔτ} / (Δτ² + T²) ] Σ_{α=0}^{n} (n choose α) ( (−1)^α / i^p ) G_{D,n,α}(ρ)   (38)
+ = ( π n! T³ C_D^n / 4 ) ( ω/(√2 k) )^{n(D−2)} Σ_{α=0}^{n} (n choose α) ( (−1)^α / i^p ) (ω/2) ∫_{−∞}^{∞} dρ [ e^{−i(2Ω/ω)ρ} / ( ρ² + (ωT/2)² ) ] G_{D,n,α}(ρ)   (39)
+ = ( ω π n! T³ C_D^n / 8 ) ( ω/(√2 k) )^{n(D−2)} Σ_{α=0}^{n} (n choose α) ( (−1)^α / i^p ) F_{D,n,α},   (41)
+ where,
+ F_{D,n,α} = ∫_{−∞}^{∞} dρ [ e^{−i(2Ω/ω)ρ} / ( ρ² + (ωT/2)² ) ] G_{D,n,α}(ρ).   (42)
+ Figure 3: Plot of the finite-time response F^{(n)}_{AdS_4}(Ω, T) of the UDW particle detector in AdS spacetime against acceleration a. (From left to right) the 1st, 2nd and 3rd columns show the plots for different values of n, with n = 1, n = 2 and n = 3, respectively. Each row (from top to bottom) shows the variation of the response function with changing Ω (Ω = 1, ..., 4; fixed k = 2, T = 3), changing k (k = 0, 0.5, 1.0, 1.5; Ω = 1, T = 3) and changing T (T = 5, 10, 15; Ω = 1, k = 2), respectively.
+ Next, we evaluate F_{D,n,α} by computing the contour integral over a semi-circular contour enclosing the lower half of the complex ρ plane (shown in Fig. 1). Thus we obtain,
+ F_{D,n,α}(T) = −2πi × { sum of the residues of e^{−i(2Ω/ω)ρ} G_{D,n,α}(ρ) / [ ρ² + (ωT/2)² ] at ρ = −iπr, ±A − iπr (where r = 1, 2, ...) and at ρ = −iωT/2 }
+ = −2πi × { lim_{ρ→−iωT/2} e^{−i(2Ω/ω)ρ} / [ sinh^q(A + ρ) sinh^q(A − ρ) sinh^p(ρ) ( ρ − iωT/2 ) ]
+ + Σ_{r=1}^{∞} [ lim_{ρ→−iπr−A} ( η(q − 1)/Γ(q) ) ( (1/cosh(ρ + A)) d/dρ )^{q−1} e^{−i(2Ω/ω)ρ} / { cosh(A + ρ) sinh^q(A − ρ) sinh^p(ρ) [ ρ² + (ωT/2)² ] }
+ + lim_{ρ→−iπr+A} ( η(q − 1)/Γ(q) ) ( (−1/cosh(A − ρ)) d/dρ )^{q−1} ( −e^{−i(2Ω/ω)ρ} ) / { sinh^q(A + ρ) cosh(A − ρ) sinh^p(ρ) [ ρ² + (ωT/2)² ] }
+ + lim_{ρ→−iπr} ( η(p − 1)/Γ(p) ) ( (1/cosh ρ) d/dρ )^{p−1} e^{−i(2Ω/ω)ρ} / { sinh^q(A + ρ) sinh^q(A − ρ) cosh(ρ) [ ρ² + (ωT/2)² ] } ] }.   (43)
+ Here, η(p) is the Heaviside step function and Γ(p) is the gamma function. In our previous article we were able to solve the four-dimensional response function in AdS analytically when we chose an infinite switching time, i.e. T = ∞. However, it is not possible to calculate the detector response analytically for a finite switching time, so we have evaluated the response function numerically and plotted it over different ranges of the parameter space.
+ In figure 2 we plot the detector response as a function of the energy gap of the two-level UDW detector; the three columns correspond to three values of n. We can see that as the detector energy gap increases, the response goes to zero. In the first row we have fixed the curvature and switching time while varying the acceleration. We can clearly see that the response weakens for greater values of n, but if we increase the acceleration of the detector, the temperature of the radiation also increases; therefore the first row of figure 2 shows that the response function with respect to Ω takes greater values when the acceleration is increased. Similar trends are noticed in the other two rows of figure 2. It is very interesting to see that as the curvature of AdS spacetime goes to zero, the detector response rises; the completely opposite trend is noticed in dS spacetime. In both cases we get a better response if we can "turn on" the detector for a longer time.
+ As expected from the previous discussion, if we accelerate the detector more and more, we should obtain the best response from the detector; we see this in figure 3. In the first row of figure 3 we notice that as the energy gap rises, the response weakens: even if the detector is highly accelerated, it is difficult to excite it if the energy gap is too large, and this problem is more pronounced for non-linear coupling between the matter field and the detector. The trend of the response function with curvature is also interesting: as the curvature increases, it becomes harder to excite the detector. However, we should point out that AdS spacetime is a constant-curvature spacetime, so when we plot for different values of k we are comparing response functions for different AdS spacetimes with different curvatures; we do not mean that we are analysing the detector spectrum in a gravitational background with varying curvature.
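The numerical evaluation described above can be sketched by brute force: below, eq. (23) is evaluated for a linearly coupled detector (n = 1) in D = 4 AdS using the correlator of eq. (31) with a small iϵ regulator and a plain Riemann sum. The parameter values are illustrative only; this is not the code behind the figures.

```python
import numpy as np

# Brute-force sketch of the finite-time response, eq. (23), for n = 1, D = 4
# AdS, with the correlator of eq. (31) regulated by a small i*epsilon.
a, k, T, Omega, eps = 4.0, 1.0, 3.0, 1.0, 0.05
w = np.sqrt(a**2 - k**2)
A = np.arcsinh(w / k)                       # sinh A = w/k

def G_AdS4(dt):                             # eq. (31) with D = 4, Gamma(1) = 1
    u = w * dt / 2 - 1j * eps
    pref = w**2 / (4 * np.pi)**2
    return pref * (1 / (1j**2 * np.sinh(u)**2)
                   - 1 / (np.sinh(A + u) * np.sinh(A - u)))

dt = np.linspace(-60.0, 60.0, 600_001)      # symmetric grid resolving eps
dx = dt[1] - dt[0]
integrand = G_AdS4(dt) * np.exp(-1j * Omega * dt) / (dt**2 + T**2)
F = (np.pi * T**3 / 4) * np.sum(integrand) * dx
print(F)
```

By the conjugate symmetry of the integrand the result is real up to floating-point and O(ϵ) artifacts; the printed number only illustrates that the regulated integral is finite.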
+ Figure 4: Plot of the finite-time response F^{(n)}_{AdS_4}(Ω, T) of the UDW particle detector in AdS spacetime against the curvature k of AdS, for massless scalar fields. (From left to right) the 1st, 2nd and 3rd columns show the plots for the n = 1, n = 2 and n = 3 couplings to the scalar field. Each row (from top to bottom) shows the variation of the response function with changing a (a = 4.5, 5, ..., 7; T = 3, Ω = 1), changing Ω (Ω = 1, ..., 4; a = 4, T = 3) and changing T (T = 1.0, 1.5, ..., 3.5; a = 4, Ω = 1), respectively.
+ 1.2 Scalar field in dS space
+ We now use the same two-level detector setup as before, but we consider the background spacetime to be de Sitter spacetime, whose metric can be expressed in the so-called flat slicing as,
+ ds² = (1/(k²η²)) ( dη² − dx_1² − dx_2² − ... − dx_{D−1}² ).   (44)
+ We again consider a real scalar field Φ conformally coupled to the dS gravitational background, with the same matter field action and the same interaction Hamiltonian as in the previous section (eq. (19)). Just as before, we need to know the two-point correlator (Wightman function) in order to evaluate the detector response. The Wightman function in the "Euclidean" vacuum |0⟩ can easily be obtained for a conformally coupled real scalar field [32, 31],
+ W^{(2)}_{dS_D}(x, x') = ⟨0| Φ(x(η)) Φ(x(η')) |0⟩ = K_D v^{1−D/2},   (45)
+ where,
+ K_D = k^{D−2} Γ(D/2 − 1) / ( 2 (2π)^{D/2} ).   (46)
+
788
+ n=1
789
+ n=2
790
+ 0.08
791
+ n=3
792
+ 0.25
793
+ 0.10
794
+ a=4.5
795
+ 0.20
796
+ a=5
797
+ 0
798
+ 0.06
799
+ a=5.5
800
+ 0.15
801
+ a=6
802
+ 0.06
803
+ 0.04
804
+ a=6.5
805
+ 0.10
806
+ a=7
807
+ 0
808
+ 0.02
809
+ 0.05
810
+ 0.02
811
+ 2
812
+ Q=1
813
+ 0.0025
814
+ Q=2
815
+ =U
816
+ 0.0020
817
+ Q=4
818
+ 0.0015
819
+ 0.0010
820
+ 0.0005
821
+ T=1.0
822
+ T=1.5
823
+ T=2.0
824
+ T=2.5
825
+ T=3.0
826
+ T=3.5A PREPRINT - JANUARY 23, 2023
827
+ Here ν is the conformal invariant.
828
+ ν = (⃗x − ⃗x′)2 − (η − η′ − iϵ)2
829
+ 2ηη′
830
+ .
831
+ (47)
832
+ In order to examine Unruh radiation through the detectors need to move through a constant
833
+ accelerated path in dS background. An example of accelerating path in dS spacetime with constant
834
+ linear acceleration (see Appendix 5), we can choose the following,
835
+ η(τ) = τ0eωτ , x1(τ) = a
836
+ ωτ0eωτ , x2 = x3 = . . . = xD−1 = 0,
837
+ (48)
838
+ where ω =
839
+
840
+ a2 + k2. Plugging η and xi from eq. (48) to eq. (47), the conformal invariant ν takes
841
+ the following form in this case,
842
+ ν = ( a
843
+ ωτ0eωτ − a
844
+ ωτ0eωτ ′)2 − (τ0eωτ − τ0eωτ ′)2
845
+ 2τ 2
846
+ 0 eωτeωτ ′
847
+ = 1
848
+ 2
849
+ � a2
850
+ ω2 − 1
851
+ � �eωτ − eωτ ′
852
+ eω(τ+τ ′)/2
853
+ �2
854
+ = − H2
855
+ 2ω2
856
+
857
+ eω∆τ/2 − e−ω∆τ/2�2
858
+ = −2H2
859
+ ω2 sinh2(ω∆τ/2)
860
+ (49)
861
+ Following eq. (45), the two point function for uniformly accelerating paths becomes,
862
+ GdSD(∆τ)
863
+ =
864
+ ωD−2Γ( D
865
+ 2 − 1)
866
+ (4π)
867
+ D
868
+ 2
869
+ 1
870
+ iD−2 sinhD−2(ω∆τ/2 − iϵ).
871
+ (50)
872
+ We can define the transition probability rate or detector’s response function (per unit time)2 for
873
+ interaction Lagrangian (7) of scalars[27],
874
+ F(n)
875
+ dSD =
876
+ � ∞
877
+ −∞
878
+ d∆τe−iE∆τW (2n)
879
+ dSD (∆τ).
880
+ (51)
881
+ Here, W (2n)
882
+ dSD (τ − τ ′) = ⟨0| : Φn(x(τ)) :: Φn(x(τ ′)) : |0⟩ is the 2n correlator. The 2n-point function
883
+ W (2n)
884
+ dSD is related to the the Wightman function in the following way by Wick’s theorem [29],
885
+ W (2n)
886
+ dSD (x, x′) = (n!)
887
+
888
+ W (2)
889
+ dSD (x, x′)
890
+ �n
891
+ .
892
+ (52)
893
+ So, the 2n correlator becomes,
894
+ W (2n)
895
+ dSD (∆τ) = (n!)Kn
896
+ D
897
+
898
+ ω
899
+
900
+ 2H
901
+ �n(D−2)�
902
+ 1
903
+ iD−2 sinhD−2( ω∆τ
904
+ 2
905
+ − iϵ)
906
+ �n
907
+ .
908
+ (53)
909
+ Now the KMS condition can be easily checked using the equation (53).
910
+ W (2n)
911
+ dSD (∆τ + 2πi
912
+ ω )
913
+ =
914
+ (n!)Kn
915
+ D
916
+
917
+ ω
918
+
919
+ 2H
920
+ �n(D−2)�
921
+ 1
922
+ iD−2 sinhD−2( ω
923
+ 2
924
+
925
+ ∆τ + 2πi
926
+ ω
927
+
928
+ − iϵ)
929
+ �n
930
+ =
931
+ (−1)nDW (2n)
932
+ dSD (∆τ).
933
+ (54)
934
+ 2In this scenario, we are considering the detector can be switched on for infinite time.
935
+ 11
936
+
937
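The (anti)periodicity of eq. (54) is easy to confirm numerically for the sinh-form correlator. A minimal sketch with illustrative parameters; the constant prefactors are dropped since they cancel in the relation:

```python
import numpy as np

# Numerical check of the KMS (anti)periodicity, eq. (54):
# W(dtau + 2*pi*i/w) = (-1)**(n*D) * W(dtau), for the correlator of eq. (53)
# with prefactors dropped (they cancel in the relation).
def W(dtau, n, D, w, eps=1e-8):
    s = np.sinh(w * dtau / 2 - 1j * eps)
    return (1.0 / (1j ** (D - 2) * s ** (D - 2))) ** n

n, D, w = 1, 5, 2.0          # odd n*(D-2): the anti-periodic case
dtau = 0.37
lhs = W(dtau + 2j * np.pi / w, n, D, w)
rhs = (-1) ** (n * D) * W(dtau, n, D, w)
print(lhs, rhs)              # the two sides agree
```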
+ Figure 5: Plot of the finite-time response F^{(n)}_{dS_4}(Ω, T) of the UDW particle detector in dS spacetime against energy Ω. (From left to right) the 1st, 2nd and 3rd columns show the plots for different values of n, with n = 1, n = 2 and n = 3. Each row (from top to bottom) shows the variation of the response function with changing a (a = 0, 1, ..., 6; fixed k = 3, T = 3), changing k (k = 0, 1, ..., 6; fixed a = 3, T = 3) and changing T (T = 1, 2, ..., 6; fixed a = 3, k = 3), respectively.
This behavior is similar to that in AdS space for nonlinear coupling [33], with the major difference that the radiation temperature is set by $\omega = \sqrt{a^2 + k^2}$ [3]. We can obtain the Unruh-DeWitt detector response function by taking $\alpha \to 0$ in equation (37) of [33]:
949
\[
\mathcal{F}^{(n)}_{dS_D} = n!\, K_D^n \left(\frac{\omega}{2H}\right)^{n(D-2)} \frac{(-1)^{n(D-2)+1}}{i^{n(D-2)}}\, \frac{\omega}{2}\, I_{D,n}\, \frac{1}{e^{2\pi E/\omega} - (-1)^{n(D-2)}}\,, \qquad (55)
\]
966
where,
\[
I_{D,n} = 2\pi i \times \frac{1}{\Gamma(n(D-2))} \lim_{\rho \to 0} \left[\left(\frac{1}{\cosh\rho}\,\frac{d}{d\rho}\right)^{n(D-2)-1} \frac{e^{-i\frac{2E}{\omega}\rho}}{\cosh\rho}\right]. \qquad (56)
\]
982
+ Finally, to calculate the finite-time Unruh-DeWitt detector response function for dS spacetime, we
983
1005
$\mathcal{F}^{(n)}_{dS_4}(\Omega, T)$ vs. $a$
1007
+ Figure 6: Plot of the Finite-Time response of the UDW Particle detector in dS Space-time against acceleration a. (From
1008
+ left to right) 1st, 2nd and 3rd columns show the plots for different values of n, with n = 1, n = 2 and n = 3. Each row
1009
+ (from top to bottom) shows the variation of the response function with changing Ω (fixed k = 3, T = 3), changing k
1010
+ (Ω = 1, T = 3) and changing T (Ω = 1, k = 3) respectively.
1011
plug the $2n$-point correlator $G^{(n)}_{dS_D}$ from eq. (53) into eq. (51), which gives,
1013
\[
\begin{aligned}
\mathcal{F}^{(n)}_{dS_D}(\Omega, T)
&= \frac{\pi T^3}{4} \int_{-\infty}^{\infty} d(\Delta\tau)\, \frac{G^{(n)}_{dS_D}(\Delta\tau)}{\Delta\tau^2 + T^2}\, e^{-i\Omega\Delta\tau} \\
&= \frac{\pi T^3 n!}{4} \left[\frac{\Gamma(D/2-1)}{(4\pi)^{D/2}}\right]^n \left(\frac{\omega}{i}\right)^{n(D-2)} \int_{-\infty}^{\infty} d(\Delta\tau)\, \frac{e^{-i\Omega\Delta\tau}}{\Delta\tau^2 + T^2}\, \frac{1}{\sinh^{n(D-2)}\!\left(\frac{\omega\Delta\tau}{2} - i\epsilon\right)} \\
&= \frac{\pi T^3 n!}{4} \left[\frac{\Gamma(D/2-1)}{(4\pi)^{D/2}}\right]^n \left(\frac{\omega}{i}\right)^{n(D-2)} \left(\frac{\omega}{2}\right) \underbrace{\int_{-\infty}^{\infty} d\rho\, \frac{e^{-2i\Omega\rho/\omega}}{\rho^2 + (\omega T/2)^2}\, \frac{1}{\sinh^{n(D-2)}(\rho - i\epsilon)}}_{F_{D,n}} \\
&= \frac{\pi T^3 n!}{4} \left[\frac{\Gamma(D/2-1)}{(4\pi)^{D/2}}\right]^n \left(\frac{\omega}{i}\right)^{n(D-2)} \left(\frac{\omega}{2}\right) \cdot F_{D,n}
\end{aligned} \qquad (57)
\]
1066
where,
\[
F_{D,n} = \int_{-\infty}^{\infty} d\rho\, \frac{e^{-2i\Omega\rho/\omega}}{\rho^2 + (\omega T/2)^2}\, \frac{1}{\sinh^{n(D-2)}(\rho - i\epsilon)}. \qquad (58)
\]
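The integral (58) can be estimated by direct numerical integration; the sketch below assumes SciPy, and the regulator $\epsilon$ and the finite cutoff standing in for the infinite range are illustrative numerical choices, not values fixed by the paper:

```python
import numpy as np
from scipy.integrate import quad

def F_Dn(Omega, omega=1.0, T=3.0, D=4, n=1, eps=0.05, cutoff=15.0):
    """Numerical estimate of the integral F_{D,n} of eq. (58).

    The i*eps prescription regulates the sinh singularity at rho = 0, and the
    integrand decays like exp(-n(D-2)|rho|), so a finite cutoff suffices.
    """
    m = n * (D - 2)

    def integrand(rho):
        return np.exp(-2j * Omega * rho / omega) / (
            (rho**2 + (omega * T / 2)**2) * np.sinh(rho - 1j * eps)**m)

    re, _ = quad(lambda r: integrand(r).real, -cutoff, cutoff,
                 points=[0.0], limit=400)
    im, _ = quad(lambda r: integrand(r).imag, -cutoff, cutoff,
                 points=[0.0], limit=400)
    return re + 1j * im
```

For even $n(D-2)$ the symmetry $\rho \to -\rho$ makes $F_{D,n}$ real, which is a useful cross-check on the numerics.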
1076
+ Finally, we evaluate the finite time response function F(n)
1077
+ dSD numerically from (57) and plot it with
1078
+ respect to energy gap, acceleration, and curvature as before. Just like the case of AdS, we can also
1079
1085
$\mathcal{F}^{(n)}_{dS_4}(\Omega, T)$ vs. $k$
1087
+ Figure 7: Plot of the Finite-Time response of the UDW Particle Detector in dS Space-time against the curvature of
1088
+ space-time (k) for massless scalar fields. (From left to right) 1st, 2nd and 3rd columns show the plots for different
1089
+ values of n, with n = 1, n = 2 and n = 3. Each row (from top to bottom) shows the variation of the response function
1090
+ with changing Ω (a = 3, T = 3), changing a (T = 3, Ω = 1), and changing T (a = 3, Ω = 1) respectively.
1091
see that as the energy gap increases the response function decreases (Fig. 5). In the first row of figure 5 we can also see that the response function is larger for higher values of the acceleration.
1093
2 Finite time response of UDW detector: Dirac fields
1095
2.1 Dirac fields in AdS
1097
In a similar fashion we can analyse the response function for fermions in AdS spacetime minimally coupled to background gravity. The fermionic matter field action is
\[
S_0 = \int d^D x\, \sqrt{|g|}\, \bar\Psi\, i\slashed{D}\, \Psi. \qquad (59)
\]
1105
We can consider the usual interaction Hamiltonian [20],
\[
H_{\mathrm{Int}} = \lambda\, \chi_T(\tau)\, m(\tau)\, \mathcal{O}_\Psi[x(\tau)]\,, \qquad (60)
\]
1108
+ Here, the operator OΨ[x(τ)] is the normal ordered bispinor,
1109
\[
\mathcal{O}_\Psi[x(\tau)] = \,:\bar\Psi[x(\tau)]\,\Psi[x(\tau)]: \qquad (61)
\]
1111
1117
Using the Lorentzian switching function of eq. (8), the response function for the interaction Hamiltonian (60) takes the following form,
\[
\mathcal{F}^{(n)}(\Omega, T) = \frac{\pi T^3}{4} \int_{-\infty}^{\infty} d(\Delta\tau)\, \frac{S^{(4)}_D(\Delta\tau)}{\Delta\tau^2 + T^2}\, e^{-i\Omega\Delta\tau}. \qquad (62)
\]
1127
Here, the four point function $S^{(4)}_D$ is
\[
S^{(4)}_D(x(\tau), x(\tau')) = \langle 0|\, :\!\bar\Psi_a(x(\tau))\Psi_a(x(\tau))\!:\; :\!\bar\Psi_b(x(\tau'))\Psi_b(x(\tau'))\!:\, |0\rangle = \mathrm{Tr}\!\left[S^+(x, x')\, S^-(x', x)\right]. \qquad (63)
\]
1144
+ As discussed in [23] for fermions in AdS spacetime,
1145
\[
S^{(2)}_D(\Delta\tau) = N\, \frac{(\Gamma(D/2))^2}{\Gamma(D-1)}\, G_{AdS_{2D}}(\Delta\tau). \qquad (64)
\]
1149
From eq. (100) we can then conclude that the detector response function for fermions can be related to the response function for scalars,
\[
\mathcal{J}_{AdS_D}(\Delta\tau) = N\, \frac{(\Gamma(D/2))^2}{\Gamma(D-1)}\, \mathcal{F}^{(1)}_{AdS_{2D}}(\Delta\tau). \qquad (65)
\]
1155
In the next section we study fermions in dS spacetime and work out relations similar to eq. (64), thereby demonstrating that eq. (65) holds for maximally symmetric spacetimes. The proof is similar to the AdS case, but we demonstrate it explicitly for the sake of completeness.
1158
2.2 Dirac fields in dS
1160
In order to study the Dirac field in de Sitter spacetime we choose a local Lorentz frame (vielbein), defined as $e^a_\mu = \delta^a_\mu/(H\tau)$, such that $g_{\mu\nu} = e^a_\mu e^b_\nu \eta_{ab}$ reproduces the de Sitter metric. Here Latin letters $a, b$ correspond to local orthonormal flat coordinates and Greek letters $\mu, \nu$ to the de Sitter coordinates; both run from $0$ to $D-1$. Also $\eta_{ab} = \mathrm{diag}(+1, -1, \ldots, -1)$ is the local flat metric. The vielbeins obey the usual orthonormality relations. Now, the curved-space $\Gamma$ matrices and the covariant derivative are defined as,
\[
\Gamma^\mu = e^\mu_a \gamma^a, \qquad D_\mu = \partial_\mu + \frac{1}{2}\,\omega^{bc}_\mu \Omega_{bc}, \qquad (66)
\]
1176
where $\gamma^a$ are the flat-spacetime gamma matrices, and $\Omega_{bc}$ is built from the commutator of the $\gamma$ matrices,
\[
\Omega_{bc} = \frac{1}{4}\left[\gamma_b, \gamma_c\right] \qquad (67)
\]
1181
and the spin connections $\omega^{bc}_\mu$ are given by,
\[
\omega^{ab}_\mu = e^{a\lambda}\left(\partial_\mu e^b_\lambda - \left\{{}^{\alpha}_{\ \mu\lambda}\right\} e^b_\alpha\right) \qquad (68)
\]
1195
and $\left\{{}^{\alpha}_{\ \mu\lambda}\right\}$ are the Christoffel symbols of the dS spacetime metric, eq. (1). Here $\Gamma^\mu$ and $\gamma^a$ satisfy the well-known Clifford algebra,
\[
\{\Gamma^\mu, \Gamma^\nu\} = 2 g^{\mu\nu} I_{N\times N}, \qquad \{\gamma^b, \gamma^c\} = 2\eta^{bc} I_{N\times N}, \qquad (69)
\]
1207
1210
$S_{AdS_4}(\Omega, T)$
1211
+ Figure 8: Plot of the Finite-Time response of the UDW Particle Detector in AdS Space-time coupled to a fermionic
1212
+ Field. Each row (from top to bottom) shows the plot of the response function against Ω (varying a[T = 3, k = 1],
1213
+ varying k[T = 3, a = 4] and varying T [a = 4, k = 1]), against k (varying a[Ω = 1, T = 3], varying Ω[a = 4, T = 3]
1214
+ and varying T [a = 4, Ω = 1]), and against a (varying Ω[k = 2, T = 3], varying k[Ω = 1, T = 3] and varying
1215
+ T [k = 2, Ω = 1]) respectively.
1216
with,
\[
N = \begin{cases} 2^{\frac{D}{2}} & D \text{ is even} \\[2pt] 2^{\frac{D-1}{2}} & D \text{ is odd.} \end{cases} \qquad (70)
\]
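The dimension-dependent factors appearing in (70) and in the fermion-scalar relation (64)-(65) are easy to tabulate; a small helper sketch (the function names here are ours, purely for illustration):

```python
import math

def N_spinor(D):
    """Spinor-space dimension N of eq. (70)."""
    return 2 ** (D // 2) if D % 2 == 0 else 2 ** ((D - 1) // 2)

def fermion_scalar_factor(D):
    """Numerical factor N (Gamma(D/2))^2 / Gamma(D-1) relating the fermionic
    response in D dimensions to the scalar response in 2D dimensions,
    eqs. (64)-(65)."""
    return N_spinor(D) * math.gamma(D / 2) ** 2 / math.gamma(D - 1)
```

For example, in $D = 4$ the factor is $4 \cdot \Gamma(2)^2/\Gamma(3) = 2$.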
1228
+ In dS spacetime, the Dirac operator takes the form,
1229
\[
\slashed{D} = \Gamma^\mu D_\mu \equiv e^\mu_a \gamma^a\!\left(\partial_\mu + \frac{1}{2}\,\omega^{bc}_\mu \Omega_{bc}\right) = k\eta\left(\gamma^a\partial_a - \frac{D-1}{2\eta}\,\gamma^0\right). \qquad (71)
\]
1243
To derive this relation we used
\[
\left\{{}^{\alpha}_{\ \mu\lambda}\right\} = \frac{1}{\eta}\left(g^{\alpha 0} g_{\mu\lambda} - \delta^\alpha_\lambda \delta^0_\mu - \delta^\alpha_\mu \delta^0_\lambda\right), \qquad (72)
\]
\[
\omega^{ab}_\mu = \frac{g^{\beta 0}}{\eta}\left(e^b_\mu e^a_\beta - e^a_\mu e^b_\beta\right). \qquad (73)
\]
1274
In dS spacetime, Dirac fermions of mass $m$ minimally coupled to background gravity have the action,
\[
S_0 = \int d^D x\, \sqrt{|g|}\left(\bar\Psi\, i\slashed{D}\,\Psi - m\bar\Psi\Psi\right). \qquad (74)
\]
1282
1317
We can split the field $\Psi$ into two parts, namely positive and negative frequency modes,
\[
\Psi(x) = \Psi^+(x) + \Psi^-(x). \qquad (75)
\]
1320
These are solutions of the massive Dirac equation derived from the action (74),
\[
i\slashed{D}\Psi - m\Psi = 0\,. \qquad (76)
\]
1323
+ We will first look into the positive energy mode solutions ψ(+) of (76). These solutions are
1324
+ proportional to eipx, where x = (x1, . . . , xD−1) and p = (p1, . . . , pD−1), px = plxl, and the
1325
+ summation runs over l = 1, . . . , D − 1. We now decompose the positive energy modes into upper
1326
+ and lower components,
1327
\[
\psi^{(+)} = \begin{pmatrix} \psi_+(\eta) \\ \psi_-(\eta) \end{pmatrix} e^{ipx}. \qquad (77)
\]
1334
This can be done by using the following explicit gamma matrix representation,
\[
\gamma^0 = \begin{pmatrix} I_{(N/2)\times(N/2)} & 0_{(N/2)\times(N/2)} \\ 0_{(N/2)\times(N/2)} & -I_{(N/2)\times(N/2)} \end{pmatrix}, \qquad
\gamma^a = \begin{pmatrix} 0_{(N/2)\times(N/2)} & \sigma^a \\ -\sigma^a & 0_{(N/2)\times(N/2)} \end{pmatrix}, \qquad (78)
\]
1351
with $a = 1, \ldots, D-1$. The definitions of $I_{(N/2)\times(N/2)}$ and $0_{(N/2)\times(N/2)}$ can be found in [23]. The $\sigma^a$ satisfy
\[
\sigma^a\sigma^b + \sigma^b\sigma^a = 2\delta^{ab}\, I_{(N/2)\times(N/2)}, \qquad \sigma^{a\dagger} = \sigma^a.
\]
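For $D = 4$ (so $N = 4$ and the $\sigma^a$ are the Pauli matrices) the representation (78) can be checked directly against the Clifford algebra (69) and the trace identity $\mathrm{Tr}[\gamma^b\gamma^d] = N\eta^{bd}$ used later in (103). A minimal sketch (the helper name is ours):

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
# Pauli matrices play the role of sigma^a for D = 4, N = 4
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Representation (78): block-diagonal gamma^0, off-diagonal spatial gamma^a
gamma = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # local flat metric eta_ab

def anticommutator(A, B):
    return A @ B + B @ A
```

Running the anticommutators over all index pairs reproduces $2\eta^{ab} I_{4\times 4}$, and the traces give $4\eta^{ab}$.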
1358
The Dirac equation then reduces to the following form for the positive energy modes,
\[
\left(\partial_0 - \frac{D-1}{2\eta} \pm \frac{im}{k\eta}\right)\psi_\pm - i p_l \sigma^l \psi_\mp = 0. \qquad (79)
\]
1367
+ Using equation (79) we can deduce two different second order differential equations for the upper
1368
+ and lower components:
1369
+
1370
+ η2∂2
1371
+ 0 − (D − 1)η∂0 + p2η2 + (D − 1)2
1372
+ 4
1373
+ + D − 1
1374
+ 2
1375
+ + m2
1376
+ H2 ∓ im
1377
+ k
1378
+
1379
+ ψ± = 0.
1380
+ (80)
1381
Making the following substitution,
\[
\psi_\pm(\eta) = \eta^{D/2}\,\chi_\pm(\eta), \qquad (81)
\]
1384
equation (80) is reduced to the following form,
\[
\left[\eta^2\partial_0^2 + \eta\,\partial_0 + (p\eta)^2 - \left(\frac{im}{k} \pm \frac{1}{2}\right)^2\right]\chi_\pm = 0. \qquad (82)
\]
1394
Now we write the solutions of (82) as
\[
\chi_\pm(\eta) = C_\pm\, H^{(2)}_{\frac{im}{k} \pm \frac{1}{2}}(p\eta) \qquad (83)
\]
1402
where $H^{(2)}_\nu(x)$ is the Hankel function of the second kind of order $\nu$. We only consider the Hankel function of the second kind as the solution of equation (82) because for the positive energy solution in the Bunch-Davies vacuum we demand $\psi^{(+)} \propto e^{-ip\eta}$ [34]. The coefficients $C_+$ and $C_-$ are not independent of each other: we can find the relationship $C_- = -i p_b \sigma^b C_+/p$ by inserting the solutions (83) and (81) into equation (79). Moreover, we require additional quantum numbers apart from $p$ to specify all the solutions. In order to do that we need to fix the spinor $C_+$. Here we take an orthonormal basis for the spinors by choosing $C_+ = C^{(+)}_\beta w^{(\sigma)}$, where $C^{(+)}_\beta$ is a normalization constant and $w^{(\sigma)}$,
1414
1417
$\sigma = 1, \ldots, N/2$, are one-column matrices of $N/2$ rows with elements $w^{(\sigma)}_l = \delta_{l\sigma}$. Combined with the negative energy solutions, this set $\beta = (p, \sigma)$ forms a complete set of quantum numbers. As
1421
a result, the positive-energy mode functions for the Bunch-Davies vacuum take the following form,
\[
\psi^{(+)}_\beta = C^{(+)}_\beta\, \eta^{D/2}\, e^{ipx} \begin{pmatrix} w^{(\sigma)}\, H^{(2)}_{\frac{im}{k}+\frac{1}{2}}(p\eta) \\[6pt] -\dfrac{i p_b \sigma^b}{p}\, w^{(\sigma)}\, H^{(2)}_{\frac{im}{k}-\frac{1}{2}}(p\eta) \end{pmatrix}. \qquad (84)
\]
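As a sanity check, one can verify numerically that the Hankel functions in (83) indeed solve eq. (82). The sketch below does this for the massless case $\nu = 1/2$ (a real order, which is what SciPy's Hankel functions accept); the value of $p$ and the grid are illustrative:

```python
import numpy as np
from scipy.special import hankel2

nu, p = 0.5, 2.0                     # massless case: nu = im/k +- 1/2 -> +-1/2
eta = np.linspace(0.5, 3.0, 2001)    # conformal-time grid (illustrative)
h = eta[1] - eta[0]

chi = hankel2(nu, p * eta)           # chi(eta) = H^(2)_nu(p eta), eq. (83)
d1 = np.gradient(chi, h)
d2 = np.gradient(d1, h)

# residual of eq. (82): eta^2 chi'' + eta chi' + ((p eta)^2 - nu^2) chi
res = eta**2 * d2 + eta * d1 + ((p * eta)**2 - nu**2) * chi
```

The residual vanishes in the grid interior up to finite-difference error, confirming (83) as a solution of (82).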
1441
The coefficient $C^{(+)}_\beta$ in (84) is fixed by the normalization condition (using the inner product defined over a constant-time hypersurface) [35],
\[
\langle \psi^{(+)}_\beta, \psi^{(+)}_{\beta'} \rangle = \int d^{D-1}x\, \sqrt{\frac{|g|}{g_{00}}}\; \psi^{(+)\dagger}_\beta \psi^{(+)}_{\beta'} = \delta(p - p')\,\delta_{\sigma\sigma'}. \qquad (85)
\]
1458
After evaluating the inner product we find the normalization constant
\[
C^{(+)}_\beta = \frac{\sqrt{p}\, k^{\frac{D-1}{2}}\, e^{-i\varphi/2}}{\sqrt{8(2\pi)^{D-2}}}\; e^{\frac{m\pi}{2k}}. \qquad (86)
\]
1470
where $\varphi$ represents an arbitrary phase. The negative-energy mode functions $\psi^{(-)}$ can be obtained by imposing the condition $\psi^{(-)} \propto e^{-ipx+ip\eta}$. Following the same procedure as above we obtain the negative energy solution:
1473
\[
\psi^{(-)}_\beta = \frac{\sqrt{p}\, k^{\frac{D-1}{2}}\, e^{i\varphi/2}}{\sqrt{8(2\pi)^{D-2}}}\; e^{-\frac{m\pi}{2k}}\, \eta^{D/2}\, e^{-ipx} \begin{pmatrix} w^{(\sigma)}\, H^{(1)}_{\frac{im}{k}+\frac{1}{2}}(p\eta) \\[6pt] \dfrac{i p_b \sigma^b}{p}\, w^{(\sigma)}\, H^{(1)}_{\frac{im}{k}-\frac{1}{2}}(p\eta) \end{pmatrix}. \qquad (87)
\]
1496
In the above equation $H^{(1)}_\nu(x)$ is the Hankel function of the first kind of order $\nu$. We have therefore obtained the complete set of solutions of the Dirac equation (76), and we can now write an arbitrary spinor solution $\Psi(x)$ in operator form,
\[
\Psi(x) = \sum_{\sigma=1}^{N/2} \int dp \left[ b_\sigma(p)\,\psi^{(+)}_\sigma(p, x) + d^\dagger_\sigma(p)\,\psi^{(-)}_\sigma(p, x) \right] \qquad (88)
\]
\[
\bar\Psi(x) = \sum_{\sigma=1}^{N/2} \int dp \left[ b^\dagger_\sigma(p)\,\bar\psi^{(+)}_\sigma(p, x) + d_\sigma(p)\,\bar\psi^{(-)}_\sigma(p, x) \right], \qquad (89)
\]
1529
where
\[
b_\sigma(p)\,|0\rangle = d_\sigma(p)\,|0\rangle = 0, \qquad (90)
\]
\[
\bar\psi = \psi^\dagger\gamma^0, \qquad (91)
\]
\[
\{b_\sigma(p),\, b^\dagger_{\sigma'}(p')\} = \delta(p - p')\,\delta_{\sigma\sigma'}, \qquad (92)
\]
\[
\{d_\sigma(p),\, d^\dagger_{\sigma'}(p')\} = \delta(p - p')\,\delta_{\sigma\sigma'}. \qquad (93)
\]
1548
1551
Now we can give an explicit form of the Wightman functions of the fermionic field,
\[
\begin{aligned}
S^+(x, x') &= \langle 0|\,\Psi(x)\bar\Psi(x')\,|0\rangle = \sum_\sigma \int dp\; \psi^{(+)}_\sigma(p, x)\,\bar\psi^{(+)}_\sigma(p, x') \\
&= \sqrt{\frac{\eta'}{\eta}}\left[ i\left(\slashed{D} + \Gamma^0\right) + m \right] \left[ P^+ G_{dS_D}\!\left(x, x', \tfrac{im}{k} - \tfrac12\right) + P^- G_{dS_D}\!\left(x, x', \tfrac{im}{k} + \tfrac12\right) \right],
\end{aligned} \qquad (94)
\]
\[
\begin{aligned}
S^-(x, x') &= \langle 0|\,\bar\Psi(x')\Psi(x)\,|0\rangle = \sum_\sigma \int dp\; \psi^{(-)}_\sigma(p, x)\,\bar\psi^{(-)}_\sigma(p, x') \\
&= \sqrt{\frac{\eta'}{\eta}}\left[ i\left(\slashed{D} + \Gamma^0\right) + m \right] \left[ P^+ G_{dS_D}\!\left(x', x, \tfrac{im}{k} - \tfrac12\right) + P^- G_{dS_D}\!\left(x', x, \tfrac{im}{k} + \tfrac12\right) \right]
\end{aligned} \qquad (95)
\]
1610
where $P^\pm = (I_{N\times N} \pm \gamma^0)/2$ and
\[
G_{dS_D}\!\left(x, x', \tfrac{im}{k} \pm \tfrac12\right) = \int dp\; \frac{(\eta\eta')^{\frac{D-1}{2}}\, H^{D-2}}{8(2\pi)^{D-2}}\; e^{ip(x-x')}\, H^{(2)}_{\frac{im}{k} \pm \frac12}(p\eta)\, H^{(1)}_{\frac{im}{k} \pm \frac12}(p\eta'). \qquad (96)
\]
1627
Now, the Wightman function for massless fermions in the dS background is
\[
S^\pm(x, x') = \pm i\, \sqrt{\frac{\eta'}{\eta}}\left(\slashed{D} + \Gamma^0\right) G_{dS_D}(x, x') \qquad (97)
\]
1638
and $G_{dS_D}$ is as usual given by equations (45)-(47),
\[
G_{dS_D}(x, x') = G_{dS_D}\!\left(x', x, \tfrac12\right) = G_{dS_D}\!\left(x, x', -\tfrac12\right) = \frac{H^{D-2}\,\Gamma(D/2-1)}{2(2\pi)^{D/2}}\; v^{1-D/2}. \qquad (98)
\]
1645
From equation (97) we can further deduce that
\[
S^\pm(x, x') = \pm i\, \frac{H\,\eta_{ab}(x^a - x'^a)\gamma^b}{\sqrt{\eta\eta'}\; v}\left(\frac{D-2}{2}\right) G_{dS_D}(x, x') \qquad (99)
\]
1653
The detector moves with constant linear acceleration $a$ following (48), as before. In that case the detector response function for fermions (per unit time) for the interaction Lagrangian is given by [27],
\[
\mathcal{J}_{dS_D} = \int_{-\infty}^{\infty} d\Delta\tau\; e^{-iE\Delta\tau}\, S^{(2)}_D(\Delta\tau) \qquad (100)
\]
1662
1665
$S_{dS_4}(\Omega, T)$
1666
+ Figure 9: Plot of the Finite-Time response of the UDW Particle Detector in dS Space-time coupled to a fermionic
1667
+ Field. Each row (from top to bottom) shows the plot of the response function against Ω (varying a[T = 3, k = 3],
1668
+ k[T = 3, a = 3], T [a = 3, k = 3]), against a (varying Ω[k = 3, T = 3], k[Ω = 1, T = 3], T [k = 3, Ω = 1]), and
1669
+ against k (varying a[Ω = 1, T = 3], Ω[a = 3, T = 3], T [a = 3, Ω = 1]) respectively.
1670
where,
\[
S^{(2)}_D(x(\tau), x(\tau')) = \langle 0|\, :\!\bar\Psi_a(x(\tau))\Psi_a(x(\tau))\!:\; :\!\bar\Psi_b(x(\tau'))\Psi_b(x(\tau'))\!:\, |0\rangle = \mathrm{Tr}\!\left[S^+(x, x')\, S^-(x', x)\right] \qquad (101)
\]
\[
\begin{aligned}
&= \frac{k^2\,\eta_{ab}(x^a - x'^a)\,\eta_{cd}(x^c - x'^c)}{\eta\eta'\, v^2}\; \mathrm{Tr}\!\left[\gamma^b\gamma^d\right]\left(\frac{D-2}{2}\right)^2 G_{dS_D}(x, x')\, G_{dS_D}(x', x) \\
&= \frac{N k^2\,(D/2-1)^2\,\eta_{ab}\eta_{cd}\eta^{bd}(x^a - x'^a)(x'^c - x^c)}{\eta\eta'\, v^2}\; \frac{H^{2D-4}\,\Gamma(D/2-1)^2}{4(2\pi)^D}\; v^{2-D} \\
&= \frac{N\left[(D/2-1)\,\Gamma(D/2-1)\right]^2}{\Gamma(D-1)}\left[\frac{k^{2D-2}\,\Gamma(D-1)}{2(2\pi)^D}\right] v^{-D}\left[\frac{\eta_{ac}(x^a - x'^a)(x'^c - x^c)}{2\eta\eta'}\right] \\
&= \frac{N\,\Gamma(D/2)^2}{\Gamma(D-1)}\left[\frac{k^{2D-2}\,\Gamma(D-1)}{2(2\pi)^D}\right] v^{1-D}
= \frac{N\,(\Gamma(D/2))^2}{\Gamma(D-1)}\; G_{dS_{2D}}(x(\tau), x(\tau')).
\end{aligned} \qquad (102)
\]
1722
This is the four-point correlator of the fermionic field. Here the trace is taken over the spinor indices $a, b$, and we have used the identity,
\[
\mathrm{Tr}\!\left[\gamma^b\gamma^d\right] = N\eta^{bd} \qquad (103)
\]
1729
1739
So eq. (102) dictates that along any such path $S^{(2)}_D(\Delta\tau)$ takes the following form,
\[
S^{(2)}_D(\Delta\tau) = N\, \frac{(\Gamma(D/2))^2}{\Gamma(D-1)}\; G_{dS_{2D}}(\Delta\tau) \qquad (104)
\]
1745
From eq. (100) we can then conclude that the detector response function for fermions is related to the response function for scalars,
\[
\mathcal{J}_{dS_D} = N\, \frac{(\Gamma(D/2))^2}{\Gamma(D-1)}\; \mathcal{F}^{(1)}_{dS_{2D}}. \qquad (105)
\]
1751
+ Thus we have proved the following statement.
1752
The response function of a UDW detector (with uniform linear acceleration) quadratically coupled to a massless Dirac field in the (A)dS vacuum in $D \geq 2$ spacetime dimensions exactly equals the response function of a UDW detector linearly coupled to a massless scalar field in the $2D$-dimensional (A)dS vacuum, times a dimension-dependent numerical factor. Here, the fermionic field is minimally coupled to the background while the scalar field is conformally coupled to the background.
1758
Upon establishing the above statement for the fermionic response in maximally symmetric spacetime, we plot the response function in figures 8 and 9 with respect to different variables such as the energy gap $\Omega$, the acceleration $a$ and the curvature $k$. We again see the familiar pattern in the (A)dS fermionic response function: the response increases with increasing acceleration $a$ but decreases with increasing energy gap $\Omega$. The response decreases with increasing curvature $k$ in AdS, and the opposite pattern is observed in the dS background. Also, because the 4-dimensional fermionic response function is related to a higher (eight-) dimensional scalar response function, we find that accelerated UDW detectors respond better when coupled to fermionic fields than to bosonic fields.
1766
3 Huygens' principle, detector and Unruh radiation
1768
Huygens' principle is a well-studied phenomenon, especially in quantum field theory. It is natural to ask whether accelerated detectors observing thermal radiation respect Huygens' principle [25]. The radiation from massless scalars observed by UDW detectors does not obey Huygens' principle in flat spacetime in three (odd) dimensions [25]. However, this statement is well understood only for accelerated UDW detectors in flat spacetime with linear coupling. In this section we discuss the status of Huygens' principle for scalar theories with accelerated UDW detectors moving in maximally symmetric curved spacetimes with nonlinear interaction coupling (21) [24]. Huygens' principle has several different equivalent definitions, but we can work with the following one [36, 37, 38]:
1777
i) The theory maintains Huygens' principle if the causal propagator $G_c$ has support only on the lightcone.
ii) The theory violates Huygens' principle if $G_c$ is non-vanishing elsewhere.
1780
To better understand the status of Huygens' principle for the detected Unruh radiation, we first need to fix the coupling between the detector and the matter field (23). For the usual linearly coupled ($n = 1$) detector, the response function simply depends upon the Wightman function. Concentrating on linear coupling, the causal propagator for conformally coupled scalar theory can
1784
1787
+ Figure 10: (A) Support of the propagators of a massless scalar in even dimensions for odd or even coupling associated
1788
+ with (21). This also depicts support of the propagators of a massless scalar in odd dimensions with even coupling. (B)
1789
+ Support of the propagators of a massless scalar in odd dimensions for odd coupling associated with (21)
1790
be defined as,
\[
G_c(x, x') = W^{(2)}_D(x, x') - W^{(2)}_D(x', x) = \langle 0|\,[\Phi(x), \Phi(x')]\,|0\rangle \qquad (106)
\]
1795
In flat spacetime, the origin of obeying (or violating) Huygens' principle can be explained for the linearly coupled detector. In the case of linear coupling the detector response function is nothing but the Fourier transform of the Wightman function $W^{(2)}_D(x, x')$, which is proportional to $L^{1-d/2}$ for a bosonic field. Here $L$ is the squared distance between the two points $x$ and $x'$. When the two-point function is analytically continued to complex $\tau$, there is a branch cut for timelike separation in odd dimensions; this branch cut becomes a simple pole in even dimensions. Therefore, when we compute the response function in odd dimensions using (22), the linear detector encounters a branch cut in the integral expression and therefore reports a Fermi-Dirac distribution. The support of the propagator of scalar fields for the linear detector [25] can be analysed exactly with figure 10A. For even dimensions the support of $G_c$ is only on the lightcone, while in odd dimensions the support of $G_c$ extends over the entire timelike region. In the case of conformally coupled scalar fields on (A)dS spacetime, we can follow the same argument as before. For example, the two-point correlator of conformally coupled scalars in dS can be simply related to the flat-space correlator $W^{(2)}_M(x, x')$ using the following relation [35, 39],
1812
\[
W^{(2)}_{dS}(x, x') = (k^2\eta^2)^{\frac{d-2}{4}}\; W^{(2)}_M(x, x')\; (k^2\eta'^2)^{\frac{d-2}{4}} \qquad (107)
\]
1820
Therefore the pole structure for conformally coupled theories is similar to that of flat spacetime. In even-dimensional flat spacetime the causal correlator is given by [38],
\[
G_c = [\Phi(t, \vec{x}), \Phi(t + \Delta t, \vec{x} + \Delta\vec{x})] = \frac{i}{4\pi\Delta\vec{x}}\left[\delta(\Delta t + \Delta\vec{x}) - \delta(\Delta t - \Delta\vec{x})\right] \qquad (108)
\]
1826
In similar fashion, for odd-dimensional spacetime $D = d + 1$³, it is given by
\[
G_c = [\Phi(t, \vec{x}), \Phi(t + \Delta t, \vec{x} + \Delta\vec{x})] = \frac{\Gamma\!\left(\frac{d-1}{2}\right)}{2\pi^{\frac{d+1}{2}}}\; \frac{1}{L^{\frac{d-1}{2}}} \qquad (109)
\]
1837
³with even spatial dimension $d$.
1838
1841
We can see from eq. (108) that in even-dimensional Minkowski spacetime the support of $G_c$ is exactly on the lightcone, while in odd-dimensional spacetime the support of $G_c$ also extends inside the lightcone. An exactly similar result holds for conformally coupled scalar theory on an (A)dS background through (107). We now generalize these results to couplings $n \geq 1$. The causal propagator can be written for any coupling $n$ as,
\[
G_c(x, x') = W^{(2n)}(x, x') - W^{(2n)}(x', x) \qquad (110)
\]
1848
The $2n$-point correlators are related to the two-point correlators via (52), and thus the pole structure of the $2n$-point correlator is easily understood from this relation. In odd dimensions, the branch cut in the Wightman function produces a branch cut in the $2n$-point correlator for any odd coupling $n$. However, for even coupling the branch cut turns into a simple pole of the $2n$-point correlator. Huygens' principle for scalar fields is therefore obeyed, and no statistics inversion is seen in the Unruh radiation in odd dimensions, only when we choose even coupling. On the contrary, Huygens' principle is violated in odd dimensions with odd coupling, and statistics inversion also occurs.
1856
For even coupling, the pole structure of the $2n$-point correlator is also quite interesting. Through (52), the Wightman function has no branch cut in even dimensions, so for any even coupling the $2n$-point correlators have no surprise branch cut in even dimensions and Huygens' principle is satisfied trivially. In odd dimensions there is a branch cut in the Wightman function, but (52) immediately implies that this branch cut turns into a simple pole for any even $n$. As a result, for even $n$ Huygens' principle is always maintained in any dimension for the Unruh radiation of scalars, in flat space as well as for conformally coupled maximally symmetric scalar solutions. It is not possible to violate Huygens' principle with even coupling $n$: the support of the scalar solutions is, surprisingly, always on the lightcone for even coupling. Figure 10(B) depicts the status of Huygens' principle in scalar Unruh radiation with odd coupling and odd dimensions, where the support is not confined to the light cone. Interestingly, this is exactly the situation in which the statistics inversion happens through (54): in odd dimensions the scalar propagator under consideration is anti-periodic in $\beta = \frac{2\pi}{\omega}$ when we choose odd coupling. In any other scenario the $2n$-point propagator is periodic in $\beta$ and Huygens' principle is perfectly maintained in the Unruh radiation.
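The statistics of the detected radiation can be read off directly from the thermal factor in eq. (55); a one-line sketch makes the parity rule explicit (the function name is ours):

```python
import math

def thermal_factor(E, omega, n, D):
    """Thermal factor 1/(e^{2 pi E/omega} - (-1)^{n(D-2)}) from eq. (55):
    Bose-Einstein form when n(D-2) is even, Fermi-Dirac form when odd."""
    sign = (-1) ** (n * (D - 2))
    return 1.0 / (math.exp(2 * math.pi * E / omega) - sign)
```

Odd coupling in odd dimensions ($n(D-2)$ odd) gives the Fermi-Dirac form, i.e. the statistics inversion discussed above; every other combination gives the Bose-Einstein form.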
1873
Let us now focus on Huygens' principle for the fermionic theory with interaction Hamiltonian (61), which is the most commonly used interaction Hamiltonian between a fermionic theory and the detector [22]. The definition of Huygens' principle (written before eq. (106)) remains the same for the fermionic theory, but $G_c$ is now defined with an anti-commutator instead of the commutator of (106)⁴,
\[
G_c(x, x') = \langle 0|\,\{\chi(x), \chi(x')\}\,|0\rangle \qquad (111)
\]
1881
where,
\[
\chi(x) = \,:\bar\Psi_a(x)\Psi_a(x): \qquad (112)
\]
1884
The study of the support of the fermionic correlator on the lightcone is a bit troublesome, especially in curved spacetime, because of the gamma matrices. For the interaction Hamiltonian given by (61), the
1886
+ 4where the trace over spin index is assumed
1887
1890
Figure 11: Support of the four-point propagators of massless fermions in any dimension for the interaction Hamiltonian (60). This is a clear indication that Huygens' principle is always maintained for fermions.
1892
detector response function depends upon the four-point fermionic correlator, as seen explicitly from (62). In ref. [25], the author focused on the two-point function of the fermionic fields, instead of the four-point function, to understand the status of Huygens' principle for the Unruh radiation detected by Unruh-DeWitt detectors. As a result, it was concluded in ref. [25] that the Unruh radiation detected for fermions maintains Huygens' principle in odd dimensions while violating it in even dimensions. But the pole and branch-cut structure of the four-point fermionic correlator is quite different from that of the two-point correlator. In flat space [22], AdS [23], as well as in dS ((105)), one can easily see that the four-point fermionic correlator in $D$ dimensions is explicitly given by the two-point scalar correlator in $2D$ dimensions. Therefore, to understand the status of Huygens' principle for the Unruh radiation of the fermionic theory (60) in odd or even dimension $D$, we can instead think of massless scalars in $2D$ dimensions. The scalar Wightman function, when conformally coupled to the background (A)dS gravity solutions, has no branch cut in even dimensions. As a consequence, the four-point fermionic propagator, to which the detector is sensitive, always has support only on the light cone. It is quite surprising that the scalar theory fails to maintain Huygens' principle in odd dimensions with the usual linear coupling, while the fermionic theory always maintains Huygens' principle with the usual interaction Hamiltonian (60). This result holds for matter fields in flat spacetime as well as conformally coupled to maximally symmetric spacetimes. The reason our conclusion on Huygens' principle for fermionic Unruh radiation differs from ref. [25] is that their analysis was performed with the two-point function, whereas for the interaction Hamiltonian (60)⁵ one should actually analyse the four-point fermionic correlator. Because the detector response depends on the four-point fermionic correlator rather than the fermionic Wightman function (see (62)), the correlator under consideration satisfies a periodicity condition with period $\beta = \frac{2\pi}{\omega}$, and Huygens' principle is always maintained irrespective of the dimensionality of spacetime.
1916
4 Discussion and future works
1918
In this article we have completed the computation of the finite-time response of accelerated UDW detectors in maximally symmetric spacetimes. The behaviour of the response function with different parameters is systematically analysed in figures (2)-(9). We also concluded the analysis
1921
⁵see eq. no. (2.15b) of [25], where they use the same interaction Hamiltonian.
1922
1925
for the fermionic response function in a maximally symmetric background (see the boxed statement after (105)). The result is quite powerful, and it also allows us to determine the status of Huygens' principle for the fermionic Unruh radiation detected by a UDW detector moving in maximally symmetric spacetime. It is quite intriguing that Huygens' principle is always maintained for fermionic Unruh radiation minimally coupled to the background, as opposed to the minimally coupled scalar [15]. We are currently working to see whether such results for the fermionic response also hold when the UDW detectors move in other interesting curved spacetime solutions, such as black holes. As an application of the finite-time response we would also like to construct Unruh Otto engines [20] with the help of UDW detectors moving in maximally symmetric backgrounds. The variation of the response function in dS and AdS space with respect to curvature makes the situation quite interesting, and we are focusing on extracting the explicit conditions for completing an Otto cycle with positive work output. The recent claim that with entangled qubits one can build a more efficient Otto engine [40, 41] is quite exciting, and we are exploring the possibility of generalising it to maximally symmetric spacetimes. In the current manuscript we have worked with a scalar theory conformally coupled to background gravity, but we are in the process of computing the finite-time response function of UDW detectors for minimally coupled scalar theory [42].
1942
+ 5
1943
+ Appendix: Constant Acceleration Path in dS spacetime
1944
+ Here, we show that the path considered in eq. 48 is a constant acceleration path. The components
+ of the acceleration can be written as,
+ a^\mu = \frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\alpha\beta} \frac{dx^\alpha}{d\tau} \frac{dx^\beta}{d\tau}    (113)
+ where \Gamma^\mu_{\alpha\beta} are the Christoffel symbols of the second kind and summation over
+ repeated indices is implied. Writing the components out explicitly for the path in eq. 48, we have,
1958
+ a^{(0)} = \frac{d^2\eta}{d\tau^2} + \Gamma^0_{\alpha\beta} \frac{dx^\alpha}{d\tau} \frac{dx^\beta}{d\tau}
+         = \frac{d^2\eta}{d\tau^2} - \frac{1}{\eta} \left(\frac{d\eta}{d\tau}\right)^2 - \frac{1}{\eta} \left(\frac{dx^1}{d\tau}\right)^2
+         = \tau_0 \omega^2 e^{\omega\tau} - \frac{1}{\tau_0 e^{\omega\tau}} (\tau_0 \omega e^{\omega\tau})^2 - \frac{1}{\tau_0 e^{\omega\tau}} \left(\frac{a\tau_0}{\omega}\, \omega e^{\omega\tau}\right)^2
+         = -a^2 \tau_0 e^{\omega\tau}    (114)
+ a^{(1)} = \frac{d^2 x^1}{d\tau^2} + \Gamma^1_{\alpha\beta} \frac{dx^\alpha}{d\tau} \frac{dx^\beta}{d\tau}
+         = \frac{d^2 x^1}{d\tau^2} - \frac{2}{\eta}\, \frac{dx^1}{d\tau} \frac{d\eta}{d\tau}
+         = \frac{a\tau_0}{\omega}\, \omega^2 e^{\omega\tau} - \frac{2}{\tau_0 e^{\omega\tau}} \left(\frac{a\tau_0}{\omega}\, \omega e^{\omega\tau}\right) (\tau_0 \omega e^{\omega\tau})
+         = -a \tau_0 \omega e^{\omega\tau}    (115)
+ a^{(2)} = a^{(3)} = \dots = a^{(D-1)} = 0    (116)
2024
2027
+ So, the magnitude of the acceleration a becomes,
+ |a|^2 = -a_\mu a^\mu
+       = -g_{00} (a^0)^2 - g_{11} (a^1)^2
+       = -\frac{1}{H^2\eta^2}\, a^4 \tau_0^2 e^{2\omega\tau} + \frac{1}{H^2\eta^2}\, a^2 \tau_0^2 \omega^2 e^{2\omega\tau}
+       = -\frac{1}{H^2 \tau_0^2 e^{2\omega\tau}}\, a^2 \tau_0^2 e^{2\omega\tau} (a^2 - \omega^2)
+       = a^2    (117)
+ Hence, |a| = a and the acceleration along this path is uniform.
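The component computations in (114)-(115) can be cross-checked numerically. A minimal sketch in Python, assuming (from the text above) the path η = τ₀e^{ωτ}, x¹ = (aτ₀/ω)e^{ωτ} and the Christoffel symbols Γ⁰₀₀ = Γ⁰₁₁ = Γ¹₀₁ = −1/η of the conformally flat dS metric; the function name and the numeric parameter values are arbitrary illustrations:

```python
import math

def check(a=0.7, tau0=1.3, omega=0.9, tau=0.4, h=1e-4):
    # Path of eq. 48: eta = tau_0 e^{omega tau}, x^1 = (a tau_0 / omega) e^{omega tau}
    eta = lambda t: tau0 * math.exp(omega * t)
    x1 = lambda t: (a * tau0 / omega) * math.exp(omega * t)
    d1 = lambda f, t: (f(t + h) - f(t - h)) / (2 * h)            # central 1st derivative
    d2 = lambda f, t: (f(t + h) - 2 * f(t) + f(t - h)) / h ** 2  # central 2nd derivative
    G = -1.0 / eta(tau)  # Gamma^0_00 = Gamma^0_11 = Gamma^1_01 = -1/eta on the path
    a0 = d2(eta, tau) + G * d1(eta, tau) ** 2 + G * d1(x1, tau) ** 2
    a1 = d2(x1, tau) + 2.0 * G * d1(x1, tau) * d1(eta, tau)
    ok0 = abs(a0 - (-a ** 2 * tau0 * math.exp(omega * tau))) < 1e-5   # eq. (114)
    ok1 = abs(a1 - (-a * tau0 * omega * math.exp(omega * tau))) < 1e-5  # eq. (115)
    return ok0 and ok1

print(check())  # expect: True
```

The finite differences reproduce −a²τ₀e^{ωτ} and −aτ₀ωe^{ωτ} for any choice of the parameters, which is the content of (114)-(115).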
2045
+ 6
2046
+ Acknowledgments
2047
+ MMF’s research is supported by NSERC and in part by the Delta Institute of Theoretical Physics.
2048
+ The authors would like to thank Sowmitra Das and Onirban Islam for the discussions.
2049
+ References
2050
+ [1] Stanley Deser and Orit Levin. Mapping hawking into unruh thermal properties. Physical
2051
+ Review D, 59(6):064004, 1999.
2052
+ [2] Stanley Deser and Orit Levin. Accelerated detectors and temperature in (anti) de sitter spaces.
2053
+ Classical and Quantum Gravity, 14(L163), 1997.
2054
+ [3] Stanley Deser and Orit Levin. Equivalence of hawking and unruh temperatures and entropies
2055
+ through flat space embeddings. Classical and Quantum Gravity, 15(12):L85, 1998.
2056
+ [4] Thanu Padmanabhan. Cosmological constant—the weight of the vacuum. Physics Reports,
2057
+ 380(5-6):235–320, 2003.
2058
+ [5] Thanu Padmanabhan. Gravity and the thermodynamics of horizons. Physics Reports, 406(2):
2059
+ 49–125, 2005.
2060
+ [6] David Jennings. On the response of a particle detector in Anti-de Sitter spacetime. Class.
2061
+ Quant. Grav., 27:205005, 2010. doi: 10.1088/0264-9381/27/20/205005.
2062
+ [7] Grant Salton, Robert B Mann, and Nicolas C Menicucci. Acceleration-assisted entanglement
2063
+ harvesting and rangefinding. New Journal of Physics, 17(3):035001, 2015.
2064
+ [8] Keith K Ng, Robert B Mann, and Eduardo Martín-Martínez. New techniques for entanglement
2065
+ harvesting in flat and curved spacetimes. Physical Review D, 97(12):125011, 2018.
2066
+ [9] Erickson Tjoa and Eduardo Martín-Martínez. When entanglement harvesting is not really
2067
+ harvesting. Physical Review D, 104(12):125005, 2021.
2068
+ [10] Héctor Maeso-García, T Rick Perche, and Eduardo Martín-Martínez. Entanglement harvesting:
2069
+ Detector gap and field mass optimization. Physical Review D, 106(4):045014, 2022.
2070
+ [11] Elena Cáceres, Mariano Chernicoff, Alberto Güijosa, and Juan F Pedraza. Quantum fluc-
2071
+ tuations and the unruh effect in strongly-coupled conformal field theories. Journal of High
2072
+ Energy Physics, 2010(6):1–30, 2010.
2073
+ [12] Andreas Blommaert, Thomas G Mertens, and Henri Verschelde. Unruh detectors and quantum
2074
+ chaos in jt gravity. Journal of High Energy Physics, 2021(3):1–37, 2021.
2075
2078
+ [13] Aindriú Conroy.
2079
+ Unruh-dewitt detectors in cosmological spacetimes.
2080
+ arXiv preprint
2081
+ arXiv:2204.00359, 2022.
2082
+ [14] Eduardo Martin-Martinez and Nicolas C Menicucci. Cosmological quantum entanglement.
2083
+ Classical and Quantum Gravity, 29(22):224003, 2012.
2084
+ [15] Ana Blasco, Luis J. Garay, Mercedes Martin-Benito, and Eduardo Martin-Martinez. Violation
2085
+ of the Strong Huygen’s Principle and Timelike Signals from the Early Universe. Phys. Rev.
2086
+ Lett., 114(14):141103, 2015. doi: 10.1103/PhysRevLett.114.141103.
2087
+ [16] Jiatong Yan and Baocheng Zhang. Effect of spacetime dimensions on quantum entanglement
2088
+ between two uniformly accelerated atoms. Journal of High Energy Physics, 2022(10):1–29,
2089
+ 2022.
2090
+ [17] Anshuman Bhardwaj and Daniel E. Sheehy. Unruh Effect and Takagi’s Statistics Inversion in
2091
+ Strained Graphene. 9 2022.
2092
+ [18] Satoshi Ohya. Emergent anyon distribution in the unruh effect. Physical Review D, 96(4):
2093
+ 045017, 2017.
2094
+ [19] Enrique Arias, Thiago R. de Oliveira, and M. S. Sarandy. The Unruh Quantum Otto Engine.
2095
+ JHEP, 02:168, 2018. doi: 10.1007/JHEP02(2018)168.
2096
+ [20] Finnian Gray and Robert B Mann. Scalar and fermionic unruh otto engines. Journal of High
2097
+ Energy Physics, 2018(11):1–34, 2018.
2098
+ [21] Shin Takagi. Vacuum noise and stress induced by uniform acceleration hawking-unruh effect
2099
+ in rindler manifold of arbitrary dimension. Progress of Theoretical Physics Supplement, 88:
2100
+ 1–142, 1986.
2101
+ [22] Jorma Louko and Vladimir Toussaint. Unruh-dewitt detector’s response to fermions in flat
2102
+ spacetimes. Physical Review D, 94(6):064027, 2016.
2103
+ [23] Shahnewaz Ahmed and Mir Mehedi Faruk. Accelerated paths and unruh effect. part i. scalars
2104
+ and fermions in anti de sitter spacetime. Journal of High Energy Physics, 2021(9):1–46, 2021.
2105
+ [24] L. Sriramkumar. Odd statistics in odd dimensions for odd couplings. Mod. Phys. Lett. A, 17:
2106
+ 1059–1066, 2002. doi: 10.1142/S0217732302007545.
2107
+ [25] Hirosi Ooguri. Spectrum of Hawking Radiation and Huygens’ Principle. Phys. Rev. D, 33:
2108
+ 3573, 1986. doi: 10.1103/PhysRevD.33.3573.
2109
+ [26] L. Sriramkumar and T. Padmanabhan. Response of finite time particle detectors in noninertial
2110
+ frames and curved space-time. Class. Quant. Grav., 13:2061–2079, 1996. doi: 10.1088/
2111
+ 0264-9381/13/8/005.
2112
+ [27] Steven Weinberg. Gravitation and cosmology: principles and applications of the general theory
2113
+ of relativity. 1972.
2114
+ [28] Ivan M. Burbano, T. Rick Perche, and Bruno de S. L. Torres. A path integral formulation
2115
+ for particle detectors: the Unruh-DeWitt model as a line defect. JHEP, 03:076, 2021. doi:
2116
+ 10.1007/JHEP03(2021)076.
2117
+ [29] Ashok Das. Lectures on quantum field theory. World Scientific, 2020.
2118
+ [30] Sean M. Carroll. Spacetime and Geometry. Cambridge University Press, 7 2019. ISBN
2119
+ 978-0-8053-8732-2, 978-1-108-48839-6, 978-1-108-77555-7.
2120
2123
+ [31] Shin Takagi. Vacuum noise and stress induced by uniform acceleration: hawking-unruh effect
2124
+ in rindler manifold of arbitrary dimension. Progress of Theoretical Physics Supplement, 88:
2125
+ 1–142, 1986.
2126
+ [32] Raphael Bousso, Alexander Maloney, and Andrew Strominger. Conformal vacua and entropy
2127
+ in de sitter space. Physical Review D, 65(10):104039, 2002.
2128
+ [33] Shahnewaz Ahmed and Mir Mehedi Faruk. Accelerated paths and unruh effect. part i. scalars
2129
+ and fermions in anti de sitter spacetime. Journal of High Energy Physics, 2021(9):1–46, 2021.
2130
+ [34] Hael Collins. Fermionic α-vacua. Physical Review D, 71(2):024002, 2005.
2131
+ [35] AA Saharian. Quantum field theory in curved spacetime, 2020.
2132
+ [36] Karen Yagdjian. Huygens’ principle for the generalized Dirac operator in curved spacetime. J.
2133
+ Phys. A, 54(9):095204, 2021. doi: 10.1088/1751-8121/abdde9.
2134
+ [37] Paul Günther. Huygens’ principle and hyperbolic equations. Academic Press, 2014.
2135
+ [38] Robert H. Jonsson. Decoupling of Information Propagation from Energy Propagation. PhD
2136
+ Thesis, University of Waterloo, 2016.
2137
+ [39] Marcos Mariño. QFT in curved space, 2020.
2138
+ [40] Dipankar Barman and Bibhas Ranjan Majhi. Constructing an entangled Unruh Otto engine
2139
+ and its efficiency. JHEP, 05:046, 2022. doi: 10.1007/JHEP05(2022)046.
2140
+ [41] Gaurang Ramakant Kane and Bibhas Ranjan Majhi. Entangled quantum Unruh Otto engine is
2141
+ more efficient. Phys. Rev. D, 104(4):041701, 2021. doi: 10.1103/PhysRevD.104.L041701.
2142
+ [42] Bjorn Garbrecht and Tomislav Prokopec. Unruh response functions for scalar fields in de
2143
+ Sitter space. Class. Quant. Grav., 21:4993–5004, 2004. doi: 10.1088/0264-9381/21/21/016.
2144
UNFAT4oBgHgl3EQf2x7k/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
VdAzT4oBgHgl3EQfJ_tt/content/tmp_files/2301.01089v1.pdf.txt ADDED
@@ -0,0 +1,1277 @@
1
+ xDeepInt: a hybrid architecture for modeling the vector-wise
2
+ and bit-wise feature interactions
3
+ Yachen Yan
4
5
+ Credit Karma
6
+ San Francisco, California
7
+ Liubo Li
8
9
+ Credit Karma
10
+ San Francisco, California
11
+ ABSTRACT
12
+ Learning feature interactions is the key to success for the large-
13
+ scale CTR prediction and recommendation. In practice, handcrafted
14
+ feature engineering usually requires exhaustive searching. In order
15
+ to reduce the high cost of human efforts in feature engineering,
16
+ researchers propose several deep neural networks (DNN)-based ap-
17
+ proaches to learn the feature interactions in an end-to-end fashion.
18
+ However, existing methods either do not learn both vector-wise
19
+ interactions and bit-wise interactions simultaneously, or fail to
20
+ combine them in a controllable manner. In this paper, we propose
21
+ a new model, xDeepInt, based on a novel network architecture
22
+ called polynomial interaction network (PIN) which learns higher-
23
+ order vector-wise interactions recursively. By integrating subspace-
24
+ crossing mechanism, we enable xDeepInt to balance the mixture of
25
+ vector-wise and bit-wise feature interactions at a bounded order.
26
+ Based on the network architecture, we customize a combined opti-
27
+ mization strategy to conduct feature selection and interaction se-
28
+ lection. We implement the proposed model and evaluate the model
29
+ performance on three real-world datasets. Our experiment results
30
+ demonstrate the efficacy and effectiveness of xDeepInt over state-
31
+ of-the-art models. We open-source the TensorFlow implementation
32
+ of xDeepInt: https://github.com/yanyachen/xDeepInt.
33
+ CCS CONCEPTS
34
+ • Computing methodologies; • Machine learning; • Machine
35
+ learning approaches; • Neural networks;
36
+ KEYWORDS
37
+ CTR prediction, Recommendation System, Explicit Feature Interac-
38
+ tion, Deep Neural Network
39
+ ACM Reference Format:
40
+ Yachen Yan and Liubo Li. 2020. xDeepInt: a hybrid architecture for modeling
41
+ the vector-wise and bit-wise feature interactions. In San Diego 2020: ACM
42
+ SIGKDD Conference on Knowledge Discovery and Data Mining, August 22–27,
43
+ 2020, San Diego, CA. ACM, New York, NY, USA, 9 pages. https://doi.org/10.
44
+ 1145/nnnnnnn.nnnnnnn
45
+ Permission to make digital or hard copies of all or part of this work for personal or
46
+ classroom use is granted without fee provided that copies are not made or distributed
47
+ for profit or commercial advantage and that copies bear this notice and the full citation
48
+ on the first page. Copyrights for components of this work owned by others than ACM
49
+ must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
50
+ to post on servers or to redistribute to lists, requires prior specific permission and/or a
51
+ fee. Request permissions from [email protected].
52
+ DLP-KDD 2020, August 24, 2020, San Diego, California, USA
53
+ © 2020 Association for Computing Machinery.
54
+ ACM ISBN 978-1-4503-XXXX-X/18/06...$15.00
55
+ https://doi.org/10.1145/nnnnnnn.nnnnnnn
56
+ 1
57
+ INTRODUCTION
58
+ Click-through rate (CTR) prediction model [23] is an essential com-
59
+ ponent for the large-scale recommendation system, online adver-
60
+ tising and search ranking [7, 12, 19, 30]. In online marketplace
61
+ scenario, accurately estimating CTR will enable the recommen-
62
+ dation system to show users the items they prefer to view and
63
+ explore, which has a direct impact on both short-term revenue and
64
+ long-term user experience.
65
+ The input features (e.g., user id, item id, item category, site do-
66
+ main) of CTR prediction model are usually in a multi-field categori-
67
+ cal format [31] and transformed via field-aware one-hot encoding
68
+ and multi-hot encoding [34]. The representation of each field is a
69
+ sparse binary vector. The corresponding cardinality of each field
70
+ determines the dimension of the sparse vector. The concatenation
71
+ of these sparse vectors naturally generates high-dimensional and
72
+ sparse feature representations.
73
+ In CTR prediction model, exploring useful feature interactions
74
+ plays a crucial role in improving model performance[7, 8, 16, 22, 28].
75
+ Traditionally, data scientists search and build hand-crafted fea-
76
+ ture interactions to enhance model performance based on domain
77
+ knowledge. In practice, feature interactions of high-quality require
78
+ expensive cost of time and human workload [12]. Furthermore, it
79
+ is infeasible to manually extract all possible feature interactions
80
+ given a large number of features and high cardinality [7]. Therefore,
81
+ learning low-order and high-order feature interactions automati-
82
+ cally and efficiently in a high-dimensional and sparse feature space
83
+ becomes an essential problem for improving CTR prediction model
84
+ performance, in both academic and industrial communities.
85
+ Deep learning models have achieved great success in recom-
86
+ mender systems due to its great feature learning ability. Several
87
+ deep learning architecture has been proposed from both academia
88
+ and industry (e.g.,[7, 8, 13, 16, 20, 21, 24, 25, 28]). However, All the
89
+ existing models utilize DNNs as building block for learning high-
90
+ order implicit bit-wise feature interactions, without bounded order.
91
+ When modeling explicit feature interactions, the exiting approaches
92
+ only capture lower order explicit interactions efficiently. Learning
93
+ higher order typically requires higher computational cost.
94
+ In this paper, we propose a efficient neural network-based model
95
+ called xDeepInt to learn the combination of vector-wise and bit-
96
+ wise multiplicative feature interactions explicitly. Motivated by
97
+ polynomial regression, we design a novel Polynomial Interaction
98
+ Network layers to capture bounded degree vector-wise interactions
99
+ explicitly. In order to learn the bit-wise and vector-wise interactions
100
+ simultaneously in a controllable manner, we combine PIN with a
101
+ subspace-crossing mechanism, which gives a significant boost to
102
+ our model performance and brings more flexibility. The degree
103
+ arXiv:2301.01089v1 [cs.LG] 3 Jan 2023
104
+
105
+ DLP-KDD 2020, August 24, 2020, San Diego, California, USA
106
+ Yachen and Liubo
107
+ of bit-wise interactions grows with the number of subspace. In
108
+ summary, we make the following contributions in this paper:
109
+ • We design a novel neural network architecture named xDeepInt
110
+ that models the vector-wise interactions and bit-wise in-
111
+ teractions explicitly and simultaneously, dispensing with
112
+ jointly-trained DNN and nonlinear activation functions. The
113
+ proposed model is lightweight. But it yields superior per-
114
+ formance than many existing models with more complex
115
+ structure.
116
+ • Motivated by higher-order polynomial logistic regression,
117
+ we design a Polynomial-Interaction-Network (PIN) layer
118
+ which learns higher-order explicit feature interactions recur-
119
+ sively. The degrees of interactions are controlled by tuning
120
+ the number of PIN layers. An analysis is conducted to demon-
121
+ strate the polynomial approximation properties of PIN.
122
+ • We introduce a subspace-crossing mechanism for modeling
123
+ bit-wise interactions across different fields inside PIN layer.
124
+ The combination of PIN layer and the subspace-crossing
125
+ mechanism allows us to control the the degree of bit-wise in-
126
+ teractions. As the number of subspaces increases, our model
127
+ can dynamically learn more fine-grained bit-wise feature
128
+ interactions.
129
+ • We design an optimization strategy which is in harmony
130
+ with the architecture of the proposed model. We apply Group
131
+ Lasso FTRL to the embedding table, which shrinks the entire
132
+ rows to zero and achieves the feature selection. To optimize
133
+ weights in PIN layers, we apply FTRL directly. The sparsity
134
+ in weights results in selection of feature interactions.
135
+ • We conduct a comprehensive experiment on three real-world
136
+ datasets. The results demonstrate that xDeepInt outperforms
137
+ existing state-of-the-art models under extreme high-dimensional
138
+ and sparse settings. We also conduct a sensitivity analysis
139
+ on hyper-parameter settings of xDeepInt and ablation study
140
+ on integration of DNN.
141
+ 2
142
+ RELATED WORK
143
+ Deep learning based models have been applied for CTR prediction
144
+ problem in the industry since deep neural networks become domi-
145
+ nant in learning the useful feature representation of the mixed-type
146
+ input data and fitting model in an end-to-end fashion[30]. This
147
+ merit can reduce efforts in hand-crafted feature design and auto-
148
+ matically learn the feature interactions.
149
+ 2.1
150
+ Modeling Implicit Interaction
151
+ Most of the DNN-based methods map the high-dimensional sparse
152
+ categorical features and continuous features onto a low dimensional
153
+ latent space in the initial step. Without designing specific model
154
+ architecture, DNN-based method learns the high-order implicit fea-
155
+ ture interactions by feeding the stacked embedded feature vectors
156
+ into a deep feed-forward neural network.
157
+ Deep Crossing Network [24] utilizes residual layers in the feed-
158
+ forward structure to learn higher-order interactions with improved
159
+ stability. Some hybrid network architectures, including Wide &
160
+ Deep Network (WDL) [7], Product-based Neural Network (PNN) [20,
161
+ 21], Deep & Cross Network (DCN) [28], Deep Factorization Machine
162
+ (DeepFM) [8] and eXtreme Deep Factorization Machine (xDeepFM) [16]
163
+ employ feed-forward neural network as their deep component to
164
+ learn higher-order implicit interactions. The complement of the
165
+ implicit higher-order interaction improves the performance of the
166
+ network that only models the explicit interactions [2].
167
+ However, this type of approach detects all feature interactions at
168
+ the bit-wise level [16] implicitly, without efficiency. And the degree
169
+ of the interactions are not bounded.
170
+ 2.2
171
+ Modeling Explicit Interaction
172
+ Deep & Cross Network (DCN) [28] explores the feature interactions
173
+ at the bit-wise level in an explicit fashion. Specifically, each cross
174
+ layer of DCN constructs all cross terms to exploits the bit-wise
175
+ interactions. The number of recursive cross layers controls the
176
+ degree of bit-wise feature interactions.
177
+ Some recent models explicitly learn the vector-wise feature in-
178
+ teractions using a specific form of the vector product. Deep Fac-
179
+ torization Machine (DeepFM) [8] combines factorization machine
180
+ layer and feed-forward neural network through joint learning fea-
181
+ ture embedding. Factorization machine layer models the pairwise
182
+ vector-wise interaction between feature 𝑖 and feature 𝑗 by the inner
183
+ product of ⟨𝑥𝑖,𝑥𝑗⟩ = �𝑘
184
+ 𝑡=1 𝑥𝑖𝑡𝑥𝑗𝑡. Then, the vector-wise interac-
185
+ tions are concatenated with the output units of the feed-forward
186
+ neural network. Product Neural Network (PNN) [20, 21] introduces
187
+ the inner product layer and the outer product layer to learn ex-
188
+ plicit vector-wise interactions and bit-wise interactions respectively.
189
+ xDeepFM [16] learns the explicit vector-wise interaction by using
190
+ Compressed Interaction Network (CIN) which has an RNN-like
191
+ architecture and learns all possible vector-wise interactions us-
192
+ ing Hadamard product. The convolutional filters and the pooling
193
+ mechanism are used to extract information. FiBiNET [13] utilizes
194
+ Squeeze-and-Excitation network to dynamically learn the impor-
195
+ tance of features and models the feature interactions via bilinear
196
+ function.
197
+ In the recent research of the sequencing model, the architecture
198
+ of the Transformer [26] has been widely used to understand the
199
+ associations between relevant features. With different layers of the
200
+ multi-head self-attentive neural networks, AutoInt [25] can learn
201
+ different orders of feature combinations of input features. Residual
202
+ connections [9, 27] are added to carry through different degrees of
203
+ feature interaction.
204
+ The aforementioned approaches learn explicit feature interac-
205
+ tions by using outer product, kernel product or multi-head self-
206
+ attention, which require expensive computational cost.
207
+ 3
208
+ MODEL
209
+ In this section, we give an overview of the architecture of xDeepInt.
210
+ First, we introduce input and embedding layer, which map con-
211
+ tinuous features and high-dimensional categorical features onto a
212
+ dense vector. Second, we present the Polynomial Interaction Net-
213
+ work(PIN) which utilizes iterative interaction layers with residual
214
+ connections [9, 27] to explicitly learn the vector-wise interactions.
215
+ Third, we implement the subspace-crossing mechanism to model
216
+ bit-wise interaction. The number subspaces controls the degree of
217
+ mixture of bit-wise and vector-wise interactions.
218
+
219
+ Research Track Paper
220
+ DLP-KDD 2020, August 24, 2020, San Diego, California, USA
221
+ Embedding Layer
222
+ Categorical
223
+ Feature
224
+ Bucktized
225
+ Numeric Feature
226
+ 1st PIN Layer
227
+ 2nd PIN Layer
228
+ ...
229
+ l-th PIN Layer
230
+ Input Feature Map
231
+ Output Layer
232
+ Figure 1: The architecture of unrolled Polynomial Interac-
233
+ tion Network with residual connections
234
+ 3.1
235
+ Embedding Layer
236
+ In large-scale recommendation system, inputs include both con-
237
+ tinuous features and categorical features. Categorical features are
238
+ often directly encoded by one-hot encoding, which results in an
239
+ excessively high-dimensional and sparse feature space. Suppose we
240
+ have 𝐹 fields. In our feature preprocessing step, we bucketize all
241
+ the continuous features to equal frequency bins, then embed the
242
+ bucketized continuous features and categorical features to same
243
+ latent space 𝑅𝐾,
244
+ x𝑒
245
+ 𝑓 = x𝑜
246
+ 𝑓 V𝑓 ,
247
+ where x𝑒
248
+ 𝑓 = [𝑥𝑓 ,1,𝑥𝑓 ,2, · · · ,𝑥𝑓 ,𝐾], 𝑥𝑓 ,𝑘 is the 𝑘-th bit of the 𝑓 -th
249
+ field of the embedding feature map, V𝑓 is an embedding matrix for
250
+ field 𝑓 , and x𝑜
251
+ 𝑓 is a one-hot vector. Lastly, we stack 𝐹 embedding
252
+ vectors and obtain an 𝐹-by-𝐾 input feature map 𝑋0:
253
+ 𝑋0 =
254
+ 
255
+ x𝑒
256
+ 1
257
+ x𝑒
258
+ 2...
259
+ x𝑒
260
+ 𝐹
261
+ 
262
+ 3.2
263
+ Polynomial Interaction Network
264
+ Consider a 𝑙-th order polynomial with 𝑓 variables of the following
265
+ form:
266
+ Π𝑙
267
+ 𝑗=1(
268
+ 𝑓∑︁
269
+ 𝑖=1
270
+ 𝑎𝑖𝑗𝑥𝑖 + 𝑏𝑗).
271
+ (1)
272
+ This polynomial contains all possible multiplicative combinations
273
+ of 𝑥𝑖’s with order less than or equal to 𝑙 and has an iterative form:
274
+ 𝐹 (𝑙−1) (𝑥1, · · · ,𝑥𝑓 )(
275
+ 𝑓∑︁
276
+ 𝑖=1
277
+ 𝑎𝑖𝑙𝑥𝑖 + 𝑏𝑙)
278
+ (2)
279
+ where 𝐹 (𝑙−1) (𝑥1, · · · ,𝑥𝑓 ) = Π𝑙−1
280
+ 𝑗=1(�𝑓
281
+ 𝑖=1 𝑎𝑖𝑗𝑥𝑖 + 𝑏𝑗). Motivated by
282
+ the iterative form, we propose polynomial interaction network
283
+ defined by the following formula:
284
+ 𝑋𝑙 = 𝑓 (𝑊𝑙−1,𝑋𝑙−1,𝑋0) + 𝑋𝑙−1
285
+ = 𝑋𝑙−1 ◦ (𝑊𝑙−1𝑋0) + 𝑋𝑙−1
286
+ = 𝑋𝑙−1 ◦ [𝑊𝑙−1𝑋0 + 1]
287
+ (3)
288
+ where ◦ denotes the Hadamard product. For instance, [𝑎𝑖,𝑗]𝑚×𝑛 ◦
289
+ [𝑏𝑖,𝑗]𝑚×𝑛 = [𝑎𝑖,𝑗𝑏𝑖,𝑗]𝑚×𝑛. 𝑊𝑙−1 ∈ 𝑅𝐹×𝐹 and 1 ∈ 𝑅𝐹×𝐾 with all
290
+ entries are equal to one. 𝑋𝑙−1,𝑋𝑙 ∈ 𝑅𝐹×𝐾 are the output matrices
291
+ of (𝑙-1)-th and 𝑙-th interaction layer. Like (1), the 𝑙-th PIN layer’s
292
+ output is the weighted sum of all vector-wise feature interactions
293
+ of order less than or equal to 𝑙.
294
+ The architecture of the polynomial interaction network is moti-
295
+ vated by the following aspects.
296
+ First, the polynomial interaction network has a recursive struc-
297
+ ture. The outputs of the current layer are built upon the previ-
298
+ ous layer’s outputs and the first order feature map, ensuring that
299
+ higher-order feature interactions are based on lower-order feature
300
+ interactions from previous layers.
301
+ Second, we use the Hadamard product to model the explicit
302
+ vector-wise interaction, which brings us more flexibility in crossing
303
+ the bits of each dimension in shared latent space and preserves
304
+ more information of each degree of feature interactions.
305
+ Third, we build a field aggregation layer 𝐴𝑔𝑔(𝑙) (𝑋) = 𝑊𝑙𝑋 which
306
+ combines the feature map at the vector-wise level using a linear
307
+ transformation 𝑊𝑙. Each vector of the field aggregation feature
308
+ map can be viewed as a combinatorial feature vector constructed
309
+ by the weighted sum of the input feature map. Then we take the
310
+ Hadamard product of the output of the previous layer and field
311
+ aggregation feature map for this layer. This operation allows us
312
+
313
+ DLP-KDD 2020, August 24, 2020, San Diego, California, USA
314
+ Yachen and Liubo
315
+ .
316
+ Figure 2: Details of PIN layer
317
+ The aggregation layer is defined by 𝐴𝑔𝑔(𝑙−1) (𝑋0) = 𝑊𝑙−1 · 𝑋0. The PIN
318
+ takes the Hadamard product of the aggregated 1st-order feature map and
319
+ the output of the previous PIN layer to generate the higher order
320
+ vector-wise interaction.
321
+ to explore all possible 𝑙-th order polynomial feature interactions
322
+ based on existing (𝑙-1)-th order feature interactions.
323
+ Last, we utilize residual connections [9, 27] in the polynomial
324
+ interaction network, allowing a different degree of vector-wise
325
+ polynomial feature interactions to be combined, including the first
326
+ feature map. Since the polynomial interaction layer’s outputs con-
327
+ tain all degree of feature interactions, the skipped connection en-
328
+ able next polynomial interaction layer to focus on searching useful
329
+ higher-order feature interactions while complementing lower-order
330
+ feature interactions. As the number of layer increases, the degree
331
+ of the polynomial feature interactions increases. The recurrent ar-
332
+ chitecture of the proposed polynomial interaction network enables
333
+ to bound the degree of polynomial feature interactions.
334
+ 3.3
335
+ Subspace-crossing Mechanism
336
+ The Polynomial Interaction Network (PIN) models the vector-wise
337
+ interactions. However, PIN does not learn the bit-wise interaction
338
+ in the shared latent embedding space. In order to cross the bits of
339
+ different embedding dimensions, we propose the subspace-crossing
340
+ mechanism which allows xDeepInt to learn the bit-wise interactions.
341
+ Suppose we split the embedding space into ℎ sub-spaces, the input
342
+ feature map 𝑋0 is then represented by ℎ sub-matrices as follow:
343
+ 𝑋0 = [𝑋0,1,𝑋0,2, · · · ,𝑋0,ℎ]
344
+ (4)
345
+ where 𝑋0,𝑖 ∈ 𝑅𝐹×𝐾/ℎ and 𝑖 = 1, 2 · · · ,ℎ. Next, we stack all sub-
346
+ matrices at the field dimension and construct a stacked input feature
347
+ map 𝑋
348
+
349
+ 0 ∈ 𝑅(𝐹∗ℎ)×(𝐾/ℎ).
350
+ 𝑋
351
+
352
+ 0 =
353
+ 
354
+ 𝑋0,1
355
+ 𝑋0,2
356
+ ...
357
+ 𝑋0,ℎ
358
+ 
359
+ ,
360
+ (5)
361
+ where 𝑋0,𝑗 ∈ 𝑅𝐹×(𝐾/ℎ) and ℎ denotes the number of sub-spaces.
362
+ By splitting the embedding vector of each field to ℎ sub-vectors
363
+ and stacking them together, we can align bits of different embed-
364
+ ding dimension and create the vector-wise interactions on stacked
365
+ sub-embeddings. Accordingly, we feed 𝑋
366
+
367
+ 0 into the Polynomial In-
368
+ Figure 3: Subspace-crossing mechanism
369
+ teraction Network (PIN):
370
X′_1 = X′_0 ∘ [W′_0 X′_0 + 1]
  ⋮
X′_l = X′_{l−1} ∘ [W′_{l−1} X′_0 + 1]    (6)
where W′_l ∈ R^{(F·h)×(F·h)}, 1 ∈ R^{(F·h)×(K/h)}, and X′_l ∈ R^{(F·h)×(K/h)}.
In vanilla PIN layers, both the field aggregation of the feature map and the multiplicative interactions built by the Hadamard product operate at the vector-wise level. The subspace-crossing-enhanced PIN takes the h aligned subspaces as input, encouraging PIN to capture explicit bit-wise interactions by crossing features from different subspaces. The number of subspaces h controls the complexity of the bit-wise interactions: a larger h helps the model learn more complex feature interactions.
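The split-and-stack of Eqs. (4)–(5) amounts to a simple reshape. A minimal NumPy sketch (toy sizes and variable names are ours, not the authors' code):

```python
import numpy as np

# Toy sizes (ours): F fields, K embedding dims, h sub-spaces.
F, K, h = 3, 4, 2
X0 = np.arange(F * K, dtype=float).reshape(F, K)

# Eq. (4): split the embedding space into h sub-matrices of shape (F, K/h).
subs = np.split(X0, h, axis=1)

# Eq. (5): stack the sub-matrices along the field dimension -> (F*h, K/h).
X0_stacked = np.vstack(subs)
```

Each stacked row is one sub-vector of one field's embedding, so vector-wise PIN operations on X′_0 cross bits that live in different embedding dimensions of the original X_0.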
3.4 Output Layer
The output of the Polynomial Interaction Network is a feature map that consists of feature interactions of different degrees, including the raw input feature map preserved by the residual connections and the higher-order feature interactions learned by PIN. For the final prediction, we simply use:

ŷ = σ((W_out X_l + b 1^T) 1)    (7)

where σ is the sigmoid function, W_out ∈ R^{1×F} is a feature map aggregation vector that linearly combines all features in the feature map, 1 ∈ R^K, and b ∈ R is the bias.
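Putting Eqs. (5)–(7) together, the forward pass can be sketched in a few lines of NumPy. This is our own illustrative sketch, not the authors' code; in particular, with h > 1 we assume the aggregation vector has F·h entries rather than F:

```python
import numpy as np

def xdeepint_forward(X0, Ws, w_out, b, h=1):
    """Sketch of xDeepInt's forward pass (illustrative only).

    X0: (F, K) embedded feature map; Ws: list of (F*h, F*h) PIN kernels;
    w_out: (F*h,) aggregation vector; b: scalar bias; h: number of sub-spaces.
    """
    # Subspace-crossing, Eqs. (4)-(5): split K into h parts, stack on fields.
    X_in = np.vstack(np.split(X0, h, axis=1))        # (F*h, K/h)
    X = X_in
    for W in Ws:                                      # PIN recurrence, Eq. (6)
        X = X * (W @ X_in + 1.0)                      # Hadamard, "+1" residual
    logit = (w_out @ X + b).sum()                     # Eq. (7): aggregate, sum bits
    return 1.0 / (1.0 + np.exp(-logit))               # sigmoid

rng = np.random.default_rng(0)
F, K, h = 3, 4, 2
Ws = [rng.normal(scale=0.1, size=(F * h, F * h)) for _ in range(3)]
p = xdeepint_forward(rng.normal(size=(F, K)), Ws, rng.normal(size=F * h), 0.0, h=h)
```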
3.5 Optimization and Regularization
For optimization, we use Group Lasso Follow The Regularized Leader (G-FTRL) [?] as the optimizer for the embedding layers, for feature selection, and Follow The Regularized Leader (FTRL) [19] as the optimizer for the PIN layers, for interaction selection.

Group lasso FTRL regularizes the entire embedding vector of insignificant features in each field to exactly zero, which essentially performs feature selection and improves training efficiency in industrial settings. The group lasso regularization is applied prior to the subspace-splitting mechanism so that feature selection is consistent across subspaces.

FTRL regularizes individual elements of the weight kernels in PIN layers to exactly zero, which excludes insignificant feature interactions and controls the complexity of the model.
Research Track Paper
DLP-KDD 2020, August 24, 2020, San Diego, California, USA
Figure 4: Group Lasso FTRL vs. FTRL. Group Lasso FTRL regularizes the embedding table with group-wise sparsity; FTRL regularizes the weight kernels of the PIN layers with element-wise sparsity.
This optimization strategy takes advantage of the properties of the different optimizers and achieves row-wise sparsity in the embedding table and element-wise sparsity in the weight kernels, respectively. It therefore improves generalization ability and efficiency for both training and serving, and it also plays an important role in model compression.
3.6 Training
The loss function we use is Log Loss:

LogLoss = −(1/N) Σ_{i=1}^{N} [ y_i log(ŷ_i) + (1 − y_i) log(1 − ŷ_i) ]    (8)

where y_i is the true label, ŷ_i is the estimated click-through rate, and N is the total number of training examples.
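For concreteness, Eq. (8) on a toy batch (our own sketch):

```python
import numpy as np

def log_loss(y_true, y_hat):
    """Eq. (8): average negative log-likelihood of the predicted CTRs."""
    y_true, y_hat = np.asarray(y_true, float), np.asarray(y_hat, float)
    return -np.mean(y_true * np.log(y_hat) + (1 - y_true) * np.log(1 - y_hat))

loss = log_loss([1, 0, 1], [0.9, 0.2, 0.8])  # -(ln 0.9 + ln 0.8 + ln 0.8) / 3
```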
3.7 Difference Between PIN and DCN
PIN and DCN both have an iterative form. However, the two network architectures differ substantially in how they extract feature interactions. A DCN layer computes

x_l = f_DCN(x_{l−1}, w_{l−1}, b_{l−1}) = x_0 x^T_{l−1} w_{l−1} + b_{l−1} + x_{l−1}    (9)
For DCN, the feature map is flattened and concatenated into a single vector. All higher-order bit-wise interactions are first constructed by the term x_0 x^T_{l−1} and then aggregated by a linear regression for the next layer. This structure results in a special form of the output: as discussed in [16], the output of a DCN layer is a scalar multiple of x_0. [16] also pointed out the downsides: 1) the output of DCN is of a special form, with each hidden layer a scalar multiple of x_0, which limits expressive power; 2) interactions only come in a bit-wise fashion.
PIN constructs vector-wise feature interactions using the Hadamard product, which preserves information at the vector-wise level. To allow different fields to cross at the vector level, each iterative PIN layer first aggregates the input feature map by a linear transformation W_{l−1}X_0 and then builds interactions via the term X_{l−1} ∘ W_{l−1}X_0. Accordingly, PIN keeps the vector-wise structure of the feature interactions and does not limit the output to a scalar multiple of X_0. Moreover, each PIN layer is directly connected to the input feature map X_0, which improves trainability. We also prove PIN's polynomial approximation property in a later section.
3.8 xDeepInt Analysis
In this section, we analyze the polynomial approximation property of the proposed xDeepInt model. We consider an xDeepInt model with l PIN layers, a subspace-crossing mechanism with h subspaces, and F input features with the same embedding size K.
3.8.1 Polynomial Approximation. In order to understand how PIN exploits vector-wise interactions, we examine the polynomial approximation properties of PIN. Let X^{(0)} ∈ R^{F×K} be the embedded feature map with x_i = [x_{i1}, …, x_{iK}] being the i-th row, and let x^{(l)}_i = [x^{(l)}_{i1}, …, x^{(l)}_{iK}] be the i-th row of the output of the l-th layer. Then x^{(l)}_{ik} has the following explicit form:

x^{(l)}_{ik} = x^{(0)}_{ik} ∏_{r=0}^{l−1} ( Σ_{j=1}^{F} w^{(r)}_{ij} x^{(0)}_{jk} + 1 ),  for k = 1, …, K    (10)
where W^{(r)} = [w^{(r)}_{ij}]_{1≤i,j≤F} is the weight matrix of the r-th PIN layer. The product ∏_{r=0}^{l−1} ( Σ_{j=1}^{F} w^{(r)}_{ij} x^{(0)}_{jk} + 1 ) is a weighted sum of all possible crossed terms of the embedded input at the k-th bit with order at most l − 1. Thus, x^{(l)}_{ik} is a weighted sum of all crossed terms that contain x^{(0)}_{ik} and have order at most l.
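This closed form can be verified numerically against the recurrence X_l = X_{l−1} ∘ (W_{l−1}X_0 + 1): both compute the same entries. A small sanity-check sketch of ours, with h = 1:

```python
import numpy as np

rng = np.random.default_rng(1)
F, K, l = 3, 2, 3
X0 = rng.normal(size=(F, K))
Ws = [rng.normal(size=(F, F)) for _ in range(l)]

# Recurrent form of the PIN layer: X <- X * (W X_0 + 1).
X = X0.copy()
for W in Ws:
    X = X * (W @ X0 + 1.0)

# Explicit form, Eq. (10): x_ik^(l) = x_ik^(0) * prod_r (sum_j w_ij^(r) x_jk^(0) + 1).
prod = np.ones_like(X0)
for W in Ws:
    prod *= W @ X0 + 1.0
```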
For the bit-wise interactions modeled by the subspace-crossing mechanism, we consider the case where the number of subspaces equals the embedding size K. In this extreme case, each row of W′_0 X′_0 is a weighted sum of all bits in all fields, which allows embedded features to combine across different bits. To be explicit, consider the stacked input feature map with h = K:
X′_0 = [ X_{0,1}
         X_{0,2}
           ⋮
         X_{0,K} ],    (11)
where X_{0,i} = [x^{(0)}_{i,1}, x^{(0)}_{i,2}, …, x^{(0)}_{i,F}]^T ∈ R^F. The weight matrix W′_0 is given by
W′_0 = [ w^{(0)}_{1,1}    w^{(0)}_{1,2}    ⋯   w^{(0)}_{1,K}    w^{(0)}_{1,K+1}    ⋯   w^{(0)}_{1,FK}
           ⋮                ⋮                    ⋮                ⋮                      ⋮
         w^{(0)}_{FK,1}   w^{(0)}_{FK,2}   ⋯   w^{(0)}_{FK,K}   w^{(0)}_{FK,K+1}   ⋯   w^{(0)}_{FK,FK} ] ∈ R^{FK×FK}.    (12)
DLP-KDD 2020, August 24, 2020, San Diego, California, USA
Yachen and Liubo
Thus, the k-th row of W′_0 X′_0 + 1 has the following form:

Σ_{i=1}^{F} Σ_{j=1}^{K} w^{(0)}_{k,(i−1)K+j} x_{i,j} + 1.    (13)
This is a linear combination of bits in all fields, which allows PIN to exploit all crossings of the feature map at the bit-wise level. For example, the [(i − 1)K + j]-th row of X′_l is given by

x^{(l)}_{i,j} = x^{(0)}_{i,j} ∏_{r=0}^{l−1} ( Σ_{i′=1}^{F} Σ_{j′=1}^{K} w^{(r)}_{k,(i′−1)K+j′} x_{i′,j′} + 1 ),  with k = (i − 1)K + j    (14)
Here x^{(l)}_{i,j} is a weighted sum of all crossed terms that contain x^{(0)}_{i,j} and have order at most l.
3.8.2 Time Complexity. The cost of computing the feature map W_{l−1}X′_0 at the l-th PIN layer is O(hF²K). For an L-layer xDeepInt model, the total cost of the feature maps is O(LhF²K). The additional cost comes from the Hadamard product and the residual connection, which is O(LFK). In practice, h is not too large, so the total time complexity mainly depends on the number of fields F and the embedding size K. For an L-layer DNN whose k-th layer has D_k hidden nodes, the time complexity is O(FK × D_1 × D_2 + Σ_{k=2}^{L−1} D_{k−1} D_k D_{k+1}). Since the time complexity of xDeepInt grows with the number of subspaces, xDeepInt has a higher time complexity than a DNN when modeling higher degrees of bit-wise interactions.
3.8.3 Space Complexity. The embedding layer contains Σ_{f=1}^{F} K × C_f parameters, where C_f is the cardinality of the f-th field. The output layer aggregates the feature map of the last PIN layer and hence requires F + 1 parameters. The subspace-crossing mechanism needs h² × F² parameters at each PIN layer, which is exactly the size of the weight matrix W′_r with 0 ≤ r ≤ l − 1. There are (K/h) × k′ + h² × F² × l parameters in the l PIN layers. Usually we pick a small h to control the model complexity, and k′ is comparable to K. Accordingly, the overall complexity of xDeepInt is approximately O(h² × F² × l), which does not depend heavily on the embedding size K. A plain L-layer DNN whose k-th layer has D_k hidden nodes requires FK × D_1 + Σ_{k=2}^{L} D_{k−1}D_k parameters; its complexity mainly depends on the embedding size and the number of hidden nodes at each layer. To reduce the space complexity of xDeepInt, we can apply the method introduced in [16]: the space complexity can be further reduced by exploiting an L-order decomposition and replacing the weight matrix W′_r with two low-rank matrices.
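The counts above can be tallied mechanically. A small helper of ours, following Sec. 3.8.3 (the (K/h) × k′ term is omitted because k′ is not precisely defined here, and with h sub-spaces we assume the output aggregation vector spans F·h stacked rows):

```python
def xdeepint_params(cardinalities, K, h, l):
    """Tally parameters: embedding table + PIN kernels + output aggregation."""
    F = len(cardinalities)
    embedding = sum(K * c for c in cardinalities)   # sum_f K * C_f
    pin = l * (h * F) ** 2                          # one (F*h)x(F*h) kernel per layer
    output = F * h + 1                              # aggregation vector + bias
    return embedding + pin + output

def dnn_params(F, K, widths):
    """Plain DNN weight count: F*K -> D_1 -> ... -> D_L (weights only)."""
    dims = [F * K] + list(widths)
    return sum(a * b for a, b in zip(dims, dims[1:]))

n_pin = xdeepint_params([10, 20, 30], K=16, h=2, l=3)   # 960 + 108 + 7
n_dnn = dnn_params(F=3, K=16, widths=[128, 128])        # 6144 + 16384
```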
4 EXPERIMENTS

In this section, we focus on evaluating the effectiveness of our proposed models and answering the following questions:

• Q1: How does our proposed xDeepInt perform on the CTR prediction problem? Is it effective and efficient under extremely high-dimensional and sparse data settings?
• Q2: How do different hyper-parameter settings influence the performance of xDeepInt?
• Q3: Does modeling implicit higher-order feature interactions further improve the performance of xDeepInt?
4.1 Experiment Setup

4.1.1 Datasets. We evaluate our proposed model on three public real-world datasets widely used in research.
1. Avazu.¹ The Avazu dataset is from a Kaggle competition in 2015, in which Avazu provided 10 days of click-through data. We use 21 features in total for modeling; all features in this dataset are categorical.

2. Criteo.² The Criteo dataset is from a Kaggle competition in 2014; Criteo AI Lab later officially released it for academic use. It contains 13 numerical features and 26 categorical features. We discretize all numerical features to integers with the transformation ⌊Log(V²)⌋ and treat them as categorical features, as done by the winning team of the Criteo competition.

3. iPinYou.³ The iPinYou dataset is from the iPinYou Global RTB (Real-Time Bidding) Bidding Algorithm Competition in 2013. We follow the data processing steps of [32] and consider all 16 categorical features.
For all datasets, we randomly split the examples into three parts: 70% for training, 10% for validation, and 20% for testing. We also remove each categorical feature's infrequent levels, those appearing fewer than 20 times, to reduce sparsity. Note that we want to compare effectiveness and efficiency in learning higher-order feature interactions automatically, so we do no feature engineering, only feature transformation, e.g., numerical feature bucketing and categorical feature frequency thresholding.
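The Criteo numeric-feature discretization above can be sketched as follows (our reading of the formula; the guard for non-positive values is our own assumption):

```python
import math

def discretize(v):
    """Map a numeric value v to an integer bucket via floor(log(v^2))."""
    return 0 if v <= 0 else math.floor(math.log(v * v))

buckets = [discretize(v) for v in (0, 1, 7, 100)]
```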
4.1.2 Evaluation Metrics. We consider AUC and LogLoss for evaluating the performance of the models.

LogLoss. LogLoss is both our loss function and an evaluation metric. It measures the average distance between the predicted probability and the true label over all examples.

AUC. The Area Under the ROC Curve (AUC) measures the probability that a randomly chosen positive example is ranked higher by the model than a randomly chosen negative example. AUC only considers the relative order between positive and negative examples; a higher AUC indicates better ranking performance.
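The pairwise definition of AUC can be computed directly for small samples (an O(n²) sketch of ours; production code would use a sorted-rank formulation):

```python
def auc(y_true, scores):
    """Probability that a random positive outranks a random negative (ties = 0.5)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

a = auc([1, 0, 1, 0], [0.8, 0.3, 0.6, 0.6])  # 3.5 winning half-pairs out of 4
```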
4.1.3 Competing Models. We compare xDeepInt with the following models: LR (logistic regression) [18, 19], FM (factorization machine) [22], DNN (plain multilayer perceptron), Wide & Deep [7], DeepCrossing [24], DCN (Deep & Cross Network) [28], PNN (with both inner and outer product layers) [20, 21], DeepFM [8], xDeepFM [16], AutoInt [25], and FiBiNET [13]. Several of these are state-of-the-art models for the CTR prediction problem and are widely used in industry.
4.1.4 Reproducibility. We implement all models in TensorFlow [1]. The mini-batch size is 4096, and the embedding dimension is 16 for all features. For optimization, we employ Adam [15] with a learning rate of 0.001 for all neural network models, and we apply FTRL [18, 19] with a learning rate of 0.01 for both LR and FM. For regularization, we choose L2 regularization with λ = 0.0001 for dense layers. A grid search for each competing model's hyper-parameters is conducted on the validation dataset.

¹ https://www.kaggle.com/c/avazu-ctr-prediction
² https://www.kaggle.com/c/criteo-display-ad-challenge
³ http://contest.ipinyou.com/
The number of DNN, Cross, CIN, and Interacting layers ranges from 1 to 4, and the number of neurons ranges from 128 to 1024. All models are trained with early stopping and evaluated every 2000 training steps.

For the hyper-parameter search of xDeepInt, the number of recursive feature interaction layers ranges from 1 to 4. For the number of sub-spaces h, the searched values are 1, 2, 4, 8, and 16; since our embedding size is 16, this range covers complete vector-wise interaction through complete bit-wise interaction. We use the G-FTRL optimizer for the embedding table and FTRL for the PIN layers, both with a learning rate of 0.01.
4.2 Model Performance Comparison (Q1)

Table 1: Performance Comparison of Different Algorithms on the Criteo, Avazu and iPinYou Datasets.

              |     Criteo      |      Avazu      |      iPinYou
Model         |  AUC   LogLoss  |  AUC   LogLoss  |  AUC   LogLoss
LR            | 0.7924  0.4577  | 0.7533  0.3952  | 0.7692  0.005605
FM            | 0.8030  0.4487  | 0.7652  0.3889  | 0.7737  0.005576
DNN           | 0.8051  0.4461  | 0.7627  0.3895  | 0.7732  0.005749
Wide&Deep     | 0.8062  0.4451  | 0.7637  0.3889  | 0.7763  0.005589
DeepFM        | 0.8069  0.4445  | 0.7665  0.3879  | 0.7749  0.005609
DeepCrossing  | 0.8068  0.4456  | 0.7628  0.3891  | 0.7706  0.005657
DCN           | 0.8056  0.4457  | 0.7661  0.3880  | 0.7758  0.005682
PNN           | 0.8083  0.4433  | 0.7663  0.3882  | 0.7783  0.005584
xDeepFM       | 0.8077  0.4439  | 0.7668  0.3878  | 0.7772  0.005664
AutoInt       | 0.8053  0.4462  | 0.7650  0.3883  | 0.7732  0.005758
FiBiNET       | 0.8082  0.4439  | 0.7652  0.3886  | 0.7756  0.005679
xDeepInt      | 0.8111  0.4408  | 0.7675  0.3872  | 0.7791  0.005565
The overall performance of the different models is listed in Table 1. We make the following observations regarding model effectiveness:

• LR is generally worse than the other algorithms, which indicates that learning higher-order feature interactions is essential for CTR model performance.
• FM brings the most significant boost in performance as model complexity increases, revealing the importance of learning explicit vector-wise feature interactions.
• Models that combine vector-wise and bit-wise interactions consistently outperform the others. This indicates that both types of feature interactions are essential to prediction performance and complement each other.
• xDeepInt achieves the best prediction performance among all models. Different datasets favor feature interactions of different degrees and bit-wise feature interactions of different complexity; the superior performance of our model can be attributed to the fact that xDeepInt models a bounded degree of polynomial feature interactions by adjusting the depth of PIN and achieves different complexities of bit-wise feature interactions by changing the number of sub-spaces.
4.3 Hyper-Parameter Study (Q2)

In order to gain deeper insight into the proposed model, we conduct experiments on the three datasets and compare several variants of xDeepInt under different hyper-parameter settings.

4.3.1 Depth of Network. The depth of PIN determines the order of the feature interactions. Table 2 shows how performance changes with the number of PIN layers. When the number of layers is 0, our model is equivalent to logistic regression and no interactions are learned. xDeepInt performs best with about 3 or 4 layers. In this experiment, we set the number of sub-spaces to 1 to disable bit-wise feature interactions.
Table 2: Impact of hyper-parameters: number of layers

        #Layers |    0        1        2        3        4        5
Criteo  AUC     | 0.7921   0.8038   0.8050   0.8057   0.8063   0.8061
        LogLoss | 0.4580   0.4477   0.4466   0.4461   0.4452   0.4454
Avazu   AUC     | 0.7536   0.7654   0.7664   0.7675   0.7670   0.7662
        LogLoss | 0.3951   0.3888   0.3879   0.3872   0.3875   0.3883
iPinYou AUC     | 0.7690   0.7740   0.7775   0.7791   0.7783   0.7772
        LogLoss | 0.005604 0.005576 0.005569 0.005565 0.005580 0.005571
4.3.2 Number of Sub-spaces. The subspace-crossing mechanism enables the proposed model to control the complexity of bit-wise interactions. Table 3 demonstrates that the subspace-crossing mechanism boosts performance. In this experiment, we set the number of PIN layers to 3, which is generally a good choice though not the best setting for every dataset.
Table 3: Impact of hyper-parameters: number of sub-spaces

    #Sub-spaces |    1        2        4        8        16
Criteo  AUC     | 0.8072   0.8081   0.8089   0.8096   0.8101
        LogLoss | 0.4445   0.4435   0.4425   0.4421   0.4418
Avazu   AUC     | 0.7660   0.7668   0.7674   0.7672   0.7668
        LogLoss | 0.3880   0.3877   0.3875   0.3878   0.3879
iPinYou AUC     | 0.7772   0.7783   0.7788   0.7787   0.7784
        LogLoss | 0.005590 0.005583 0.005568 0.005572 0.005580
4.3.3 Activation Function. By default, we use the linear activation function on the neurons of the PIN layers. We also explore how different activation functions in PIN affect performance; we study this on the Criteo dataset. Table 4 shows that the linear activation function performs best for PIN.
Table 4: Impact of hyper-parameters: activation function

Activation  |  AUC    LogLoss
linear      | 0.8111  0.4408
tanh        | 0.8100  0.4418
sigmoid     | 0.8082  0.4434
softplus    | 0.8080  0.4436
swish       | 0.8100  0.4418
relu        | 0.8098  0.4419
leaky relu  | 0.8102  0.4415
elu         | 0.8099  0.4418
selu        | 0.8100  0.4418
4.3.4 Optimizer. We also build our model with the Adam optimizer, as used by all the competing models, to compare against our combined G-FTRL/FTRL optimization strategy. Table 5 shows that the combined G-FTRL/FTRL strategy achieves better performance. Table 6 shows that it also attains a higher feature sparse ratio (ratio of all-zero embedding vectors in the embedding table) and weight sparse ratio (ratio of zero weights in the PIN layers), which results in a more lightweight model. Notably, xDeepInt still achieves the best prediction performance among all models when using the Adam optimizer, which demonstrates the effectiveness of the xDeepInt architecture.
Table 5: Impact of hyper-parameters: optimizer

Dataset  | Model        | LogLoss    AUC
Criteo   | G-FTRL/FTRL  | 0.4408     0.8111
         | Adam         | 0.4415     0.8105
Avazu    | G-FTRL/FTRL  | 0.3872     0.7675
         | Adam         | 0.3873     0.7674
iPinYou  | G-FTRL/FTRL  | 0.005565   0.7791
         | Adam         | 0.005583   0.7784
Table 6: Analysis of model sparsity

Dataset  | Feature sparse ratio | Weight sparse ratio
Criteo   | 0.6506               | 0.1030
Avazu    | 0.2193               | 0.0448
iPinYou  | 0.8274               | 0.0627
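The two ratios in Table 6 can be measured directly from trained tensors; a NumPy sketch (helper names and toy values are ours):

```python
import numpy as np

def feature_sparse_ratio(emb):
    """Fraction of embedding rows zeroed out entirely (group-wise sparsity)."""
    return float(np.all(emb == 0, axis=1).mean())

def weight_sparse_ratio(kernels):
    """Fraction of individual PIN-kernel weights that are exactly zero."""
    flat = np.concatenate([k.ravel() for k in kernels])
    return float((flat == 0).mean())

emb = np.array([[0., 0.], [1., 2.], [0., 0.], [3., 0.]])
r_feat = feature_sparse_ratio(emb)                            # 2 of 4 rows
r_wt = weight_sparse_ratio([np.array([[0., 1.], [2., 0.]])])  # 2 of 4 weights
```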
4.4 Ablation Study: Integrating Implicit Interactions (Q3)

In this section, we conduct an ablation study comparing the performance of our proposed model with and without integrated implicit feature interactions.

Feed-forward neural networks are widely used in various model architectures for learning implicit feature interactions. In this experiment, we jointly train xDeepInt with a three-layer feed-forward neural network, name the combined model xDeepInt+, and compare it with vanilla xDeepInt.

Table 7 compares vanilla xDeepInt and xDeepInt+. We observe that the jointly trained feed-forward neural network does not boost the performance of vanilla xDeepInt. The reason is that the vanilla xDeepInt model has already learned bit-wise interactions through the subspace-crossing mechanism, so the feed-forward neural network brings no additional predictive power.
Table 7: Ablation study comparing the performance of xDeepInt with and without integrating DNN

Dataset  | Model      | LogLoss    AUC
Criteo   | xDeepInt   | 0.4408     0.8111
         | xDeepInt+  | 0.4412     0.8107
Avazu    | xDeepInt   | 0.3872     0.7675
         | xDeepInt+  | 0.3874     0.7673
iPinYou  | xDeepInt   | 0.005565   0.7791
         | xDeepInt+  | 0.005581   0.7787
5 CONCLUSION

In this paper, we design a novel network layer named the polynomial interaction network (PIN), which learns higher-order vector-wise feature interactions in the embedding space. By incorporating PIN with the subspace-crossing mechanism, our proposed model xDeepInt learns bit-wise and vector-wise feature interactions of bounded degree simultaneously and in a controllable manner. We add residual connections to the PIN layers, so that the output of each layer is an ensemble of low-order and high-order interactions. The degree of interaction is controlled by the number of PIN layers, and the complexity of bit-wise interaction is controlled by the number of sub-spaces. Additionally, an optimization method is introduced that performs feature selection and interaction selection based on the network structure. Our experimental results demonstrate that the proposed xDeepInt outperforms existing state-of-the-art methods on real-world datasets. To the best of our knowledge, xDeepInt is the first neural network architecture that achieves state-of-the-art performance without integrating a feed-forward neural network with non-linear activation functions.

We have multiple directions for future work. First, the proposed model only handles fixed-length feature vectors; in order to model historical and sequential behavior in recommendation systems [33, 34], we are interested in making our architecture applicable to variable-length feature vectors. Second, we would like to extend the application of polynomial interaction layers to more modeling scenarios and exploit PIN's potential on other problems. Third, the model is fully explainable when the subspace-crossing mechanism is disabled; the explainability of the model is another direction for future work.
REFERENCES
[1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16). 265–283.
[2] Alex Beutel, Paul Covington, Sagar Jain, Can Xu, Jia Li, Vince Gatto, and Ed H. Chi. 2018. Latent cross: Making use of context in recurrent recommender systems. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. ACM, 46–54.
[3] Mathieu Blondel, Akinori Fujino, Naonori Ueda, and Masakazu Ishihata. 2016. Higher-order factorization machines. In Advances in Neural Information Processing Systems. 3351–3359.
[4] Mathieu Blondel, Masakazu Ishihata, Akinori Fujino, and Naonori Ueda. 2016. Polynomial networks and factorization machines: New insights and efficient training algorithms. arXiv preprint arXiv:1607.08810 (2016).
[5] Patrick P. K. Chan, Xian Hu, Lili Zhao, Daniel S. Yeung, Dapeng Liu, and Lei Xiao. 2018. Convolutional neural networks based click-through rate prediction with multiple feature sequences. In IJCAI. 2007–2013.
[6] Chen Cheng, Fen Xia, Tong Zhang, Irwin King, and Michael R. Lyu. 2014. Gradient boosting factorization machines. In Proceedings of the 8th ACM Conference on Recommender Systems. ACM, 265–272.
[7] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. 2016. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems. ACM, 7–10.
[8] Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. 2017. DeepFM: A factorization-machine based neural network for CTR prediction. arXiv preprint arXiv:1703.04247 (2017).
[9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 770–778.
[10] Xiangnan He and Tat-Seng Chua. 2017. Neural factorization machines for sparse predictive analytics. (2017).
[11] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 173–182.
[12] Xinran He, Junfeng Pan, Ou Jin, Tianbing Xu, Bo Liu, Tao Xu, Yanxin Shi, Antoine Atallah, Ralf Herbrich, Stuart Bowers, et al. 2014. Practical lessons from predicting clicks on ads at Facebook. In Proceedings of the Eighth International Workshop on Data Mining for Online Advertising. ACM, 1–9.
[13] Tongwen Huang, Zhiqi Zhang, and Junlin Zhang. 2019. FiBiNET: Combining feature importance and bilinear feature interaction for click-through rate prediction. arXiv preprint arXiv:1905.09433 (2019).
[14] Yuchin Juan, Yong Zhuang, Wei-Sheng Chin, and Chih-Jen Lin. 2016. Field-aware factorization machines for CTR prediction. In Proceedings of the 10th ACM Conference on Recommender Systems. ACM, 43–50.
[15] Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
[16] Jianxun Lian, Xiaohuan Zhou, Fuzheng Zhang, Zhongxia Chen, Xing Xie, and Guangzhong Sun. 2018. xDeepFM: Combining explicit and implicit feature interactions for recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 1754–1763.
[17] Xiaoliang Ling, Weiwei Deng, Chen Gu, Hucheng Zhou, Cui Li, and Feng Sun. 2017. Model ensemble for click prediction in Bing search ads. In Proceedings of the 26th International Conference on World Wide Web Companion. International World Wide Web Conferences Steering Committee, 689–698.
[18] H. Brendan McMahan. 2011. Follow-the-regularized-leader and mirror descent: Equivalence theorems and L1 regularization. (2011).
[19] H. Brendan McMahan, Gary Holt, David Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, Todd Phillips, Eugene Davydov, Daniel Golovin, et al. 2013. Ad click prediction: A view from the trenches. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 1222–1230.
[20] Yanru Qu, Han Cai, Kan Ren, Weinan Zhang, Yong Yu, Ying Wen, and Jun Wang. 2016. Product-based neural networks for user response prediction. In 2016 IEEE 16th International Conference on Data Mining (ICDM). IEEE, 1149–1154.
[21] Yanru Qu, Bohui Fang, Weinan Zhang, Ruiming Tang, Minzhe Niu, Huifeng Guo, Yong Yu, and Xiuqiang He. 2018. Product-based neural networks for user response prediction over multi-field categorical data. ACM Transactions on Information Systems (TOIS) 37, 1 (2018), 5.
[22] Steffen Rendle. 2010. Factorization machines. In 2010 IEEE International Conference on Data Mining. IEEE, 995–1000.
[23] Matthew Richardson, Ewa Dominowska, and Robert Ragno. 2007. Predicting clicks: Estimating the click-through rate for new ads. In Proceedings of the 16th International Conference on World Wide Web. ACM, 521–530.
[24] Ying Shan, T. Ryan Hoens, Jian Jiao, Haijing Wang, Dong Yu, and JC Mao. 2016. Deep crossing: Web-scale modeling without manually crafted combinatorial features. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 255–262.
[25] Weiping Song, Chence Shi, Zhiping Xiao, Zhijian Duan, Yewen Xu, Ming Zhang, and Jian Tang. 2018. AutoInt: Automatic feature interaction learning via self-attentive neural networks. arXiv preprint arXiv:1810.11921 (2018).
[26] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. 5998–6008.
[27] Andreas Veit, Michael J. Wilber, and Serge Belongie. 2016. Residual networks behave like ensembles of relatively shallow networks. In Advances in Neural Information Processing Systems. 550–558.
[28] Ruoxi Wang, Bin Fu, Gang Fu, and Mingliang Wang. 2017. Deep & cross network for ad click predictions. In Proceedings of the ADKDD'17. ACM, 12.
[29] Jun Xiao, Hao Ye, Xiangnan He, Hanwang Zhang, Fei Wu, and Tat-Seng Chua. 2017. Attentional factorization machines: Learning the weight of feature interactions via attention networks. arXiv preprint arXiv:1708.04617 (2017).
[30] Shuai Zhang, Lina Yao, Aixin Sun, and Yi Tay. 2019. Deep learning based recommender system: A survey and new perspectives. ACM Computing Surveys (CSUR) 52, 1 (2019), 5.
[31] Weinan Zhang, Tianming Du, and Jun Wang. 2016. Deep learning over multi-field categorical data. In European Conference on Information Retrieval. Springer, 45–57.
[32] Weinan Zhang, Shuai Yuan, Jun Wang, and Xuehua Shen. 2014. Real-time bidding benchmarking with iPinYou dataset. arXiv preprint arXiv:1407.7073 (2014).
[33] Guorui Zhou, Na Mou, Ying Fan, Qi Pi, Weijie Bian, Chang Zhou, Xiaoqiang Zhu,
1267
+ and Kun Gai. 2018. Deep Interest Evolution Network for Click-Through Rate
1268
+ Prediction. arXiv preprint arXiv:1809.03672 (2018).
1269
+ [34] Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui
1270
+ Yan, Junqi Jin, Han Li, and Kun Gai. 2018. Deep interest network for click-through
1271
+ rate prediction. In Proceedings of the 24th ACM SIGKDD International Conference
1272
+ on Knowledge Discovery & Data Mining. ACM, 1059–1068.
1273
+ [35] Jie Zhu, Ying Shan, JC Mao, Dong Yu, Holakou Rahmanian, and Yi Zhang. 2017.
1274
+ Deep embedding forest: Forest-based serving with deep embedding features.
1275
+ In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge
1276
+ Discovery and Data Mining. ACM, 1703–1711.
1277
+
VdAzT4oBgHgl3EQfJ_tt/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
Y9E5T4oBgHgl3EQfDA5Y/content/tmp_files/2301.05401v1.pdf.txt ADDED
@@ -0,0 +1,4445 @@
arXiv:2301.05401v1 [astro-ph.HE] 13 Jan 2023
Draft version January 16, 2023
Typeset using LaTeX twocolumn style in AASTeX63

Long-duration Gamma-ray Burst Progenitors and Magnetar Formation

Cui-Ying Song (1) and Tong Liu (2)
(1) Tsung-Dao Lee Institute, Shanghai Jiao Tong University, Shanghai 200240, China; [email protected]
(2) Department of Astronomy, Xiamen University, Xiamen 361005, China; [email protected]

ABSTRACT

Millisecond magnetars produced in the center of dying massive stars are one prominent model to power gamma-ray bursts (GRBs). Their detailed nature, however, remains unsolved. To explore the effects of the initial mass, rotation, mass loss, and metallicity of the progenitor stars of 10 − 30 M⊙ on the formation and properties of the protomagnetars, we evolve over 150 single star models from the pre-main-sequence to core collapse by using the stellar evolution code MESA. We find that all of the fast-rotating stars become Wolf-Rayet stars. The final stellar, helium, and carbon-oxygen core masses roughly increase with increasing initial mass and decrease moderately with increasing initial rotation rate. We illustrate the effects of these intrinsic signatures on the hydrogen and helium envelopes and the metallicity. We then discuss the progenitors of the different types of supernovae. Furthermore, we find that the compactness parameter remains a nonmonotonic function of the initial mass and initial velocity when the effects of different metallicity and wind mass loss are considered. More importantly, we present the estimated period, magnetic field strength, and masses of protomagnetars in all cases. The typical rotational energy of these millisecond magnetars is sufficient to power long-duration GRBs.

Keywords: Massive stars (732) - Stellar evolutionary models (2046) - Gamma-ray bursts (629) - Magnetars (992)
1. INTRODUCTION

Observations of long-duration gamma-ray bursts (LGRBs) associated with core-collapse supernovae (CCSNe, e.g., Galama et al. 1998; Bloom et al. 1999; Hjorth et al. 2003; Stanek et al. 2003) suggest that some fraction of LGRBs originate from the death of massive stars (see reviews by Piran 2004; Woosley & Bloom 2006; Kumar & Zhang 2015; Cano et al. 2017). After the massive star collapses, a black hole (BH) hyperaccretion system (e.g., Woosley 1993; MacFadyen & Woosley 1999; Liu et al. 2017) or a rapidly rotating neutron star (NS) with a strong magnetic field (magnetar, e.g., Usov 1992; Duncan & Thompson 1992; Wheeler et al. 2000; Zhang & Mészáros 2001) might be formed. In the BH hyperaccretion scenario, the annihilation of neutrinos and antineutrinos escaping from neutrino-dominated accretion disks (e.g., Popham et al. 1999; Narayan et al. 2001; Chen & Beloborodov 2007; Gu et al. 2006; Liu et al. 2007, 2015; Zalamea & Beloborodov 2011; Kawanaka et al. 2013; see a review by Liu et al. 2017) or the extraction of BH rotational energy by large-scale magnetic fields, i.e., the Blandford-Znajek mechanism (e.g., Blandford & Znajek 1977; Lee et al. 2000; Wang et al. 2002; McKinney 2005; McKinney et al. 2012; Komissarov & Barkov 2009; Tchekhovskoy et al. 2011; Lei et al. 2013), can launch ultrarelativistic jets. The energy released by the spin-down of newborn magnetars could also account for gamma-ray bursts (GRBs, e.g., Usov 1992; Wheeler et al. 2000; Metzger et al. 2011, 2017; Rowlinson et al. 2013; Kaspi & Beloborodov 2017). The collimated jet production mechanism of magnetars has been studied in the literature (Komissarov & Barkov 2007; Bucciantini et al. 2008, 2009, 2012; Kiuchi et al. 2012; Siegel et al. 2014; Shankar et al. 2021). If the ultrarelativistic jet produced by the central engine can break out of the progenitor envelope and circumstellar medium and lies along the line of sight, an observable LGRB can be triggered.
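The spin-down energy budget invoked in this magnetar scenario is easy to quantify. The sketch below evaluates E_rot = (1/2) I Ω² = 2π² I / P² for a fiducial neutron-star moment of inertia I ≈ 10^45 g cm²; that value is a standard round number assumed here, not taken from the text.

```python
import math

I_NS = 1.0e45  # fiducial NS moment of inertia [g cm^2]; assumed, not from the text


def rotational_energy(period_s, moment_of_inertia=I_NS):
    """Spin rotational energy E_rot = (1/2) I (2 pi / P)^2 = 2 pi^2 I / P^2, in erg."""
    return 2.0 * math.pi ** 2 * moment_of_inertia / period_s ** 2
```

A 1 ms spin period stores roughly 2 × 10^52 erg, comparable to LGRB energetics, while a 10 ms period stores two orders of magnitude less, which is why millisecond rotation is essential to this engine.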
The characteristics of LGRB progenitors are still unclear, since no direct observational evidence has been presented to reveal the association of any progenitor with the observed LGRBs. The event rate of LGRBs is much lower than that of CCSNe (e.g., Podsiadlowski et al. 2004; Izzard et al. 2004; Guetta et al. 2005; Soderberg et al. 2006; Wanderman & Piran 2010), implying that only a small fraction of massive stars could produce LGRBs and CCSNe at the end of their lives. Soderberg et al. (2010) estimated that the ratio of LGRBs to Type Ibc supernovae (SNe) is approximately 1% based on a radio survey. Georgy et al. (2012) obtained this fraction from rotating stellar models and found that it could exceed 25%. The absence of hydrogen/helium lines in the spectra of SNe associated with LGRBs suggests that their progenitors have undergone violent mass loss or chemical mixing of elements. Wolf-Rayet (WR) stars, which strip their H/He envelopes before explosion, have been proposed as GRB progenitors (e.g., Woosley 1993; Woosley & Heger 2006; Crowther 2007). WR stars can be subdivided into three classes based on the broad emission lines exhibited in their spectra: N-rich WR stars (WN) and C-rich and O-rich WR stars (WC/WO). The origin of WR stars is an ongoing study. It is widely believed that rapidly rotating single massive stars could undergo quasi-chemically homogeneous evolution (e.g., Yoon & Langer 2005; Yoon et al. 2006; Ekström et al. 2012; Maeder & Meynet 2012), allowing them to lose their envelopes while still retaining enough angular momentum to produce LGRBs. Furthermore, WR stars can also be born in massive close binaries through tidal interactions with their companion stars (e.g., Cantiello et al. 2007; Yoon et al. 2010; Eldridge et al. 2011; Langer 2012; de Mink et al. 2013).
Several recent studies have investigated LGRB progenitor evolution models and the corresponding remnants (Heger et al. 2003; Hirschi et al. 2005; Obergaulinger & Aloy 2017; Aguilera-Dena et al. 2018; Aloy & Obergaulinger 2021). Roy et al. (2020) showed the effects of mass, rotation rate, and metallicity of massive stars on their evolution into WN and subsequently into WC stars, which are possible Type Ic SN/GRB progenitors. By studying the evolution of the surface helium and nitrogen mass fractions, they found that an O star with rotation rate Ω/Ωcrit ≥ 0.4 could evolve into the WN phase. This might indicate that rapidly rotating massive stars could satisfy the requirements of Type Ic SN/LGRB production. Aguilera-Dena et al. (2020) assumed that their successful neutrino-driven explosion models would produce magnetars and power superluminous SNe, and that failed explosion models would lead to BH formation and power LGRBs. The subnormal distribution of the initial SN explosion energy could naturally build the "lower mass gap" in the mass distribution of compact objects (Liu et al. 2021). However, in addition to the collapsar, the magnetar might also be the central engine of LGRBs. It is difficult to determine whether the central engine of a GRB is a BH or a magnetar. The shallow decay phase (Dai 2004; Zhang et al. 2006; Dall'Osso et al. 2011; Li et al. 2016) and "internal plateau" (Troja et al. 2007; Lyons et al. 2010) in some GRB afterglow light curves have been proposed as magnetar signatures, but the BH hyperaccretion model cannot be ruled out, especially for long-lasting plateaus (e.g., Yi et al. 2022). Regardless of the type of central engine, the shallow decay phase might be related to a precessing jet (Huang & Liu 2021). In this paper, we focus on the progenitors of LGRBs and the formation of magnetars.
The physical conditions for dying massive stars to produce GRBs remain an open question. Developments to enhance theoretical models are required to study how various physical parameters could replicate the formation conditions and characteristics of the GRB central engines. Conversely, to predict the final fate of massive stars and remnants, researchers should use observational GRB and SN data to constrain stellar evolution models.

Figure 1. (a) Hertzsprung-Russell diagram of the massive star with different initial masses Mini = 15 and 30 M⊙, metallicity Z = 0.01 and 0.1 Z⊙, initial rotation rates Ω/Ωcrit = 0.1 and 0.6, and scaling factors of the "Dutch" wind loss rate ηwind = 0.5 and 1. (b) Evolution of the central density and temperature in the models. The endpoint of each line is marked by a pentagram, corresponding to the time of the iron core collapse.
In the framework of the 1D single star evolution scenario, we explore how initial rotation, mass, metallicity, and mass loss impact the final fate of massive stars. More importantly, the initial signatures of the magnetars are discussed. This paper is structured as follows. In Section 2, we describe the physical ingredients of our stellar evolution models. We present our analysis of pre-SNe and the characteristics of the protomagnetars in Section 3. Conclusions and discussion are presented in Section 4.
+ Section 4.
210
+ 2. PROGENITOR MODELING IN MESA
211
+ We
212
+ use
213
+ the
214
+ Modules
215
+ for
216
+ Experiments
217
+ in
218
+ Stellar Astrophysics software package (MESA,
219
+ Paxton et al. 2011, 2013, 2015, 2018, 2019, version
220
+ 21.12.1) to simulate a large set of 158 massive star
221
+ models from the pre-main-sequence stage until the
222
+ onset of iron core collapse. Here, we define collapse
223
+ as the moment when the infall velocity of any point
224
+ inside the stellar model exceeds 1,000 km s−1. We
225
+ take the test suite 20M_pre_ms_to_core_collapse
226
+ as our baseline model.
227
+ Our stellar models cover
228
+ an initial mass range between 10 and 30 M⊙
229
+ with a step size of 1 M⊙, which are expected
230
+ to form NSs upon collapse.
231
+ The initial metal-
232
+ licity is Z = 0.01 and 0.1 Z⊙, where Z is the
233
+ mass fraction of elements heavier than helium and
234
+ Z⊙ is the solar metallicity (e.g., Grevesse & Sauval
235
+ 1998; von Steiger & Zurbuchen 2016).
236
+ Following
237
+ Tout et al. (1996) and Pols et al. (1998), we use
238
+ the initial helium fraction Y = 0.24 + 2Z and ini-
239
+ tial hydrogen mass fraction X = 1 − Y − Z. The
240
+ initial rotational rate Ω/Ωcrit = 0.1 − 0.8 in 0.1
241
+ intervals, which corresponds to the initial equato-
242
+ rial angular velocity Ω ≈ 80 − 770 km s−1. Here,
243
+ Ωcrit = (GM/R3
244
+ e)1/2 is the critical angular veloc-
245
+ ity at the equator of the star, where M and Re
246
+ denote the mass of the star and the equator ra-
247
+ dius, respectively. For stellar winds, all models are
248
+ evolved with the “Dutch” wind-loss scheme (e.g.,
249
+ de Jager et al. 1988;
250
+ Nieuwenhuijzen & de Jager
251
+ 1990;
252
+ Nugis & Lamers
253
+ 2000;
254
+ Vink et al.
255
+ 2001;
256
+
257
+ 4
258
+ Song et al.
259
+ Figure 2. The evolution of stellar mass and total angular momentum.
260
+ Glebbeek et al.
261
+ 2009),
262
+ and
263
+ the
264
+ scaling
265
+ factor
266
+ ηwind = 0.5 or 1. We treat convection using the
267
+ MLT++ scheme of MESA. According to the Ledoux
268
+ criterion and standard mixing length theory (MLT)
269
+ approximation (Cox & Giuli 1968), we chose a mix-
270
+ ing length parameter αMLT = 1.5.
271
+ The effect
272
+ of semiconvection (Langer 1991; Yoon et al. 2006)
273
+ is included, and we adopt a baseline choice of
274
+ αsc = 0.01 except after core helium depletion,
275
+ where αsc = 0.
276
+ Considering thermohaline mix-
277
+ ing (Kippenhahn et al. 1980), we set αth = 2 and
278
+ 0 before and after core helium depletion, respec-
279
+ tively.
280
+ The hydrodynamic mixing instabilities at
281
+ convective boundaries, called overshoot mixing, are
282
+ treated as an exponential decay process (Herwig
283
+ 2000) beyond the Schwarzschild boundary.
284
+ We
285
+ adopt overshoot mixing parameters fov = 0.005
286
+ and f0 = 0.001 (Paxton et al. 2011).
287
+ The de-
288
+ tailed definition and range of these parameters are
289
+ described and discussed by Paxton et al. (2011,
290
+ 2013), Farmer et al. (2016) and on the MESA web-
291
+ site1.
292
+ 1 https://docs.mesastar.org/en/release-r22.05.1/
293
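The initial composition and rotation setup described above can be reproduced with a short numerical sketch. The physical constants, the assumed solar metallicity Z⊙ = 0.0142, and the sample zero-age-main-sequence equatorial radius below are illustrative choices that are not fixed by the text.

```python
import math

G = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33  # solar mass [g]
R_SUN = 6.957e10  # solar radius [cm]
Z_SUN = 0.0142    # assumed solar metallicity; the adopted value is not given in the text


def initial_composition(z_over_zsun):
    """Initial (X, Y, Z) mass fractions from Y = 0.24 + 2Z and X = 1 - Y - Z."""
    z = z_over_zsun * Z_SUN
    y = 0.24 + 2.0 * z
    x = 1.0 - y - z
    return x, y, z


def omega_crit(mass_msun, r_eq_rsun):
    """Critical equatorial angular velocity (G M / R_e^3)^(1/2) [rad/s]."""
    return math.sqrt(G * mass_msun * M_SUN / (r_eq_rsun * R_SUN) ** 3)


def v_eq(omega_fraction, mass_msun, r_eq_rsun):
    """Equatorial rotation velocity for Omega = omega_fraction * Omega_crit [km/s]."""
    return omega_fraction * omega_crit(mass_msun, r_eq_rsun) * r_eq_rsun * R_SUN / 1.0e5
```

For a 20 M⊙ star with an assumed equatorial radius of ~6 R⊙, Ω/Ωcrit = 0.1 − 0.8 maps to roughly 80 − 640 km s−1, of the same order as the 80 − 770 km s−1 range quoted above.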
We use the approx21_cr60_plus_co56.net network to compute nuclear reactions in the stellar interior (Timmes 1999; Paxton et al. 2011). This approximate nuclear reaction network consists of 21 base isotopes plus 60Cr and 56Co, and it is a traditional workhorse in massive star models. Nuclear reaction rates are from JINA REACLIB (Cyburt et al. 2010), and Type 2 opacity tables (Iglesias & Rogers 1996) and the Skye equation of state (Jermyn et al. 2021) are adopted.
We start by running models at the pre-main-sequence stage and activate rotation near the zero-age main sequence. The implementation of rotation in MESA closely follows Heger et al. (2000) and Heger et al. (2005). Five different rotationally induced mixing processes are included: the Solberg-Høiland instability, secular shear instability, Eddington-Sweet circulation, Goldreich-Schubert-Fricke instability, and Spruit-Tayler dynamo, which could lead to angular momentum redistribution and chemical mixing. It should be noted that the Spruit-Tayler dynamo only operates in radiative regions of the star and produces poloidal magnetic fields. The toroidal magnetic fields generated by differential winding are orders of magnitude larger than the poloidal magnetic fields (Heger et al. 2005). We assume a compositional mixing efficiency fc = 1/30 (Heger et al. 2000).
The stellar evolution tracks and central conditions for some of our models are shown in Figure 1. The blue and red lines indicate initial masses M = 15 and 30 M⊙, respectively. The darker lines indicate models with a faster initial rotation speed. The solid, dotted, and dashed lines correspond to different metallicities and wind scaling factors, (Z/Z⊙, ηwind) = (0.1, 1), (0.1, 0.5), and (0.01, 1), respectively. The endpoint of each line is marked by a pentagram, corresponding to the time of the iron core collapse. The more rapidly rotating models evolve toward the blue part of the Hertzsprung-Russell (H-R) diagram and end their lives as WR stars. The slower rotating models evolve toward the red part of the H-R diagram and become red supergiants. The effective temperature and luminosity generally increase with increasing initial mass, rotation velocity, and wind scaling factor, and decrease with increasing metallicity. Roughly, the more massive the stars are, the higher the central density and temperature.

In Figure 2, we present the evolution of stellar mass and total angular momentum. More rapid rotation, lower metallicity, and a smaller wind scaling factor can extend the stellar lifetime. The final mass decreases with increasing rotation rate. The larger the initial velocity of a star is, the greater the angular momentum loss.
3. RESULTS

3.1. Pre-SN Properties

We summarize the properties of all models at the end of their evolution in Table 1 of the Appendix, listing the initial mass, metallicity, rotational rate and corresponding velocity at the equator, the "Dutch" wind scale factor, the initial total angular momentum, and the final mass, age, radius, effective temperature, luminosity, final angular momentum, and He/CO/Fe core masses at the end of the calculation. The compactness parameter, mass, average magnetic field strength, and rotation period of the protomagnetar are also presented in the table.
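The exact definition of the compactness parameter tabulated here is not given in this excerpt; in the core-collapse literature it commonly follows the O'Connor & Ott (2011) convention, ξ_M = (M/M⊙) / (R(M)/1000 km), usually evaluated at an enclosed baryonic mass of 2.5 M⊙. A minimal sketch under that assumed convention:

```python
def compactness(enclosed_mass_msun, radius_km):
    """
    xi_M = (M / Msun) / (R(M) / 1000 km), the common O'Connor & Ott (2011)
    convention; `radius_km` is the radius enclosing `enclosed_mass_msun`
    at the onset of collapse.
    """
    return enclosed_mass_msun / (radius_km / 1000.0)
```

A more compact pre-SN core (smaller radius at fixed enclosed mass) gives a larger ξ, which is often used as a rough predictor of explodability.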
In Figure 3, we show the final stellar masses Mf, helium core masses MHe, carbon-oxygen core masses MCO, and iron core masses MFe of massive stars at the beginning of iron core collapse as a function of the initial surface angular velocity Ω. The different colors correspond to various initial parameters. The He and CO core masses are measured at the mass coordinate where the mass fraction of hydrogen is less than 0.1 for MHe and where the mass fraction of helium decreases below 0.1 for MCO. Here, we define the iron core mass boundary as the outermost location where the Si mass fraction is less than 0.1. Remarkably, the radius and composition of the stellar core might change with different core mass definitions (e.g., Sukhbold & Woosley 2014; Laplace et al. 2021). It is difficult to precisely define the stellar core boundary because there are composition and density gradients at the edge. Moreover, the stellar core mass and boundary are also affected by semiconvection and overshoot (Schootemeijer et al. 2019).
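The threshold rule above (scanning inward from the surface for the first point where the relevant mass fraction drops below 0.1) can be sketched as follows. The toy arrays stand in for a stellar profile and are purely illustrative; a real application would read the mass-coordinate and abundance columns from a MESA profile.

```python
def core_mass(mass_coord, mass_fraction, threshold=0.1):
    """
    Outermost mass coordinate, scanning from the surface inward, where
    `mass_fraction` first drops below `threshold`; returns 0.0 if it never
    does. Both arrays run from the center (index 0) to the surface.
    """
    for m, x in zip(reversed(mass_coord), reversed(mass_fraction)):
        if x < threshold:
            return m
    return 0.0


# Toy profile: hydrogen vanishes inside ~6 Msun, so the He core mass is 6 Msun;
# applied to a helium profile the same rule yields M_CO, and to silicon, M_Fe.
m = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]       # mass coordinate [Msun]
x_h = [0.0, 0.0, 0.05, 0.3, 0.6, 0.7]      # hydrogen mass fraction
```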
It is obvious from Figures 3(a) and (b) that the MHe values approximate the Mf values when Ω/Ωcrit ≥ 0.4; for the more massive and lower metallicity stars, the corresponding threshold is Ω/Ωcrit ≥ 0.3. In other words, these stars almost entirely lose their hydrogen envelopes and become naked He cores. Note that Mf, MHe, MCO, and MFe show little change when Ω/Ωcrit ≥ 0.4. The values of Mf, MHe, and MCO are positively correlated with the initial mass Mini and negatively correlated with the initial metallicity Z and wind scaling factor ηwind when the initial surface angular velocity is low. This is similar to models without rotation. For high initial surface angular velocity, rotation plays an important role in enhancing stellar mass loss according to the prescription Ṁ ∝ 1/(1 − Ω/Ωcrit)^0.43 (Paxton et al. 2013).

Similar to Figure 3, the Mf, MHe, MCO, and MFe of massive stars at the beginning of iron core collapse as a function of initial stellar mass Mini are presented in Figure 4. Here, we set Ω/Ωcrit = 0.6.
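The rotational enhancement of the mass-loss rate quoted above, Ṁ ∝ 1/(1 − Ω/Ωcrit)^0.43 (Paxton et al. 2013), can be written as a simple boost factor. This sketch captures only the rotation-dependent part, not the underlying "Dutch" rate itself.

```python
def rotational_boost(omega_fraction, xi=0.43):
    """Wind mass-loss enhancement factor 1 / (1 - Omega/Omega_crit)^xi."""
    if not 0.0 <= omega_fraction < 1.0:
        raise ValueError("Omega/Omega_crit must lie in [0, 1)")
    return (1.0 - omega_fraction) ** (-xi)
```

The factor grows slowly at moderate rotation and diverges toward critical rotation, which is why mass loss becomes rotation-dominated only for the fastest rotators.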
Figure 3. Final stellar mass Mf, He core mass MHe, CO core mass MCO, and iron core mass MFe at the beginning of core collapse as a function of initial surface angular velocity Ω/Ωcrit.

Figure 4. Final stellar mass Mf, He core mass MHe, CO core mass MCO, and iron core mass MFe at the beginning of core collapse as a function of initial stellar mass Mini.

Figure 5. Final hydrogen, helium, and metallicity mass fraction on the surface versus initial rotation rate for different initial masses, metallicities, and mass loss rates.

Figure 6. Final hydrogen, helium, and metallicity mass fraction on the surface versus initial mass for different mass loss rates and initial metallicity values at the initial surface angular velocity Ω = 0.6 Ωcrit.
+ It is easy to find that Mf, MHe, and MCO increase as the initial mass increases, except for some fluctuations. MFe of massive stars with different initial Z and ηwind fluctuates with Ω and Mini, and there is no obvious correlation. Iron core masses in our models range from 1.29 to 1.92 M⊙, and most of them are concentrated between 1.4 and 1.6 M⊙.
+ In Figure 5, we show the final hydrogen (upper panel), helium (middle panel), and metallicity (lower panel) mass fraction on the surface versus Ω for different Mini, Z, and ηwind. As shown in the left panel of Figure 5, all of the hydrogen mass fractions on the surface of massive stars are lower than 0.1 when Ω/Ωcrit ≥ 0.4. This suggests that rapid rotation can cause stars to lose their hydrogen envelopes. It should be noted that the mass fractions of hydrogen, helium, and metallicity show only slight changes when Ω/Ωcrit ≥ 0.4. In Figure 6, we show the final hydrogen, helium, and metallicity mass fraction values on the surface versus Mini for different Z and ηwind. Here, we set Ω/Ωcrit = 0.6. The results reveal that the surface helium mass fraction decreases gradually with increasing Mini, while the surface metallicity mass fraction displays the opposite trend.
+ Massive stars without hydrogen envelopes could core-collapse and give rise to type Ib or Ic SNe. Type Ib and Ic SNe are distinguished based on the presence or absence of helium lines in the spectrum. Sometimes it is difficult to distinguish them from limited observational data. Most progenitor models still retain some helium after the core collapse. The characteristics of the progenitor stars that lead to the difference between Ib and Ic SNe remain unknown (Hachinger et al. 2012; Dessart et al. 2011, 2012, 2022). Therefore, these events are usually collectively referred to as Ibc SNe. Following Eldridge & Tout (2004) and Eldridge (2005), we use the surface helium abundance Ysurface to estimate the SN type produced by the progenitors. If Ysurface < 0.3, a type Ic SN occurs; if 0.3 < Ysurface < 0.7, we label the star as type Ibc due to the uncertainty; and if Ysurface > 0.7, a type Ib SN results. As shown in the middle panels of Figures 5 and 6 and the He masses in Figures 3 and 4, most of our progenitor models would give rise to Ibc SNe, and a small fraction of them would produce Ic or Ib SNe.
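The Ysurface thresholds above map onto SN types with a simple rule. A minimal sketch of that classification (the function name is ours):

```python
def sn_type(y_surface):
    """Expected SN type from the surface helium mass fraction,
    using the thresholds quoted in the text (following
    Eldridge & Tout 2004): Ic below 0.3, Ib above 0.7, and the
    ambiguous 'Ibc' label in between."""
    if not 0.0 <= y_surface <= 1.0:
        raise ValueError("mass fraction must lie in [0, 1]")
    if y_surface < 0.3:
        return "Ic"
    if y_surface > 0.7:
        return "Ib"
    return "Ibc"
```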
+ 3.2. Protomagnetar signatures
+ After a massive star collapses, a proto-NS might be born in the center. In this paper, we take the iron core masses as the NS baryonic masses Mb. The binding energy is released by emitting neutrinos as the iron core collapses to the NS. Following Lattimer & Prakash (2001), the NS gravitational mass MNS can be calculated from
+ (Mb − MNS)/MNS = [0.6 GMNS/(RNS c^2)] / [1 − 0.5 GMNS/(RNS c^2)],  (1)
+ where RNS is the NS radius.
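Equation (1) is implicit in MNS, so in practice it is inverted numerically. A minimal sketch using bisection, assuming RNS = 12 km as adopted later in the text (constants in cgs; the helper names are ours):

```python
G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
C = 2.998e10       # speed of light, cm s^-1
MSUN = 1.989e33    # solar mass, g
RNS = 12e5         # assumed NS radius of 12 km, in cm

def baryonic_from_grav(m_ns):
    """Baryonic mass (Msun) implied by Eq. (1) for a
    gravitational mass m_ns (Msun)."""
    beta = G * m_ns * MSUN / (RNS * C**2)
    return m_ns * (1.0 + 0.6 * beta / (1.0 - 0.5 * beta))

def grav_from_baryonic(m_b, lo=0.1, hi=3.0):
    """Invert Eq. (1) by bisection: the gravitational mass whose
    implied baryonic mass equals the iron-core mass m_b (Msun).
    baryonic_from_grav is monotonically increasing on [lo, hi]."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if baryonic_from_grav(mid) < m_b:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For an iron-core (baryonic) mass of 1.5 M⊙ this gives MNS ≈ 1.35 M⊙, i.e. roughly 10% of the rest-mass energy is carried away by neutrinos.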
+ The change in NS mass due to material fallback accretion during the explosion is negligible, especially for stars with zero-age-main-sequence masses less than 30 M⊙. The results of accurate calculations including fallback exhibit good agreement with this simple approach (e.g., Sukhbold et al. 2016; Ertl et al. 2020).
+ Magnetic fields can be generated by differential rotation in radiative regions (e.g., Spruit 2002; Petrovic et al. 2005; Heger et al. 2005). The average strength of the dynamo-generated magnetic field inside the iron core is given by
+ <Bφ> = ∫₀^MFe Bφ(m) dm / ∫₀^MFe dm.  (2)
+ Assuming conservation of magnetic flux and that the iron core contracts homogeneously to an NS with radius RNS = 12 km, we obtain the average surface magnetic field strength of the NSs.
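Under flux freezing, B R² is conserved, so homologous contraction amplifies the average field by (RFe/RNS)². A minimal sketch of that scaling (the function name is ours):

```python
def ns_surface_field(b_core_avg, r_fe_km, r_ns_km=12.0):
    """Average NS surface field from magnetic-flux conservation,
    B_NS = <B_phi> * (R_Fe / R_NS)**2, for an iron core of radius
    r_fe_km collapsing homogeneously to an NS of radius r_ns_km."""
    return b_core_avg * (r_fe_km / r_ns_km) ** 2
```

As an arithmetic illustration, an iron core of ~1,000 km is compressed by a factor of ~83 in radius, amplifying its field by ~7 × 10^3; a core-average field of ~10^10 G would then map onto ~10^14 G at the NS surface.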
+ The core compactness was defined as (O’Connor & Ott 2011)
+ ξ_M0 = (M0/M⊙) / (R(Mbary = M0)/1000 km),  (3)
+ where M0 = 2.5 M⊙ was chosen to evaluate the compactness parameter (e.g., Ugliano et al. 2012; Sukhbold & Woosley 2014), which corresponds to the point where the collapse speed first reaches
+ 1,000 km s−1. This has been pointed out as a rough indicator of whether the collapse of a nonrotating stellar core could lead to a successful neutrino-driven explosion (ξ2.5 < 0.45) or form a BH (ξ2.5 > 0.45). While this criterion is not sufficient to accurately predict the fate of rapidly spinning stars, it can still reveal structural features of the stellar core (e.g., Ertl et al. 2016; Müller et al. 2016; Aguilera-Dena et al. 2020).
+ Figure 7. The periods PNS of the protomagnetar as a function of initial mass Mini. All progenitor models are included.
+ Figure 8. The masses MNS of the protomagnetar as a function of initial mass Mini. All progenitor models are included.
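Given a pre-collapse profile of enclosed baryonic mass versus radius, Eq. (3) amounts to interpolating the radius that encloses 2.5 M⊙. A stdlib-only sketch (the interpolation helper and the profile arrays are ours, not a MESA API):

```python
def compactness(mass_msun, radius_km, m0=2.5):
    """xi_M0 = (M0/Msun) / (R(M_bary = M0) / 1000 km), Eq. (3)
    (O'Connor & Ott 2011). `mass_msun` and `radius_km` are
    monotonically increasing profile arrays of enclosed baryonic
    mass and radius; R is found by linear interpolation at m0."""
    pts = list(zip(mass_msun, radius_km))
    for (m1, r1), (m2, r2) in zip(pts, pts[1:]):
        if m1 <= m0 <= m2:
            r = r1 + (r2 - r1) * (m0 - m1) / (m2 - m1)
            return m0 / (r / 1000.0)
    raise ValueError("profile does not enclose M0")
```

A star whose 2.5 M⊙ mass coordinate sits at 10,000 km has ξ2.5 = 0.25; more dilute cores (larger enclosing radii) give smaller ξ2.5 and, by the criterion above, easier explosions.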
+ The natal spin rates of NSs can be determined by the angular momentum within the iron core:
+ PNS = 2π INS / JFe.  (4)
+ Here, the moment of inertia of the NSs is set as INS = 0.35 MNS RNS² (e.g., Lattimer & Prakash 2001).
+ Figure 9. The average magnetic field strength of the protomagnetar as a function of initial mass Mini. All progenitor models are included.
+ Figure 10. Compactness parameters ξ2.5 of progenitor models as a function of initial rotation rate Ω/Ωcrit and mass Mini. The ξ2.5 = 0.45 line separates models that could explode (gray) from disallowed (white) regions.
+ Figure 11. The total rotation energy Erot as a function of period PNS. All models involved in the diagram eventually evolved into WR stars.
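Equation (4) plus the moment-of-inertia prescription gives the natal period directly from the iron-core angular momentum. A minimal sketch in cgs (the function name is ours):

```python
import math

MSUN = 1.989e33   # solar mass, g
RNS = 12e5        # NS radius of 12 km assumed in the text, cm

def natal_period_ms(m_ns_msun, j_fe):
    """Natal spin period from Eq. (4), P = 2*pi*I_NS / J_Fe, with
    I_NS = 0.35 * M_NS * R_NS**2; j_fe is the iron-core angular
    momentum in g cm^2 s^-1. Returns the period in milliseconds."""
    i_ns = 0.35 * m_ns_msun * MSUN * RNS**2
    return 2.0 * math.pi * i_ns / j_fe * 1e3
```

For a 1.4 M⊙ NS, an iron-core angular momentum of a few × 10^48 g cm² s⁻¹ yields the 1−3 ms periods found in Figures 7 and 11.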
+ The periods PNS, masses MNS, and average surface magnetic fields BNS of proto-NSs after core collapse are presented in Figures 7, 8, and 9, respectively. The left and right panels correspond to the “Dutch” wind scaling factors ηwind = 1 and 0.5, respectively. The pentagrams and solid circles represent the different metallicities Z = 0.01 and 0.1 Z⊙. The ratio of the angular velocity to the critical angular velocity Ω/Ωcrit is indicated by the color bar, which varies from 0.1 to 0.8. In Figure 7, we find that when ηwind = 1, the PNS of most proto-NSs born from lower initial metallicity (Z = 0.01 Z⊙) progenitors are larger than those produced by higher initial metallicity (Z = 0.1 Z⊙) progenitors. However, this phenomenon disappears when ηwind = 0.5. The periods PNS range from 1.0 to 2.8 ms. The rotation rate of NSs does not increase monotonically with the initial velocity of the progenitor star, which may be related to the fact that the angular momentum loss during stellar evolution is influenced by several factors. As shown in Figure 8, the masses MNS span from 1.18 M⊙ to 1.68 M⊙, with a median value of 1.4 M⊙.
+ Similar to Figure 7, we find in Figure 9 that the average surface magnetic fields BNS of most proto-NSs born from lower initial metallicity (Z = 0.01 Z⊙) progenitors are smaller than those produced by higher initial metallicity (Z = 0.1 Z⊙) progenitors when ηwind = 1, but not for ηwind = 0.5. Even after taking into account the effects of initial velocity, stellar wind loss, and metallicity, progenitor stars with larger initial masses are still more likely to form fast-spinning NSs. The more massive NSs tend to have faster rotation rates and stronger magnetic field strengths. Regardless, these NSs are clearly classified as millisecond magnetars.
+ The compactness parameter ξ2.5 is calculated according to Equation (3). As shown in Figure 10, ξ2.5 varies with the initial rotation rate Ω/Ωcrit and initial mass Mini of the progenitors. The ξ2.5 = 0.45 line separates models that could explode via the neutrino-driven mechanism (gray) from disallowed (white) regions. It is easy to find that most of our progenitor models could explode successfully. The models with Z = 0.01 Z⊙ and ηwind = 1 appear to have rather low compactness. The values of the compactness parameter are less affected by the initial velocity of the stars, especially for ηwind = 1. For models with the same initial velocity, the compactness parameter varies with the initial mass in a nonmonotonic way, consistent with previous studies (Müller et al. 2016; Aguilera-Dena et al. 2020). The compactness parameter is positively correlated with the mass of the NSs in all cases.
+ To explore the ability of newborn magnetars to power GRBs, the total rotation energy Erot and PNS of magnetars born from different values of initial mass, metallicity, wind loss rate, and initial rotation rate are shown in Figure 11. All of the progenitor stars are WR stars that have lost their hydrogen envelopes. Erot ranges from ∼ 5 × 10^51 to ∼ 2 × 10^52 ergs, and PNS ranges from 1.2 to 2.5 ms. We conclude that stars with larger initial masses tend to produce rapidly rotating magnetars with more rotational energy, even when different initial rotation rates and stellar wind losses are taken into account. This phenomenon is especially evident for stars with Z = 0.01 Z⊙. Considering that the envelopes are easily disrupted by jets in most cases, the above results indicate that our models could meet the energy requirements of most of the observed LGRBs, including some collapsar-origin SGRBs (e.g., Zhang et al. 2009; Levesque et al. 2010; Thöne et al. 2011; Xin et al. 2011). In other words, we present theoretical evidence for magnetar-origin GRBs by simulating the evolution of fast-rotating, low-metallicity, massive stars.
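As a consistency check on the Erot−PNS range above, the rotational energy of a uniformly rotating NS with the same moment of inertia is Erot = (1/2) I Ω² = 2π² I / P². A minimal sketch (the function name is ours):

```python
import math

def rotational_energy_erg(m_ns_msun, period_ms):
    """E_rot = 2*pi^2*I/P^2 in erg, for I = 0.35*M*R^2 with
    R = 12 km, the NS radius and moment of inertia used in
    the text."""
    i_ns = 0.35 * m_ns_msun * 1.989e33 * (12e5) ** 2   # g cm^2
    return 2.0 * math.pi ** 2 * i_ns / (period_ms * 1e-3) ** 2
```

A 1.4 M⊙ protomagnetar spinning at 1.5 ms carries ~1.2 × 10^52 erg, inside the ∼ 5 × 10^51 to 2 × 10^52 erg range read off Figure 11.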
+ 4. CONCLUSIONS AND DISCUSSION
+ We use the MESA code to simulate a grid of fast-rotating (Ω/Ωcrit = 0.1 − 0.8 with an interval of 0.1), low-metallicity (Z = 0.1 and 0.01 Z⊙), massive star (10 − 30 M⊙ with a mass interval of 1 M⊙) models with different “Dutch” wind-loss scaling factors (ηwind = 0.5 and 1) from the pre-main-sequence stage until core collapse. The effects of the initial rotation, mass, metallicity, and mass loss on the formation and characteristics of the GRB progenitor and protomagnetar are explored. The final stellar, helium, and carbon-oxygen core masses roughly increase with increasing initial mass and decrease moderately with increasing initial rotation rate. We also discuss the progenitors of the different types of supernovae. Most of our progenitor models would give rise to type Ibc SNe, and a small fraction of them will produce type Ic or Ib SNe. Furthermore, we find that the compactness parameter remains a nonmonotonic function of the initial mass and initial velocity when the effects of different metallicities and wind mass loss are considered. All of our stellar models evolve to WR stars at high initial rotation rates Ω/Ωcrit ≥ 0.4. The period of a protomagnetar ranges from 1.0 to 2.8 ms with a median value of 1.63 ms, and the mass ranges from 1.18 to 1.68 M⊙ with a median value of 1.41 M⊙. The average surface magnetic field strength is concentrated on the order of ∼ 10^14 G. The Erot can be up to about 2 × 10^52 ergs. The rotational energy, magnetic field strength, and period of the protomagnetar are sufficient to power typical LGRBs.
+ Observations and theory demonstrate that GRB rates and properties vary with redshift due to the different distributions of metallicity, star formation rate, and initial masses of massive stars at different redshifts (e.g., Yonetoku et al. 2004; Hirschi et al. 2005; Langer & Norman 2006; Madau & Dickinson 2014; Taggart & Perley 2021). The evolution of LGRB progenitors is also influenced by redshift (Yoon et al. 2006; Fryer et al. 2022). However, the analysis of the metallicity of GRB host galaxies reveals that GRBs favor subsolar metallicity, even considering the redshift effect (Levan et al. 2016). Therefore, we only consider two subsolar metallicity values, Z = 0.1 and 0.01 Z⊙, in our stellar models.
+ It is unclear how much the evolutionary pathways of single and binary stars contribute to the production of type Ibc SN and GRB progenitors (e.g., Langer 2012). Here, we do not consider the interaction of binary stars and focus on exploring the effects of different parameters on single star evolution and the formation of LGRB central engines. The evolution of binary stars is more complicated than that of a single star. In addition to the usual uncertainties in single star evolution mentioned above, the evolution path and outcome of a binary star system also depend on the initial mass ratio, orbital eccentricity, and material and angular momentum exchange. Binary channels might also be important for the formation of GRB progenitors (e.g., Fryer et al. 1999; Fryer & Heger 2005; Eldridge et al. 2011). In a close binary system, the mass gainer can be spun up to the break-up rotation velocity through mass and angular momentum transport (e.g., Cantiello et al. 2007). Therefore, the binary progenitors of LGRBs do not require a fast initial rotation velocity to strip the stellar hydrogen envelope and retain sufficient angular momentum to produce a GRB jet (e.g., Podsiadlowski et al. 2010; Chrimes et al. 2020). Moreover, Laplace et al. (2021) studied the systematic differences in the core density and composition structure of solar-metallicity nonrotating single and donor stars in binary systems with an initial mass range of 11 − 21 M⊙. They concluded that single stars systematically possess more massive He cores than binary-stripped stars with the same initial mass. Lloyd-Ronning (2022) proposed that radio-loud GRBs are produced by interacting binary systems, while radio-quiet GRBs originate from the collapse of single stars.
+ The material fallback process during an SN explosion can affect the final properties of compact remnants (Zhang et al. 2008; Fryer et al. 2012; Ugliano et al. 2012; Janka 2013; Müller et al. 2019; Ertl et al. 2020). First, the mass of the NS could grow by accretion of fallback material. In this paper, we take the iron core mass as the baryonic mass of the NS, then convert it to the NS gravitational mass through an analytical, radius-dependent approach (Lattimer & Prakash 2001), and do not consider detailed SN explosion physics. The masses of our NSs range from 1.18 M⊙ to 1.68 M⊙, with a median value of 1.41 M⊙. This result exhibits good agreement with previous simulations including fallback (e.g., Ugliano et al. 2012; Müller et al. 2016; Sukhbold et al. 2016). However, if a large amount of matter could fall onto the newly born NS, a black hole might form. Second, recent three-dimensional simulations of nonrotating progenitors indicate that the NS can be spun up to millisecond periods by fallback (Chan et al. 2020). Here, we estimate the rotation rate of the newly born NS by the conservation of the total angular momentum in the iron core, neglecting the fallback process.
+ There is no firm conclusion regarding the upper and lower limits of NS masses born in nature (Özel & Freire 2016). It is difficult to form NSs with masses below 1.2 M⊙ through iron-core collapse SN explosions, although low-mass NSs (≈ 0.93 M⊙) have been observed. These low-mass NSs may be born in electron-capture SN explosions with 8 − 10 M⊙ progenitors (Lattimer 2012). The well-measured massive pulsar masses, such as J1614+2230 (1.97±0.04 M⊙), J0348+0432 (2.01±0.04 M⊙), J2215+5135 (2.27±0.17 M⊙), and J0740+6620 (2.14±0.1 M⊙), indicate that the maximum NS mass can be at least 2 M⊙ (Demorest et al. 2010; Antoniadis et al. 2013; Linares et al. 2018; Cromartie et al. 2020). The observation of a compact object in the binary merger GW190814 (Abbott et al. 2020) could be an indication of the most massive NS mass reaching 2.6 − 2.9 M⊙ (Godzieba et al. 2021). The NS mass distributions in binary systems have been investigated by many authors (Thorsett & Chakrabarty 1999; Fryer et al. 2012). Woosley (2019), Ertl et al. (2020), and Woosley et al. (2020) calculated the evolution and explosion of a grid of nonrotating helium stars and studied the remnant mass distributions. They simplify binary star evolution and assume that the entire hydrogen envelope is promptly removed by binary interaction at the moment of helium core ignition. Their results show that the median NS masses in binary systems are in the range of 1.32 − 1.37 M⊙ and vary slightly with mass loss and metallicity (Woosley et al. 2020). They set the lightest and heaviest NS gravitational masses at 1.24 and 2.3 M⊙, respectively. Previous observations have shown overwhelming evidence that the mass distribution of NSs is bimodal (e.g., Özel et al. 2012; Antoniadis et al. 2016; Alsing et al. 2018), with two peaks at 1.3 M⊙ and 1.5 − 1.7 M⊙.
+ The distribution of the compactness parameter for nonrotating stars has been studied by several authors (O’Connor & Ott 2011; Ugliano et al. 2012; Sukhbold & Woosley 2014; Müller et al. 2016). The approximate critical value separating black hole and NS formation varies among different studies. Müller et al. (2016) proposed a semianalytic method to determine explodability and found that ξ2.5 = 0.278 is the best value for discriminating successful or failed explosions. Aguilera-Dena et al. (2020) calculated the compactness of 42 rotating massive star models with initial equatorial rotation velocities of 600 km s−1 and Z = 0.02 Z⊙. They showed that semianalytic exploding models are compatible with the BH and NS threshold ξ2.5 = 0.45. They suggested that the compactness is a nonmonotonic function of the initial mass for rapidly rotating stars. In this paper, we explore the effects of the initial rotation rate, initial mass, metallicity, and mass loss on compactness. The results show that the compactness parameter remains a nonmonotonic function of the initial mass and initial velocity when the effects of different metallicities and wind mass loss are considered. In the future, we will test the accuracy of the compactness parameter as a criterion for rotating massive star explosions by simulating the process of the SN explosion. We will also investigate the effects of the initial mass, metallicity, and rotation rate on the SN explosion energy, ejecta mass, and remnant characteristics.
+ Software: MESA (Paxton et al. 2011, 2013, 2015, 2018, 2019), Matplotlib (Hunter 2007), NASA ADS, py_mesa_reader, NumPy (van der Walt et al. 2011), and pandas (Reback et al. 2022).
+ ACKNOWLEDGMENTS
+ We appreciate Prof. Alexander Heger and Prof. Bernhard Müller for their constructive suggestions and comments, and thank Prof. Dong Lai, Dr. Tuan Yi, Dr. Zhenyu Zhu, and Shuai Zha for helpful discussion. The computations in this paper were run on the π2.0 cluster supported by the Center for High Performance Computing at Shanghai Jiao Tong University. This work was supported by the National Natural Science Foundation of China under grants 12103033, 12173031, and 12221003.
+ REFERENCES
+ Abbott, R., Abbott, T. D., Abraham, S., et al. 2020, ApJL, 896, L44. doi:10.3847/2041-8213/ab960f
+ Aguilera-Dena, D. R., Langer, N., Antoniadis, J., & Müller, B. 2020, ApJ, 901, 114. doi:10.3847/1538-4357/abb138
+ Aguilera-Dena, D. R., Langer, N., Moriya, T. J., & Schootemeijer, A. 2018, ApJ, 858, 115. doi:10.3847/1538-4357/aabfc1
+ Aloy, M. Á., & Obergaulinger, M. 2021, MNRAS, 500, 4365. doi:10.1093/mnras/staa3273
+ Alsing, J., Silva, H. O., & Berti, E. 2018, MNRAS, 478, 1377. doi:10.1093/mnras/sty1065
+ Antoniadis, J., Freire, P. C. C., Wex, N., et al. 2013, Science, 340, 448. doi:10.1126/science.1233232
+ Antoniadis, J., Tauris, T. M., Özel, F., et al. 2016, arXiv:1605.01665
+ Blandford, R. D., & Znajek, R. L. 1977, MNRAS, 179, 433. doi:10.1093/mnras/179.3.433
+ Bloom, J. S., Kulkarni, S. R., Djorgovski, S. G., et al. 1999, Nature, 401, 453. doi:10.1038/46744
+ Bucciantini, N., Metzger, B. D., Thompson, T. A., & Quataert, E. 2012, MNRAS, 419, 1537. doi:10.1111/j.1365-2966.2011.19810.x
+ Bucciantini, N., Quataert, E., Arons, J., Metzger, B. D., & Thompson, T. A. 2008, MNRAS, 383, L25. doi:10.1111/j.1745-3933.2007.00403.x
+ Bucciantini, N., Quataert, E., Metzger, B. D., et al. 2009, MNRAS, 396, 2038. doi:10.1111/j.1365-2966.2009.14940.x
+ Cano, Z., Wang, S.-Q., Dai, Z.-G., & Wu, X.-F. 2017, Advances in Astronomy, 2017, 8929054. doi:10.1155/2017/8929054
+ Cantiello, M., Yoon, S. C., Langer, N., & Livio, M. 2007, A&A, 465, L29. doi:10.1051/0004-6361:20077115
+ Chan, C., Müller, B., & Heger, A. 2020, MNRAS, 495, 3751. doi:10.1093/mnras/staa1431
+ Chen, W.-X., & Beloborodov, A. M. 2007, ApJ, 657, 383. doi:10.1086/508923
+ Chrimes, A. A., Stanway, E. R., & Eldridge, J. J. 2020, MNRAS, 491, 3479. doi:10.1093/mnras/stz3246
+ Cox, J. P., & Giuli, R. T. 1968, Principles of Stellar Structure (New York: Gordon and Breach)
+ Cromartie, H. T., Fonseca, E., Ransom, S. M., et al. 2020, Nature Astronomy, 4, 72. doi:10.1038/s41550-019-0880-2
+ Crowther, P. A. 2007, ARA&A, 45, 177. doi:10.1146/annurev.astro.45.051806.110615
+ Cyburt, R. H., Amthor, A. M., Ferguson, R., et al. 2010, ApJS, 189, 240. doi:10.1088/0067-0049/189/1/240
+ Dai, Z. G. 2004, ApJ, 606, 1000. doi:10.1086/383019
+ Dall’Osso, S., Stratta, G., Guetta, D., et al. 2011, A&A, 526, A121. doi:10.1051/0004-6361/201014168
+ Dessart, L., Hillier, D. J., Livne, E., et al. 2011, MNRAS, 414, 2985. doi:10.1111/j.1365-2966.2011.18598.x
+ Dessart, L., Hillier, D. J., Li, C., et al. 2012, MNRAS, 424, 2139. doi:10.1111/j.1365-2966.2012.21374.x
+ Dessart, L., Prieto, J. L., Hillier, D. J., et al. 2022, arXiv:2209.13248
+ de Jager, C., Nieuwenhuijzen, H., & van der Hucht, K. A. 1988, A&AS, 72, 259
+ de Mink, S. E., Langer, N., Izzard, R. G., Sana, H., & de Koter, A. 2013, ApJ, 764, 166. doi:10.1088/0004-637X/764/2/166
+ Demorest, P. B., Pennucci, T., Ransom, S. M., Roberts, M. S. E., & Hessels, J. W. T. 2010, Nature, 467, 1081. doi:10.1038/nature09466
+ Duncan, R. C., & Thompson, C. 1992, ApJL, 392, L9. doi:10.1086/186413
+ Ekström, S., Georgy, C., Eggenberger, P., et al. 2012, A&A, 537, A146. doi:10.1051/0004-6361/201117751
+ Eldridge, J. J. 2005, Ph.D. Thesis
+ Eldridge, J. J., Langer, N., & Tout, C. A. 2011, MNRAS, 414, 3501. doi:10.1111/j.1365-2966.2011.18650.x
+ Eldridge, J. J., & Tout, C. A. 2004, MNRAS, 353, 87. doi:10.1111/j.1365-2966.2004.08041.x
+ Ertl, T., Janka, H. T., Woosley, S. E., Sukhbold, T., & Ugliano, M. 2016, ApJ, 818, 124. doi:10.3847/0004-637X/818/2/124
+ Ertl, T., Woosley, S. E., Sukhbold, T., & Janka, H. T. 2020, ApJ, 890, 51. doi:10.3847/1538-4357/ab6458
+ Farmer, R., Fields, C. E., Petermann, I., et al. 2016, ApJS, 227, 22. doi:10.3847/1538-4365/227/2/22
+ Fryer, C. L., Belczynski, K., Wiktorowicz, G., et al. 2012, ApJ, 749, 91. doi:10.1088/0004-637X/749/1/91
+ Fryer, C. L., & Heger, A. 2005, ApJ, 623, 302. doi:10.1086/428379
+ Fryer, C. L., Lien, A. Y., Fruchter, A., et al. 2022, ApJ, 929, 111. doi:10.3847/1538-4357/ac5d5c
+ Fryer, C. L., Woosley, S. E., & Hartmann, D. H. 1999, ApJ, 526, 152. doi:10.1086/307992
+ Galama, T. J., Vreeswijk, P. M., van Paradijs, J., et al. 1998, Nature, 395, 670. doi:10.1038/27150
+ Georgy, C., Ekström, S., Meynet, G., et al. 2012, A&A, 542, A29. doi:10.1051/0004-6361/201118340
+ Glebbeek, E., Gaburov, E., de Mink, S. E., Pols, O. R., & Portegies Zwart, S. F. 2009, A&A, 497, 255. doi:10.1051/0004-6361/200810425
+ Godzieba, D. A., Radice, D., & Bernuzzi, S. 2021, ApJ, 908, 122. doi:10.3847/1538-4357/abd4dd
+ Grevesse, N., & Sauval, A. J. 1998, SSRv, 85, 161. doi:10.1023/A:1005161325181
+ Gu, W.-M., Liu, T., & Lu, J.-F. 2006, ApJL, 643, L87. doi:10.1086/505140
+ Guetta, D., Piran, T., & Waxman, E. 2005, ApJ, 619, 412. doi:10.1086/423125
+ Hachinger, S., Mazzali, P. A., Taubenberger, S., et al. 2012, MNRAS, 422, 70. doi:10.1111/j.1365-2966.2012.20464.x
+ Heger, A., Fryer, C. L., Woosley, S. E., Langer, N., & Hartmann, D. H. 2003, ApJ, 591, 288. doi:10.1086/375341
+ Heger, A., Langer, N., & Woosley, S. E. 2000, ApJ, 528, 368. doi:10.1086/308158
+ Heger, A., Woosley, S. E., & Spruit, H. C. 2005, ApJ, 626, 350. doi:10.1086/429868
+ Herwig, F. 2000, A&A, 360, 952
+ Hirschi, R., Meynet, G., & Maeder, A. 2005, A&A, 443, 581. doi:10.1051/0004-6361:20053329
+ Hjorth, J., Sollerman, J., Møller, P., et al. 2003, Nature, 423, 847. doi:10.1038/nature01750
+ Huang, B.-Q., & Liu, T. 2021, ApJ, 916, 71. doi:10.3847/1538-4357/ac07a0
+ Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90. doi:10.1109/MCSE.2007.55
+ Iglesias, C. A., & Rogers, F. J. 1996, ApJ, 464, 943. doi:10.1086/177381
+ Izzard, R. G., Ramirez-Ruiz, E., & Tout, C. A. 2004, MNRAS, 348, 1215. doi:10.1111/j.1365-2966.2004.07436.x
+ Janka, H.-T. 2013, MNRAS, 434, 1355. doi:10.1093/mnras/stt1106
+ Jermyn, A. S., Schwab, J., Bauer, E., Timmes, F. X., & Potekhin, A. Y. 2021, ApJ, 913, 72. doi:10.3847/1538-4357/abf48e
+ Kaspi, V. M., & Beloborodov, A. M. 2017, ARA&A, 55, 261. doi:10.1146/annurev-astro-081915-023329
+ Kawanaka, N., Piran, T., & Krolik, J. H. 2013, ApJ,
1066
+ 766, 31. doi:10.1088/0004-637X/766/1/31
1067
+ Kippenhahn, R., Ruschenplatt, G., & Thomas, H. C.
1068
+ 1980, A&A, 91, 175
1069
+ Kiuchi, K., Kyutoku, K., & Shibata, M. 2012, PhRvD,
1070
+ 86, 064008. doi:10.1103/PhysRevD.86.064008
1071
+ Komissarov, S. S., & Barkov, M. V. 2007, MNRAS,
1072
+ 382, 1029. doi:10.1111/j.1365-2966.2007.12485.x
1073
+ Komissarov, S. S., & Barkov, M. V. 2009, MNRAS,
1074
+ 397, 1153. doi:10.1111/j.1365-2966.2009.14831.x
1075
+ Kumar, P., & Zhang, B. 2015, PhR, 561, 1.
1076
+ doi:10.1016/j.physrep.2014.09.008
1077
+ Langer, N. 1991, A&A, 252, 669
1078
+ Langer, N. 2012, ARA&A, 50, 107.
1079
+ doi:10.1146/annurev-astro-081811-125534
1080
+ Langer, N., & Norman, C. A. 2006, ApJL, 638, L63.
1081
+ doi:10.1086/500363
1082
+ Laplace, E., Justham, S., Renzo, M., et al. 2021,
1083
+ A&A, 656, A58. doi:10.1051/0004-6361/202140506
1084
+ Lattimer, J. M. 2012, Annual Review of Nuclear and
1085
+ Particle Science, 62, 485.
1086
+ doi:10.1146/annurev-nucl-102711-095018
1087
+ Lattimer, J. M., & Prakash, M. 2001, ApJ, 550, 426.
1088
+ doi:10.1086/319702
1089
+ Lee, H. K., Wijers, R. A. M. J., & Brown, G. E. 2000,
1090
+ PhR, 325, 83. doi:10.1016/S0370-1573(99)00084-8
1091
+ Lei, W.-H., Zhang, B., & Liang, E.-W. 2013, ApJ,
1092
+ 765, 125. doi:10.1088/0004-637X/765/2/125
1093
+ Levan, A., Crowther, P., de Grijs, R., et al. 2016,
1094
+ SSRv, 202, 33. doi:10.1007/s11214-016-0312-x
1095
+ Levesque, E. M., Bloom, J. S., Butler, N. R., et al. 2010, MNRAS, 401, 963. doi:10.1111/j.1365-2966.2009.15733.x
+ Li, A., Zhang, B., Zhang, N.-B., et al. 2016, PhRvD, 94, 083010. doi:10.1103/PhysRevD.94.083010
+ Linares, M., Shahbaz, T., & Casares, J. 2018, ApJ, 859, 54. doi:10.3847/1538-4357/aabde6
+ Liu, T., Gu, W.-M., Xue, L., & Lu, J.-F. 2007, ApJ, 661, 1025. doi:10.1086/513689
+ Liu, T., Gu, W.-M., & Zhang, B. 2017, NewAR, 79, 1. doi:10.1016/j.newar.2017.07.001
+ Liu, T., Hou, S.-J., Xue, L., & Gu, W.-M. 2015, ApJS, 218, 12. doi:10.1088/0067-0049/218/1/12
+ Liu, T., Wei, Y.-F., Xue, L., et al. 2021, ApJ, 908, 106. doi:10.3847/1538-4357/abd24e
+ Lloyd-Ronning, N. 2022, ApJ, 928, 104. doi:10.3847/1538-4357/ac54b3
+ Lyons, N., O’Brien, P. T., Zhang, B., et al. 2010, MNRAS, 402, 705. doi:10.1111/j.1365-2966.2009.15538.x
+ MacFadyen, A. I., & Woosley, S. E. 1999, ApJ, 524, 262. doi:10.1086/307790
+ Madau, P., & Dickinson, M. 2014, ARA&A, 52, 415. doi:10.1146/annurev-astro-081811-125615
+ Maeder, A., & Meynet, G. 2012, Reviews of Modern Physics, 84, 25. doi:10.1103/RevModPhys.84.25
+ McKinney, J. C. 2005, ApJL, 630, L5. doi:10.1086/468184
+ McKinney, J. C., Tchekhovskoy, A., & Blandford, R. D. 2012, MNRAS, 423, 3083. doi:10.1111/j.1365-2966.2012.21074.x
+ Metzger, B. D., Berger, E., & Margalit, B. 2017, ApJ, 841, 14. doi:10.3847/1538-4357/aa633d
+ Metzger, B. D., Giannios, D., Thompson, T. A., Bucciantini, N., & Quataert, E. 2011, MNRAS, 413, 2031. doi:10.1111/j.1365-2966.2011.18280.x
+ Müller, B., Heger, A., Liptai, D., & Cameron, J. B. 2016, MNRAS, 460, 742. doi:10.1093/mnras/stw1083
+ Müller, B., Tauris, T. M., Heger, A., et al. 2019, MNRAS, 484, 3307. doi:10.1093/mnras/stz216
+ Narayan, R., Piran, T., & Kumar, P. 2001, ApJ, 557, 949. doi:10.1086/322267
+ Nieuwenhuijzen, H., & de Jager, C. 1990, A&A, 231, 134.
+ Nugis, T., & Lamers, H. J. G. L. M. 2000, A&A, 360, 227.
+ Obergaulinger, M., & Aloy, M. Á. 2017, MNRAS, 469, L43. doi:10.1093/mnrasl/slx046
+ O’Connor, E., & Ott, C. D. 2011, ApJ, 730, 70. doi:10.1088/0004-637X/730/2/70
+ Özel, F., & Freire, P. 2016, ARA&A, 54, 401. doi:10.1146/annurev-astro-081915-023322
+ Özel, F., Psaltis, D., Narayan, R., & Santos Villarreal, A. 2012, ApJ, 757, 55. doi:10.1088/0004-637X/757/1/55
+ LGRB progenitors and magnetar formation
+ Paxton, B., Bildsten, L., Dotter, A., et al. 2011, ApJS, 192, 3. doi:10.1088/0067-0049/192/1/3
+ Paxton, B., Cantiello, M., Arras, P., et al. 2013, ApJS, 208, 4. doi:10.1088/0067-0049/208/1/4
+ Paxton, B., Marchant, P., Schwab, J., et al. 2015, ApJS, 220, 15. doi:10.1088/0067-0049/220/1/15
+ Paxton, B., Schwab, J., Bauer, E. B., et al. 2018, ApJS, 234, 34. doi:10.3847/1538-4365/aaa5a8
+ Paxton, B., Smolec, R., Schwab, J., et al. 2019, ApJS, 243, 10. doi:10.3847/1538-4365/ab2241
+ Petrovic, J., Langer, N., Yoon, S. C., & Heger, A. 2005, A&A, 435, 247. doi:10.1051/0004-6361:20042545
+ Piran, T. 2004, Reviews of Modern Physics, 76, 1143. doi:10.1103/RevModPhys.76.1143
+ Podsiadlowski, P., Ivanova, N., Justham, S., & Rappaport, S. 2010, MNRAS, 406, 840. doi:10.1111/j.1365-2966.2010.16751.x
+ Podsiadlowski, P., Mazzali, P. A., Nomoto, K., Lazzati, D., & Cappellaro, E. 2004, ApJL, 607, L17. doi:10.1086/421347
+ Pols, O. R., Schröder, K.-P., Hurley, J. R., Tout, C. A., & Eggleton, P. P. 1998, MNRAS, 298, 525. doi:10.1046/j.1365-8711.1998.01658.x
+ Popham, R., Woosley, S. E., & Fryer, C. 1999, ApJ, 518, 356. doi:10.1086/307259
+ Reback, J., Brockmendel, J., McKinney, W., et al. 2020, pandas-dev/pandas: Pandas, Zenodo. doi:10.5281/zenodo.3509134
+ Rowlinson, A., O’Brien, P. T., Metzger, B. D., Tanvir, N. R., & Levan, A. J. 2013, MNRAS, 430, 1061. doi:10.1093/mnras/sts683
+ Roy, A., Sutherland, R. S., Krumholz, M. R., et al. 2020, MNRAS, 494, 3861. doi:10.1093/mnras/staa781
+ Schootemeijer, A., Langer, N., Grin, N. J., & Wang, C. 2019, A&A, 625, A132. doi:10.1051/0004-6361/201935046
+ Shankar, S., Mösta, P., Barnes, J., Duffell, P. C., & Kasen, D. 2021, MNRAS, 508, 5390. doi:10.1093/mnras/stab2964
+ Siegel, D. M., Ciolfi, R., & Rezzolla, L. 2014, ApJL, 785, L6. doi:10.1088/2041-8205/785/1/L6
+ Soderberg, A. M., Chakraborti, S., Pignata, G., et al. 2010, Nature, 463, 513. doi:10.1038/nature08714
+ Soderberg, A. M., Nakar, E., Berger, E., & Kulkarni, S. R. 2006, ApJ, 638, 930. doi:10.1086/499121
+ Spruit, H. C. 2002, A&A, 381, 923. doi:10.1051/0004-6361:20011465
+ Stanek, K. Z., Matheson, T., Garnavich, P. M., et al. 2003, ApJL, 591, L17. doi:10.1086/376976
+ Sukhbold, T., Ertl, T., Woosley, S. E., Brown, J. M., & Janka, H. T. 2016, ApJ, 821, 38. doi:10.3847/0004-637X/821/1/38
+ Sukhbold, T., & Woosley, S. E. 2014, ApJ, 783, 10. doi:10.1088/0004-637X/783/1/10
+ Taggart, K., & Perley, D. A. 2021, MNRAS, 503, 3931. doi:10.1093/mnras/stab174
+ Tchekhovskoy, A., Narayan, R., & McKinney, J. C. 2011, MNRAS, 418, L79. doi:10.1111/j.1745-3933.2011.01147.x
+ Thöne, C. C., Campana, S., Lazzati, D., et al. 2011, MNRAS, 414, 479. doi:10.1111/j.1365-2966.2011.18408.x
+ Thorsett, S. E., & Chakrabarty, D. 1999, ApJ, 512, 288. doi:10.1086/306742
+ Timmes, F. X. 1999, ApJS, 124, 241. doi:10.1086/313257
+ Tout, C. A., Pols, O. R., Eggleton, P. P., & Han, Z. 1996, MNRAS, 281, 257. doi:10.1093/mnras/281.1.257
+ Troja, E., Cusumano, G., O’Brien, P. T., et al. 2007, ApJ, 665, 599. doi:10.1086/519450
+ Ugliano, M., Janka, H.-T., Marek, A., & Arcones, A. 2012, ApJ, 757, 69. doi:10.1088/0004-637X/757/2/69
+ Usov, V. V. 1992, Nature, 357, 472. doi:10.1038/357472a0
+ van der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science and Engineering, 13, 22. doi:10.1109/MCSE.2011.37
+ Vink, J. S., de Koter, A., & Lamers, H. J. G. L. M. 2001, A&A, 369, 574. doi:10.1051/0004-6361:20010127
+ von Steiger, R., & Zurbuchen, T. H. 2016, ApJ, 816, 13. doi:10.3847/0004-637X/816/1/13
+ Song et al.
+ Wanderman, D., & Piran, T. 2010, MNRAS, 406, 1944. doi:10.1111/j.1365-2966.2010.16787.x
+ Wang, D. X., Xiao, K., & Lei, W. H. 2002, MNRAS, 335, 655. doi:10.1046/j.1365-8711.2002.05652.x
+ Wheeler, J. C., Yi, I., Höflich, P., & Wang, L. 2000, ApJ, 537, 810. doi:10.1086/309055
+ Woosley, S. E. 1993, ApJ, 405, 273. doi:10.1086/172359
+ Woosley, S. E. 2019, ApJ, 878, 49. doi:10.3847/1538-4357/ab1b41
+ Woosley, S. E., & Bloom, J. S. 2006, ARA&A, 44, 507. doi:10.1146/annurev.astro.43.072103.150558
+ Woosley, S. E., & Heger, A. 2006, ApJ, 637, 914. doi:10.1086/498500
+ Woosley, S. E., Sukhbold, T., & Janka, H. T. 2020, ApJ, 896, 56. doi:10.3847/1538-4357/ab8cc1
+ Xin, L.-P., Liang, E.-W., Wei, J.-Y., et al. 2011, MNRAS, 410, 27. doi:10.1111/j.1365-2966.2010.17419.x
+ Yi, S.-X., Du, M., & Liu, T. 2022, ApJ, 924, 69. doi:10.3847/1538-4357/ac35e7
+ Yonetoku, D., Murakami, T., Nakamura, T., et al. 2004, ApJ, 609, 935. doi:10.1086/421285
+ Yoon, S. C., & Langer, N. 2005, A&A, 443, 643. doi:10.1051/0004-6361:20054030
+ Yoon, S. C., Langer, N., & Norman, C. 2006, A&A, 460, 199. doi:10.1051/0004-6361:20065912
+ Yoon, S. C., Woosley, S. E., & Langer, N. 2010, ApJ, 725, 940. doi:10.1088/0004-637X/725/1/940
+ Zalamea, I., & Beloborodov, A. M. 2011, MNRAS, 410, 2302. doi:10.1111/j.1365-2966.2010.17600.x
+ Zhang, B., Fan, Y. Z., Dyks, J., et al. 2006, ApJ, 642, 354. doi:10.1086/500723
+ Zhang, B., & Mészáros, P. 2001, ApJL, 552, L35. doi:10.1086/320255
+ Zhang, B., Zhang, B.-B., Virgili, F. J., et al. 2009, ApJ, 703, 1696. doi:10.1088/0004-637X/703/2/1696
+ Zhang, W., Woosley, S. E., & Heger, A. 2008, ApJ, 679, 639. doi:10.1086/526404
+ APPENDIX
+ The properties of the stellar models at the end of their evolution are listed in Table 1.
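+ Once the column order of Table 1 is fixed, its whitespace-separated rows can be parsed back into structured records for analysis. The sketch below is illustrative only: the abbreviated column names are our own choice, and the two sample rows are the first two entries of Table 1 (the 10 M⊙, Z = 0.01 Z⊙ models).

```python
# Minimal sketch: parse Table 1 rows (21 whitespace-separated values per row)
# into dicts keyed by column name. Column order follows the table header:
# progenitor parameters, core-collapse properties, protomagnetar properties.
COLUMNS = ["Mini", "Z", "Omega_ratio", "Vequ", "eta_wind", "Jini",
           "Mf", "Age", "Rf", "logTeff", "logL", "JHe", "MHe",
           "JCO", "MCO", "JFe", "MFe", "xi25", "MNS", "BNS", "PNS"]

# First two rows of Table 1 (10 M_sun, Z = 0.01 Z_sun progenitors).
DATA = """\
10 0.01 0.1 89 0.5 0.31 9.97 28.31 558.77 3.59 4.81 0.61 3.51 1.15 1.98 4.19 1.47 0.01 1.33 1.69 2
10 0.01 0.3 265 0.5 0.91 9.97 38.49 29.79 4.28 5.04 1.41 5.38 4.54 3.43 5.15 1.47 0.11 1.33 2.04 1.62
"""

def load_models(text):
    """Parse one model per line; every row must carry all 21 columns."""
    rows = []
    for line in text.strip().splitlines():
        values = [float(v) for v in line.split()]
        if len(values) != len(COLUMNS):
            raise ValueError(f"expected {len(COLUMNS)} values, got {len(values)}")
        rows.append(dict(zip(COLUMNS, values)))
    return rows

models = load_models(DATA)
# The faster rotator (Omega/Omega_crit = 0.3) ends with a shorter-period
# protomagnetar than the slow rotator in these two sample rows.
print(models[0]["PNS"], models[1]["PNS"])  # → 2.0 1.62
```

+ The same loader applies to every row of the table, since all rows share the 21-column layout of the header.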
+ Table 1. Properties of the models at the end of their evolution. Columns 1–6 are progenitor parameters, columns 7–18 are stellar properties at core collapse, and columns 19–21 are protomagnetar properties. Units: Mini (M⊙), Z (Z⊙), Ω/Ωcrit (dimensionless), Vequ (km s−1), ηwind (dimensionless), Jini (10^52 erg s), Mf (M⊙), Age (Myr), Rf (R⊙), log Teff (K), log L (L⊙), JHe (10^50 erg s), MHe (M⊙), JCO (10^49 erg s), MCO (M⊙), JFe (10^48 erg s), MFe (M⊙), ξ2.5 (dimensionless), MNS (M⊙), BNS (10^14 G), PNS (ms).
+ Mini Z Ω/Ωcrit Vequ ηwind Jini Mf Age Rf logTeff logL JHe MHe JCO MCO JFe MFe ξ2.5 MNS BNS PNS
+ 10 0.01 0.1 89 0.5 0.31 9.97 28.31 558.77 3.59 4.81 0.61 3.51 1.15 1.98 4.19 1.47 0.01 1.33 1.69 2
+ 10 0.01 0.3 265 0.5 0.91 9.97 38.49 29.79 4.28 5.04 1.41 5.38 4.54 3.43 5.15 1.47 0.11 1.33 2.04 1.62
+ 10 0.01 0.5 428 0.5 1.4 5.87 43.95 0.92 5.12 5.36 3.17 5.87 3.62 3.44 4.96 1.61 0.23 1.44 1.5 1.83
+ 10 0.01 0.6 502 0.5 1.58 5.98 44.93 0.68 5.15 5.24 3.66 5.98 8.64 4.18 5.63 1.51 0.2 1.36 1.68 1.52
+ 10 0.01 0.6 502 1 1.58 4.28 45.02 0.95 5.04 5.05 0.76 4.28 2.48 2.87 4.05 1.5 0.1 1.35 2.46 2.1
+ 10 0.01 0.7 571 0.5 1.71 5.93 45.77 0.68 5.15 5.23 3.61 5.93 8.43 4.13 5.18 1.59 0.21 1.42 2.07 1.73
+ 10 0.01 0.8 593 0.5 1.56 5.82 44.95 0.7 5.16 5.27 3.13 5.82 6.2 4 4.42 1.59 0.24 1.43 1.67 2.03
+ 10 0.1 0.1 82 0.5 0.31 9.73 28.48 657.59 3.55 4.81 0.27 3.47 0.82 2.02 2.99 1.47 0.02 1.33 0.8 2.8
+ 10 0.1 0.6 461 1 1.58 7.35 44.7 1.73 5.03 5.55 1.87 7.2 8.51 5.21 4.52 1.63 0.26 1.46 1.53 2.03
+ 10 0.1 0.6 461 0.5 1.58 5.85 45.62 0.78 5.11 5.17 2.87 5.85 5.55 4.12 5.73 1.53 0.21 1.37 2.95 1.51
+ 11 0.01 0.6 510 0.5 1.9 6.35 38.36 0.68 5.25 5.61 3.74 6.35 8.9 4.48 5.77 1.64 0.26 1.46 2 1.6
+ 11 0.01 0.6 510 1 1.9 4.5 38.43 0.81 5.09 5.11 0.81 4.5 2.93 3.07 4.18 1.54 0.18 1.39 2.2 2.09
+ 11 0.1 0.6 469 0.5 1.9 6.28 38.65 0.81 5.11 5.21 2.57 6.28 8.06 4.42 4.97 1.53 0.13 1.37 1.8 1.74
+ 11 0.1 0.6 469 1 1.9 7.91 38.02 1.51 5.01 5.34 2.33 7.81 11 5.7 4.8 1.55 0.16 1.39 1.72 1.83
+ 12 0.01 0.6 518 1 2.24 4.86 33.26 0.62 5.17 5.21 0.94 4.86 3.87 3.44 3.82 1.59 0.24 1.43 1.42 2.35
+ 12 0.1 0.1 85 0.5 0.45 11.72 20.79 688.92 3.57 4.92 0.74 4.1 1.96 2.43 3.43 1.43 0.04 1.3 1.04 2.38
+ 12 0.1 0.6 476 0.5 2.26 8.07 33.12 0.7 5.18 5.35 4.22 8.07 18.1 5.95 3.38 1.51 0.16 1.36 1.05 2.53
+ 12 0.1 0.6 476 1 2.26 8.19 33 0.8 5.15 5.37 2.56 8.19 13.3 6.07 5 1.53 0.24 1.37 4.31 1.73
+ 13 0.01 0.6 525 0.5 2.61 7.16 29.2 0.67 5.18 5.3 4.03 7.16 13.5 5.18 5.69 1.47 0.17 1.33 3.62 1.47
+ 13 0.01 0.6 525 1 2.61 5.02 29.34 0.87 5.17 5.52 0.96 5.02 4.23 3.58 4.39 1.57 0.19 1.41 1.45 2.03
+ 13 0.1 0.6 483 0.5 2.64 7.41 29.33 0.7 5.25 5.65 3.39 7.41 13.8 5.46 5.73 1.5 0.14 1.36 1.97 1.49
+ 13 0.1 0.6 483 1 2.64 8.68 29.06 0.91 5.13 5.38 2.74 8.68 14.9 6.56 4.86 1.51 0.13 1.36 3.61 1.76
+ 14 0.01 0.6 531 1 3 5.32 26.14 0.61 5.18 5.25 1.05 5.32 5 3.86 4.03 1.59 0.21 1.42 1.8 2.23
+ 14 0.1 0.6 490 1 3.05 9.28 25.9 1.03 5.12 5.45 3.59 9.28 18.7 7.02 4.08 1.48 0.11 1.34 1.14 2.06
+ 14 0.1 0.6 490 0.5 3.05 7.87 26.12 0.61 5.21 5.36 3.6 7.87 15.4 5.77 6.43 1.61 0.23 1.44 2.5 1.41
+ 15 0.01 0.2 190 1 1.33 14.95 17.2 570.08 3.68 5.18 1.82 6.06 6.65 4.07 6.31 1.64 0.26 1.46 2.41 1.46
+ 15 0.01 0.4 372 0.5 2.52 8.09 22.48 0.58 5.22 5.37 4.23 8.09 17.2 6 5.7 1.51 0.2 1.36 2.39 1.51
+ 15 0.01 0.4 372 1 2.52 5.68 22.57 0.59 5.18 5.22 1.13 5.68 5.26 4.18 4.47 1.63 0.22 1.46 1.41 2.05
+ 15 0.01 0.5 458 1 3.02 5.55 23.07 0.6 5.17 5.18 1.1 5.55 4.5 4.05 4.38 1.54 0.19 1.38 1.52 1.99
+ 15 0.01 0.6 537 0.5 3.42 7.79 23.48 0.58 5.22 5.37 4 7.79 15.5 5.75 5.01 1.55 0.2 1.39 2.21 1.75
+ 15 0.01 0.6 537 1 3.42 5.5 23.56 0.64 5.15 5.18 1.08 5.5 4.97 4.02 3.76 1.58 0.19 1.41 0.81 2.37
+ 15 0.01 0.7 612 0.5 3.72 7.75 23.88 0.54 5.24 5.38 3.98 7.75 16.1 5.69 5.99 1.58 0.23 1.41 2.4 1.49
+ 15 0.01 0.7 612 1 3.72 5.34 23.94 0.64 5.16 5.19 1.01 5.34 4.84 3.85 4.19 1.58 0.18 1.42 0.99 2.13
+ 15 0.01 0.8 658 0.5 3.69 7.95 23.83 0.6 5.21 5.36 4.52 7.95 16.9 5.8 6.64 1.61 0.22 1.44 1.96 1.37
+ 15 0.01 0.8 658 1 3.69 5.44 23.91 0.58 5.19 5.25 1.07 5.44 4.78 3.98 4.48 1.63 0.24 1.46 1.41 2.05
+ 15 0.1 0.1 88 1 0.69 12.7 14.51 719.67 3.59 5.04 1.19 4.89 3.93 3.09 4.13 1.53 0.15 1.37 2.7 2.09
+ 15 0.1 0.1 88 0.5 0.69 13.52 15.55 748.97 3.61 5.13 1.46 5.66 5.27 3.77 4.59 1.53 0.19 1.38 1.56 1.89
+ 15 0.1 0.2 175 1 1.36 14.63 16.44 697.36 3.61 5.1 0.98 5.39 3.38 3.49 4.51 1.57 0.16 1.41 1.83 1.97
+ 15 0.1 0.2 175 0.5 1.36 14.49 16.71 750.76 3.62 5.17 1.81 5.94 6.18 3.99 5.48 1.59 0.22 1.43 1.73 1.64
+ 15 0.1 0.3 261 0.5 1.99 14.76 19.35 670.84 3.66 5.24 1.55 6.94 5.85 4.74 3.44 1.29 0.38 1.18 0.04 2.16
+ 15 0.1 0.4 343 0.5 2.57 12.01 22.24 1.15 5.16 5.71 8.89 12.01 42.4 9.31 7.74 1.69 0.35 1.5 2.88 1.22
+ 15 0.1 0.4 343 1 2.57 10.19 22.24 0.7 5.22 5.51 3.34 10.19 19.4 7.99 5.7 1.79 0.47 1.59 1.95 1.75
+ 15 0.1 0.5 422 0.5 3.08 7.78 23.11 0.61 5.21 5.35 3.02 7.78 14 5.73 6.03 1.55 0.21 1.4 3.1 1.46
+ 15 0.1 0.6 496 0.5 3.48 7.88 23.55 0.67 5.19 5.35 3.71 7.88 14.7 5.74 5.56 1.58 0.2 1.42 2.08 1.61
+ 15 0.1 0.6 495 1 3.48 9.78 23.33 0.7 5.21 5.47 3.94 9.78 21.7 7.55 6.49 1.64 0.26 1.47 1.5 1.42
+ 15 0.1 0.7 565 1 3.79 9.21 23.69 0.81 5.16 5.39 3.12 9.21 18.3 7.15 5.24 1.55 0.15 1.4 2.22 1.68
+ 15 0.1 0.8 622 1 3.9 9.43 23.79 0.79 5.17 5.45 3.37 9.43 19.4 7.3 6.66 1.56 0.23 1.4 3.37 1.32
+ 15 0.1 0.8 623 0.5 3.9 7.49 24.06 0.66 5.18 5.33 3.32 7.49 13.5 5.52 5.07 1.57 0.15 1.41 3.91 1.76
+ 16 0.01 0.6 543 1 3.86 5.8 21.45 0.58 5.19 5.24 1.19 5.8 5.1 4.23 4.32 1.62 0.23 1.45 1.01 2.12
+ 16 0.1 0.6 501 0.5 3.94 8.37 21.42 0.64 5.21 5.4 4.14 8.37 18.4 6.13 5.31 1.54 0.17 1.38 2.73 1.64
+ 16 0.1 0.6 501 1 3.94 10.22 21.17 0.68 5.22 5.5 3.84 10.22 22.5 8.01 6.34 1.83 0.6 1.62 2.53 1.61
+ 17 0.01 0.6 549 1 4.32 5.86 19.73 0.65 5.19 5.32 1.16 5.86 5.44 4.28 4.35 1.56 0.16 1.4 1.44 2.03
+ 17 0.1 0.6 507 1 4.42 10.34 19.44 0.78 5.18 5.45 3.78 10.34 22.7 8.03 6.38 1.63 0.24 1.46 2.36 1.44
+ 17 0.1 0.6 507 0.5 4.42 8.8 19.61 0.73 5.17 5.37 4.12 8.8 17.8 6.53 5.27 1.54 0.14 1.39 4.83 1.66
+ 18 0.01 0.6 554 0.5 4.81 9.1 18.14 0.61 5.22 5.41 4.92 9.1 21.9 6.71 7.87 1.66 0.27 1.48 3.13 1.19
+ 18 0.01 0.6 554 1 4.81 6.03 18.22 0.56 5.2 5.26 1.18 6.03 5.9 4.45 4.17 1.61 0.23 1.44 1.11 2.18
+ 18 0.1 0.6 512 0.5 4.93 9.46 18.14 0.78 5.17 5.4 4.83 9.46 22 7.08 5.67 1.45 0.1 1.31 3.25 1.46
+ 19 0.01 0.6 559 0.5 5.32 9.59 16.87 0.59 5.24 5.46 5.31 9.59 23.9 7.13 3.97 1.5 0.18 1.35 1.32 2.14
+ 19 0.01 0.6 559 1 5.32 6.45 16.93 0.59 5.19 5.25 1.38 6.45 7.04 4.77 4.43 1.48 0.16 1.34 2.16 1.9
+ 19 0.1 0.6 516 0.5 5.46 9.34 16.87 0.75 5.2 5.51 4.33 9.34 21 7.03 5.5 1.46 0.1 1.32 2.51 1.51
+ 19 0.1 0.6 516 1 5.46 10.21 16.71 0.64 5.23 5.49 3.64 10.21 19.3 7.55 5.7 1.69 0.35 1.5 1.86 1.66
+ 20 0.01 0.1 100 1 1.14 19.63 10.95 657.71 3.7 5.41 3.7 8.71 18 6.43 6.71 1.64 0.28 1.46 1.92 1.37
+ 20 0.01 0.2 199 1 2.26 19.88 12.02 429.99 3.78 5.33 3.54 8.86 15.9 6.44 7.41 1.84 0.59 1.62 2.49 1.38
+ 20 0.01 0.3 296 0.5 3.32 18.01 14.58 2.24 5.06 5.89 27.3 18.01 42.2 8.93 10.4 1.86 0.61 1.64 5.26 1
+ 20 0.01 0.3 296 1 3.32 9.63 14.74 0.64 5.21 5.41 3.11 9.63 16.4 7.2 4.96 1.52 0.12 1.37 1.83 1.74
+ 20 0.01 0.4 390 0.5 4.29 10.27 15.15 0.67 5.21 5.44 5.79 10.27 27.9 7.77 6.08 1.51 0.13 1.36 3.04 1.41
+ 20 0.01 0.4 390 1 4.29 6.74 15.21 0.57 5.2 5.28 1.41 6.74 7.4 5.01 4.11 1.5 0.18 1.35 1.59 2.07
+ 20 0.01 0.5 480 0.5 5.15 9.72 15.48 0.74 5.18 5.43 4.84 9.72 22.8 7.22 4.87 1.5 0.11 1.35 1.46 1.75
+ 20 0.01 0.5 480 1 5.15 6.58 15.56 0.58 5.2 5.27 1.4 6.58 7.45 4.88 4.87 1.53 0.17 1.37 1.49 1.78
+ 20 0.01 0.6 564 0.5 5.84 9.91 15.76 0.67 5.21 5.45 5.45 9.91 25.4 7.38 5.45 1.55 0.14 1.39 2.49 1.61
+ 20 0.01 0.6 564 1 5.84 6.65 15.83 0.55 5.21 5.28 1.47 6.65 7.17 4.91 4.82 1.57 0.2 1.41 2.65 1.84
+ 20 0.01 0.7 644 0.5 6.36 9.82 16 0.55 5.26 5.47 5.39 9.82 25 7.35 7.15 1.61 0.22 1.45 3.25 1.27
+ 20 0.01 0.7 644 1 6.36 6.48 16.07 0.53 5.21 5.27 1.38 6.48 7.12 4.84 5.11 1.57 0.21 1.41 1.87 1.73
+ 20 0.01 0.8 716 0.5 6.67 9.76 16.14 0.5 5.28 5.46 5.24 9.76 25.6 7.49 6.55 1.61 0.23 1.44 2.94 1.39
+ 20 0.01 0.8 716 1 6.67 6.34 16.22 0.55 5.23 5.35 1.29 6.34 6.59 4.71 4.24 1.61 0.23 1.44 1.6 2.15
+ 20 0.1 0.1 92 1 1.18 15 10.45 854.89 3.62 5.31 2.77 7.45 11.6 5.32 5.95 1.63 0.26 1.46 2.51 1.54
+ 20 0.1 0.3 273 0.5 3.41 16.4 14.47 16.9 4.62 5.87 9.18 15.61 53.6 12.55 4.73 1.85 0.6 1.63 0.53 2.17
+ 20 0.1 0.3 264 1 3.41 18.73 13.36 934.99 3.67 5.56 3.21 10.37 14.5 7.72 4.92 1.47 0.16 1.33 2.29 1.7
+ 20 0.1 0.4 360 0.5 4.42 11.66 15.09 0.45 5.34 5.64 6.45 11.66 32.3 8.81 7.46 1.6 0.26 1.44 2.08 1.21
+ 20 0.1 0.4 360 1 4.42 12.62 14.87 0.83 5.28 5.92 4.75 12.62 30.2 10.04 6.59 1.58 0.22 1.41 3.52 1.35
+ 20 0.1 0.5 443 0.5 5.3 9.79 15.47 0.75 5.18 5.41 4.19 9.79 20.7 7.32 5.2 1.49 0.11 1.34 4.68 1.63
+ 20 0.1 0.5 443 1 5.29 11.91 15.28 0.54 5.29 5.58 4.65 11.91 27.6 9.25 5.42 1.58 0.27 1.41 3.61 1.64
+ 20 0.1 0.6 521 0.5 6.01 9.42 15.76 0.73 5.18 5.4 3.79 9.42 19.4 7.11 4.76 1.49 0.11 1.34 2.6 1.78
+ 20 0.1 0.6 521 1 6.01 11.6 15.58 0.59 5.27 5.56 4.62 11.6 29 9.18 5.98 1.65 0.3 1.48 2.74 1.56
+ 20 0.1 0.7 596 0.5 6.55 9.33 16 0.77 5.17 5.39 4.2 9.33 20.4 7.05 5.35 1.47 0.09 1.33 2.66 1.56
+ 20 0.1 0.8 664 1 6.89 11.76 15.96 0.51 5.32 5.65 5.41 11.76 33.5 9.52 6.65 1.76 0.44 1.56 4.44 1.48
+ 20 0.1 0.8 665 0.5 6.88 9.09 16.14 0.5 5.27 5.44 4.28 9.09 20.4 6.86 6.32 1.68 0.3 1.49 2.5 1.49
+ 21 0.01 0.6 568 0.5 6.4 10.31 14.82 0.76 5.18 5.45 5.73 10.31 26.3 7.68 5.77 1.44 0.1 1.31 3.69 1.43
+ 21 0.01 0.6 568 1 6.4 6.56 14.9 0.54 5.22 5.28 1.35 6.56 7.33 4.89 4.21 1.54 0.21 1.39 1.29 2.07
+ 21 0.1 0.6 525 0.5 6.58 9.7 14.81 0.71 5.19 5.42 3.83 9.7 19.9 7.29 5.15 1.53 0.13 1.38 3.02 1.68
+ 21 0.1 0.6 525 1 6.58 11.82 14.67 0.59 5.27 5.57 4.68 11.82 28.7 9.3 6.63 1.55 0.23 1.39 3.54 1.32
+ 22 0.01 0.6 572 0.5 6.97 10.43 13.99 0.66 5.22 5.48 5.31 10.43 25.5 7.72 7.62 1.66 0.19 1.48 3.02 1.22
+ 22 0.01 0.6 572 1 6.97 6.93 14.05 0.59 5.2 5.29 1.49 6.93 7.83 5.15 4.73 1.56 0.17 1.4 1.37 1.87
+ 22 0.1 0.6 530 0.5 7.18 9.88 13.99 0.67 5.21 5.43 3.84 9.88 19.6 7.41 5.96 1.51 0.16 1.36 2.6 1.44
+ 23 0.01 0.6 577 0.5 7.56 11.02 13.24 0.57 5.26 5.53 5.92 11.02 28.3 8.19 8.15 1.82 0.5 1.6 2.28 1.24
+ 23 0.01 0.6 577 1 7.56 6.98 13.31 0.52 5.23 5.32 1.48 6.98 8.08 5.21 4.69 1.57 0.21 1.41 2.76 1.89
+ 23 0.1 0.6 533 0.5 7.8 10.03 13.25 0.63 5.22 5.44 4.12 10.03 21.5 7.59 6.08 1.57 0.2 1.41 2.23 1.46
+ 23 0.1 0.6 533 1 7.8 12.45 13.13 0.54 5.3 5.61 5.27 12.45 30.4 9.57 6.74 1.6 0.27 1.43 4.88 1.34
+ 24 0.01 0.6 581 0.5 8.16 11.41 12.57 0.5 5.3 5.56 6.18 11.41 29.4 8.48 7.58 1.7 0.36 1.51 2.86 1.26
+ 24 0.1 0.6 537 1 8.44 12.59 12.48 0.75 5.23 5.61 5.78 12.59 31.2 9.49 4.87 1.5 0.14 1.36 1.66 1.75
+ 24 0.1 0.6 537 0.5 8.44 10.92 12.57 0.59 5.24 5.48 4.82 10.92 24.8 8.23 7.28 1.65 0.29 1.47 6.77 1.27
+ 25 0.01 0.1 103 0.5 1.7 24.93 8.65 110.83 4.11 5.48 3.71 9.73 17.4 7.08 7.03 1.92 0.66 1.69 2.26 1.51
+ 25 0.01 0.2 205 1 3.38 24.72 9.92 109.61 4.15 5.64 7.79 13.45 43 10.49 10 1.8 0.43 1.6 5.7 1
+ 25 0.01 0.3 306 1 4.96 9.38 11.32 0.64 5.21 5.39 2.46 9.38 13.7 7.06 5.33 1.51 0.12 1.36 2.27 1.6
+ 25 0.01 0.4 404 0.5 6.43 11.74 11.56 0.48 5.32 5.58 5.81 11.74 29 8.8 6.72 1.73 0.38 1.54 2.01 1.44
+ 25 0.01 0.4 404 1 6.43 7.63 11.62 0.51 5.25 5.35 1.66 7.63 9.37 5.72 4.09 1.57 0.21 1.41 1.21 2.17
+ 25 0.01 0.5 497 0.5 7.72 11.83 11.79 0.5 5.31 5.61 6.28 11.83 29.8 8.75 7.02 1.68 0.33 1.5 2.09 1.34
+ 25 0.01
3452
+ 0.5
3453
+ 497
3454
+ 1
3455
+ 7.72
3456
+ 7.67
3457
+ 11.85
3458
+ 0.53
3459
+ 5.24
3460
+ 5.36
3461
+ 1.73
3462
+ 7.67
3463
+ 9.59
3464
+ 5.75
3465
+ 4.39
3466
+ 1.52
3467
+ 0.2
3468
+ 1.37
3469
+ 2.12
3470
+ 1.97
3471
+ 25
3472
+ 0.01
3473
+ 0.6
3474
+ 585
3475
+ 0.5
3476
+ 8.79
3477
+ 11.45 11.99
3478
+ 0.51
3479
+ 5.3
3480
+ 5.57
3481
+ 5.52
3482
+ 11.45
3483
+ 27.1
3484
+ 8.53
3485
+ 7.09
3486
+ 1.76 0.45 1.56
3487
+ 2.41
3488
+ 1.39
3489
+ 25
3490
+ 0.01
3491
+ 0.6
3492
+ 585
3493
+ 1
3494
+ 8.79
3495
+ 7.46
3496
+ 12.07
3497
+ 0.49
3498
+ 5.26
3499
+ 5.39
3500
+ 1.65
3501
+ 7.46
3502
+ 9.09
3503
+ 5.58
3504
+ 4.8
3505
+ 1.55 0.23
3506
+ 1.4
3507
+ 1.27
3508
+ 1.83
3509
+ 25
3510
+ 0.01
3511
+ 0.7
3512
+ 669
3513
+ 0.5
3514
+ 9.6
3515
+ 11.86 12.17
3516
+ 0.49
3517
+ 5.31
3518
+ 5.59
3519
+ 6.91
3520
+ 11.86
3521
+ 33.5
3522
+ 8.86
3523
+ 7.37
3524
+ 1.66 0.28 1.48
3525
+ 3.03
3526
+ 1.27
3527
+ 25
3528
+ 0.01
3529
+ 0.7
3530
+ 669
3531
+ 1
3532
+ 9.6
3533
+ 7.42
3534
+ 12.24
3535
+ 0.5
3536
+ 5.25
3537
+ 5.35
3538
+ 1.64
3539
+ 7.42
3540
+ 9.19
3541
+ 5.57
3542
+ 5.24
3543
+ 1.58 0.23 1.42
3544
+ 2.19
3545
+ 1.7
3546
+ 25
3547
+ 0.01
3548
+ 0.8
3549
+ 747
3550
+ 0.5
3551
+ 10.1
3552
+ 11.36 12.29
3553
+ 0.6
3554
+ 5.26
3555
+ 5.54
3556
+ 5.7
3557
+ 11.36
3558
+ 27
3559
+ 8.37
3560
+ 7.27
3561
+ 1.79 0.54 1.58
3562
+ 2.23
3563
+ 1.37
3564
+ 25
3565
+ 0.01
3566
+ 0.8
3567
+ 747
3568
+ 1
3569
+ 10.1
3570
+ 7.52
3571
+ 12.34
3572
+ 0.51
3573
+ 5.24
3574
+ 5.34
3575
+ 1.7
3576
+ 7.52
3577
+ 9.4
3578
+ 5.63
3579
+ 4.5
3580
+ 1.58 0.21 1.41
3581
+ 1.45
3582
+ 1.98
3583
+ 25
3584
+ 0.1
3585
+ 0.1
3586
+ 95
3587
+ 1
3588
+ 1.76
3589
+ 13.86
3590
+ 8.96
3591
+ 1271
3592
+ 3.61
3593
+ 5.62
3594
+ 6.4
3595
+ 11.93
3596
+ 34
3597
+ 9.29
3598
+ 8.47
3599
+ 1.72 0.36 1.53
3600
+ 4.09
3601
+ 1.14
3602
+ 25
3603
+ 0.1
3604
+ 0.2
3605
+ 190
3606
+ 1
3607
+ 3.49
3608
+ 23.86
3609
+ 9.46
3610
+ 955.17
3611
+ 3.7
3612
+ 5.71
3613
+ 4.36
3614
+ 11
3615
+ 21.5
3616
+ 8.28
3617
+ 5.63
3618
+ 1.82 0.57 1.61
3619
+ 0.46
3620
+ 1.8
3621
+ 25
3622
+ 0.1
3623
+ 0.2
3624
+ 190
3625
+ 0.5
3626
+ 3.49
3627
+ 24.36
3628
+ 9.58 1085.14
3629
+ 3.65
3630
+ 5.62
3631
+ 5.08
3632
+ 12.42
3633
+ 26.9
3634
+ 9.68
3635
+ 6.87
3636
+ 1.79 0.45 1.58
3637
+ 2.48
3638
+ 1.45
3639
+ 25
3640
+ 0.1
3641
+ 0.3
3642
+ 283
3643
+ 1
3644
+ 5.13
3645
+ 19.98 10.77 1501.66
3646
+ 3.62
3647
+ 5.81
3648
+ 7.04
3649
+ 16.04
3650
+ 32.4
3651
+ 12.02
3652
+ 6.66
3653
+ 1.76 0.39 1.56
3654
+ 3.08
3655
+ 1.47
3656
+ 25
3657
+ 0.1
3658
+ 0.4
3659
+ 372
3660
+ 0.5
3661
+ 6.65
3662
+ 13.2
3663
+ 11.54
3664
+ 0.63
3665
+ 5.27
3666
+ 5.65
3667
+ 7.34
3668
+ 13.2
3669
+ 39.1
3670
+ 10.19
3671
+ 5.43
3672
+ 1.53 0.14 1.38
3673
+ 2.11
3674
+ 1.6
3675
+ 25
3676
+ 0.1
3677
+ 0.4
3678
+ 372
3679
+ 1
3680
+ 6.65
3681
+ 12.87 11.39
3682
+ 0.55
3683
+ 5.3
3684
+ 5.63
3685
+ 5.54
3686
+ 12.87
3687
+ 53.7
3688
+ 12.8
3689
+ 4.58
3690
+ 1.53 0.16 1.38
3691
+ 3.13
3692
+ 1.9
3693
+ 25
3694
+ 0.1
3695
+ 0.5
3696
+ 458
3697
+ 1
3698
+ 7.99
3699
+ 12.69 11.68
3700
+ 0.71
3701
+ 5.24
3702
+ 5.61
3703
+ 5.75
3704
+ 12.69
3705
+ 31.7
3706
+ 9.63
3707
+ 5.2
3708
+ 1.53 0.13 1.38
3709
+ 2.75
3710
+ 1.67
3711
+ 25
3712
+ 0.1
3713
+ 0.6
3714
+ 540
3715
+ 1
3716
+ 9.1
3717
+ 12.51 11.91
3718
+ 0.71
3719
+ 5.23
3720
+ 5.6
3721
+ 5.81
3722
+ 12.51
3723
+ 31.6
3724
+ 9.47
3725
+ 6.83
3726
+ 1.57 0.13 1.41
3727
+ 6.02
3728
+ 1.3
3729
+ 25
3730
+ 0.1
3731
+ 0.7
3732
+ 619
3733
+ 0.5
3734
+ 9.95
3735
+ 11.21 12.15
3736
+ 0.51
3737
+ 5.32
3738
+ 5.66
3739
+ 5.71
3740
+ 11.21
3741
+ 27.2
3742
+ 8.3
3743
+ 6.95
3744
+ 1.77
3745
+ 0.5
3746
+ 1.57
3747
+ 2.08
3748
+ 1.42
3749
+ 25
3750
+ 0.1
3751
+ 0.7
3752
+ 620
3753
+ 1
3754
+ 9.95
3755
+ 11.94 12.06
3756
+ 0.65
3757
+ 5.28
3758
+ 5.7
3759
+ 5.16
3760
+ 11.94
3761
+ 27.2
3762
+ 9.04
3763
+ 7.14
3764
+ 1.63
3765
+ 0.2
3766
+ 1.46
3767
+ 4.28
3768
+ 1.29
3769
+ 25
3770
+ 0.1
3771
+ 0.8
3772
+ 693
3773
+ 0.5
3774
+ 10.5
3775
+ 10.79 12.26
3776
+ 0.65
3777
+ 5.23
3778
+ 5.5
3779
+ 5.36
3780
+ 10.79
3781
+ 24.7
3782
+ 7.88
3783
+ 7.6
3784
+ 1.81 0.57
3785
+ 1.6
3786
+ 2.56
3787
+ 1.33
3788
+ 26
3789
+ 0.01
3790
+ 0.6
3791
+ 588
3792
+ 0.5
3793
+ 9.44
3794
+ 11.66 11.48
3795
+ 0.5
3796
+ 5.3
3797
+ 5.57
3798
+ 5.46
3799
+ 11.66
3800
+ 26.7
3801
+ 8.66
3802
+ 6.68
3803
+ 1.68 0.34
3804
+ 1.5
3805
+ 1.9
3806
+ 1.41
3807
+ Table 1 continued
+
+ Song et al. (p. 26)
+ Table 1 (continued); column headings and units as listed above.
3838
+ 26
3839
+ 0.01
3840
+ 0.6
3841
+ 588
3842
+ 1
3843
+ 9.44
3844
+ 7.61
3845
+ 11.54
3846
+ 0.55
3847
+ 5.23
3848
+ 5.33
3849
+ 1.66
3850
+ 7.61
3851
+ 9.36
3852
+ 5.72
3853
+ 3.73
3854
+ 1.56 0.18
3855
+ 1.4
3856
+ 1.56
3857
+ 2.37
3858
+ 26
3859
+ 0.1
3860
+ 0.6
3861
+ 544
3862
+ 1
3863
+ 9.77
3864
+ 12.34 11.37
3865
+ 0.6
3866
+ 5.27
3867
+ 5.58
3868
+ 5.47
3869
+ 12.34
3870
+ 52.9
3871
+ 12.27
3872
+ 5.83
3873
+ 1.48 0.13 1.34
3874
+ 3.71
3875
+ 1.45
3876
+ 27
3877
+ 0.01
3878
+ 0.6
3879
+ 592
3880
+ 1
3881
+ 10.1
3882
+ 7.82
3883
+ 11.07
3884
+ 0.52
3885
+ 5.25
3886
+ 5.37
3887
+ 1.73
3888
+ 7.82
3889
+ 9.71
3890
+ 5.87
3891
+ 4.08
3892
+ 1.55 0.19 1.39
3893
+ 1.25
3894
+ 2.15
3895
+ 27
3896
+ 0.1
3897
+ 0.6
3898
+ 547
3899
+ 1
3900
+ 10.5
3901
+ 12.41 10.91
3902
+ 0.63
3903
+ 5.25
3904
+ 5.57
3905
+ 5.2
3906
+ 12.41
3907
+ 51
3908
+ 12.38
3909
+ 5
3910
+ 1.47 0.14 1.33
3911
+ 2.84
3912
+ 1.68
3913
+ 27
3914
+ 0.1
3915
+ 0.6
3916
+ 548
3917
+ 0.5
3918
+ 10.5
3919
+ 11.82 10.97
3920
+ 0.52
3921
+ 5.3
3922
+ 5.58
3923
+ 5.27
3924
+ 11.82
3925
+ 27.1
3926
+ 8.88
3927
+ 6.37
3928
+ 1.58 0.23 1.42
3929
+ 2.53
3930
+ 1.4
3931
+ 28
3932
+ 0.01
3933
+ 0.6
3934
+ 595
3935
+ 0.5
3936
+ 10.8
3937
+ 12.95 10.57
3938
+ 0.53
3939
+ 5.31
3940
+ 5.64
3941
+ 7.26
3942
+ 12.95
3943
+ 69.8
3944
+ 12.88
3945
+ 4.92
3946
+ 1.55 0.22 1.39
3947
+ 3.88
3948
+ 1.78
3949
+ 28
3950
+ 0.01
3951
+ 0.6
3952
+ 595
3953
+ 1
3954
+ 10.8
3955
+ 8.22
3956
+ 10.64
3957
+ 0.53
3958
+ 5.24
3959
+ 5.38
3960
+ 1.93
3961
+ 8.22
3962
+ 10.9
3963
+ 6.18
3964
+ 4.48
3965
+ 1.55 0.19
3966
+ 1.4
3967
+ 1.96
3968
+ 1.96
3969
+ 28
3970
+ 0.1
3971
+ 0.6
3972
+ 550
3973
+ 0.5
3974
+ 11.2
3975
+ 11.69 10.55
3976
+ 0.5
3977
+ 5.3
3978
+ 5.57
3979
+ 4.96
3980
+ 11.69
3981
+ 26.4
3982
+ 8.86
3983
+ 6.86
3984
+ 1.57 0.23 1.41
3985
+ 4.52
3986
+ 1.29
3987
+ 28
3988
+ 0.1
3989
+ 0.6
3990
+ 550
3991
+ 1
3992
+ 11.2
3993
+ 12.28 10.48
3994
+ 0.59
3995
+ 5.27
3996
+ 5.58
3997
+ 5.29
3998
+ 12.28
3999
+ 51.7
4000
+ 12.24
4001
+ 4.2
4002
+ 1.51 0.15 1.36
4003
+ 1.62
4004
+ 2.04
4005
+ 29
4006
+ 0.01
4007
+ 0.6
4008
+ 599
4009
+ 1
4010
+ 11.5
4011
+ 8.33
4012
+ 10.24
4013
+ 0.52
4014
+ 5.25
4015
+ 5.38
4016
+ 1.93
4017
+ 8.33
4018
+ 10.9
4019
+ 6.27
4020
+ 4.38
4021
+ 1.53 0.19 1.37
4022
+ 1.09
4023
+ 1.97
4024
+ 29
4025
+ 0.1
4026
+ 0.6
4027
+ 554
4028
+ 1
4029
+ 11.9
4030
+ 12.27 10.13
4031
+ 0.53
4032
+ 5.35
4033
+ 5.78
4034
+ 5.35
4035
+ 12.27
4036
+ 52.5
4037
+ 12.23
4038
+ 6.45
4039
+ 1.61 0.24 1.44
4040
+ 3.95
4041
+ 1.4
4042
+ 29
4043
+ 0.1
4044
+ 0.6
4045
+ 554
4046
+ 0.5
4047
+ 11.9
4048
+ 11.97 10.17
4049
+ 0.48
4050
+ 5.32
4051
+ 5.6
4052
+ 4.93
4053
+ 11.97
4054
+ 27.2
4055
+ 9.19
4056
+ 5.74
4057
+ 1.57 0.24 1.41
4058
+ 2.32
4059
+ 1.55
4060
+ 30
4061
+ 0.01
4062
+ 0.3
4063
+ 314
4064
+ 0.5
4065
+ 6.86
4066
+ 14.2
4067
+ 9.33
4068
+ 0.62
4069
+ 5.29
4070
+ 5.69
4071
+ 7.23
4072
+ 14.2
4073
+ 38.4
4074
+ 10.86
4075
+ 5.94
4076
+ 1.51 0.14 1.36
4077
+ 4.28
4078
+ 1.44
4079
+ 30
4080
+ 0.01
4081
+ 0.3
4082
+ 314
4083
+ 1
4084
+ 6.86
4085
+ 13.59
4086
+ 9.27
4087
+ 0.67
4088
+ 5.26
4089
+ 5.64
4090
+ 5.15
4091
+ 13.59
4092
+ 50.2
4093
+ 13.53
4094
+ 5.54
4095
+ 1.51 0.14 1.36
4096
+ 3.93
4097
+ 1.55
4098
+ 30
4099
+ 0.01
4100
+ 0.4
4101
+ 415
4102
+ 0.5
4103
+ 8.9
4104
+ 13.92
4105
+ 9.48
4106
+ 0.7
4107
+ 5.25
4108
+ 5.66
4109
+ 7.46
4110
+ 13.92
4111
+ 71.8
4112
+ 13.85
4113
+ 5.56
4114
+ 1.59 0.14 1.43
4115
+ 2.12
4116
+ 1.62
4117
+ 30
4118
+ 0.01
4119
+ 0.4
4120
+ 415
4121
+ 1
4122
+ 8.9
4123
+ 8.94
4124
+ 9.53
4125
+ 0.48
4126
+ 5.28
4127
+ 5.44
4128
+ 2.15
4129
+ 8.94
4130
+ 12.2
4131
+ 6.77
4132
+ 4.85
4133
+ 1.64 0.24 1.46
4134
+ 1.43
4135
+ 1.9
4136
+ 30
4137
+ 0.01
4138
+ 0.5
4139
+ 511
4140
+ 0.5
4141
+ 10.7
4142
+ 13.76
4143
+ 9.65
4144
+ 0.64
4145
+ 5.28
4146
+ 5.67
4147
+ 7.72
4148
+ 13.76
4149
+ 38.5
4150
+ 10.4
4151
+ 6.12
4152
+ 1.56 0.15
4153
+ 1.4
4154
+ 3.97
4155
+ 1.44
4156
+ 30
4157
+ 0.01
4158
+ 0.5
4159
+ 511
4160
+ 1
4161
+ 10.7
4162
+ 8.57
4163
+ 9.72
4164
+ 0.63
4165
+ 5.2
4166
+ 5.36
4167
+ 1.92
4168
+ 8.57
4169
+ 10.7
4170
+ 6.42
4171
+ 4.11
4172
+ 1.48 0.14 1.33
4173
+ 1.56
4174
+ 2.04
4175
+ 30
4176
+ 0.01
4177
+ 0.6
4178
+ 602
4179
+ 0.5
4180
+ 12.2
4181
+ 13.72
4182
+ 9.82
4183
+ 0.69
4184
+ 5.25
4185
+ 5.65
4186
+ 7.76
4187
+ 13.72
4188
+ 74.4
4189
+ 13.64
4190
+ 6.28
4191
+ 1.53 0.14 1.38
4192
+ 5.7
4193
+ 1.38
4194
+ 30
4195
+ 0.01
4196
+ 0.6
4197
+ 602
4198
+ 1
4199
+ 12.2
4200
+ 8.5
4201
+ 9.88
4202
+ 0.6
4203
+ 5.21
4204
+ 5.36
4205
+ 2.02
4206
+ 8.5
4207
+ 11.4
4208
+ 6.38
4209
+ 5.17
4210
+ 1.53 0.17 1.38
4211
+ 1.31
4212
+ 1.68
4213
+ 30
4214
+ 0.01
4215
+ 0.7
4216
+ 690
4217
+ 0.5
4218
+ 13.4
4219
+ 13.55
4220
+ 9.96
4221
+ 0.7
4222
+ 5.25
4223
+ 5.64
4224
+ 7.52
4225
+ 13.55
4226
+ 71.7
4227
+ 13.47
4228
+ 4.88
4229
+ 1.49 0.12 1.35
4230
+ 5.03
4231
+ 1.74
4232
+ 30
4233
+ 0.01
4234
+ 0.7
4235
+ 690
4236
+ 1
4237
+ 13.4
4238
+ 8.59
4239
+ 10.02
4240
+ 0.63
4241
+ 5.2
4242
+ 5.36
4243
+ 2.07
4244
+ 8.59
4245
+ 11.8
4246
+ 6.48
4247
+ 5.19
4248
+ 1.55 0.15 1.39
4249
+ 1.59
4250
+ 1.69
4251
+ 30
4252
+ 0.01
4253
+ 0.8
4254
+ 773
4255
+ 0.5
4256
+ 14.1
4257
+ 13.55 10.05
4258
+ 0.7
4259
+ 5.25
4260
+ 5.64
4261
+ 7.5
4262
+ 13.55
4263
+ 72.3
4264
+ 13.48
4265
+ 5.49
4266
+ 1.49 0.13 1.35
4267
+ 2.56
4268
+ 1.55
4269
+ 30
4270
+ 0.01
4271
+ 0.8
4272
+ 773
4273
+ 1
4274
+ 14.1
4275
+ 8.37
4276
+ 10.11
4277
+ 0.61
4278
+ 5.21
4279
+ 5.35
4280
+ 1.97
4281
+ 8.37
4282
+ 11
4283
+ 6.29
4284
+ 3.99
4285
+ 1.5
4286
+ 0.15 1.35
4287
+ 6.3
4288
+ 2.13
4289
+ 30
4290
+ 0.1
4291
+ 0.1
4292
+ 98
4293
+ 1
4294
+ 2.44
4295
+ 27.05
4296
+ 7.46 1310.05
4297
+ 3.63
4298
+ 5.72
4299
+ 8.38
4300
+ 14.26
4301
+ 46.6
4302
+ 11.34
4303
+ 7.26
4304
+ 1.77 0.52 1.57
4305
+ 3.91
4306
+ 1.36
4307
+ 30
4308
+ 0.1
4309
+ 0.2
4310
+ 194
4311
+ 0.5
4312
+ 4.82
4313
+ 29.15
4314
+ 7.99 1028.65
4315
+ 3.67
4316
+ 5.67
4317
+ 5.93
4318
+ 13.64
4319
+ 32.7
4320
+ 10.56
4321
+ 5.85
4322
+ 1.82 0.57 1.61
4323
+ 0.59
4324
+ 1.73
4325
+ 30
4326
+ 0.1
4327
+ 0.5
4328
+ 471
4329
+ 0.5
4330
+ 11.1
4331
+ 12.83
4332
+ 9.65
4333
+ 0.62
4334
+ 5.27
4335
+ 5.61
4336
+ 5.44
4337
+ 12.83
4338
+ 30.8
4339
+ 9.93
4340
+ 5.34
4341
+ 1.53 0.14 1.38
4342
+ 3.92
4343
+ 1.63
4344
+ 30
4345
+ 0.1
4346
+ 0.5
4347
+ 471
4348
+ 1
4349
+ 11.1
4350
+ 12.67
4351
+ 9.6
4352
+ 0.62
4353
+ 5.26
4354
+ 5.58
4355
+ 5.22
4356
+ 12.67
4357
+ 51.7
4358
+ 12.65
4359
+ 5.45
4360
+ 1.52 0.22 1.37
4361
+ 4.03
4362
+ 1.58
4363
+ 30
4364
+ 0.1
4365
+ 0.6
4366
+ 556
4367
+ 1
4368
+ 12.7
4369
+ 12.91
4370
+ 9.76
4371
+ 0.63
4372
+ 5.26
4373
+ 5.6
4374
+ 5.74
4375
+ 12.91
4376
+ 56.4
4377
+ 12.88
4378
+ 6.4
4379
+ 1.61 0.15 1.44
4380
+ 2.92
4381
+ 1.42
4382
+ 30
4383
+ 0.1
4384
+ 0.6
4385
+ 557
4386
+ 0.5
4387
+ 12.7
4388
+ 12.35
4389
+ 9.81
4390
+ 0.51
4391
+ 5.31
4392
+ 5.61
4393
+ 5.32
4394
+ 12.35
4395
+ 28.3
4396
+ 9.36
4397
+ 6.43
4398
+ 1.62 0.25 1.45
4399
+ 3.01
4400
+ 1.42
4401
+ 30
4402
+ 0.1
4403
+ 0.7
4404
+ 639
4405
+ 0.5
4406
+ 13.9
4407
+ 12.39
4408
+ 9.95
4409
+ 0.54
4410
+ 5.3
4411
+ 5.62
4412
+ 6.22
4413
+ 12.39
4414
+ 30.3
4415
+ 9.16
4416
+ 5.27
4417
+ 1.56 0.22
4418
+ 1.4
4419
+ 1.82
4420
+ 1.67
+
+ LGRB progenitors and magnetar formation (p. 27)
+ Notes. The columns are the initial mass Mini (Column 1), the initial metallicity Z (Column 2), the initial rotation rate Ω (Column 3) and the corresponding equatorial velocity Vequ (Column 4), the “Dutch” wind scale factor ηwind (Column 5), the initial total angular momentum Jini (Column 6), the final mass Mf (Column 7), the final stellar age (Column 8), the final radius Rf (Column 9), the final effective temperature log Teff (Column 10) and luminosity log L (Column 11) in logarithmic form, the final angular momentum in the He core JHe (Column 12), the He core mass at the end of the calculation MHe (Column 13), the final angular momentum in the CO core JCO (Column 14), the CO core mass at the end of the calculation MCO (Column 15), the final angular momentum in the Fe core JFe (Column 16), the Fe core mass at the end of the calculation MFe (Column 17), the compactness parameter ξ2.5 (Column 18), the NS mass MNS (Column 19), the average surface magnetic field strength BNS (Column 20), and the NS rotation period PNS (Column 21).
+
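The 21-column layout described in the table notes can be made concrete with a short sketch that attaches names and units to one flat row of values. This is an illustrative helper written for this table's layout (the helper name and unit strings are our own; the row values below are taken from the Mini = 22 M⊙, Z = 0.1 Z⊙ entry listed above):

```python
# Column names and units for Table 1, in order (per the table notes above).
COLUMNS = [
    ("Mini", "Msun"), ("Z", "Zsun"), ("Omega", "Omega_crit"), ("Vequ", "km/s"),
    ("eta_wind", ""), ("Jini", "1e52 erg s"), ("Mf", "Msun"), ("Age", "Myr"),
    ("Rf", "Rsun"), ("log_Teff", "K"), ("log_L", "Lsun"), ("JHe", "1e50 erg s"),
    ("MHe", "Msun"), ("JCO", "1e49 erg s"), ("MCO", "Msun"), ("JFe", "1e48 erg s"),
    ("MFe", "Msun"), ("xi_2.5", ""), ("MNS", "Msun"), ("BNS", "1e14 G"), ("PNS", "ms"),
]

def parse_row(values):
    """Map a flat list of 21 numbers onto name -> (value, unit)."""
    if len(values) != len(COLUMNS):
        raise ValueError(f"expected {len(COLUMNS)} values, got {len(values)}")
    return {name: (value, unit) for (name, unit), value in zip(COLUMNS, values)}

# One row of Table 1 (the 22 Msun, Z = 0.1 Zsun model listed above).
row = parse_row([22, 0.1, 0.6, 530, 0.5, 7.18, 9.88, 13.99, 0.67, 5.21, 5.43,
                 3.84, 9.88, 19.6, 7.41, 5.96, 1.51, 0.16, 1.36, 2.6, 1.44])
print(row["MNS"])   # the protomagnetar mass with its unit
```

Note that Mf and MHe coincide for this row (a fully stripped helium star at collapse), which is a useful sanity check when parsing the flattened values.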
Y9E5T4oBgHgl3EQfDA5Y/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
YNA0T4oBgHgl3EQfFf_E/content/tmp_files/2301.02034v1.pdf.txt ADDED
@@ -0,0 +1,1557 @@
 
+ Draft version January 6, 2023
+ Typeset using LaTeX twocolumn style in AASTeX631
+ Multi-stage reconnection powering a solar coronal jet
+ David M. Long,1 Lakshmi Pradeep Chitta,2 Deborah Baker,1 Iain G. Hannah,3 Nawin Ngampoopun,1 David Berghmans,4 Andrei N. Zhukov,4,5 and Luca Teriaca2
+ 1 University College London, Mullard Space Science Laboratory, Holmbury St. Mary, Dorking, Surrey, RH5 6NT, UK
+ 2 Max Planck Institute for Solar System Research, Justus-von-Liebig-Weg 3, 37077 Göttingen, Germany
+ 3 School of Physics & Astronomy, University of Glasgow, University Avenue, Glasgow G12 8QQ, UK
+ 4 Solar-Terrestrial Centre of Excellence – SIDC, Royal Observatory of Belgium, Ringlaan 3, 1180 Brussels, Belgium
+ 5 Skobeltsyn Institute of Nuclear Physics, Moscow State University, 119992 Moscow, Russia
+ (Received January 6, 2023; Revised January 6, 2023; Accepted January 6, 2023)
+ Submitted to ApJ
+ ABSTRACT
+ Coronal jets are short-lived eruptive features commonly observed in polar coronal holes and are thought to play a key role in the transfer of mass and energy into the solar corona. We describe unique contemporaneous observations of a coronal blowout jet seen by the Extreme Ultraviolet Imager onboard the Solar Orbiter spacecraft (SO/EUI) and the Atmospheric Imaging Assembly onboard the Solar Dynamics Observatory (SDO/AIA). The coronal jet erupted from the south polar coronal hole, and was observed with high spatial and temporal resolution by both instruments. This enabled identification of the different stages of a breakout reconnection process producing the observed jet. We find bulk plasma flow kinematics of ∼100–200 km s−1 across the lifetime of its observed propagation, with a distinct kink in the jet where it impacted and was subsequently guided by a nearby polar plume. We also identify a faint faster feature ahead of the bulk plasma motion propagating with a velocity of ∼715 km s−1, which we attribute to untwisting of newly reconnected field lines during the eruption. A Differential Emission Measure (DEM) analysis using the SDO/AIA observations revealed a very weak jet signal, indicating that the erupting material was likely much cooler than the coronal passbands used to derive the DEM. This is consistent with the very bright appearance of the jet in the Lyman-α passband observed by SO/EUI. The DEM was used to estimate the radiative thermal energy of the source region of the coronal jet, finding a value of ∼2 × 10^24 erg, comparable to the energy of a nanoflare.
+ nanoflare.
39
+ 1. INTRODUCTION
40
+ Coronal jets are collimated ejections of plasma from
41
+ the solar atmosphere that could escape into the helio-
42
+ sphere.
43
+ Typically observed using extreme ultraviolet
44
+ (EUV) (e.g., Alexander & Fletcher 1999; Nistic`o et al.
45
+ 2009; Chandrashekhar et al. 2014) or X-ray (e.g., Shi-
46
+ bata et al. 1992; Shimojo et al. 1996; Cirtain et al. 2007;
47
+ Sterling et al. 2015) observations, they are better ob-
48
+ served in coronal holes, due to the lower background
49
+ intensity making them easier to identify (e.g., Savcheva
50
+ et al. 2007). They are the subject of detailed investi-
51
+ Corresponding author: David M. Long
52
53
+ gation, as they offer a mechanism for transferring mass
54
+ and energy to the outer corona and solar wind (cf. Shen
55
+ 2021).
56
+ They also represent an opportunity to study
57
+ smaller scale evolution of the mechanisms involved in
58
+ larger solar eruptions, with recent observations (e.g.,
59
+ Nistic`o et al. 2009; Sterling et al. 2015) and simulations
60
+ (e.g., Wyper et al. 2017, 2018) suggesting that coronal
61
+ jets can be interpreted as being produced by the erup-
62
+ tion of mini-filaments via breakout reconnection. A de-
63
+ tailed overview of observations and modelling of coronal
64
+ jets can be found in the recent reviews by Raouafi et al.
65
+ (2016) and Shen (2021).
+ Coronal jets are typically identified as being divided into two distinct types, “standard” and “blowout” jets (cf. Moore et al. 2010), based on morphological behaviour. Standard jets follow the model originally proposed by Shibata et al. (1992), with a narrow spire that remains thin throughout the lifetime of the jet, a relatively dim source region, and no emission in the cooler 304 Å passband. In contrast, blowout jets initially behave as standard jets, with the spire subsequently broadening to match the width of the footpoints. Blowout jets have also been observed to produce emission in the 304 Å passband, with Moore et al. (2010) suggesting that reconnection as a result of an emerging bipole which drives the jet could release the overlying magnetic field and enable the observed surge (i.e., the eruption of cooler material). This cooler material has traditionally been identified using the 304 Å passband, but Alexander & Fletcher (1999) presented observations of a coronal jet made using the Lyman-α 1216 Å passband from the Transition Region And Coronal Explorer (TRACE; Handy et al. 1999). In this case, the jet produced bright emission in the Lyman-α passband, and could be tracked at a cadence of ∼60 s.
+ arXiv:2301.02034v1 [astro-ph.SR] 5 Jan 2023
+ [Figure 1, panels a–l: intensity and running-difference images of the jet in the EUI/HRI Lyman-α and 174 Å passbands and the SDO/AIA 304, 171, 193, and 211 Å passbands at t ≈ 06:04:52–06:04:57; axis tick labels omitted.]
+ Figure 1. Jet eruption in a polar coronal hole. The coronal jet (indicated by the white arrow) observed by EUI/HRI (top row) and SDO/AIA (middle and bottom rows) using a combination of intensity and running difference images for each passband. Top row shows the EUI/HRI 174 Å and Lyman-α passbands, middle row shows the SDO/AIA 304 Å and 171 Å passbands, and the bottom row shows the SDO/AIA 193 Å and 211 Å passbands. In each case, intensity images have been enhanced using the Multiscale Gaussian Normalisation technique of Morgan & Druckmüller (2014), while the times of the images used to make the running difference images are given in the panel title. Note that all times indicate the time at Earth. An animated version of this figure is available as movie 0.mp4, with a duration of 13 s, which shows the temporal evolution of the erupting jet observed by both Solar Orbiter EUI and SDO/AIA.
+ The multiple coronal extreme ultraviolet (EUV) passbands provided by the Atmospheric Imaging Assembly (AIA; Lemen et al. 2012) onboard the Solar Dynamics Observatory (SDO; Pesnell et al. 2012) have enabled the development of multiple techniques to estimate plasma parameters such as density and temperature using differential emission measure (DEM; see, e.g., the codes developed by Aschwanden et al. 2013; Hannah & Kontar 2012, 2013; Plowman et al. 2013; Cheung et al. 2015; Pickering & Morgan 2019). This has enabled many plasma diagnostic studies of different solar phenomena, including coronal jets (e.g., Chen et al. 2013; Zhang & Ji 2014; Joshi et al. 2020). More recently, DEM analysis has also been used to probe the radiative thermal energy released during solar nanoflares, with Purkhart & Veronig (2022) finding an energy range of 10^24–10^29 erg for 30 SDO/AIA image series between 2011 and 2018. The high spatial and temporal resolution provided by SDO/AIA has also enabled a detailed investigation of the evolution of coronal jets (cf. Morton et al. 2012b,a; Chen et al. 2012). The High Resolution Imager (HRI) component of the Extreme Ultraviolet Imager (EUI; Rochus et al. 2020) onboard the Solar Orbiter (Müller et al. 2020) spacecraft will enable much higher spatial and temporal resolution studies of solar phenomena in its 174 Å and Lyman-α passbands. Already this is providing new insights into very small features associated with polar coronal jets (e.g., Mandal et al. 2022), offering sub-arcsecond imaging of these features.
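The nanoflare-scale radiative thermal energies quoted above can be illustrated with a back-of-the-envelope estimate: given a DEM-derived emission measure EM (cm^-5), an assumed column depth h, and a temperature T, the density follows from n ≈ sqrt(EM/h) and the thermal energy from E ≈ 3 n kB T V. The sketch below uses purely illustrative assumed values (EM, T, area, and depth are not measurements from this paper):

```python
import math

K_B = 1.380649e-16          # Boltzmann constant [erg/K]

def thermal_energy(em_cm5, temp_k, area_cm2, depth_cm):
    """Rough thermal energy E = 3 n kB T V, with n = sqrt(EM / h)."""
    n_e = math.sqrt(em_cm5 / depth_cm)        # electron density [cm^-3]
    volume = area_cm2 * depth_cm              # emitting volume [cm^3]
    return 3.0 * n_e * K_B * temp_k * volume  # [erg]

# Illustrative (assumed) values for a small bright point:
# EM ~ 3e24 cm^-5, T ~ 2 MK, a (3 Mm)^2 footprint, and a 3 Mm column depth.
E = thermal_energy(em_cm5=3e24, temp_k=2e6, area_cm2=(3e8)**2, depth_cm=3e8)
print(f"E ~ {E:.1e} erg")   # of order 10^24 erg, i.e. nanoflare scale
```

With these numbers the estimate lands at roughly 2 × 10^24 erg, the same order as the nanoflare energies discussed in the text; the result scales linearly with the assumed volume and temperature and with the square root of the emission measure.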
+ In this work, we use a combination of observations from Solar Orbiter EUI and SDO/AIA to examine the initial evolution of a small-scale coronal jet erupting from the southern polar coronal hole. The event is described in section 2, with an analysis of the observations presented in section 3, before some conclusions are drawn in section 4.
+ 2. OBSERVATIONS AND DATA ANALYSIS
+ The coronal jet discussed here erupted from the south pole of the Sun on 2021-Sept-14, beginning at ∼05:59:02 UT as observed by Solar Orbiter EUI.[1] At the time, Solar Orbiter was located 0.587 astronomical units from the Sun at an angle of 47.372 degrees behind the Earth, and was performing a calibration manoeuvre, pointing at the north and south poles and the east and west limbs to enable cross-calibration of the different high-resolution telescopes. The erupting jet was observed in both the 174 Å and Lyman-α passbands, with an image scale of 0.492′′ (1.028′′) per pixel in the 174 Å (Lyman-α) passband and a temporal resolution of 5 s (see top row of Figure 1). For the analysis described here, we used the calibrated level-2 EUI/HRI data from EUI Data Release 4.[2]
+ [1] Note that the distances of the individual spacecraft from the Sun meant that phenomena were observed at different times by the different spacecraft, so to avoid confusion, we will use the time at Earth throughout this discussion.
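The quoted plate scale can be converted into physical units at the quoted spacecraft distance. A minimal sketch, using only the numbers given above (0.492″ per pixel for the 174 Å channel, 0.587 au, 5 s cadence); the helper function is our own:

```python
import math

AU_KM = 1.495978707e8                 # astronomical unit [km]
ARCSEC_RAD = math.pi / (180 * 3600)   # arcseconds to radians

def km_per_pixel(plate_scale_arcsec, distance_au):
    """Physical size subtended by one pixel at the given observer distance."""
    return plate_scale_arcsec * ARCSEC_RAD * distance_au * AU_KM

scale = km_per_pixel(0.492, 0.587)    # EUI/HRI 174 A pixel at 0.587 au
print(f"{scale:.0f} km per pixel")

# A feature crossing one pixel per 5 s frame therefore moves at ~scale/5 km/s,
# so the ~100-200 km/s bulk flows reported later cross several pixels per frame.
print(f"{scale / 5:.0f} km/s per pixel-per-frame")
```

At 0.587 au one HRI 174 Å pixel corresponds to roughly 210 km on the Sun, so the 5 s cadence resolves motions down to a few tens of km s−1.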
+ The jet was also well observed by SDO/AIA, with the eruption starting at ∼06:02:30 UT as observed near Earth. SDO/AIA has an image scale of 0.6′′ per pixel at a 12 s cadence in each of its 7 EUV passbands (94, 131, 171, 193, 211, 304, 335 Å), with the data presented here processed using the standard aia_prep.pro routine contained within SolarSoftWare (Freeland & Handy 1998). The eruption was well observed by the 304, 171, and 193 Å passbands (with their temperature response functions peaking at T ∼ 10^4.9, 10^5.9, and 10^6.2 K respectively), and was also identifiable in the 211 Å passband (T ∼ 10^6.25 K), as shown in the middle and bottom rows of Figure 1, with no signal apparent within the 94, 131, or 335 Å passbands (T ∼ 10^6.85, 10^5.7, and 10^6.4 K respectively). Note that a full description of the response function for each of the AIA passbands can be found in Boerner et al. (2012).
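The ∼3.5-minute offset between the onset time quoted for Solar Orbiter (∼05:59:02 UT) and the near-Earth SDO onset (∼06:02:30 UT) is consistent with the difference in light travel time from the Sun to each spacecraft. A quick consistency check, assuming the quoted times are light-arrival times at each spacecraft (499 s is the light travel time over 1 au):

```python
LIGHT_TIME_1AU_S = 499.005      # light travel time for 1 au [s]

def arrival_delay(distance_au):
    """Light travel time from the Sun to an observer at distance_au."""
    return distance_au * LIGHT_TIME_1AU_S

# Solar Orbiter at 0.587 au sees the Sun earlier than SDO near Earth (~1 au):
delta = arrival_delay(1.0) - arrival_delay(0.587)
print(f"expected offset: {delta:.0f} s")

# Observed onset difference: 06:02:30 - 05:59:02
observed = (6 * 3600 + 2 * 60 + 30) - (5 * 3600 + 59 * 60 + 2)
print(f"observed offset: {observed} s")
```

The expected geometric offset (∼206 s) agrees with the observed 208 s difference to within a couple of seconds, supporting the identification of the same eruption onset in both datasets.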
+ 3. RESULTS
+ The erupting jet is shown as observed in multiple passbands at ∼06:04:52 UT in Figure 1, with the temporal evolution of the jet shown in more detail in the associated animation movie 0.mp4. The jet has a clear “λ”-style morphology (see e.g., Raouafi et al. 2016, for details), with each passband also showing an apparent kink in the jet following its initial eruption, as indicated by the white arrows. Inspection of the temporal evolution of the jet eruption using the larger field of view provided by SDO/AIA suggests that this kinking is due to the erupting jet interacting with a nearby very faint polar plume, which deflects the initially laterally propagating jet. As observed by EUI/HRI, the initially narrow jet reverses its propagation in the direction parallel to the solar limb and starts to expand. This is apparent in both the 174 Å and Lyman-α passbands, with the higher resolution of the 174 Å passband also suggesting a slight untwisting of the jet as it initially erupts.
+ This overall behaviour of the jet is also seen in the SDO/AIA passbands shown in Figure 1, with the different viewpoint and additional passbands providing additional information. From this viewpoint, the clear non-radial motion is consistent with its interpretation as a “λ”-style jet. The jet also appears to be guided radially outward from the Sun following the kinked evolution (although the kinking is less pronounced here), with little to none of the spreading observed by EUI/HRI. This lack of spread of the jet in AIA images could be due to line-of-sight effects. While the jet can be identified in the 193 Å and 211 Å passbands (observing plasma at ∼1–2 MK), it is clearest in the 171 Å and particularly the 304 Å passbands, suggesting that the feature is likely composed of cooler plasma, which is consistent with the clear identification of the jet in the EUI/HRI Lyman-α passband.
+ [2] https://doi.org/10.24414/s5da-7e78
377
+ 3.1. Initial evolution
+ The very high spatial and temporal cadence of the EUI/HRI 174 Å passband
+ enables a detailed analysis of the initial stages of the jet eruption, as
+ shown in Figure 2. Prior to the onset of the eruption, overarching loops
+ above the coronal bright point can be identified, as shown in Figure 2a.
+ Within ∼60 s, the reconnection process has begun, with the eruption of the
+ mini-filament identifiable in Figure 2b. This mini-filament then interacts
+ with the overlying loops to drive breakout reconnection, as shown in the
+ image 5 s later (Figure 2c). This can be seen as the rapid disconnection and
+ opening of the overarching loops.
+ The result of the bi-directional flows induced by this breakout
+ reconnection can be seen in the evolution of small plasmoid-like blobs
+ draining from the reconnection site down the legs of the overarching loops
+ to the solar surface (as indicated by the unidirectional white arrows in
+ Figure 2d-e). Meanwhile the mini-filament continues to erupt, with its spine
+ indicated by the bi-directional arrow in panels e-j of Figure 2. However, as
+ it erupts, its legs start to stretch (Figure 2g) and subsequently break,
+ producing bright side lobes (Figure 2h-i). This enables the eruption of more
+ plasmoid material into the erupting jet (indicated by the red arrow in
+ Figure 2h & i and then by the white arrow in Figure 2j-l), which then
+ evolves out into the solar atmosphere.
+ In total, this process takes ∼140 s from the onset of the eruption to the
+ erupting plasma escaping out into the outer corona, with the very high
+ temporal and spatial resolution of EUI/HRI enabling a detailed analysis of
+ the different reconnection stages of the initiation process. While the same
+ process was also observed by the Lyman-α passband, the lower spatial
+ resolution and very high intensity of the plasma in this passband made it
+ difficult to make a direct comparison of this small-scale behaviour. The
+ difference between the two passbands can be seen in panels a & c of
+ Figure 1, with the small-scale loop features identifiable in the 174 Å image
+ (panel c) not as apparent in the Lyman-α image (panel a).
+ 3.2. Jet kinematics
+ Once the plasma from the jet had been ejected via the breakout reconnection
+ process, the next step was to quantify its evolution. A series of
+ distance-time stack plots were used to further examine the temporal
+ evolution of the jet, as shown in Figures 3 and 4. The initial evolution of
+ the jet close to the origin was examined using observations from EUI/HRI, as
+ shown in Figure 3. The top row of Figure 3 shows individual snapshots of the
+ initial stages of the erupting jet as observed by EUI/HRI 174 Å (panel a)
+ and EUI/HRI Lyman-α (panel b) at 06:03:48 UT. Panels c & d then show the
+ distance-time stack plots along the white box indicated in panels a & b. The
+ white fiducial lines here indicate that the jet had an initial velocity of
+ ∼166 km s^-1 as observed by both EUI/HRI_EUV and EUI/HRI_Lya.
+ Following this initial evolution, the longer term evolution of the erupting
+ jet was tracked using the larger field of view of SDO/AIA (see Figure 4).
+ Here, the top row shows the evolution of the erupting jet, with the bottom
+ row showing distance-time stackplots along the white region defined in
+ panel b for the 171 Å (left column) and 193 Å (right column) passbands. The
+ bright jet feature was then identified in the stackplots as the positions of
+ maximum intensity associated with the jet at each time-step (as shown with
+ symbols in panel c for the 171 Å passband). These points were then fitted
+ using a linear fit to derive the kinematics. This reveals a slight
+ deceleration in the evolution of the jet as observed by both passbands, from
+ ∼200 km s^-1 close to the source to ∼195 km s^-1 further out following the
+ interaction with the nearby faint streamer.
+ Although care has been taken to try and ensure that the same feature was
+ identified using the datasets from the different spacecraft, there may be
+ several reasons why the derived kinematics differ slightly. The EUI/HRI
+ observations are taken very close to the source of the erupting jet, while
+ the SDO/AIA observations cover a much longer distance and therefore
+ timeframe. The angle of the evolution of the jet with respect to the plane
+ of sky will also affect the kinematics derived using each instrument.
+ While both SDO/AIA passbands in Figure 4 show a feature propagating outward
+ from the Sun with a fitted velocity of ∼200 km s^-1, it is also possible to
+ identify a very faint feature in the 193 Å passband with a velocity of
+ ∼715 km s^-1 propagating ahead of the jet. This feature is not very clear in
+ the SDO/AIA observations presented here, and could not be easily identified
+ in images from the LASCO C2 coronagraph onboard the SOHO spacecraft, making
+ it difficult to quantify. However, this feature is consistent with the
+ previous observations by Cirtain et al. (2007) of two velocities associated
+ with x-ray jets from a polar coronal hole. Pariat et al. (2015) subsequently
+ suggested that
+ EUI jet eruption
+ [Figure 2 panels: twelve EUI/HRI 174 Å snapshots (a–l) spanning 06:02:03 to
+ 06:06:13 UT on 2021-09-14, annotated with the overarching loops of the
+ coronal bright point, the mini-filament eruption, and the breakout
+ reconnection.]
+ Figure 2. The initial evolution of the jet eruption observed in the 174 Å
+ passband by EUI/HRI. Panel a shows the initial overarching loops prior to
+ the onset of the eruption. Within ∼60 s (panel b), the mini-filament starts
+ to erupt, interacting with the overarching loops and driving breakout
+ reconnection (panel c). Bright plasma blobs produced in this reconnection
+ process then start to drain down the leg of the loops (white arrow in panels
+ d-f), with the spine of the jet indicated by the double-ended white arrow in
+ panels e-j. The reconnection opens magnetic field lines, enabling the
+ eruption of the jet plasma (indicated by the red arrow in panels h-i and
+ subsequently by the white arrow in panels j-l).
+ [Figure 3 panels: EUI/HRI 174 Å (a) and Lyman-α (b) snapshots at
+ t = 06:03:47, with distance-time stackplots (distance in Mm vs. start time
+ 14-Sep-21 05:57:55) in panels c & d showing fiducial speeds of 165 km/s and
+ 168 km/s respectively.]
+ Figure 3. The initial stages of the jet eruption observed in the 174 Å and
+ Lyman-α passbands by EUI/HRI. Panels a & b show the initial stages of the
+ eruption in the 174 Å and Lyman-α passbands respectively, with the white
+ dotted box in panels a & b used to produce the distance-time stack plots
+ shown in panels c & d. A fiducial line is overlaid on the leading edge of
+ the bright jet feature in panels c & d to guide the eye and to illustrate
+ the propagation speed of the jet.
+ an inclined solar jet could produce a standard jet followed by a helical jet
+ due to the untwisting of newly reconnected magnetic field lines as the
+ eruption evolves. This standard jet could then be observed as a wave
+ travelling at a phase speed close to the local Alfvén speed (∼800 km s^-1 as
+ measured in X-ray observations by Cirtain et al. 2007), with the helical (or
+ blowout) jet observed as a bulk plasma flow travelling at a fraction of the
+ phase speed. The faint feature observed here using EUV observations can
+ therefore be interpreted as the standard jet wave simulated by Pariat et al.
+ (2015) and observed in X-rays by Cirtain et al. (2007).
+ 3.3. Differential Emission Measure analysis
+ The evolution of the jet plasma was examined in more detail using a
+ Differential Emission Measure (DEM) approach in order to quantify how the
+ temperature and density of the jet plasma evolved with time. The DEM φ(T) is
+ defined as,
+ φ(T) = n_e^2(T) dh/dT,    (1)
+ where n_e is the electron number density and h is the line of sight depth.
+ Computing DEMs from observations is an ill-posed problem, and multiple
+ techniques have been described to try and solve it using observations from
+ SDO/AIA (see, e.g., Long et al. 2021, for more details). In this case, we
+ used the Regularised Inversion technique developed by Hannah & Kontar (2012,
+ 2013) to derive the differential emission measure of the field of view shown
+ in Figure 1. A selection of the resulting DEMs can be seen in Figure 5 for
+ temperatures of 10^5.9 – 10^6.4 K in bins of 10^0.1 K. Note that above this
+ temperature it was not possible to identify any signature of the erupting
+ jet feature. The left two columns in Figure 5 show the DEM images in each
+ temperature bin at t = 06:05:23 UT, with the right two columns showing the
+ temporal evolution of the DEM in each temperature bin for the region shown
+ in Figure 4b used to calculate the kinematics of the jet eruption as
+ observed by SDO/AIA. Although very faint, a signature of the jet eruption
+ can be identified in the different DEM temperature bins shown here, with the
+ feature identified by arrows in each DEM stackplot to help guide the eye.
+ This faint feature matches the feature observed in Figure 4 propagating at a
+ velocity of ∼200 km s^-1, with no evidence of the faster feature identified
+ in the 193 Å passband shown in Figure 4d.
+ To try and further understand the plasma evolution of the jet, the
+ EM-weighted temperature and density of the field of view were estimated
+ using the approach of Vanninathan et al. (2015) and Long et al. (2019). The
+ EM-weighted electron number density can be defined as,
+ n_e = sqrt( ∫ φ(T) dT / h ),    (2)
+ where h is the plasma scale height, while the DEM-weighted temperature can
+ be defined as,
+ T = ∫ φ(T) T dT / ∫ φ(T) dT,    (3)
+ (see, e.g., Cheng et al. 2012). The resulting temperature and number density
+ stackplots along the jet region highlighted in Figure 4b are shown in panels
+ a and b of Figure 6 respectively. The erupting jet can be identified (and is
+ highlighted using arrows to help guide the
+ [Figure 4 panels: AIA 171 Å (a) and 193 Å (b) fields of view (scale bar
+ 50 Mm), with distance-time stackplots (distance along the jet in Mm vs.
+ start times 14-Sep-21 06:01:39 and 06:01:46) in panels c & d showing
+ fiducial speeds of 195 and 200 km/s, and an additional 715 km/s feature in
+ panel d.]
+ Figure 4. The longer term evolution of the jet eruption as observed using
+ the 171 Å and 193 Å passbands on SDO/AIA. Top row shows the field of view
+ used to track the jet evolution, with the bottom row showing the temporal
+ evolution along the region highlighted in white in panel b for the 171 Å
+ (left) and 193 Å (right) passbands.
+ eye), but is again very faint in both the number density and temperature
+ plots. A profile of both temperature and density along the white dashed line
+ shown in Figure 6a & b is shown in black in panels c & d for temperature and
+ number density respectively. In both cases the temporal evolution is very
+ noisy, and a heavily smoothed line has been added in red to highlight the
+ slight increase above the background corresponding to the passage of the jet
+ feature.
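The weighted quantities in Equations 2 and 3 can be sketched numerically for a single pixel as follows. The DEM values and line-of-sight depth below are hypothetical, chosen only to illustrate the weighting, not measured from the data.

```python
import numpy as np

# Sketch of the EM-weighted quantities in Equations 2 and 3 for one pixel.
logT_edges = np.linspace(5.9, 6.4, 6)               # bin edges, Log10 K
T_mid = 10 ** (0.5 * (logT_edges[:-1] + logT_edges[1:]))
dT = np.diff(10 ** logT_edges)                      # bin widths in K
dem = np.array([2e20, 5e20, 8e20, 4e20, 1e20])      # phi(T), cm^-5 K^-1 (hypothetical)

em = np.sum(dem * dT)                               # total EM = integral of phi(T)dT, cm^-5
h = 5e9                                             # assumed LOS depth, cm
n_e = np.sqrt(em / h)                               # Eq. 2: EM-weighted density, cm^-3
T_w = np.sum(dem * T_mid * dT) / em                 # Eq. 3: EM-weighted temperature, K
```

Applied pixel-by-pixel to the regularised-inversion DEMs, this weighting produces the temperature and density maps summarised in Figure 6.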
+ 3.4. Helical Morphology of the Jet and Link to Switchbacks
+ An apparent helical (or kinked, twisted) morphology of the jet material can
+ be seen in panels a–d of Figure 1. Such morphologies in jets are not
+ uncommon, as noted by e.g., Shimojo et al. (1996); Wang et al. (1998);
+ Veselovsky et al. (1999); Jiang et al. (2007); Moore et al. (2015). They
+ suggest the presence of helical or kinked magnetic field lines, similar to
+ structures observed in erupting prominences. We note that the jet is also
+ visible in the Lyman-α HRI channel (Figure 1a, b), indicating that a part of
+ the erupting material is at chromospheric or transition region temperatures
+ (see e.g., Canfield et al. 1996; Sterling et al. 2015). As noted above,
+ helical field
+ [Figure 5 panels: DEM maps (Solar X vs. Solar Y, at T = 06:05:23 UT) and
+ stackplots (Tstart = 14-Sep-21 06:00:11) for Log T = 5.9–6.0, 6.0–6.1,
+ 6.1–6.2, 6.2–6.3, and 6.3–6.4, with the DEM colour scale spanning
+ 10^18–10^21 cm^-5 K^-1.]
+ Figure 5. Images (left two columns) and stackplots (right two columns) for
+ different DEM temperature bins (as defined in the panel titles) showing the
+ evolution of the jet in each temperature bin. The propagating jet feature
+ has been highlighted using black arrows in the stackplots to help guide the
+ eye.
+ lines in jets are usually interpreted as the result of interchange
+ reconnection of large-scale open coronal hole field lines with small-scale
+ closed field lines at the coronal base (Pariat et al. 2009).
+ The kinked magnetic field lines in jets may be linked to the magnetic field
+ switchbacks that are frequently observed in the near-Sun solar wind by the
+ Parker Solar Probe mission (PSP; Kasper et al. 2019; Bale et al. 2019).
+ Switchbacks represent significant deviations of the magnetic field from the
+ nominal Parker spiral, which in extreme cases may result in the inversion of
+ the radial component of the magnetic field. The link of coronal jets with
+ switchbacks was suggested by Sterling & Moore (2020), who argued that
+ slightly kinked magnetic field lines resulting from the interchange
+ reconnection may evolve to switchbacks as they propagate in the heliosphere.
+ The size of the observed kinked jet in the transverse (i.e. perpendicular
+ to the radial) direction can be estimated from Figure 1c to be around
+ 6×10^3 km. At this moment, the jet is located at 1.03 R⊙ from the centre of
+ the Sun. Assuming the radial expansion of structures propagating outwards,
+ the transverse size of the corresponding feature at 35.7 R⊙ (the distance of
+ the first perihelion of Parker Solar Probe) would be around 7×10^6 km. This
+ is much larger than the transverse scale of switchbacks at this distance
+ from the Sun, which is reported to be around 10^4 km (Horbury et al. 2020).
+ However, Figure 1c shows that the jet does not look like a single kinked
+ feature, but rather has a developed fine structure with many individual
+ sub-features at scales down to a few HRI pixels, oriented in different
+ directions with respect to the radial. If a kink has the minimal resolved
+ transverse size of two HRI pixels (i.e. around
+ [Figure 6 panels: stackplots of EM-weighted temperature (a; Log10 T) and
+ EM-weighted density (b; Log10 cm^-3) against distance (Mm) from start time
+ 14-Sep-21 06:01:41, with cuts of temperature (c; MK), Log10 density (d;
+ cm^-3), and integrated emission measure (e; 10^27 cm^-5) against time from
+ 14-Sep-21 06:01:47.]
+ Figure 6. The differential emission measure of the erupting jet. Top row
+ shows the emission measure weighted temperature (left) and electron number
+ density (right) along the region of interest shown in Figure 4b used to
+ calculate the jet kinematics. The faint jet feature has been highlighted
+ using black arrows to help guide the eye. Panels c & d show a cut in
+ temperature and number density respectively along the white dashed line
+ shown in panels a & b, with the original data shown in black and a heavily
+ smoothed line shown in red to help guide the eye. Panel e shows the
+ evolution in integrated emission measure in the blue square at the source of
+ the jet eruption shown in Figure 1e.
+ 400 km at the time of our observations), then its radial expansion would
+ lead to a switchback transverse size of around 5×10^5 km at 35.7 R⊙. This
+ size is still an order of magnitude larger than the maximal switchback size
+ (not necessarily in the transverse direction) of 7×10^4 km inferred by
+ Horbury et al. (2020). This means that the origin of individual switchbacks
+ in the corona cannot be resolved by HRI, even if non-radial structures at
+ the resolved scales of the jet may suggest the presence of helical fields at
+ even smaller scales. The situation will improve only slightly at the closest
+ perihelion of Solar Orbiter around 0.284 au, as the linear spatial
+ resolution of HRI observations will be only a factor of two better than that
+ of the observations taken at 0.587 au reported here.
+ Another important scale is that of switchback patches, which contain
+ multiple smaller-scale switchbacks (Bale et al. 2021). Each switchback patch
+ typically corresponds to the supergranulation scale of around 3°–5° in
+ heliographic longitude (Bale et al. 2021), which is a few times larger than
+ the transverse size of our jet (around 0.5°). The jet reported in this study
+ therefore corresponds to an intermediate scale situated between the scales
+ of individual switchbacks and the switchback patches.
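The two size extrapolations above can be reproduced with a short calculation. Note that the quoted figures (7×10^6 km from 6×10^3 km, and 5×10^5 km from 400 km) imply that the transverse size is taken to scale with the square of heliocentric distance; that scaling is adopted here as an assumption, and a purely linear scaling would give values roughly 35 times smaller.

```python
# Reproducing the transverse-size extrapolations in the text, assuming the
# transverse size scales as the square of heliocentric distance (the scaling
# implied by the quoted numbers).
def size_at(r_target_rsun, size_km, r_source_rsun=1.03):
    """Extrapolate a transverse size from r_source to r_target (solar radii)."""
    return size_km * (r_target_rsun / r_source_rsun) ** 2

jet_kink_km = size_at(35.7, 6e3)     # whole kinked jet at first PSP perihelion, ~7e6 km
two_pixel_km = size_at(35.7, 400.0)  # two HRI pixels, ~5e5 km
```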
+ 4. DISCUSSION & CONCLUSIONS
+ As part of its commissioning activities taking observations of the southern
+ polar coronal hole on 2021-Sept-14, Solar Orbiter/EUI observed a small
+ coronal hole jet eruption with very high spatial and temporal resolution.
+ Despite the difference in spacecraft position, the jet was also well
+ observed by SDO/AIA, enabling a multi-viewpoint, multi-thermal analysis of
+ the eruption. In both cases, the jet was observed to initially erupt almost
+ laterally, with a clear kink seen in the jet evolution as it impacted a
+ nearby polar plume observed by both EUI/HRI and SDO/AIA (see movie 0.mp4
+ associated with Figure 1). The jet was then observed by SDO/AIA to evolve
+ radially away from the Sun, while, as observed by EUI/HRI, the jet appeared
+ to reverse its propagation in the direction parallel to the solar limb as it
+ propagated outward from the Sun.
+ from the Sun as observed by EUI/HRI.
1260
+ The very high spatial and temporal resolution pro-
1261
+ vided by EUI/HRI in the 174 ˚A passband enabled a
1262
+ detailed analysis of the initiation of the jet.
1263
+ As out-
1264
+ lined in Figure 2, the jet formed as a result of breakout
1265
+ reconnection between the erupting mini-filament and a
1266
+ small overlying loop system. This resulted in the further
1267
+ stretching and reconnection along the erupting filament,
1268
+ with small plasmoid-like blobs visible flowing away from
1269
+ the site of the reconnection down the legs of the loop
1270
+ system. This reconnection opened the overlying mag-
1271
+ netic field which then enabled the eruption of the jet
1272
+ itself. Although the process was also observed using the
1273
+ Lyman-α passband, the lower resolution and bright na-
1274
+ ture of the plasma in this passband meant that it was
1275
+ not possible to discern any fine structure comparable to
1276
+ that observed using the 174 ˚A passband.
1277
+ As observed in Figures 3 and 4, the jet displays slightly different
+ kinematics when observed by SDO/AIA and EUI/HRI. Figure 3 shows the initial
+ evolution of the jet observed close to the source prior to the interaction
+ with the nearby polar plume as seen by EUI/HRI 174 Å and Lyman-α, with
+ Figure 4 showing the longer term evolution of the jet as observed by
+ SDO/AIA. These figures suggest a constant linear velocity for the jet of
+ ∼165–200 km s^-1, with the slight differences between the observations from
+ the two spacecraft most likely due to the different angle of observation,
+ with the jet exhibiting different kinematics when propagating out of the
+ plane of sky as seen from different perspectives. The jet was also best
+ observed using the cooler passbands available from both SDO/AIA and
+ EUI/HRI, with a very clear jet seen in both AIA 304 Å and HRI Lyman-α,
+ although it was possible to discern a fainter feature in the 171/174 Å,
+ 193 Å, and 211 Å passbands. A very faint feature was also observed in the
+ SDO/AIA 193 Å passband propagating away from the origin with a velocity of
+ ∼715 km s^-1, as shown in Figure 4. This is consistent with the previous
+ observations in X-rays by Cirtain et al. (2007) and the simulations of
+ Pariat et al. (2015) of a wave travelling at a phase velocity close to the
+ Alfvén speed ahead of the bulk plasma motion in a coronal jet. It implies
+ that the jet is the result of interchange reconnection between open and
+ closed magnetic field in the corona, consistent with the blowout jet
+ interpretation, with the unique high resolution EUI/HRI EUV observations of
+ the breakout reconnection in the initial stages of the eruption, and with
+ the observations of an initial untwisting of the jet as it erupted. Although
+ this behaviour could be linked to the switchbacks detected by PSP, the size
+ of the kink observed here is much larger than the typical switchback size
+ measured by Horbury et al. (2020).
+ While the jet can be clearly seen in the individual passbands shown in
+ Figure 1, it was much more difficult to identify using the derived
+ differential emission measure, as can be seen in Figure 5. The jet can be
+ identified using the temperature bins shown in the figure, but no signal was
+ observed in the temperature bins above ∼10^6.4 K. A further analysis of the
+ emission measure weighted temperature and number density (as shown in
+ Figure 6) shows that a slight increase in both temperature and density could
+ be identified associated with the passage of the jet, but it was very small
+ in both cases. These observations, combined with the very clear presentation
+ of the jet in the 304 Å passband observed by SDO/AIA and the Lyman-α
+ observed by EUI/HRI, indicate that the jet primarily consisted of cooler
+ material which was below the temperature threshold of the DEM calculated
+ using SDO/AIA observations. This indicates that the jet was erupting
+ chromospheric material, again consistent with a blowout jet interpretation.
+ However, the DEM derived from SDO/AIA can also
1333
+ be used to estimate the radiative thermal energy associ-
1334
+ ated with the onset of the jet eruption. As discussed by
1335
+ Benz & Krucker (1999) and more recently by Purkhart
1336
+ & Veronig (2022), it is possible to estimate the radiative
1337
+ thermal energy Eth released during a coronal bright-
1338
+ ening using the evolution of emission measure via the
1339
+ equation,
1340
+ Eth = 3kBT
1341
+
1342
+ ∆EMqhA,
1343
+ (4)
1344
+ where kB is Boltzmann’s constant, T is the tempera-
1345
+ ture during the peak of the EM evolution, ∆EM is the
1346
+ change in integrated emission measure relative to the
1347
+ pre-event value measured in cm−5, h is the total line of
1348
+ sight thickness of the event, A is the total area of the
1349
+ event, and q is the filling factor, representing the differ-
1350
+ ence between observed and actual volumes occupied by
1351
+ the emitting plasma. As this is difficult to accurately
1352
+ estimate, and does not measurably affect the estimated
1353
+ energy, we have assumed a filling factor q of 1 (cf. Par-
1354
+ nell & Jupp 2000; Purkhart & Veronig 2022). The re-
1355
+ gion chosen here as corresponding to the onset of the jet
1356
+ eruption is indicated by the blue square in Figure 1e,
1357
+ with the temporal evolution of the integrated emission
1358
+ measure in this region shown in Figure 6e. Note that
1359
+ due to the highly variable nature of the evolution of
1360
+
1361
+ EUI jet eruption
1362
+ 11
1363
+ the emission measure, the ∆EM was defined using the
1364
+ change between the start and peak of the smoothed red
1365
+ dashed line shown in Figure 6e. We also followed the
1366
+ lead of Purkhart & Veronig (2022), assuming a line of
1367
+ sight distance h =
1368
+
1369
+ A, after Benz & Krucker (1999).
1370
+ From equation 4, we derived a radiative thermal energy
1371
+ for the source region of the coronal jet of 1.5×1024 ergs,
1372
+ comparable to the energy of a nanoflare (see e.g. Chitta
1373
+ et al. 2021). However, it is worth noting that this is most
1374
+ likely an underestimation of the true radiative thermal
1375
+ energy produced by the source region. The jet in this
1376
+ case erupted from very close to the limb, but still on-
1377
+ disk as observed by SDO/AIA, and this close proximity
1378
+ to the limb leads to an increased absorption of the EUV
1379
+ radiation along the line of sight to the observer.
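The scaling in equation 4 can be sketched numerically. The snippet below works in CGS units and uses purely illustrative input values (not the measured values for this event); the helper name `radiative_thermal_energy` is our own.

```python
# Illustrative evaluation of the radiative thermal energy estimate
# E_th = 3 k_B T sqrt(dEM * q * h) * A  (equation 4, CGS units).
# Input values are hypothetical, chosen only to show that nanoflare-scale
# energies (~1e24 erg) emerge for plausible coronal-brightening inputs.
import math

k_B = 1.380649e-16  # Boltzmann constant [erg / K]

def radiative_thermal_energy(T, dEM, A, q=1.0, h=None):
    """T [K], dEM [cm^-5], A [cm^2]; h [cm] defaults to sqrt(A)."""
    if h is None:
        h = math.sqrt(A)  # line-of-sight depth assumed equal to sqrt(A)
    return 3.0 * k_B * T * math.sqrt(dEM * q * h) * A

# Hypothetical source region: ~1.5 Mm on a side, quiet-corona temperature.
A = (1.5e8) ** 2  # area [cm^2]
E = radiative_thermal_energy(T=2.0e6, dEM=1.0e26, A=A)
print(f"E_th = {E:.2e} erg")
```

Note that the h = √A assumption makes the energy scale as A^(5/4), so the choice of integration region directly affects the estimate.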
These observations highlight the importance of high time cadence observations with very high spatial resolution when studying the evolution of small scale features in the solar atmosphere. The HRI/EUV observations of the erupting jet presented here show the different stages of the breakout reconnection process which led to the eruption of the jet in very fine detail, and reveal untwisting of the jet as it erupts, while the Lyman-α observations show the cool nature of the erupting plasma. The additional point-of-view of Solar Orbiter is also very useful, as it highlights the interaction of the erupting jet with the nearby polar plume; behaviour that would have been missed if only observations from SDO/AIA were used to study the jet. However, the multiple passbands provided by SDO/AIA are vital for understanding the plasma diagnostics associated with the erupting jet, and for estimating the energy released during the eruption. The combination of observations from both SDO and Solar Orbiter will offer new insights into coronal jets, particularly as Solar Orbiter moves out of the solar ecliptic towards the poles.
The authors wish to thank the anonymous referee whose suggestions helped to improve the paper. DML wishes to thank Peter Wyper and Etienne Pariat for useful discussions which helped to codify the interpretation of the jet presented in this paper. Solar Orbiter is a space mission of international collaboration between ESA and NASA, operated by ESA. The EUI instrument was built by CSL, IAS, MPS, MSSL/UCL, PMOD/WRC, ROB, LCF/IO with funding from the Belgian Federal Science Policy Office (BELSPO/PRODEX PEA 4000134088); the Centre National d'Etudes Spatiales (CNES); the UK Space Agency (UKSA); the Bundesministerium für Wirtschaft und Energie (BMWi) through the Deutsches Zentrum für Luft- und Raumfahrt (DLR); and the Swiss Space Office (SSO). SDO data are courtesy of NASA/SDO and the AIA, EVE, and HMI science teams. D.M.L. is grateful to the Science and Technology Facilities Council for the award of an Ernest Rutherford Fellowship (ST/R003246/1). L.P.C. gratefully acknowledges funding by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council (grant agreement No 101039844). Neither the European Union nor the granting authority can be held responsible for them. D. Baker and I.G.H. are funded under STFC consolidated grant numbers ST/S000240/1 and ST/T000422/1 respectively. N.N. is supported by STFC PhD studentship grant ST/W507891/1. A.N.Z. thanks the Belgian Federal Science Policy Office (BELSPO) for the provision of financial support in the framework of the PRODEX Programme of the European Space Agency (ESA) under contract number 4000136424.

Facilities: SDO/AIA, Solar Orbiter EUI

Software: SSW/IDL (Freeland & Handy 1998)
REFERENCES

Alexander, D., & Fletcher, L. 1999, SoPh, 190, 167, doi: 10.1023/A:1005213826793
Aschwanden, M. J., Boerner, P., Schrijver, C. J., & Malanushenko, A. 2013, SoPh, 283, 5, doi: 10.1007/s11207-011-9876-5
Bale, S. D., Badman, S. T., Bonnell, J. W., et al. 2019, Nature, 576, 237, doi: 10.1038/s41586-019-1818-7
Bale, S. D., Horbury, T. S., Velli, M., et al. 2021, ApJ, 923, 174, doi: 10.3847/1538-4357/ac2d8c
Benz, A. O., & Krucker, S. 1999, A&A, 341, 286
Boerner, P., Edwards, C., Lemen, J., et al. 2012, SoPh, 275, 41, doi: 10.1007/s11207-011-9804-8
Canfield, R. C., Reardon, K. P., Leka, K. D., et al. 1996, ApJ, 464, 1016, doi: 10.1086/177389
Chandrashekhar, K., Morton, R. J., Banerjee, D., & Gupta, G. R. 2014, A&A, 562, A98, doi: 10.1051/0004-6361/201322408
Chen, H.-D., Zhang, J., & Ma, S.-L. 2012, Research in Astronomy and Astrophysics, 12, 573, doi: 10.1088/1674-4527/12/5/009
Chen, N., Ip, W.-H., & Innes, D. 2013, ApJ, 769, 96, doi: 10.1088/0004-637X/769/2/96
Cheng, X., Zhang, J., Saar, S. H., & Ding, M. D. 2012, ApJ, 761, 62, doi: 10.1088/0004-637X/761/1/62
Cheung, M. C. M., Boerner, P., Schrijver, C. J., et al. 2015, ApJ, 807, 143, doi: 10.1088/0004-637X/807/2/143
Chitta, L. P., Peter, H., & Young, P. R. 2021, A&A, 647, A159, doi: 10.1051/0004-6361/202039969
Cirtain, J. W., Golub, L., Lundquist, L., et al. 2007, Science, 318, 1580, doi: 10.1126/science.1147050
Freeland, S. L., & Handy, B. N. 1998, SoPh, 182, 497, doi: 10.1023/A:1005038224881
Handy, B. N., Acton, L. W., Kankelborg, C. C., et al. 1999, SoPh, 187, 229, doi: 10.1023/A:1005166902804
Hannah, I. G., & Kontar, E. P. 2012, A&A, 539, A146, doi: 10.1051/0004-6361/201117576
—. 2013, A&A, 553, A10, doi: 10.1051/0004-6361/201219727
Horbury, T. S., Woolley, T., Laker, R., et al. 2020, ApJS, 246, 45, doi: 10.3847/1538-4365/ab5b15
Jiang, Y. C., Chen, H. D., Li, K. J., Shen, Y. D., & Yang, L. H. 2007, A&A, 469, 331, doi: 10.1051/0004-6361:20053954
Joshi, R., Chandra, R., Schmieder, B., et al. 2020, A&A, 639, A22, doi: 10.1051/0004-6361/202037806
Kasper, J. C., Bale, S. D., Belcher, J. W., et al. 2019, Nature, 576, 228, doi: 10.1038/s41586-019-1813-z
Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, SoPh, 275, 17, doi: 10.1007/s11207-011-9776-8
Long, D. M., Jenkins, J., & Valori, G. 2019, ApJ, 882, 90, doi: 10.3847/1538-4357/ab338d
Long, D. M., Reid, H. A. S., Valori, G., & O'Kane, J. 2021, ApJ, 921, 61, doi: 10.3847/1538-4357/ac1cdf
Mandal, S., Chitta, L. P., Peter, H., et al. 2022, A&A, 664, A28, doi: 10.1051/0004-6361/202243765
Moore, R. L., Cirtain, J. W., Sterling, A. C., & Falconer, D. A. 2010, ApJ, 720, 757, doi: 10.1088/0004-637X/720/1/757
Moore, R. L., Sterling, A. C., & Falconer, D. A. 2015, ApJ, 806, 11, doi: 10.1088/0004-637X/806/1/11
Morgan, H., & Druckmüller, M. 2014, SoPh, 289, 2945, doi: 10.1007/s11207-014-0523-9
Morton, R. J., Srivastava, A. K., & Erdélyi, R. 2012a, A&A, 542, A70, doi: 10.1051/0004-6361/201117218
Morton, R. J., Verth, G., McLaughlin, J. A., & Erdélyi, R. 2012b, ApJ, 744, 5, doi: 10.1088/0004-637X/744/1/5
Müller, D., St. Cyr, O. C., Zouganelis, I., et al. 2020, A&A, 642, A1, doi: 10.1051/0004-6361/202038467
Nisticò, G., Bothmer, V., Patsourakos, S., & Zimbardo, G. 2009, SoPh, 259, 87, doi: 10.1007/s11207-009-9424-8
Pariat, E., Antiochos, S. K., & DeVore, C. R. 2009, ApJ, 691, 61, doi: 10.1088/0004-637X/691/1/61
Pariat, E., Dalmasse, K., DeVore, C. R., Antiochos, S. K., & Karpen, J. T. 2015, A&A, 573, A130, doi: 10.1051/0004-6361/201424209
Parnell, C. E., & Jupp, P. E. 2000, ApJ, 529, 554, doi: 10.1086/308271
Pesnell, W. D., Thompson, B. J., & Chamberlin, P. C. 2012, SoPh, 275, 3, doi: 10.1007/s11207-011-9841-3
Pickering, J., & Morgan, H. 2019, SoPh, 294, 136, doi: 10.1007/s11207-019-1526-3
Plowman, J., Kankelborg, C., & Martens, P. 2013, ApJ, 771, 2, doi: 10.1088/0004-637X/771/1/2
Purkhart, S., & Veronig, A. M. 2022, A&A, 661, A149, doi: 10.1051/0004-6361/202243234
Raouafi, N. E., Patsourakos, S., Pariat, E., et al. 2016, SSRv, 201, 1, doi: 10.1007/s11214-016-0260-5
Rochus, P., Auchère, F., Berghmans, D., et al. 2020, A&A, 642, A8, doi: 10.1051/0004-6361/201936663
Savcheva, A., Cirtain, J., Deluca, E. E., et al. 2007, PASJ, 59, S771, doi: 10.1093/pasj/59.sp3.S771
Shen, Y. 2021, Proceedings of the Royal Society of London Series A, 477, 217, doi: 10.1098/rspa.2020.0217
Shibata, K., Ishido, Y., Acton, L. W., et al. 1992, PASJ, 44, L173
Shimojo, M., Hashimoto, S., Shibata, K., et al. 1996, PASJ, 48, 123, doi: 10.1093/pasj/48.1.123
Sterling, A. C., & Moore, R. L. 2020, ApJL, 896, L18, doi: 10.3847/2041-8213/ab96be
Sterling, A. C., Moore, R. L., Falconer, D. A., & Adams, M. 2015, Nature, 523, 437, doi: 10.1038/nature14556
Vanninathan, K., Veronig, A. M., Dissauer, K., et al. 2015, ApJ, 812, 173, doi: 10.1088/0004-637X/812/2/173
Veselovsky, I. S., Zhukov, A. N., Koutchmy, S., Delannée, C., & Delaboudinière, J. P. 1999, in ESA Special Publication, Vol. 446, 8th SOHO Workshop: Plasma Dynamics and Diagnostics in the Solar Transition Region and Corona, ed. J. C. Vial & B. Kaldeich-Schürmann, 675
Wang, Y. M., Sheeley, N. R., J., Socker, D. G., et al. 1998, ApJ, 508, 899, doi: 10.1086/306450
Wyper, P. F., Antiochos, S. K., & DeVore, C. R. 2017, Nature, 544, 452, doi: 10.1038/nature22050
Wyper, P. F., DeVore, C. R., & Antiochos, S. K. 2018, ApJ, 852, 98, doi: 10.3847/1538-4357/aa9ffc
Zhang, Q. M., & Ji, H. S. 2014, A&A, 567, A11, doi: 10.1051/0004-6361/201423698
YNA0T4oBgHgl3EQfFf_E/content/tmp_files/load_file.txt ADDED
Z9E0T4oBgHgl3EQfnQEE/content/tmp_files/2301.02508v1.pdf.txt ADDED
End-to-End 3D Dense Captioning with Vote2Cap-DETR

Sijin Chen1*  Hongyuan Zhu2  Xin Chen3  Yinjie Lei4  Tao Chen1†  Gang YU3
1Fudan University  2Institute for Infocomm Research, A*STAR  3Tencent PCG  4Sichuan University
https://github.com/ch3cook-fdu/Vote2Cap-DETR
Abstract

3D dense captioning aims to generate multiple captions localized with their associated object regions. Existing methods follow a sophisticated "detect-then-describe" pipeline equipped with numerous hand-crafted components. However, these hand-crafted components would yield suboptimal performance given cluttered object spatial and class distributions among different scenes. In this paper, we propose a simple-yet-effective transformer framework Vote2Cap-DETR based on the recent popular DEtection TRansformer (DETR). Compared with prior arts, our framework has several appealing advantages: 1) Without resorting to numerous hand-crafted components, our method is based on a full transformer encoder-decoder architecture with a learnable vote query driven object decoder, and a caption decoder that produces the dense captions in a set-prediction manner. 2) In contrast to the two-stage scheme, our method can perform detection and captioning in one stage. 3) Without bells and whistles, extensive experiments on two commonly used datasets, ScanRefer and Nr3D, demonstrate that our Vote2Cap-DETR surpasses current state-of-the-arts by 11.13% and 7.11% in C@0.5, respectively. Codes will be released soon.
1. Introduction

3D dense captioning [11, 7, 38, 36, 18, 4] requires a system to localize all the objects in a 3D scene, and generate descriptive sentences for each object. This problem is challenging given 1) the sparsity of point clouds and 2) the cluttered distribution of objects.

3D dense captioning can be divided into two tasks, object detection and object caption generation. Scan2Cap[11], MORE[18], and SpaCap3D[36] propose well-designed relation reasoning modules to efficiently model relations among object proposals. [42] introduces contextual information from two branches to improve the caption. 3DJCG[4] and D3Net[7] study the correlation between 3D visual grounding and 3D dense captioning, and point out that these two tasks promote each other. Additionally, χ-Trans2Cap[38] discusses how to transfer knowledge from additional 2D information to boost 3D dense captioning.

*This work is accomplished when visiting the Advanced Perception Reasoning Lab at I2R, A*STAR.
†Corresponding author.

Figure 1. Illustration of existing two-stage 3D dense captioning method (upper) and our Vote2Cap-DETR (bottom). Existing methods adopt a two-stage pipeline that heavily depends on a detector's output. Therefore, we propose a transformer-based one-stage model, Vote2Cap-DETR, that frames 3D dense captioning as a set prediction problem.

Among existing methods, they all adopt a two-stage "detect-then-describe" pipeline[11, 18, 36, 4, 7, 42] (Figure 1). This pipeline first generates a set of object proposals, then decodes each object by a caption generator with an explicit reasoning procedure. Though these methods have achieved remarkable performance, the "detect-then-describe" pipeline suffers from the following issues: 1) Because of the serial and explicit reasoning, this task highly depends on the object detection performance, which limits the mutual promotion of detection and captioning. 2) The heavy reliance on hand-crafted components, e.g., radii, 3D operators, the definition of proposal neighbors, and post-processing (non-maximum suppression[25]) introduces additional hyper-parameters, leading to a sub-optimal performance given the sparse object surfaces and cluttered object distributions among different indoor scenes. This inspires us to design a one-stage 3D dense captioning system.

arXiv:2301.02508v1 [cs.CV] 6 Jan 2023
To address the above issues, we propose Vote2Cap-DETR, a full transformer encoder-decoder architecture for one-stage 3D dense captioning. Unlike the traditional "detect-then-describe" pipeline, we directly feed the decoder's output into the localization head and caption head in parallel. By casting 3D dense captioning as a set-to-set problem, each target instance and its language annotation is matched with a query in a one-to-one correspondence manner, helping the feature representation for proposals be more discriminative to identify each distinctive object in a 3D scene. Additionally, we also propose a novel vote query driven decoder to introduce spatial bias for better localization of objects in a cluttered 3D scene.

With the fully attentional design, we resolve 3D dense captioning with the following innovations: 1) Our method treats the 3D dense captioning task as a set prediction problem. The proposed Vote2Cap-DETR directly decodes the features into object sets with their locations and corresponding captions by applying two parallel prediction heads. 2) We propose a novel vote decoder by reformulating the object queries in 3DETR into the format of the vote query, which is a composition of the embeddings of the seed points and the vote transformation of the box with respect to the seeds. This indicates the connection between the vote query in Vote2Cap-DETR and VoteNet, but with better localization and higher training efficiency. 3) We develop a novel query driven caption head, which absorbs the relation and attribute modeling into the self- and cross-attention, so that it can look into both the local and global context to better describe the scene. Extensive experiments on two commonly used datasets, ScanRefer and Nr3D, demonstrate that our approach surpasses prior arts with many hand-crafted procedures by a large margin, which demonstrates the superiority that a full transformer architecture with sophisticated vote head and caption head can inspire many 3D vision and language tasks.

To summarize, the main contributions of this work include:

• We propose a novel one-stage and fully attention driven architecture for 3D dense captioning as a set-to-set prediction problem, which achieves object localization and caption generation in parallel.

• Extensive experiments show that our proposed Vote2Cap-DETR approach achieves a new state-of-the-art performance on both Nr3D[1] (45.53% C@0.5) and ScanRefer[11] (73.77% C@0.5).
2. Related Work

We briefly summarize works on 3D dense captioning, and DETR-based methods for image and 3D object detection. Additionally, we also introduce some methods for image captioning, which are closely related to our work.

3D Dense Captioning. 3D dense captioning, a task that requires translating 3D scene information to a set of bounding boxes and natural language descriptions, is challenging and has raised great interest among scholars in recent years. Scan2Cap[11] and MORE[18] build graphs on a detector's[29, 17] box estimations with hand-crafted rules to reason complex relations among objects in a 3D scene. SpaCap3D[36] builds a spatiality-guided transformer to model spatial relations among the detector's output. 3DJCG[4] and D3Net[7] study the joint promotion of 3D dense captioning and 3D visual grounding. χ-Trans2Cap[38] introduces additional 2D priors to complement information for 3D dense captioning with knowledge transfer. Recently, [42] shifts attention to contextual information for the perception of non-object information. These approaches have made great attempts to solve the 3D dense captioning problem. However, they all follow a "detect-then-describe" pipeline, which is heavily dependent on a detector's performance. Our proposed Vote2Cap-DETR differs from existing works in that our method is a one-stage model that detects and generates captions in parallel, and treats 3D dense captioning as a set prediction problem.

DETR: from 2D to 3D. DEtection TRansformer (DETR)[5] is a transformer[34] based architecture that treats object detection as a set prediction problem, and does not require non-maximum suppression[25] for post-processing. Though great results have been achieved, DETR suffers from slow convergence. Many follow-up works[43, 39, 14, 23, 9, 16] put efforts into speeding up DETR's training by introducing multi-scale features, cross attention designs, and label assignment techniques. Researchers also attempt to introduce transformer architectures to 3D object detection. GroupFree3D[21] learns proposal features from the whole point cloud through the transformer rather than grouping local points. 3DETR[24] analyzes the potential of the standard transformer model, and generates proposals by uniformly sampling seed points from a 3D scene. In our work, we extend the DETR architecture for 3D dense captioning in a way that makes caption generation and box localization fully interrelated with parallel decoding. Additionally, we propose the vote query for better performance and faster convergence.

Image Captioning. Image captioning requires a model to generate sentences describing key elements in an image, which has become a hot topic in computer vision. Existing image captioning works adopt an encoder-decoder architecture, where the decoder generates sentences from visual features extracted by the encoder. [2, 12, 15, 27] adopt a detector to extract region features as visual clues for the decoder, while [20, 41] extract grid features directly from an image. Additionally, [26] generates captions with both region and grid visual features. Though these methods are effective in image captioning, they cannot be directly applied to 3D dense captioning, which requires both accurately localizing and describing a 3D object, rather than simply captioning a whole 2D scene image. In contrast, our proposed caption head sufficiently leverages the rich context information in the 3D point cloud, receives visual clues from both the object query and its local context, and fuses them to achieve effective 3D dense captioning.
3. Method

As shown in Fig. 2, given a 3D scene, our goal is to localize objects of interest and generate informative natural language descriptions for each object. The input of our model is a point cloud PC = [p_in; f_in] ∈ R^(N×(3+F)) representing an indoor 3D scene. Here, p_in ∈ R^(N×3) contains the absolute locations of each point, and f_in ∈ R^(N×F) is the additional input feature for each point, such as color, normal, height, or the multiview feature introduced by [11, 6]. The expected output is a set of box-caption pairs (B̂, Ĉ) = {(b̂_1, ĉ_1), · · · , (b̂_K, ĉ_K)}, representing an estimation of K distinctive objects in this 3D scene.

Specifically, our system adopts the 3DETR[24] encoder as our scene encoder, and a transformer decoder to capture both object-object and object-scene interactions by the attention mechanism. Then, we adopt two task-specific heads for object detection and caption generation.
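The input format above can be illustrated with a minimal sketch (shapes only; the coordinates and colors below are random stand-ins, with F = 3 for RGB color):

```python
# Assemble the model input PC = [p_in; f_in] as an (N, 3 + F) array,
# concatenating absolute xyz locations with per-point features.
import numpy as np

rng = np.random.default_rng(0)
N, F = 2048, 3
p_in = rng.uniform(-5.0, 5.0, size=(N, 3))  # absolute xyz locations
f_in = rng.uniform(0.0, 1.0, size=(N, F))   # per-point color feature
PC = np.concatenate([p_in, f_in], axis=1)   # PC in R^(N x (3 + F))
print(PC.shape)
```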
3.1. 3DETR Encoder

Inspired by DETR[5], 3DETR[24] has made a successful attempt at bringing a full transformer architecture to the 3D object detection task, which removes many hard-coded design decisions, such as the popular VoteNet and PointNet++ modules used in most two-stage methods.

In the 3DETR encoder, the input PC is first tokenized with a set-abstraction layer[30]. Then, point tokens are fed into a masked transformer encoder with a set-abstraction layer followed by another two encoder layers. We denote the encoded scene tokens as [p_enc; f_enc] ∈ R^(1,024×(3+256)).
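A set-abstraction layer in the PointNet++ sense downsamples the cloud and pools features from each sampled point's neighborhood. A minimal numpy sketch of the idea follows, with mean-pooling standing in for the learned per-point MLP plus max-pool, and reduced sizes for illustration:

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS: returns indices of k well-spread points from (N, 3)."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    idx = np.empty(k, dtype=int)
    idx[0] = rng.integers(n)
    dist = np.linalg.norm(points - points[idx[0]], axis=1)
    for i in range(1, k):
        idx[i] = int(np.argmax(dist))  # farthest point from current set
        dist = np.minimum(dist, np.linalg.norm(points - points[idx[i]], axis=1))
    return idx

def set_abstraction(points, feats, k, radius):
    """Sample k centers with FPS, group points within `radius`, pool features."""
    centers_idx = farthest_point_sampling(points, k)
    centers = points[centers_idx]
    pooled = np.empty((k, feats.shape[1]))
    for i, c in enumerate(centers):
        mask = np.linalg.norm(points - c, axis=1) <= radius
        # mean pooling over the neighborhood stands in for MLP + max-pool;
        # the center itself is always in the group, so the mask is never empty
        pooled[i] = feats[mask].mean(axis=0)
    return centers, pooled

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(1024, 3))
fts = rng.normal(size=(1024, 16))
p_tok, f_tok = set_abstraction(pts, fts, k=64, radius=0.3)
```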
3.2. Vote Query

Though 3DETR has achieved initial success in 3D object detection, it suffers from certain limitations. 3DETR proposes box estimation around the query points (aka proposal centers) sampled from the scene, which can leave these boxes far away from real objects given the sparse object surfaces, resulting in slow convergence to capture discriminative object features, with further missed detections.

Prior works on fast-convergence DETR models[23, 10, 40] show that injecting more structured bias to initialize object queries, such as anchor points or content-aware queries, accelerates training. Therefore, we propose the vote query, which introduces both 3D spatial bias and content-related information, for faster convergence and performance improvement.

More specifically, we reformulate the object queries in 3DETR into the format of the vote query, as a composition of the embedding of the reference points and the vote transformation around them. This helps to build the connection between the object query in 3DETR and the vote set prediction widely studied in VoteNet.

The detailed structure is shown in Figure 3. Here, the vote ∆p_vote is predicted from the encoded scene token feature f_enc with a Feed Forward Network (FFN) FFN_vote that learns to spatially shift the encoded points to objects' centers:

p_vote = p_enc + ∆p_vote = p_enc + FFN_vote(f_enc).    (1)

Then, we sample 256 points p_seed from p_enc with farthest point sampling, and locate each point's offset estimation to form p_vq = p_seed + ∆p_vote. Finally, we gather features from (p_enc, f_enc) for p_vq with a set-abstraction layer[30], to formulate the vote query feature f_vq ∈ R^(256×256). We represent the vote query as (p_vq, f_vq).
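The vote-query construction above can be sketched as follows. A random linear map stands in for the learned FFN_vote, random subsampling for FPS, and nearest-token feature lookup for the set-abstraction layer; sizes are reduced for illustration (the paper uses 256 queries with 256-dimensional features):

```python
import numpy as np

rng = np.random.default_rng(0)
n_enc, d = 1024, 32  # encoded tokens and feature width (reduced)
n_q = 16             # number of vote queries (256 in the paper)

p_enc = rng.uniform(-1, 1, size=(n_enc, 3))
f_enc = rng.normal(size=(n_enc, d))

W_vote = 0.01 * rng.normal(size=(d, 3))  # stand-in for FFN_vote
dp_vote = f_enc @ W_vote                 # per-token offsets toward centers
p_vote = p_enc + dp_vote                 # eq. (1)

seed_idx = rng.choice(n_enc, size=n_q, replace=False)  # stand-in for FPS
p_vq = p_enc[seed_idx] + dp_vote[seed_idx]             # p_seed + vote shift

# nearest-encoded-token lookup stands in for set-abstraction gathering
nn = np.argmin(((p_vq[:, None] - p_enc[None]) ** 2).sum(-1), axis=1)
f_vq = f_enc[nn]  # vote query feature
```

The point of the construction is that each query starts both near a likely object center (spatial bias) and with features pooled from real scene content, rather than from a uniformly sampled surface point.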
Following 3DETR[24], our model adopts an eight-layer transformer decoder, and the i-th layer's input query feature f^i_query is calculated through

f^i_query = Layer_(i−1)(f^(i−1)_query + FFN(PE(p_vq))),    (2)

where f^0_query = f_vq, and PE(·) is the 3D Fourier positional encoding function[32]. Experiments in later sections demonstrate that: 1) The vote query injects additional spatial bias into object detection and boosts the detection performance. 2) Encoding features from the point cloud as initial queries accelerates convergence.
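The positional term in equation 2 can be sketched with a toy Fourier encoding. The exact frequency schedule of [32] is not reproduced here; this only shows the sin/cos expansion of query positions that is projected by an FFN and added to the query features at every decoder layer:

```python
import numpy as np

def fourier_pe(xyz, n_freqs=4):
    """Map (K, 3) positions to (K, 3 * 2 * n_freqs) sin/cos features."""
    freqs = 2.0 ** np.arange(n_freqs) * np.pi  # geometric frequency ladder
    angles = xyz[..., None] * freqs            # (K, 3, n_freqs)
    pe = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return pe.reshape(xyz.shape[0], -1)

p_vq = np.linspace(0, 1, 24).reshape(8, 3)  # toy query positions
pe = fourier_pe(p_vq)
```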
3.3. Parallel Decoding

We adopt two task-specific heads for simultaneous object detection and caption generation. The two task heads are agnostic to each other's output.

Detection Head. Detecting objects in a 3D scene requires box corner estimation B̂ and class estimation Ŝ (containing a "no object" class) from each object query feature. Following 3DETR[24], box corner estimation is reformulated into offset estimation from a query point to an object's center, plus box size estimation. All subtasks are implemented by FFNs. In practice, the object localization head is shared across different layers in the decoder, following all existing works on DETR[5, 24, 23, 10].
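The parallel decoding described above can be sketched as below, with random linear layers standing in for each learned FFN head and hypothetical sizes throughout; the real heads predict center offsets, box sizes, class logits (including "no object"), and caption tokens:

```python
import numpy as np

rng = np.random.default_rng(0)
n_q, d, n_cls = 16, 32, 19  # queries, feature width, classes + "no object"

f_query = rng.normal(size=(n_q, d))       # decoder output features
p_query = rng.uniform(-1, 1, size=(n_q, 3))

W_off, W_size, W_cls = (rng.normal(size=(d, k)) for k in (3, 3, n_cls))

# detection head: offsets to object centers, box sizes, class scores
centers = p_query + f_query @ W_off
sizes = np.abs(f_query @ W_size)
logits = f_query @ W_cls

# caption head runs from the same query features, independently of the
# detection outputs (represented here by a single per-query token choice)
vocab = 100
W_cap = rng.normal(size=(d, vocab))
first_token = (f_query @ W_cap).argmax(axis=1)
```

The key design point is that neither head consumes the other's predictions, so both are trained jointly against the matched ground-truth set rather than in a serial detect-then-describe order.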
334
+ Caption Head. 3D dense captioning requires attribute de-
335
+ tails on an object and its relation with its close surroundings.
336
+ However, the vote query itself is agnostic to box predictions
337
+ for the whole scene, and fails to provide adequate attribute
338
+ 3
339
+
340
[Figure 2: architecture diagram — feature encoding (tokenizer and scene encoder) produces (penc, fenc); vote query generation produces (pvq, fvq); a transformer decoder with parallel detection and caption heads produces the prediction results, e.g. "This is a tan cabinet. It is in the corner of the room."]

Figure 2. Approach. Vote2Cap-DETR is a one-stage transformer model that takes a 3D point cloud as its input and generates a set of box predictions and sentences localizing and describing each object in the point cloud. The scene encoder first generates encoded scene tokens (penc, fenc) from the input point cloud. Then, we generate the vote query (pvq, fvq) from the encoded scene tokens, which introduces both a spatial bias pvq and a content-aware feature fvq to the initial object queries. The transformer decoder decodes each vote query with two parallel task heads for captioning and detection. We optimize Vote2Cap-DETR with a set loss.
[Figure 3: diagram of the vote query generation module — FPS down-sampling yields pseed, FFN_vote predicts ∆pvote, and a set-abstraction layer over (penc, fenc) gathers the features, producing (pvq, fvq).]

Figure 3. Vote Query Generation. The vote query positions pvq add a spatial bias (∆pvote) to the initial object query positions (pseed), which are sampled from the scene with farthest point sampling (FPS); a feature fvq is then gathered from the point cloud for each query.
and spatial relations for generating informative captions. Therefore, the main difficulty is how to leverage sufficient surrounding contextual information without confusing the caption head.

To address the above issues, we propose the Dual-Clued Captioner (DCC), a lightweight transformer-decoder-based caption head for 3D dense captioning. DCC consists of a stack of two identical transformer decoder blocks, a sinusoidal position embedding, and a linear classification head. To generate informative captions, DCC receives two streams of visual clues V = (Vq, Vs). Here, Vq is the last decoder layer's output feature for a vote query, and Vs is the contextual information surrounding the absolute location of each vote
[Figure 4: diagram of the caption head — the word sequence ("the chair is pulled …") is embedded, passed through L_cap blocks of masked self-attention, cross-attention over Vq and Vs, and an FFN, and a linear layer then predicts the next words ("… into the table").]

Figure 4. Dual-Clued Captioner (DCC). DCC is a lightweight transformer-based caption head that uses the vote query feature Vq as the caption prefix to identify the described region, and the contextual features Vs surrounding the vote query to supply additional surrounding information for more descriptive caption generation.
query. When generating a caption for a proposal, we substitute the standard Start-Of-Sequence ('SOS') prefix with the Vq of the described query, identifying the object to be described, following [36]. Since the vote query is agnostic of the actual neighboring object proposals because of the parallel detection branch, we introduce the vote query's ks nearest local context token features as its local surroundings Vs, used as keys for cross-attention. During evaluation, we generate captions through beam search with a beam size of 5.
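The ks-nearest-context lookup that builds Vs can be sketched as follows (a toy illustration with hypothetical names; the real head gathers encoded scene-token features rather than strings):

```python
import math

def local_context(query_pos, token_pos, token_feat, ks):
    """Return the features of the ks context tokens nearest to a vote query."""
    order = sorted(range(len(token_pos)),
                   key=lambda i: math.dist(query_pos, token_pos[i]))
    return [token_feat[i] for i in order[:ks]]

# Toy scene tokens with placeholder "features".
token_pos = [(0, 0, 0), (1, 0, 0), (4, 0, 0), (0.5, 0, 0)]
token_feat = ["f0", "f1", "f2", "f3"]
Vs = local_context((0.6, 0, 0), token_pos, token_feat, ks=2)
print(Vs)
```

The selected features then serve as cross-attention keys (and values) alongside Vq, restricting the caption head to the query's close surroundings.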
3.4. Set Prediction Loss for 3D Dense Captioning

Our proposed Vote2Cap-DETR generates a set of paired box-caption proposals (B̂, Ĉ) for 3D dense captioning. This requires supervision for the vote query (Lvq), the detection head (Ldet), and the caption head (Lcap).

Vote Query Loss. We borrow the vote loss from VoteNet [29] as Lvq, to help the vote query generation module learn to shift the points penc to an object's center:

    L_vq = (1/M) * sum_{i=1}^{M} sum_{j=1}^{N_gt} || p^i_vote - cnt_j ||_1 · I( p^i_enc ∈ I_j ).   (3)

Here, I(·) is an indicator function that equals 1 when the condition is met and 0 otherwise, N_gt is the number of instances in a 3D scene, M is the size of p_vote, and cnt_j is the center of the j-th instance I_j.
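Eq. (3) can be made concrete with a small numeric sketch (toy data; the helper name and the fixed offsets are hypothetical):

```python
def vote_loss(p_vote, p_enc, instances):
    """Eq. (3): L1 distance from each vote to the center of the instance
    its seed point belongs to, averaged over all M votes.
    `instances` is a list of (point_set, center) pairs."""
    total = 0.0
    for vote, seed in zip(p_vote, p_enc):
        for point_set, center in instances:
            if seed in point_set:                      # indicator I(p_enc in I_j)
                total += sum(abs(a - b) for a, b in zip(vote, center))
    return total / len(p_vote)

# One instance whose two surface points should vote for its center (1, 0, 0).
instances = [({(0, 0, 0), (2, 0, 0)}, (1.0, 0.0, 0.0))]
p_enc  = [(0, 0, 0), (2, 0, 0), (9, 9, 9)]       # last seed is background
p_vote = [(0.5, 0, 0), (1.5, 0, 0), (9, 9, 9)]   # votes after the offset FFN
print(vote_loss(p_vote, p_enc, instances))
```

Background seeds fall outside every instance, so the indicator silently drops them; only votes from object surface points are pulled toward the center.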
Detection Loss. Following 3DETR [24], we use the same Hungarian algorithm to assign each proposal a ground-truth label. Since 3D dense captioning is closely tied to object localization ability, we apply a larger weight to the gIoU component of the total set loss [24]:

    L_set = α1·L_giou + α2·L_cls + α3·L_center-reg + α4·L_size-reg,   (4)

where α1 = 10, α2 = 1, α3 = 5, α4 = 1 are set heuristically. The set loss L_set is applied to all n_dec-layer layers in the decoder for better convergence.
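The one-to-one label assignment can be illustrated with a brute-force stand-in (the actual matcher is the Hungarian algorithm, as in 3DETR; this exhaustive version is for illustration only and the cost values are toy numbers):

```python
from itertools import permutations

def hungarian_match(cost):
    """Minimum-cost one-to-one assignment of proposals to ground truths.
    Brute force over permutations; fine for tiny toy problems only."""
    n_gt = len(cost[0])
    best = min(permutations(range(len(cost)), n_gt),
               key=lambda perm: sum(cost[p][j] for j, p in enumerate(perm)))
    return list(best)  # best[j] = proposal index assigned to ground truth j

# cost[i][j]: matching cost between proposal i and ground-truth box j,
# e.g. a weighted sum of -gIoU and classification terms.
cost = [[0.9, 0.2],
        [0.1, 0.8],
        [0.5, 0.6]]
match = hungarian_match(cost)
print(match)
```

Unmatched proposals (here proposal 2) are supervised toward the "no object" class, which is what pushes the set prediction toward compact, NMS-free outputs.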
Caption Loss. Following the standard practice in image captioning, we first train our caption head with the standard cross-entropy loss (MLE training), and then fine-tune it with Self-Critical Sequence Training (SCST) [31]. During MLE training, the model is trained to predict the (t+1)-th word c^{t+1}_i, given the first t words c^{[1:t]}_i and the visual clue V. The loss function for a T-length sentence is defined as:

    L_{c_i} = sum_{t=1}^{T} L_{c_i}(t) = - sum_{t=1}^{T} log P̂( c^{t+1}_i | V, c^{[1:t]}_i ).   (5)
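A minimal numeric sketch of Eq. (5) (teacher-forced negative log-likelihood; the per-step probabilities are toy values, not model outputs):

```python
import math

def mle_caption_loss(token_probs):
    """Eq. (5): summed negative log-likelihood of each ground-truth token
    given the visual clue and all previous tokens (teacher forcing)."""
    return -sum(math.log(p) for p in token_probs)

# Hypothetical per-step probabilities P(c_{t+1} | V, c_{1:t}) for a 3-token caption.
probs = [0.5, 0.25, 0.8]
loss = mle_caption_loss(probs)
print(round(loss, 4))
```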
After the caption head is trained under word-level supervision, we fine-tune it with SCST. During SCST, the model generates multiple captions ĉ_{1,...,k} with a beam size of k, and another caption ĝ through greedy search as a baseline. The loss function for SCST is defined as:

    L_c = - sum_{i=1}^{k} ( R(ĉ_i) - R(ĝ) ) · (1/|ĉ_i|) · log P̂( ĉ_i | V ).   (6)

Here, the reward function R(·) is the CIDEr metric for caption evaluation, and the log-probability of caption ĉ_i is normalized by the caption length |ĉ_i|, to encourage the model to treat captions of different lengths as equally important.
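Eq. (6) can be sketched numerically as follows (toy log-probabilities and rewards; in practice R(·) is the CIDEr score computed against the reference corpus):

```python
def scst_loss(beam_logprobs, beam_rewards, baseline_reward):
    """Eq. (6): length-normalized log-probabilities weighted by the
    reward advantage over the greedy-decoded baseline caption."""
    loss = 0.0
    for logps, reward in zip(beam_logprobs, beam_rewards):
        advantage = reward - baseline_reward          # R(c_i) - R(g)
        loss -= advantage * sum(logps) / len(logps)   # (1/|c_i|) log P(c_i|V)
    return loss

# Toy beam of k=2 captions with per-token log-probs and rewards.
beam_logprobs = [[-0.1, -0.2], [-0.5, -0.5, -0.5]]
beam_rewards = [1.2, 0.6]
loss = scst_loss(beam_logprobs, beam_rewards, baseline_reward=0.8)
print(round(loss, 4))
```

Captions scoring above the greedy baseline get their likelihood increased; those scoring below it are pushed down, so the head is optimized directly for the evaluation metric.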
Set-to-Set Training for 3D Dense Captioning. We propose an easy-to-implement set-to-set training strategy for 3D dense captioning. Given a 3D scene, we randomly sample one sentence from the corpus for each annotated instance. Then, we assign the language annotations to the corresponding number of proposals in the corresponding scene with the same Hungarian algorithm. During training, we average the caption losses L_{c_i} over all annotated instances in a batch to compute the caption loss Lcap. To balance the losses for the different tasks, the loss function for the whole system is defined as:

    L = β1·L_vq + β2 · sum_{i=1}^{n_dec-layer} L_set + β3·L_cap,   (7)

where β1 = 10, β2 = 1, β3 = 5 are set heuristically.
4. Experiments

We first present the datasets, metrics, and implementation details for 3D dense captioning (Section 4.1). Then, we provide comparisons with all state-of-the-art methods (Section 4.2). We also provide studies on the effectiveness of the different parts of our model (Section 4.3). Finally, we visualize several qualitative results to demonstrate the effectiveness of our method (Section 4.4).
4.1. Datasets, Metrics, and Implementation Details

Datasets. We report results on two commonly used datasets, ScanRefer [6] and Nr3D [1], both of which are built on 3D scenes from ScanNet [13]. ScanNet [13] contains 1,201 indoor 3D scenes for training and 312 for validation. ScanRefer/Nr3D contains 36,665/32,919 free-form language annotations describing 7,875/4,664 objects from 562/511 3D scenes for training, and evaluates on 9,508/8,584 sentences for 2,068/1,214 objects from 141/130 3D scenes.

Evaluation Metrics. Following [11, 4, 18, 36], we first apply NMS to the object proposals to drop duplicate object predictions. Each object proposal is a box-sentence pair (b̂_i, ĉ_i), containing a box corner prediction b̂_i and a generated sentence ĉ_i. Then, each instance is assigned the object proposal with the largest IoU among the remaining proposals. Here, we use (b_i, C_i) to represent an instance's label, where b_i is the box corner label and C_i is the corpus containing all caption annotations for this instance. To jointly evaluate the model's localization and caption generation capability, we adopt the m@kIoU metric [11]:

    m@kIoU = (1/N) * sum_{i=1}^{N} m(ĉ_i, C_i) · I( IoU(b̂_i, b_i) ≥ k ).   (8)

Here, N is the total number of annotated instances in the evaluation dataset, and m can be any metric for natural language generation, such as CIDEr [35], METEOR [3], BLEU-4 [28], or ROUGE-L [19].
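Eq. (8) reduces to a masked average, sketched here with toy scores (the caption scores stand in for CIDEr, METEOR, etc.; the indicator zeroes out instances whose matched box falls below the IoU threshold):

```python
def m_at_k_iou(pairs, k):
    """Eq. (8): average caption score over all annotated instances,
    zeroing instances whose assigned box has IoU below the threshold.
    `pairs` is a list of (caption_score, iou) per instance."""
    return sum(score for score, iou in pairs if iou >= k) / len(pairs)

# Toy evaluation over 4 instances; only the 1st and 3rd pass IoU >= 0.5.
pairs = [(0.8, 0.6), (0.9, 0.3), (0.5, 0.55), (0.7, 0.1)]
print(m_at_k_iou(pairs, k=0.5))
```

Because the denominator is the full instance count N, poor localization directly lowers the caption score, which is what makes the metric joint.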
Implementation Details. We offer implementation details for the different baselines. "w/o additional 2D" means the input PC ∈ R^{40,000×10} contains the absolute location as well as color, normal, and height for the 40,000 points representing a 3D scene. "additional 2D" means we replace the color information with a 128-dimensional multiview feature extracted by ENet [8] from 2D images, following [11].

We first pre-train the whole network without the caption head on the ScanNet [13] detection dataset with ScanRefer [6] categories for 1,080 epochs (about 163k iterations, 34 hours), using the AdamW optimizer [22] with a learning rate decaying from 5 × 10^-4 to 10^-6 by a cosine annealing scheduler, a weight decay of 0.1, gradient clipping at 0.1, and a batch size of 8, following [24]. Then, we load the pre-trained detector and train our caption head with the MLE loss for another 720 epochs (51k/46k iterations for ScanRefer/Nr3D, 11/10 hours). To prevent overfitting, we fix the learning rate of the detector at 10^-6, and set that of the caption head to decay from 10^-4 to 10^-6 with another cosine annealing scheduler. Due to the high memory cost of SCST, we tune the caption head with a batch size of 2 and freeze the detector for 180 epochs (50k/46k iterations for ScanRefer/Nr3D, 14/11 hours) with a fixed learning rate of 10^-6. We evaluate the model every 2,000 iterations during training for consistency with existing works [11, 36], and all experiments mentioned above were conducted on a single RTX 3090 GPU.
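The cosine-annealed learning-rate schedule described above can be sketched as follows (a generic cosine formula with the stated endpoints; the exact scheduler implementation used for training may differ in details such as warmup):

```python
import math

def cosine_lr(step, total_steps, lr_max=5e-4, lr_min=1e-6):
    """Cosine-annealed learning rate decaying from lr_max to lr_min,
    matching the endpoints described for detector pre-training."""
    cos = 0.5 * (1 + math.cos(math.pi * step / total_steps))
    return lr_min + (lr_max - lr_min) * cos

print(cosine_lr(0, 163000))        # starts at 5e-4
print(cosine_lr(163000, 163000))   # ends at 1e-6
```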
4.2. Comparison with Existing Methods

In this section, we compare performance with existing works on the metrics C, M, B-4, and R, as abbreviations for CIDEr [35], METEOR [3], BLEU-4 [28], and ROUGE-L [19], under IoU thresholds of 0.25 and 0.5 for ScanRefer (Table 1) and 0.5 for Nr3D (Table 2). "-" indicates that neither the original paper nor any follow-up work provides such results. Since different supervision on the caption head has a huge influence on captioning performance, we make separate comparisons for MLE training and SCST.

Among all the listed methods, experiments other than D3Net [7] and 3DJCG [4] utilize the standard VoteNet [29] detector. Meanwhile, D3Net [7] adopts PointGroup [17], a 3D instance segmentation model, for better object detection. 3DJCG [4] improves VoteNet's localization performance with an FCOS [33] head, which predicts the distance from a voting point to each side of a bounding box. Additionally, 3DJCG and D3Net focus on the joint promotion of 3D dense captioning and 3D visual grounding; therefore their reported models are trained with data from both tasks. Among the methods listed under SCST, χ-Trans2Cap [38] combines MLE training with standard SCST in an additive manner, while Scan2Cap and D3Net [7] adopt the same reward combining the CIDEr score and listener losses with a weighted sum. It is worth mentioning that our model adopts the standard SCST, whose reward function is the CIDEr score.

Table 1 reports comparisons on the ScanRefer [6] validation dataset. Our Vote2Cap-DETR surpasses current state-of-the-art methods. For example, under MLE training with additional 2D inputs, our Vote2Cap-DETR achieves 59.32% [email protected] while 3DJCG [4] achieves 49.48% (9.84% [email protected]↑) with additional training data. Additionally, under SCST, our Vote2Cap-DETR achieves 70.63% [email protected], versus 62.64% (7.99% [email protected]↑) for the current state-of-the-art D3Net [7], which uses more training labels and semi-supervised training on more training data.

In Table 2, we list results on the Nr3D [1] dataset with additional 2D input, following [36]. Since Scan2Cap [11] has not reported results on Nr3D, we adopt the best reported result from [4]. Our proposed Vote2Cap-DETR also surpasses current state-of-the-art methods.
4.3. Ablation Study

Since 3D dense captioning concerns both localization and caption generation, we perform ablation studies to understand the effectiveness of the different components.

Does the vote query improve 3DETR? We perform ablation experiments in Table 3 and Figure 5 to see whether the vote query can improve 3DETR's localization and convergence. Introducing the position feature pvq alone helps improve detection performance (0.97% mAP50↑). However, it (green line in Figure 5) converges more slowly in the earlier training procedure than the 3DETR baseline (blue line in Figure 5), implying that the vote query generation module has not yet learned to predict accurate spatial offset estimations at early training epochs. Introducing the additional content feature fvq in the vote query features yields another boost in both detection performance (2.98% mAP50↑) and training speed (red line in Figure 5). The overall localization performance of Vote2Cap-DETR is about 7.2% mAP higher than the popular VoteNet.
[Figure 5: validation [email protected] versus training iterations for three query configurations — (pquery, f0_query) = (pvq, fvq), (pseed, 0), and (pvq, 0).]

Figure 5. Vote query and convergence. We carry out a convergence study on different combinations of the content feature fvq and the position pvq in the vote query. The baseline model (pquery, f0_query) = (pseed, 0) degenerates to 3DETR. Introducing pvq boosts performance but decelerates training, since FFN_vote requires time to converge, while fvq accelerates training.
Does 3D context feature help captioning? Since the per-
| Method | L_des | w/o 2D, IoU=0.25 (C↑/B-4↑/M↑/R↑) | w/o 2D, IoU=0.50 (C↑/B-4↑/M↑/R↑) | w/ 2D, IoU=0.25 (C↑/B-4↑/M↑/R↑) | w/ 2D, IoU=0.50 (C↑/B-4↑/M↑/R↑) |
|---|---|---|---|---|---|
| Scan2Cap [11] | MLE | 53.73 / 34.25 / 26.14 / 54.95 | 35.20 / 22.36 / 21.44 / 43.57 | 56.82 / 34.18 / 26.29 / 55.27 | 39.08 / 23.32 / 21.97 / 44.78 |
| MORE [18] | MLE | 58.89 / 35.41 / 26.36 / 55.41 | 38.98 / 23.01 / 21.65 / 44.33 | 62.91 / 36.25 / 26.75 / 56.33 | 40.94 / 22.93 / 21.66 / 44.42 |
| SpaCap3d [36] | MLE | 58.06 / 35.30 / 26.16 / 55.03 | 42.76 / 25.38 / 22.84 / 45.66 | 63.30 / 36.46 / 26.71 / 55.71 | 44.02 / 25.26 / 22.33 / 45.36 |
| 3DJCG [4] | MLE | 60.86 / 39.67 / 27.45 / 59.02 | 47.68 / 31.53 / 24.28 / 51.80 | 64.70 / 40.17 / 27.66 / 59.23 | 49.48 / 31.03 / 24.22 / 50.80 |
| D3Net [7] | MLE | - | - | - | 46.07 / 30.29 / 24.35 / 51.67 |
| Ours | MLE | 71.45 / 39.34 / 28.25 / 59.33 | 61.81 / 34.46 / 26.22 / 54.40 | 72.79 / 39.17 / 28.06 / 59.23 | 59.32 / 32.42 / 25.28 / 52.53 |
| χ-Trans2Cap [38] | SCST | 58.81 / 34.17 / 25.81 / 54.10 | 41.52 / 23.83 / 21.90 / 44.97 | 61.83 / 35.65 / 26.61 / 54.70 | 43.87 / 25.05 / 22.46 / 45.28 |
| Scan2Cap [11] | SCST | - | - | - | 48.38 / 26.09 / 22.15 / 44.74 |
| D3Net [7] | SCST | - | - | - | 62.64 / 35.68 / 25.72 / 53.90 |
| Ours | SCST | 84.15 / 42.51 / 28.47 / 59.26 | 73.77 / 38.21 / 26.64 / 54.71 | 86.28 / 42.64 / 28.27 / 59.07 | 70.63 / 35.69 / 25.51 / 52.28 |

Table 1. Evaluating Vote2Cap-DETR on ScanRefer [6]. We compare Vote2Cap-DETR with all published state-of-the-art 3D dense captioning methods on the ScanRefer dataset. Though our method does not depend on hand-crafted NMS [25] to drop overlapping boxes, we follow the standard evaluation protocol from [11] for fair comparison, and provide an evaluation without NMS in Table 6. Our proposed Vote2Cap-DETR achieves a new state-of-the-art under both MLE training and SCST.
| Method | L_des | [email protected]↑ | [email protected]↑ | [email protected]↑ | [email protected]↑ |
|---|---|---|---|---|---|
| Scan2Cap [11] | MLE | 27.47 | 17.24 | 21.80 | 49.06 |
| SpaCap3d [36] | MLE | 33.71 | 19.92 | 22.61 | 50.50 |
| D3Net [7] | MLE | 33.85 | 20.70 | 23.13 | 53.38 |
| 3DJCG [4] | MLE | 38.06 | 22.82 | 23.77 | 52.99 |
| Ours | MLE | 43.84 | 26.68 | 25.41 | 54.43 |
| χ-Trans2Cap [38] | SCST | 33.62 | 19.29 | 22.27 | 50.00 |
| D3Net [7] | SCST | 38.42 | 22.22 | 24.74 | 54.37 |
| Ours | SCST | 45.53 | 26.88 | 25.43 | 54.76 |

Table 2. Evaluating Vote2Cap-DETR on Nr3D [1]. Likewise, we perform the standard evaluation on the Nr3D dataset, and our proposed Vote2Cap-DETR surpasses prior arts.
| p_query | f^0_query | mAP↑ (IoU=0.25) | AR↑ (IoU=0.25) | mAP↑ (IoU=0.50) | AR↑ (IoU=0.50) | 1st-layer mAP↑ (IoU=0.50) | 1st-layer AR↑ (IoU=0.50) |
|---|---|---|---|---|---|---|---|
| VoteNet baseline | | 63.42 | 82.18 | 44.96 | 60.65 | - | - |
| pseed | 0 | 67.25 | 84.91 | 48.18 | 64.98 | 34.80 | 55.06 |
| pvq | 0 | 67.33 | 85.60 | 49.15 | 66.38 | 30.23 | 58.44 |
| pvq | fvq | 69.61 | 87.20 | 52.13 | 69.12 | 46.53 | 66.51 |

Table 3. Vote query and performance. We provide quantitative results for Figure 5. Introducing pvq as the query positions improves detection, and gathering fvq from the content further boosts performance.
formance of 3D dense captioning is affected by both localization and caption capability, we freeze all parameters other than the caption head, and train with 3D-only input and the standard cross-entropy loss (MLE training) for a fair evaluation. We use the object-centric decoder [36] as our baseline, a decoder that generates captions with the object feature as the caption's prefix. In Table 4, "-" refers to the object-centric decoder baseline, "global" means naively including all context tokens extracted from the scene encoder in the decoder, and "local" is our proposed caption head, which includes a vote query's ks (ks = 128, empirically) nearest context tokens extracted from the scene encoder.

With the object feature as the caption's prefix, caption generation performance benefits from introducing additional contextual information. Additionally, compared with naively introducing contextual information from the whole scene, introducing local information is more beneficial. This supports our motivation that close surroundings matter when describing an object.
| key | IoU=0.25 (C↑/B-4↑/M↑/R↑) | IoU=0.5 (C↑/B-4↑/M↑/R↑) |
|---|---|---|
| - | 68.62 / 38.61 / 27.67 / 58.47 | 60.15 / 34.02 / 25.80 / 53.82 |
| global | 70.05 / 39.23 / 27.84 / 58.44 | 61.20 / 34.66 / 25.93 / 53.79 |
| local | 70.42 / 39.98 / 27.99 / 58.89 | 61.39 / 35.24 / 26.02 / 54.12 |

Table 4. Different keys for caption generation. We provide a comparison of the different keys used in caption generation. Introducing contextual information leads to more informative generated captions. Since 3D dense captioning is object-centric, introducing the vote queries' local contextual features is the better choice.
Does Set-to-Set Training benefit dense captioning? To analyze the effectiveness of set-to-set training, we follow the training procedure that utilizes a smaller learning rate for all parameters other than the caption head, and freeze these parameters during SCST. We name the baseline training strategy "Sentence Training"; it traverses all sentence annotations in the dataset and is widely adopted in various works [11, 36]. As shown in Figure 7, our proposed "Set-to-Set" training achieves results comparable to the traditional "Sentence Training" during MLE training, and converges faster because of a bigger batch size for the caption head, which also benefits SCST.
| Training | L_des | [email protected]↑ | [email protected]↑ | [email protected]↑ | [email protected]↑ |
|---|---|---|---|---|---|
| Sentence | MLE | 61.21 | 35.35 | 26.12 | 54.52 |
| Set-to-Set | MLE | 61.81 | 34.46 | 26.22 | 54.40 |
| Sentence | SCST | 71.39 | 37.57 | 26.01 | 54.28 |
| Set-to-Set | SCST | 73.77 | 38.21 | 26.64 | 54.71 |

Table 5. Set-to-set training and performance. We compare our proposed set-to-set training with the traditional "Sentence Training", which traverses all sentence annotations. We achieve comparable performance with MLE training, and a 2.38% [email protected] improvement with SCST.
Is Vote2Cap-DETR robust to NMS? Similar to other DETR works, the set loss encourages the model to produce compact predictions. We compare performance on
[Figure 6 examples:]

scene0011_00 —
3DJCG: "This is a rectangular whiteboard. It is on the wall."
SpaCap3D: "The whiteboard is affixed to the wall. It is to the right of the window."
Ours: "The tv is on the wall. It is to the right of the table."
GT: "This is a big black tv. It is above a thin table."

scene0015_00 —
3DJCG: "This is a brown table. It is in the middle of the room."
SpaCap3D: "This is a wooden table. It is in the center of the room."
Ours: "This is a wooden table. It is in the corner of the room."
GT: "This is a small table with a wood look. It is the table closest to the front of the room in the upper left corner."

scene0025_00 —
3DJCG: "The is a small brown cabinet. It is to the right of the desk."
SpaCap3D: "The cabinet is below the desk. It is to the left of the chair."
Ours: "This is a white cabinet. It is to the right of the table."
GT: "A white cabinet is sitting on the floor next to the wall. It is to the left of the couch."

scene0050_00 —
3DJCG: "This is a brown table. It is in front of the couch."
SpaCap3D: "This is a wooden coffee table. It is in front of the couch."
Ours: "This is a brown ottoman. It is to the right of the chair."
GT: "This is a brown ottoman. It is in front of a couch."

Figure 6. Qualitative Comparisons. We compare qualitative results with two state-of-the-art "detect-then-describe" methods, 3DJCG [4] and SpaCap3D [36]. We underline phrases describing spatial locations, and mark correct attribute words in green and wrong descriptions in red. Our method produces tight bounding boxes close to the ground-truth annotations and accurate descriptions of object attributes, classes, and spatial relationships.
[Figure 7: validation [email protected] versus training iterations for MLE training and SCST, under Set-to-Set versus Sentence training.]

Figure 7. Set-to-set training and convergence. Convergence-speed analysis of the two training strategies under both MLE training and SCST. Set-to-Set training enables a larger batch size for the caption head, which accelerates convergence on 3D dense captioning.
both 3D dense captioning ([email protected]) and detection (mAP50, AR50) in Table 6. Since the m@kIoU metric (Eq. 8) does not contain any penalty on redundant predictions, removing NMS [25] results in performance growth on [email protected]. The absence of NMS restricts the detection precision (mAP50) of SpaCap3D (14.47% mAP50↓) and 3DJCG (17.55% mAP50↓); however, that of Vote2Cap-DETR remains stable.
| Models | w/ NMS ([email protected]↑ / mAP50↑ / AR50↑) | w/o NMS ([email protected]↑ / mAP50↑ / AR50↑) |
|---|---|---|
| SpaCap3D | 43.93 / 37.77 / 53.96 | 51.35 / 23.30 / 64.14 |
| 3DJCG | 50.22 / 47.58 / 62.12 | 54.94 / 30.03 / 68.69 |
| Vote2Cap-DETR | 70.63 / 52.79 / 66.09 | 71.57 / 52.82 / 67.80 |

Table 6. Effect of NMS. We analyze whether the absence of NMS affects 3D dense captioning performance ([email protected]) as well as detection performance (mAP50, AR50).
4.4. Qualitative Results

We compare qualitative results with two state-of-the-art models, SpaCap3D [36] and 3DJCG [4], in Figure 6. One can see that our method produces tight bounding boxes close to the ground truth. Moreover, our method produces accurate descriptions of object attributes, classes, and spatial relationships.
5. Conclusion

In this work, we present Vote2Cap-DETR, a transformer-based one-stage approach for 3D dense captioning. The proposed Vote2Cap-DETR adopts a fully transformer encoder-decoder architecture that decodes a set of vote queries into box predictions and captions in parallel. We show that, by introducing spatial bias and content-aware features, the vote query boosts both convergence and detection performance. Additionally, we develop a novel lightweight query-driven caption head for informative caption generation. Experiments on two widely used datasets for 3D dense captioning validate that our proposed one-stage Vote2Cap-DETR model surpasses prior works, which depend heavily on hand-crafted components, by a large margin.
References

[1] Panos Achlioptas, Ahmed Abdelreheem, Fei Xia, Mohamed Elhoseiny, and Leonidas Guibas. Referit3d: Neural listeners for fine-grained 3d object identification in real-world scenes. In European Conference on Computer Vision, pages 422–440. Springer, 2020.
[2] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077–6086, 2018.
[3] Satanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, 2005.
[4] Daigang Cai, Lichen Zhao, Jing Zhang, Lu Sheng, and Dong Xu. 3djcg: A unified framework for joint dense captioning and visual grounding on 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16464–16473, 2022.
[5] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European Conference on Computer Vision, pages 213–229. Springer, 2020.
[6] Dave Zhenyu Chen, Angel X Chang, and Matthias Nießner. Scanrefer: 3d object localization in rgb-d scans using natural language. In European Conference on Computer Vision, pages 202–221. Springer, 2020.
[7] Dave Zhenyu Chen, Qirui Wu, Matthias Nießner, and Angel X Chang. D3net: A speaker-listener architecture for semi-supervised dense captioning and visual grounding in rgb-d scans. arXiv preprint arXiv:2112.01551, 2021.
[8] Jintai Chen, Biwen Lei, Qingyu Song, Haochao Ying, Danny Z Chen, and Jian Wu. A hierarchical graph network for 3d object detection on point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 392–401, 2020.
[9] Qiang Chen, Xiaokang Chen, Gang Zeng, and Jingdong Wang. Group detr: Fast training convergence with decoupled one-to-many label assignment. arXiv preprint arXiv:2207.13085, 2022.
[10] Xiaokang Chen, Fangyun Wei, Gang Zeng, and Jingdong Wang. Conditional detr v2: Efficient detection transformer with box queries. arXiv preprint arXiv:2207.08914, 2022.
[11] Zhenyu Chen, Ali Gholami, Matthias Nießner, and Angel X Chang. Scan2cap: Context-aware dense captioning in rgb-d scans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3193–3203, 2021.
[12] Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, and Rita Cucchiara. Meshed-memory transformer for image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10578–10587, 2020.
[13] Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5828–5839, 2017.
[14] Peng Gao, Minghang Zheng, Xiaogang Wang, Jifeng Dai, and Hongsheng Li. Fast convergence of detr with spatially modulated co-attention. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3621–3630, 2021.
[15] Lun Huang, Wenmin Wang, Jie Chen, and Xiao-Yong Wei. Attention on attention for image captioning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4634–4643, 2019.
[16] Ding Jia, Yuhui Yuan, Haodi He, Xiaopei Wu, Haojun Yu, Weihong Lin, Lei Sun, Chao Zhang, and Han Hu. Detrs with hybrid matching. arXiv preprint arXiv:2207.13080, 2022.
[17] Li Jiang, Hengshuang Zhao, Shaoshuai Shi, Shu Liu, Chi-Wing Fu, and Jiaya Jia. Pointgroup: Dual-set point grouping for 3d instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4867–4876, 2020.
[18] Yang Jiao, Shaoxiang Chen, Zequn Jie, Jingjing Chen, Lin Ma, and Yu-Gang Jiang. More: Multi-order relation mining for dense captioning in 3d scenes. arXiv preprint arXiv:2203.05203, 2022.
[19] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, 2004.
[20] Wei Liu, Sihan Chen, Longteng Guo, Xinxin Zhu, and Jing Liu. Cptr: Full transformer network for image captioning. arXiv preprint arXiv:2101.10804, 2021.
[21] Ze Liu, Zheng Zhang, Yue Cao, Han Hu, and Xin Tong. Group-free 3d object detection via transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2949–2958, 2021.
[22] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
[23] Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, and Jingdong Wang. Conditional detr for fast training convergence. In Proceed-
1387
+ ings of the IEEE/CVF International Conference on Com-
1388
+ puter Vision, pages 3651–3660, 2021. 2, 3
1389
+ [24] Ishan Misra, Rohit Girdhar, and Armand Joulin. An end-to-
1390
+ end transformer model for 3d object detection. In Proceed-
1391
+ ings of the IEEE/CVF International Conference on Com-
1392
+ puter Vision, pages 2906–2917, 2021. 2, 3, 5, 6, 11, 12,
1393
+ 16
1394
+ [25] Alexander Neubeck and Luc Van Gool.
1395
+ Efficient non-
1396
+ maximum suppression. In 18th International Conference on
1397
+ Pattern Recognition (ICPR’06), volume 3, pages 850–855.
1398
+ IEEE, 2006. 1, 2, 7, 8
1399
+ 9
1400
+
1401
+ [26] Van-Quang Nguyen, Masanori Suganuma, and Takayuki
1402
+ Okatani.
1403
+ Grit:
1404
+ Faster and better image captioning
1405
+ transformer using dual visual features.
1406
+ arXiv preprint
1407
+ arXiv:2207.09666, 2022. 3
1408
+ [27] Yingwei Pan, Ting Yao, Yehao Li, and Tao Mei. X-linear
1409
+ attention networks for image captioning. In Proceedings of
1410
+ the IEEE/CVF conference on computer vision and pattern
1411
+ recognition, pages 10971–10980, 2020. 2
1412
+ [28] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing
1413
+ Zhu. Bleu: a method for automatic evaluation of machine
1414
+ translation. In Proceedings of the 40th annual meeting of the
1415
+ Association for Computational Linguistics, pages 311–318,
1416
+ 2002. 5, 6
1417
+ [29] Charles R Qi, Or Litany, Kaiming He, and Leonidas J
1418
+ Guibas. Deep hough voting for 3d object detection in point
1419
+ clouds. In proceedings of the IEEE/CVF International Con-
1420
+ ference on Computer Vision, pages 9277–9286, 2019. 2, 5,
1421
+ 6, 11, 16
1422
+ [30] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J
1423
+ Guibas. Pointnet++: Deep hierarchical feature learning on
1424
+ point sets in a metric space. Advances in neural information
1425
+ processing systems, 30, 2017. 3, 12
1426
+ [31] Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret
1427
+ Ross, and Vaibhava Goel. Self-critical sequence training for
1428
+ image captioning. In Proceedings of the IEEE conference on
1429
+ computer vision and pattern recognition, pages 7008–7024,
1430
+ 2017. 5
1431
+ [32] Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara
1432
+ Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ra-
1433
+ mamoorthi, Jonathan Barron, and Ren Ng. Fourier features
1434
+ let networks learn high frequency functions in low dimen-
1435
+ sional domains. Advances in Neural Information Processing
1436
+ Systems, 33:7537–7547, 2020. 3
1437
+ [33] Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. Fcos:
1438
+ Fully convolutional one-stage object detection. In Proceed-
1439
+ ings of the IEEE/CVF international conference on computer
1440
+ vision, pages 9627–9636, 2019. 6
1441
+ [34] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszko-
1442
+ reit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia
1443
+ Polosukhin. Attention is all you need. Advances in neural
1444
+ information processing systems, 30, 2017. 2
1445
+ [35] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi
1446
+ Parikh. Cider: Consensus-based image description evalua-
1447
+ tion. In Proceedings of the IEEE conference on computer
1448
+ vision and pattern recognition, pages 4566–4575, 2015. 5, 6
1449
+ [36] Heng Wang, Chaoyi Zhang, Jianhui Yu, and Weidong Cai.
1450
+ Spatiality-guided transformer for 3d dense captioning on
1451
+ point clouds.
1452
+ arXiv preprint arXiv:2204.10688, 2022.
1453
+ 1,
1454
+ 2, 4, 5, 6, 7, 8, 14
1455
+ [37] Yue Wang, Vitor Campagnolo Guizilini, Tianyuan Zhang,
1456
+ Yilun Wang, Hang Zhao, and Justin Solomon.
1457
+ Detr3d:
1458
+ 3d object detection from multi-view images via 3d-to-2d
1459
+ queries. In Conference on Robot Learning, pages 180–191.
1460
+ PMLR, 2022. 14
1461
+ [38] Zhihao Yuan, Xu Yan, Yinghong Liao, Yao Guo, Guan-
1462
+ bin Li, Shuguang Cui, and Zhen Li. X-trans2cap: Cross-
1463
+ modal knowledge transfer using transformer for 3d dense
1464
+ captioning.
1465
+ In Proceedings of the IEEE/CVF Conference
1466
+ on Computer Vision and Pattern Recognition, pages 8563–
1467
+ 8573, 2022. 1, 2, 6, 7
1468
+ [39] Chi Zhang, Lijuan Liu, Xiaoxue Zang, Frederick Liu, Hao
1469
+ Zhang, Xinying Song, and Jindong Chen.
1470
+ Detr++: Tam-
1471
+ ing your multi-scale detection transformer. arXiv preprint
1472
+ arXiv:2206.02977, 2022. 2
1473
+ [40] Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun
1474
+ Zhu, Lionel M. Ni, and Heung-Yeung Shum. Dino: Detr
1475
+ with improved denoising anchor boxes for end-to-end object
1476
+ detection, 2022. 3
1477
+ [41] Xuying Zhang, Xiaoshuai Sun, Yunpeng Luo, Jiayi Ji, Yiyi
1478
+ Zhou, Yongjian Wu, Feiyue Huang, and Rongrong Ji. Rstnet:
1479
+ Captioning with adaptive attention on visual and non-visual
1480
+ words. In Proceedings of the IEEE/CVF conference on com-
1481
+ puter vision and pattern recognition, pages 15465–15474,
1482
+ 2021. 3
1483
+ [42] Yufeng Zhong, Long Xu, Jiebo Luo, and Lin Ma. Contextual
1484
+ modeling for 3d dense captioning on point clouds.
1485
+ arXiv
1486
+ preprint arXiv:2210.03925, 2022. 1, 2
1487
+ [43] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang
1488
+ Wang, and Jifeng Dai. Deformable detr: Deformable trans-
1489
+ formers for end-to-end object detection.
1490
+ arXiv preprint
1491
+ arXiv:2010.04159, 2020. 2
1492
+ 10
1493
+
1494
+ Appendix
+ In this supplementary material, we first present a non-transformer baseline for our method that builds on VoteNet[29] in section A. Then, we provide additional experimental details in section B. Finally, we provide several qualitative studies in section C. It is also worth mentioning that our proposed Vote2Cap-DETR sets a new state-of-the-art on the Scan2Cap online test benchmark (Figure 8).
+ A. VoteNet baseline with set-to-set training
+ In this section, we perform an ablation study by replacing our Vote2Cap-DETR's components (SceneEncoder, Vote Query, Transformer Decoder) with VoteNet to study the behavior of a non-transformer architecture. In Table 7, we observe that, without delicate hand-crafted relation-modelling modules, the VoteNet baseline surpasses 3DJCG[4] by 3.48 in C@0.25 and 6.23 in C@0.5, and achieves comparable results on the other metrics under MLE training. These results demonstrate that our novel caption head and set-to-set training also improve the dense captioning performance of a non-transformer architecture. On the other hand, the VoteNet baseline still falls short of our Vote2Cap-DETR, which demonstrates that vote queries help learn more discriminative features for the end tasks in an end-to-end manner, without resorting to the many hand-crafted components in VoteNet.
+ Method        | IoU = 0.25: C↑  B-4↑  M↑    R↑    | IoU = 0.5: C↑  B-4↑  M↑    R↑
+ 3DJCG[4]      | 64.70  40.17  27.66  59.23 | 49.48  31.03  24.22  50.80
+ Ours(VoteNet) | 70.93  39.92  28.09  58.88 | 52.96  30.59  24.40  50.10
+ Ours(Full)    | 72.79  39.17  28.06  59.23 | 59.32  32.42  25.28  52.53
+ Table 7. VoteNet baseline with set-to-set training. We replace Vote2Cap-DETR's components with VoteNet. One can see that the non-transformer VoteNet architecture also benefits from our novel caption head and set-to-set training, although performance gaps with our Vote2Cap-DETR architecture remain.
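Table 7's C/B-4/M/R columns follow the m@kIoU convention used by Scan2Cap: a predicted caption's NLP score (e.g., CIDEr) counts only if its box matches a ground-truth object with IoU at or above the threshold, and the scores are averaged over ground-truth objects. A minimal sketch of that aggregation (the function name and toy numbers are ours, not from the paper):

```python
import numpy as np

def m_at_k_iou(scores, ious, k):
    """m@kIoU-style aggregation (sketch): a caption's metric score counts
    only when its matched box overlaps the ground truth with IoU >= k;
    low-overlap matches contribute zero to the average."""
    scores = np.asarray(scores, dtype=float)
    ious = np.asarray(ious, dtype=float)
    kept = np.where(ious >= k, scores, 0.0)  # zero out low-overlap matches
    return kept.mean()

# Toy example: three ground-truth objects with per-object CIDEr and box IoU.
print(m_at_k_iou([0.8, 0.6, 0.9], [0.9, 0.3, 0.6], k=0.5))  # (0.8 + 0 + 0.9) / 3
```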
+ B. Experiments
+ We provide evaluations on the Scan2Cap online test benchmark (section B.1) as well as additional experimental details (sections B.2 and B.3) in this section.
+ B.1. Scan2Cap Test Benchmark
+ Our proposed Vote2Cap-DETR achieves a new state-of-the-art for all metrics on the Scan2Cap online test benchmark (Figure 8, https://kaldir.vc.in.tum.de/scanrefer_benchmark/benchmark_captioning).
+ B.2. Per-Class mAP Results
+ We list per-class mAP results for VoteNet[29], 3DETR[24], and our proposed Vote2Cap-DETR on ScanNet scenes[13] under an IoU threshold of 0.5 in Table 8. The overall performance is listed in the main paper.
+ Class          | VoteNet[29] | 3DETR[24] | Vote2Cap-DETR
+ cabinet        | 21.41 | 26.30 | 31.98
+ bed            | 78.41 | 75.78 | 81.48
+ chair          | 78.47 | 82.19 | 85.80
+ sofa           | 74.44 | 59.15 | 64.37
+ table          | 55.42 | 62.25 | 65.20
+ door           | 34.68 | 39.16 | 41.19
+ window         | 14.91 | 21.47 | 28.47
+ bookshelf      | 29.80 | 33.14 | 39.81
+ picture        |  9.04 | 16.45 | 22.94
+ counter        | 16.57 | 34.41 | 39.02
+ desk           | 51.12 | 49.68 | 54.46
+ curtain        | 34.62 | 38.34 | 36.66
+ refrigerator   | 40.12 | 42.83 | 40.19
+ shower curtain | 45.82 | 33.33 | 56.10
+ toilet         | 89.93 | 88.68 | 87.97
+ sink           | 37.23 | 52.62 | 44.38
+ bathtub        | 83.41 | 82.41 | 85.12
+ others         | 13.79 | 29.06 | 33.28
+ Table 8. Per-class AP under an IoU threshold of 0.5 on ScanNet scenes.
+ B.3. Implementation Details
+ Our proposed Vote2Cap-DETR first passes the input point cloud through the feature encoding module, then generates vote queries from the encoded features as object queries, and finally decodes the vote queries into bounding boxes and captions.
+ Figure 8. Scan2Cap[11] test benchmark. Our proposed Vote2Cap-DETR achieves a new state-of-the-art for all metrics on the Scan2Cap online test benchmark.
+ Feature Encoding directly converts the input point cloud PC into 1,024 tokens with a feature size of 256. We first tokenize the input point cloud PC = [pin; fin] ∈ R40,000×(3+din) into point tokens [ptoken; ftoken] ∈ R2,048×(3+256) with a set-abstraction layer[30] with hidden sizes of [3 + din, 64, 128, 256]. Then, our scene encoder encodes the point tokens [ptoken; ftoken] ∈ R2,048×(3+256) into [penc; fenc] ∈ R1,024×(3+256). We adopt the same encoder as 3DETR-m[24], which contains a three-layer transformer encoder with a set-abstraction layer between the first two layers. Each encoder layer has a feature size of 256 and a Feed Forward Network (FFN) with a hidden size of 128. The first encoder layer operates on 2,048 points, while the last two operate on the 1,024 points downsampled by the set-abstraction layer. Additionally, three binary attention masks with radii of [0.16, 0.64, 1.44], respectively, are applied to the encoder layers to restrict interactions to points within a given radius.
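The set-abstraction tokenizer above downsamples the dense input cloud into a fixed number of tokens. Its sampling step is commonly farthest point sampling; a minimal NumPy sketch of that step (omitting the neighbor-grouping and shared-MLP parts of a full PointNet++ set-abstraction layer):

```python
import numpy as np

def farthest_point_sampling(points, n_samples, seed=0):
    """Greedy farthest-point sampling: repeatedly pick the point that is
    farthest from everything chosen so far, giving good spatial coverage."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = [rng.integers(n)]                 # random starting point
    dist = np.full(n, np.inf)
    for _ in range(n_samples - 1):
        # distance of every point to the nearest chosen point so far
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(dist.argmax()))      # farthest from the chosen set
    return np.array(chosen)

pts = np.random.default_rng(1).random((2048, 3))
idx = farthest_point_sampling(pts, 1024)
print(idx.shape)  # (1024,)
```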
+ Vote Query Generator generates 256 object queries [pvq; fvq] ∈ R256×(3+256) from the encoded points [penc; fenc] ∈ R1,024×(3+256). It contains an FFN, FFNvote, with a hidden size of 256 to generate offset estimations and feature projections with respect to fenc. It also uses a set-abstraction layer to gather the features fvq ∈ R256×256 from the encoded scene features for pvq ∈ R256×3, as described in the main paper.
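The vote-query data flow (shift each encoded point by a predicted offset, then pool features around the shifted query positions) can be sketched as below. This only illustrates the shapes and the ball-query gather, not the trained model: the learned FFNvote offsets are faked as zeros, the query selection and radius are arbitrary, and all names are ours.

```python
import numpy as np

def vote_queries(p_enc, f_enc, n_query=256, radius=0.3):
    """Sketch of vote-query generation: offset encoded points (offsets would
    come from a learned FFN; placeholder zeros here), pick query positions,
    then mean-pool encoded features within a ball around each query."""
    offsets = np.zeros_like(p_enc)             # placeholder for FFN_vote output
    p_vote = p_enc + offsets
    q_idx = np.linspace(0, len(p_vote) - 1, n_query).astype(int)
    p_vq = p_vote[q_idx]
    f_vq = np.empty((n_query, f_enc.shape[1]))
    for i, c in enumerate(p_vq):
        mask = np.linalg.norm(p_vote - c, axis=1) <= radius
        f_vq[i] = f_enc[mask].mean(axis=0)     # ball always contains c itself
    return p_vq, f_vq

rng = np.random.default_rng(0)
p_vq, f_vq = vote_queries(rng.random((1024, 3)), rng.random((1024, 256)))
print(p_vq.shape, f_vq.shape)  # (256, 3) (256, 256)
```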
+ Parallel Decoding aims to decode the vote queries [pvq; fvq] into the corresponding box estimations and captions. The transformer decoder consists of eight identical transformer decoder layers with four heads for both self-attention and cross-attention. It operates on the vote queries [pvq; fvq] and the encoded features [penc; fenc] to produce the final query features [pvq, fout] ∈ R256×(3+256). Following the transformer decoder are two parallel heads, the detection head and the caption head. The detection head generates a center offset estimation ([−0.5, 0.5]3) from the vote queries' absolute locations pvq, a normalized size estimation
+ ([0, 1]3), and semantic class estimation from fout using separate FFN heads with a hidden size of 256. Note that we do
1781
+ not estimate the rotation angles since ScanNet[13] does not contain any rotated boxes. Our proposed caption head, DCC,
1782
+ generates captions with respect to final query features fout as Vq and pvq’s surrounding contextual features Vs. DCC is a
1783
+ two layer transformer decoder with four heads for multi-head attentions, as well as a feature size of 256, a sinusoid position
1784
+ encoding, and a vocabulary of 3,433 for ScanRefer[6] and 2,937 for Nr3D[1].
1785
+ C. Qualitative Results
1786
+ Qualitative results on Nr3D. We showcase qualitative results on 3D dense captioning on the Nr3D[1] dataset in Figure 9.
1787
+ Our proposed Vote2Cap-DETR is also able to generate tight bounding boxes as well as accurate descriptions for each object
1788
+ in a 3D scene.
1789
+ Visualization results of vote queries. We visualize the vote queries’ position pvq in our Vote2Cap-DETR and seed
1790
+ queries’ position pseed of 3DETR in Figure 10. Most of the vote queries focus on objects in a 3D scene, while pseed is mostly
1791
+ distributed in background areas.
1792
+ Visualization of detection results. We visualize several detection results in Figure 11. Our proposed Vote2Cap-DETR is
1793
+ able to generate accurate box predictions for a 3D scene.
1794
+ 13
1795
+
1796
+ scene0025_00
1797
+ scene0081_00
1798
+ scene0187_00
1799
+ scene0207_00
1800
+ scene0300_00
1801
+ scene0307_00
1802
+ scene0527_00
1803
+ scene0645_00
1804
+ SpaCap3D: The keyboard
1805
+ closest to the door.
1806
+ Ours: The monitor closest
1807
+ to the door.
1808
+ GT: The monitor closest to
1809
+ the door.
1810
+ SpaCap3D: The table with
1811
+ the plant on it.
1812
+ Ours: The round table in
1813
+ the corner of the room.
1814
+ GT: Round table near the
1815
+ end of the couch.
1816
+ SpaCap3D: The table that
1817
+ is not in the middle of the
1818
+ room.
1819
+ Ours: The table in the
1820
+ middle of the room.
1821
+ GT: The center-most box.
1822
+ In-between the two chairs.
1823
+ SpaCap3D: The door that
1824
+ is open.
1825
+ Ours: The door to the left
1826
+ of the stove.
1827
+ GT: White door nearest to
1828
+ stove.
1829
+ SpaCap3D: The bag on
1830
+ the table.
1831
+ Ours: The monitor closest
1832
+ to the window.
1833
+ GT: The computer monitor
1834
+ the is closest to the
1835
+ windows.
1836
+ SpaCap3D: The bookshelf
1837
+ that is not next to the tv.
1838
+ Ours: It is the bookshelf
1839
+ closest to the door.
1840
+ GT: The largest shelf by
1841
+ the door.
1842
+ SpaCap3D: The mirror
1843
+ that is not above the sink.
1844
+ Ours: Facing the sink, the
1845
+ cabinet on the right.
1846
+ GT: Upper cabinet above
1847
+ the right sink.
1848
+ SpaCap3D: The ottoman
1849
+ closest to the door.
1850
+ Ours: The ottoman closest
1851
+ to the couch.
1852
+ GT: The ottoman closest
1853
+ to the couch.
1854
+ Figure 9. Visualization of 3D dense captioning on Nr3D[1]. We visualize several results generated by our proposed Vote2Cap-DETR
1855
+ comparing with SpaCap3D[36] on the Nr3D[37] dataset. Our proposed method generates tight bounding box as well as accurate descrip-
1856
+ tions.
1857
+ 14
1858
+
1859
+ 𝑃𝐶
1860
+ 𝑝𝑠𝑒𝑒𝑑
1861
+ 𝑝𝑣𝑞
1862
+ scene0011_00
1863
+ scene0015_00
1864
+ scene0064_00
1865
+ scene0131_00
1866
+ scene0169_00
1867
+ scene0378_00
1868
+ Figure 10. Visualization of vote queries. We visualize absolute position of different object queries, pseed used in 3DETR (marked in
1869
+ blue) and pvq used in our proposed Vote2Cap-DETR (marked in red) with the input point cloud PC. Most of the vote queries focus on
1870
+ objects in a 3D scene (as red arrows pointed out), while pseed is mostly distributed in background areas (as blue arrows pointed out).
1871
+ 15
1872
+
1873
+ GT
1874
+ VoteNet
1875
+ 3DETR
1876
+ Vote2Cap-DETR
1877
+ scene0088_00
1878
+ scene0144_00
1879
+ scene0169_00
1880
+ scene0257_00
1881
+ scene0342_00
1882
+ scene0354_00
1883
+ Figure 11.
1884
+ Visualization of detection performance. We visualize detection results of VoteNet[29], 3DETR[24], and our proposed
1885
+ Vote2Cap-DETR. Our proposed Vote2Cap-DETR is able to generate accurate localization results.
1886
+ 16
1887
+
1888
+ E11
Z9E0T4oBgHgl3EQfnQEE/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render.
 
adE1T4oBgHgl3EQfKQPQ/content/tmp_files/2301.02963v1.pdf.txt ADDED
@@ -0,0 +1,598 @@
+ arXiv:2301.02963v1 [physics.app-ph] 8 Jan 2023
+ Diffusive Pseudo-Conformal Mapping: Anisotropy-Free Transformation Thermal Media with Perfect Interface Matching
+ Gaole Dai,1, ∗ Fubao Yang,2 Jun Wang,3, 4 Liujun Xu,5, † and Jiping Huang2, ‡
+ 1School of Sciences, Nantong University, Nantong 226019, China
+ 2Department of Physics, State Key Laboratory of Surface Physics, and Key Laboratory of Micro and Nano Photonic Structures (MOE), Fudan University, Shanghai 200433, China
+ 3School of Physics, East China University of Science and Technology, Shanghai 200237, China
+ 4Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou 325001, China
+ 5Graduate School of China Academy of Engineering Physics, Beijing 100193, China
+ (Dated: January 10, 2023)
+ Abstract
+ Transformation media provide a fundamental paradigm for field regulation, but their tricky anisotropy challenges fabrication. Though optical conformal mapping has been utilized to eliminate anisotropy, two key factors still hinder its development in thermotics, i.e., the distinct diffusion nature and inevitable interface mismatching. Here, we put forth the concept of diffusive pseudo-conformal mapping, overcoming the inherent difference between diffusion and waves and achieving perfect interface matching. The proposed mapping directly leads to heat guiding and expanding functions with anisotropy-free transformation thermal media, whose feasibility is confirmed by experiments or simulations. Besides diverse applications, we provide a unified perspective for two distinct types of prevailing bilayer cloaks by uncovering their profound ties with pseudo-conformal mapping. These results greatly simplify the preparation of transformation thermotics and have implications for regulating other diffusion and wave phenomena.
+ 
+ Heat control is essential for every aspect of human life, such as energy utilization, chip cooling, and infrared detection. The past decade has witnessed the development of transformation theory [1, 2] in heat conduction, a diffusion process that intrinsically differs from wave dynamics [3, 4]. Transformation thermotics indicates that anisotropic and inhomogeneous thermal parameters in physical space can mimic heat transfer in curved space, ensured by the form-invariance of heat equations under coordinate transformations. However, anisotropy is a considerable restriction on practical realization because natural materials usually exhibit isotropic thermal properties. Though alternative schemes like diffusive scattering cancellation were proposed to avoid anisotropy [5–8], they generally apply to specific scenarios such as thermal invisibility. Therefore, removing the intrinsic anisotropy of transformation thermal media is still a big challenge.
+ Conformal transformation optics [1, 9] can eliminate anisotropy with two-dimensional (2D) optical conformal mapping that locally preserves the angles and orientations of curves [10]. Diverse wave phenomena have been realized with an anisotropy-free transformation refractive index [11–19]. However, two critical problems challenge the application of optical conformal mapping in thermotics. On the one hand, diffusion and waves are fundamentally different in governing equations (i.e., the Laplace equation vs. the Helmholtz equation) and key parameters (i.e., thermal conductivity vs. refractive index). On the other hand, matching the interface heat flux between the functional device and the background is still tricky due to the lack of an intuitive impedance-matching criterion, which is essential for accurate and robust heat manipulation. Thus, directly utilizing optical conformal mapping for precise diffusion control is impracticable.
+ Here, we propose the concept of diffusive pseudo-conformal mapping with angle preservation for certain families of curves representing thermal fields. Fortunately, “pseudo” does not affect the crucial advantage of “conformal” in removing anisotropy and contributes to perfect interface matching. The proposed mapping yields precise heat manipulation with anisotropy-free transformation thermal media, and we take heat guiding and expanding as two typical examples, with experimental or simulated confirmation. Moreover, we reveal the geometric origin of bilayer cloaks previously designed by scattering cancellation from the perspective of pseudo-conformal mapping. These results feature scalability in handling complex heat transfer [20–22] and provide a unified geometric perspective towards various thermal functions with isotropic materials.
+ 
+ For clarity, we first establish diffusive conformal mapping and illustrate its restrictions. A 2D conformal mapping f : z0 ↦ z (z0 = x0 + iy0 and z = f(z0) = x + iy) is holomorphic, satisfying the Cauchy–Riemann equations [10],
+ ∂f/∂z̄0 = (1/2)(∂f/∂x0 + i ∂f/∂y0) = 0,  (1)
+ where z̄0 is the complex conjugate of z0. We consider form-invariant heat conduction equations in physical space (position denoted by z) and virtual space (position denoted by z0),
+ ∇ · (κ∇T(z)) − Q = 0,  (2a)
+ ∇0 · (κ0∇0T0(z0)) − Q0 = 0,  (2b)
+ where T, κ, and Q are the temperature, thermal conductivity (rank-2 tensor), and internal source power in physical space, respectively. Their counterparts with a subscript “0” denote corresponding parameters in virtual space.
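Eq. (1) states that a conformal map has a vanishing Wirtinger derivative ∂f/∂z̄0. This is easy to check numerically with central differences; the helper below is our own illustration, not part of the paper.

```python
import numpy as np

def d_dzbar(f, z, h=1e-6):
    """Numerical Wirtinger derivative: df/dz_bar = (df/dx + i*df/dy) / 2.
    It vanishes when f is holomorphic, i.e., conformal."""
    dfdx = (f(z + h) - f(z - h)) / (2 * h)
    dfdy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return 0.5 * (dfdx + 1j * dfdy)

z = 0.3 + 0.7j
print(abs(d_dzbar(np.exp, z)))   # ~0: exp(z) is conformal
print(abs(d_dzbar(np.conj, z)))  # ~1: complex conjugation is not
```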
+ The transformation rules to ensure T(z) = T0(f −1(z)) are [23, 24]
+ κ(z) = Jf κ0(z0) JfT / det Jf,  (3a)
+ Q(z) = Q0(z0) / det Jf,  (3b)
+ where Jf is the Jacobian matrix of f, and JfT is the transpose of Jf. Due to the holomorphicity of f, Jf JfT / det Jf is an identity matrix [9], so Eq. (3a) can be simplified as κ(z) = κ0(z0(z)). With the familiar paradigm of transformation theory, virtual space is isotropic and homogeneous, so κ and κ0 are identical constant scalars. This result is consistent with the conventional technique of using conformal mapping to solve heat equations in irregular geometric domains [25]. Different from the engineered gradient index in conformal transformation optics, such a frozen degree of freedom (i.e., κ = κ0) restricts the function design for thermal manipulation. Besides, transformation media are often in contact with the background, which undergoes a trivial mapping without changing material properties. However, conformal mapping usually cannot ensure the continuity of boundary conditions due to the strict constraint of Eq. (1), i.e., conformality for all curves, as shown in the examples below. In fact, in most cases, we do not even have the option to consider interface matching due to the uniqueness of the conformal mapping between two given domains. Therefore, we must develop diffusive pseudo-conformal mapping further and design thermal guiding and expanding functions.
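The simplification of Eq. (3a) can be verified directly: for a conformal map, Jf JfT / det Jf is the identity, so a scalar κ0 is left unchanged, whereas for a diagonal (non-conformal) Jacobian diag(a, b) the same rule yields the anisotropic diag(a/b, b/a). A small numerical check, writing the scalar κ0 as a 2×2 identity:

```python
import numpy as np

def transform_kappa(J, kappa0):
    """Transformation rule of Eq. (3a): kappa' = J kappa0 J^T / det J."""
    return J @ kappa0 @ J.T / np.linalg.det(J)

a, b = 2.0, 0.5
J = np.diag([a, b])                      # diagonal Jacobian of a non-conformal map
print(transform_kappa(J, np.eye(2)))     # diag(a/b, b/a) = diag(4, 0.25)
```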
114
+ Thermal guiding can bend the heat flux by an arbitrary angle. Its virtual space and
115
+ physical space are shown in Figs. 1(a) and 1(c), respectively. In the virtual space, we apply
116
+ a thermal bias along the y0-axis. The inlet (hot source) is put on the bottom horizontal
117
+ boundary B0C0 and the outlet (cold source) is on the top boundary F0E0. The vertical
118
+ boundaries F0A0B0 and E0D0C0 are thermally insulated. In the physical space, we expect
119
+ the flux is rotated by angle ϕ when flowing out of the guide without changing its magnitude.
120
+ In Fig. 1(a), we plot the grid lines of the Cartesian coordinates. They are just the heat
121
+ flux streamlines (constant-x0 curves) and the isotherms (constant-y0 curves). The upper
122
+ rectangle (with a height W) undergoes a composition of rotation and translation to the
123
+ background in Fig. 1(c) without changing its thermal conductivity. The lower rectangle
124
+ (with a height L) is transformed into the partial annulus (the guide) in Fig. 1(c). Notably,
125
+ an identity transformation should happen between the inlets B0C0 and BC.
126
+ Also, the
127
+ mappings for the background and the guide should have the same effect at their interface
128
+ AD (on the constant-azimuth line equal to ϕ in Fig. 1(c), which is mapped from A0D0 in
129
+ Fig. 1(a)). The other boundaries, FAB and EDC, are still insulated in the physical space.
130
+ Although there exists a conformal mapping between the guide and its preimage according
131
+ to Riemann mapping theorem [10], it generally cannot transform B0C0 (or A0D0) into BC
132
+ (or AD), let alone achieve the interface effect as we want (see our discussion based on the
133
+ theory of quasiconformal mapping in Supplemental Material, Note I [26]).
134
+ To construct the guide, a two-step approach is implemented. A non-conformal mapping
135
+ is first used [Figs. 1(a) and 1(b)]: x1 = ln x0 and y1 = ϕy0/L, from the virtual space to an
136
+ intermediate named the auxiliary space (position denoted by z1 = x1 + iy1). The second
137
+ step to the physical space [Figs. 1(b) and 1(c)] is conformal: z = exp(z1), which can lead to
138
+ the waveguide in optics [9]. The composition transformation is also non-conformal and the
139
+ thermal conductivities in the auxiliary space (denoted by κ1) and the physical space (satisfying
+ κ(z) = κ1(z1)) should be diagonally anisotropic, written as
+ κ = κ0 diag[(∂x1/∂x0)/(∂y1/∂y0), (∂y1/∂y0)/(∂x1/∂x0)] in the Cartesian coordinate system.
149
+ However, if κ0 itself is diagonally anisotropic, e.g.,
+ κ0 ∼ diag[(∂y1/∂y0)/(∂x1/∂x0), (∂x1/∂x0)/(∂y1/∂y0)], anisotropy can be eliminated in the other two spaces. In
+ addition, since only the component κ0^{y0y0} contributes to heat flux, we can take κ0 = κ0^{y0y0} without changing
165
+ the results in any space. In this way, thermal conductivities in all spaces are isotropic and
166
+ we have
+ κ(z) = κ1(z1(z)) = κ0 (ϕ/L) √(x² + y²).    (4)
175
+ We also plot the grid lines in Figs. 1(b) and 1(c) mapped from their counterparts in
176
+ Fig. 1(a). The three sets of grid lines are all orthogonal since they happen to be streamlines
177
+ and isotherms in their spaces. Intuitively, we call the mapping in the first step “pseudo-
178
+ conformal” for maintaining orthogonality of certain curves. By doing a pseudo-conformal
179
+ mapping first and then a conformal one, the composition is still pseudo-conformal. The
180
+ streamlines in Figs. 1(a) and 1(c) are both evenly distributed, so the heat flux magnitude
+ is not changed. Also, the flux in the guide does turn by an angle ϕ as we expected.
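As an illustration (not part of the original analysis), the two-step construction can be checked numerically. The sketch below composes the two mappings with parameters mirroring the experiment (ϕ = π/6, L = ϕ/0.65) and verifies that a constant-x0 streamline keeps its radius while its outlet end is rotated by ϕ; it also evaluates the conductivity of Eq. (4) at unit radius:

```python
import numpy as np

# Guide construction: step 1 (pseudo-conformal) x1 = ln(x0), y1 = phi*y0/L;
# step 2 (conformal) z = exp(z1). Parameters mirror the experiment.
phi = np.pi / 6
L = phi / 0.65

def guide_map(x0, y0):
    z1 = np.log(x0) + 1j * phi * y0 / L   # virtual -> auxiliary space
    return np.exp(z1)                     # auxiliary -> physical space

def kappa(z, kappa0=400.0):
    # Eq. (4): isotropic conductivity in the physical space.
    return kappa0 * (phi / L) * np.abs(z)

# A streamline x0 = const maps to an arc of constant radius |z| = x0,
# and the outlet end (y0 = L) is rotated by the bend angle phi.
x0 = 1.2
ys = np.linspace(0.0, L, 50)
zs = guide_map(x0, ys)
print(np.allclose(np.abs(zs), x0))         # True: radius preserved
print(np.isclose(np.angle(zs[-1]), phi))   # True: flux turned by phi
print(round(float(kappa(guide_map(1.0, 0.0))) / 400.0, 2))   # 0.65
```

The last value, κ/κ0 = 0.65 at unit radius, matches the normalized-conductivity scale quoted for Fig. 2(a).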
182
+ To confirm our design, we perform an experiment with the following parameters: a = 0.5,
183
+ L = 3W = ϕ/0.65, ϕ = π/6 and κ0 = 400 W m−1 K−1 (e.g., copper). Here, the unit length
184
+ is 40 cm.
185
+ The thermal conductivity profile based on Eq. (4) is shown in Fig. 2(a) and
186
+ the background has the same value as κ0.
187
+ The inhomogeneous κ is approximated by a
188
+ composite of copper and air holes [Fig. 2(b)]. We plot six straight lines, including the inlet
189
+ (Line 1) and outlet (Line 6) in Fig. 2(b). Line 4 is the interface of the guide and background.
190
+ Lines 2 and 3 divide the guide into thirds, and Line 5 is in the middle of the background.
191
+ Ideally, the temperature difference relative to the outlet on each line (isotherm) is marked
192
+ in Fig. 2(b). The total thermal bias is ∆T, which is experimentally realized by water baths
193
+ heating or cooling the sample. Fig. 2(c) shows the observed temperatures. Each of the six
194
+ lines is roughly isothermal, and we plot horizontal lines corresponding to their mean value
195
+ (excluding edge extremes).
196
+ Notably, the temperature differences between Lines 4-6 are
197
+ almost equal. We can conclude that the sample can bend the flux and keep its homogeneity.
198
+ More details about the experimental setup and data are presented in Supplemental Material,
199
+ Note II [26].
200
+ Next, we design a thermal expander that can convert the heat flux emitted by the point
201
+ source into parallel flows. In the virtual space [Fig. 3(a)], we consider the lower half-plane.
202
+ The isotherms form concentric semi-circles with the point source at the center, and the heat
203
+ flux magnitude varies with azimuth. To determine the temperature distribution, its value
204
+ on a certain isotherm should be given, e.g., 290 K on {z0 : |z0| = 1, y ⩽ 0} via an external
205
+ source. The x0-axis is thermally insulated except for the point source. By doing a conformal
206
+ mapping z = i(i + z0)/(i − z0) used in the optical counterpart [9], i.e., a Möbius transformation, the
208
+ unit lower half-disk becomes the unit upper half-disk in the physical space [Fig. 3(b)]. The
209
211
+ point source is now at z = i and its power becomes Q = Q0/2 to ensure T(z) = T0(z0(z)).
212
+ The diameter on the x-axis {z : |x| ⩽ 1, y = 0} is an isotherm (and also the external source)
213
+ mapped from {z0 : |z0| = 1, y ⩽ 0} and the heat flux on it is parallel (along the negative
214
+ y-axis). However, flux magnitude has a varying value depending on which azimuth it is
215
+ mapped from. For practical applications, we want a homogenized flux so it can keep parallel
216
+ in a rectangular extension attached to {z : |x| ⩽ 1, y = 0} when the external source is
217
+ moved to the bottom of the extension. This homogenization can be realized by z = i(i + z1(z0))/(i − z1(z0)).
+ Here, Arg[z1] = 2 arctan[(Arg[z0] − 2π)/(Arg[z0] − π)] + 2π is pseudo-conformal from the virtual space to the
225
+ auxiliary space, and Arg denotes the argument (see Supplemental Material, Note III for how
226
+ to construct this mapping [26]). This approach leads to the same boundary conditions of
227
+ external source and insulation as the conformal one. Now, the thermal conductivity in the
228
+ expander is
+ κ(x, y) = κ1 [1 + √((x² + y² − 1)² / (x⁴ + 2x²(y² + 1) + (y² − 1)²))]^{−1}.    (5)
238
+ Since κ is still isotropic, the heat flux on the x-axis can keep parallel. In Fig. 3(c), we plot
239
+ the thermal fields in the new physical space. We can see that the intersections of streamlines
240
+ with the x-axis are now uniformly distributed, which is different from the case in Fig. 3(b).
241
+ This is an intuitive representation of homogenization. We further show the heat flux on
242
+ the x-axis produced by the two mappings in Fig. 3(d). The theoretical results agree with
243
+ the finite-element numerical ones from COMSOL Multiphysics (https://www.comsol.com/).
244
+ In Supplemental Material, Note IV [26], we confirm the performance of our design when
245
+ considering an extension. Eq. (5) can also be written in a compact form using transposed
246
+ bipolar coordinates (See Supplemental Material, Note V [26]).
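The two defining properties of the expander mapping are easy to verify numerically. The following sketch (illustrative only) checks that the Möbius map sends the point source at the origin to z = i and sends the lower unit semicircle, the external source, onto the real segment |x| ⩽ 1 where the flux leaves as parallel flow:

```python
import numpy as np

# Mobius map of the expander: z = i (i + w) / (i - w).
def mobius(w):
    return 1j * (1j + w) / (1j - w)

# The point source at the origin of the virtual space is sent to z = i:
print(np.isclose(mobius(0), 1j))                      # True

# The lower unit semicircle (the external source) lands on the real
# segment |x| <= 1, the parallel-flux outlet of the device:
theta = np.linspace(1.05, 1.95, 19) * np.pi           # lower-half azimuths
z = mobius(np.exp(1j * theta))
print(np.allclose(z.imag, 0.0, atol=1e-12))           # True: real segment
print(bool(np.all(np.abs(z.real) <= 1.0)))            # True: inside [-1, 1]
```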
247
+ Besides functional design in different scenarios, our approach can also reveal the under-
248
+ lying ties between material parameters and coordinate transformations for some previous
249
+ works based on alternative schemes of transformation theory. Here, for example, we provide
250
+ a geometric insight of bilayer cloaks [5–8]. Transformation-based annular cloak (shell cloak)
251
+ depends on a non-homeomorphic mapping to generate its inner boundary, e.g., transforming
252
+ a point or a line into a closed curve [1, 2]. Due to the geometric symmetry, we consider
253
+ the upper half of an annular cloak, i.e., a carpet cloak. The virtual and physical spaces in
254
+ Fig. 4 show a transformation from the upper half-disk D+ with a radius R2 to the upper
255
+ half-annulus A+ = {z : R1 ⩽ |z| ⩽ R2, y ⩾ 0} [Fig. 4(c)]. Here A+ is actually the outer
256
258
+ layer of a bilayer cloak. For simplicity, we take R1 = 1 (in meters). The area outside D+
259
+ or A+ is the background. A cloak means the area enclosed by A+ is expected to have no
260
+ disturbance to the background. We still use a two-step pseudo-conformal mapping. First,
261
+ D+ is compressed/expanded into a half-ellipse E+ = {z1 : 0 ⩽ x1²/a² + y1²/b² ⩽ 1, y1 ⩾ 0}
+ [Fig. 4(b)] by x1 = (a/R2) x0 and y1 = (b/R2) y0. Here we take a = (R2² + R1²)/R2 and
+ b = (R2² − R1²)/R2, so the foci of the half-ellipse are (±2, 0). This non-conformal mapping is angle-preserving for
272
+ the Cartesian grid lines in Fig. 4(a). Second, conformally map E+ to A+:
273
+ z = (z1 − √(z1² − 4))/2,   if x1 < 0,
+ z = (z1 + √(z1² − 4))/2,   if x1 ⩾ 0.    (6)
295
+ This mapping is one branch of the inverse Zhukovsky transform [35] (See Supplemental
296
+ Material, Note VI for detailed explanation [26]). In particular, the upper boundary of A+
297
+ (i.e., {z : |z| = R2, y ⩾ 0}) finally undergoes an identity transformation as well as the
298
+ background. If the thermal bias is applied along the x0-axis, the grid lines in Fig. 4(a)
299
+ represent horizontal streamlines and vertical isotherms and only κ0^{x0x0} contributes to heat
+ flux. Taking κ0 = κ0^{x0x0}, we find the thermal conductivity in A+ is
+ κ = [(R2² + R1²)/(R2² − R1²)] κ0,    (7)
313
+ which is just the cloaking condition for the outer layer of a bilayer cloak [5]. In addition,
314
+ the lower boundary of A+ including {z : |z| = R1, y ⩾ 0} and {z : R1 ⩽ |x| ⩽ R2, y = 0}
315
+ (denoted by Γ as a whole) are mapped from a streamline in the virtual space. The heat flux
316
+ should have no normal component on Γ, which can be ensured by the insulation condition.
317
+ This corresponds to the cloaking condition for the inner layer of a bilayer cloak, i.e., zero-
318
+ value thermal conductivity and arbitrary shape as along as it covers {z : |z| = R1, y ⩾ 0}.
319
+ If the thermal bias is instead along the y0-axis, our mapping is still angle-preserving for
320
+ the vertical streamlines and horizontal isotherms in Fig. 4(a). Take κ0 = κ0^{y0y0} and we have
+ κ = [(R2² − R1²)/(R2² + R1²)] κ0.    (8)
331
+ The lower boundary Γ is mapped from an isotherm, leading to another bilayer cloak made
332
+ of zero-index materials [7]. We call the previous cloak “normal bilayer” to distinguish them.
333
+ The term “zero-index” refers to the constant-temperature condition on the inner layer that
334
+ can be replaced by an effectively infinite thermal conductivity [7, 8]. Figs. 4(d) and 4(e)
335
337
+ numerically confirm the performance of the two cloaks. In addition to the invisibility effect,
338
+ we can see that the patterns of isotherms and streamlines in the two cloaks have a duality
339
+ relationship by swapping the family of curves of streamlines and isotherms. Further, this
340
+ mapping can be performed on the entire complex plane to build the shell cloak. Our method
341
+ can also be used to design invisibility devices with other geometries, such as confocally
342
+ elliptical cloaks (See Supplemental Material, Note II for detailed discussions [26]).
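For the geometry simulated in Figs. 4(d) and 4(e), the two cloaking conditions reduce to concrete numbers; a short evaluation (illustrative only, using the κ0 and radii quoted with Fig. 4):

```python
# Bilayer cloaking conditions, Eqs. (7) and (8), for R2 = 2 R1.
kappa0 = 400.0                 # W m^-1 K^-1, copper-like background
R1, R2 = 1.0, 2.0
kappa_outer_normal = (R2**2 + R1**2) / (R2**2 - R1**2) * kappa0      # Eq. (7)
kappa_outer_zero_index = (R2**2 - R1**2) / (R2**2 + R1**2) * kappa0  # Eq. (8)
print(round(kappa_outer_normal, 1), round(kappa_outer_zero_index, 1))  # 666.7 240.0
```

The normal bilayer cloak thus needs an outer layer more conductive than the background, while the zero-index variant needs a less conductive one, reflecting the duality between the two designs.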
343
+ In summary, we propose the concept of diffusive pseudo-conformal mapping to simulta-
344
+ neously achieve precise heat flux regulation and maintain the material isotropy in transfor-
345
+ mation thermotics. By preserving the angles of certain families of curves (e.g., isotherms
346
+ and streamlines), our approach can circumvent anisotropy and perfectly match the interface
347
+ heat flux between the transformation media and background. We demonstrate our theory
348
+ by designing feasible and robust thermal devices that can bend or parallelize heat flux at our
349
+ will. Also, we revisit scattering-cancellation-based bilayer cloaks from a unified perspective
350
+ of pseudo-conformal mapping. In addition to reobtaining the parameters without inversely
351
+ solving heat equations, the intrinsic geometric relationship between different types of bilayer
352
+ cloaks is also revealed. The idea of diffusive pseudo-conformal mapping can be further devel-
353
+ oped for controlling transient heat conduction [24, 38], multithermotics [39, 40], and other
354
+ diffusion physics [41–44]. The consideration of perfect interface matching also benefits the
355
+ design of transformation wave media, such as avoiding impedance mismatch that plagues
356
+ conformal invisibility devices [45].
357
+ We acknowledge financial support from the National Natural Science Foundation of China
358
+ (Grants No. 11725521, No. 12035004, No. 12147169, and No. 12205101) and the Science
359
+ and Technology Commission of Shanghai Municipality (Grant No. 20JC1414700).
360
361
362
363
+ [1] U. Leonhardt, Science 312, 1777 (2006).
364
+ [2] J. B. Pendry, D. Schurig, and D. R. Smith, Science 312, 1780 (2006).
365
+ [3] S. Yang, J. Wang, G. Dai, F. Yang, and J. Huang, Phys. Rep. 908, 1 (2021).
366
368
+ [4] Y. Li, W. Li, T. Han, X. Zheng, J. Li, B. Li, S. Fan, and C.-W. Qiu, Nat. Rev. Mater. 6, 488
369
+ (2021).
370
+ [5] T. Han, X. Bai, D. Gao, J. T. L. Thong, B. Li, and C.-W. Qiu, Phys. Rev. Lett. 112, 054302
371
+ (2014).
372
+ [6] H. Xu, X. Shi, F. Gao, H. Sun, and B. Zhang, Phys. Rev. Lett. 112, 054301 (2014).
373
+ [7] Y. Li, K.-J. Zhu, Y.-G. Peng, W. Li, T. Yang, H.-X. Xu, H. Chen, X.-F. Zhu, S. Fan, and
374
+ C.-W. Qiu, Nat. Mater. 18, 48 (2019).
375
+ [8] L. Xu, S. Yang, and J. Huang, EPL 131, 24002 (2020).
376
+ [9] L. Xu and H. Chen, Nat. Photonics 9, 15 (2015).
377
+ [10] E. M. Stein and R. Shakarchi, Complex Analysis (Princeton University Press, Princeton, 2003).
378
+ [11] U. Leonhardt and T. Tyc, Science 323, 110 (2009).
379
+ [12] Y. Huang, Y. Zhang, J. Zhang, D. Liu, Q. Wang, B. Zhang, and Y. Luo, Nanophotonics 9,
380
+ 3243 (2020).
381
+ [13] M. Kraft, Y. Luo, S. A. Maier, and J. B. Pendry, Phys. Rev. X 5, 031029 (2015).
382
+ [14] Q. Chen, S. A. R. Horsley, N. J. G. Fonseca, T. Tyc, and O. Quevedo–Teruel, Nat. Commun.
383
+ 13, 2354 (2022).
384
+ [15] X. Wang, H. Chen, H. Liu, L. Xu, C. Sheng, and S. Zhu, Phys. Rev. Lett. 119, 033902 (2017).
385
+ [16] Y. Kim, S.-Y. Lee, J.-W. Ryu, I. Kim, J.-H. Han, H.-S. Tae, M. Choi, and B. Min, Nat.
386
+ Photonics 10, 647 (2016).
387
+ [17] Y. Liu, F. Sun, Y. Yang, Z. Chen, J. Zhang, S. He, and Y. Ma, Phys. Rev. Lett. 125, 207401
388
+ (2020).
389
+ [18] L. Xu, R. He, K. Yao, J. M. Chen, C. Sheng, Y. Chen, G. Cai, S. Zhu, H. Liu, and H. Chen,
390
+ Phys. Rev. Appl. 11, 034072 (2019).
391
+ [19] L. Lu, K. Ding, E. Galiffi, X. Ma, T. Dong, and J. B. Pendry, Nat. Commun. 12, 6887 (2021).
392
+ [20] Y. Li, X. Shen, Z. Wu, J. Huang, Y. Chen, Y. Ni, and J. Huang, Phys. Rev. Lett. 115, 195503
393
+ (2015).
394
+ [21] F. Yang, L. Xu, J. Wang, and J. Huang, Phys. Rev. Appl. 18, 034080 (2022).
395
+ [22] G. Dai, J. Shang, and J. Huang, Phys. Rev. E 97, 022129 (2018).
396
+ [23] C. Fan, Y. Gao, and J. Huang, Appl. Phys. Lett. 92, 251907 (2008).
397
+ [24] S. Guenneau, C. Amra, and D. Veynante, Opt. Express 20, 8207 (2012).
398
+ [25] G. F. Naterer, Advanced Heat Transfer, 3rd ed. (CRC Press, Boca Raton, 2022).
399
401
+ [26] See Supplemental Material which includes Refs. [10, 27–37].
402
+ [27] L. V. Ahlfors, Lectures on Quasiconformal Mappings, 2nd ed. (American Mathematical Society,
403
+ Providence, 2006).
404
+ [28] V. Alberge and A. Papadopoulos, in Handbook of Teichmüller Theory, Volume VII, edited by
+ A. Papadopoulos (European Mathematical Society, Zürich, 2020), pp. 393–415.
406
+ [29] J. Li and J. B. Pendry, Phys. Rev. Lett. 101, 203901 (2008).
407
+ [30] R. Liu, C. Ji, J. J. Mock, J. Y. Chin, T. J. Cui, and D. R. Smith, Science 323, 366 (2009).
408
+ [31] B. Zhang, T. Chan, and B.-I. Wu, Phys. Rev. Lett. 104, 233903 (2010).
409
+ [32] J. Zhang, J. B. Pendry, and Y. Luo, Adv. Photonics 1, 014001 (2019).
410
+ [33] W. Zeng, L. M. Lui, F. Luo, T. F.-C. Chan, S.-T. Yau, and D. X. Gu, Numer. Math. 121,
411
+ 671 (2012).
412
+ [34] V. A. Markel, J. Opt. Soc. Am. A 33, 1244 (2016).
413
+ [35] J. W. Brown and R. V. Churchill, Complex Variables and Applications, 9th ed. (McGraw-Hill
414
+ Education, New York, 2014).
415
+ [36] T. Han, P. Yang, Y. Li, D. Lei, B. Li, K. Hippalgaonkar, and C.-W. Qiu, Adv. Mater. 30,
416
+ 1804019 (2018).
417
+ [37] J. Qin, W. Luo, P. Yang, B. Wang, T. Deng, and T. Han, Int. J. Heat Mass Transfer 141,
418
+ 487 (2019).
419
+ [38] L. Xu, J. Liu, P. Jin, G. Xu, J. Li, X. Ouyang, Y. Li, C.-W. Qiu, and J. Huang, Natl. Sci.
420
+ Rev., nwac159 (2022).
421
+ [39] L. Xu and J. Huang, Phys. Rev. Appl. 12, 044048 (2019).
422
+ [40] G. Dai, Y. Zhou, J. Wang, F. Yang, T. Qu, and J. Huang, Phys. Rev. Appl. 17, 044006 (2022).
423
+ [41] R. Schittny, M. Kadic, T. Bückmann, and M. Wegener, Science 345, 427 (2014).
424
+ [42] Y. Ma, Y. Liu, M. Raza, Y. Wang, and S. He, Phys. Rev. Lett. 113, 205501 (2014).
425
+ [43] F. Gömöry, M. Solovyov, J. Šouc, C. Navau, J. Prat-Camps, and A. Sanchez, Science 335,
426
+ 1466 (2012).
427
+ [44] W. Jiang, Y. Ma, and S. He, Phys. Rev. Appl. 9, 054041 (2018).
428
+ [45] L. Xu, H. Chen, T. Tyc, Y. Xie, and S. A. Cummer, Phys. Rev. B 93, 041406(R) (2016).
429
431
+ FIG. 1. Transformation for a heat flux guide (gray and white meshes) including the background
432
+ (brown and white meshes). (a), (b) and (c) are the virtual space, the auxiliary space (only showing
433
+ the transformation of the guide) and the physical space, respectively.
434
+ FIG. 2. (a) Normalized thermal conductivity (κ/κ0) profile of the guide. The white curves are
438
+ isolines. (b) Sample structure made of copper and air holes for experimental setup. (c) Measured
439
+ temperatures showing the data on the red lines plotted in (b). The arc length means the distance
440
+ from left endpoint.
441
+ FIG. 3. Thermal expander. (a) The virtual space with the point source at the origin. Concentric
514
+ semicircles are isotherms. Gray curves with arrows represent streamlines, which are also constant-
515
+ azimuth lines ranging from 1.1π to 1.9π (0.1π interval from left to right). (b) The physical space
516
+ for a conformal mapping. (c) The physical space for a pseudo-conformal mapping. The thermal
517
+ fields in (a)–(c) are illustrated based on finite element numerical results. (d) The magnitude of
518
+ the normalized heat flux on the x-axis for the two physical spaces. The solid lines are theoretical
519
+ results while the scatter charts with markers are numerical results. Here, we take κ0 = 400 W m−1
520
+ K−1 and Q0 = 3000 W m−3.
521
+ FIG. 4. Carpet cloaks. (a)–(c) show the geometric transformation to construct such a cloak. (a),
567
+ (b), and (c) are the virtual space, auxiliary space and the physical space, respectively. (d) is the
568
+ computed temperature profile of a normal bilayer cloak with an insulating inner layer. We place
569
+ the hot source (300 K) and the cold source (200 K) on the left and right boundaries, respectively,
570
+ generating a thermal bias along the x-axis. (e) is the computed temperature profile of a zero-index
571
+ cloak with a constant-temperature inner layer (realized by an external source). We illustrate white
572
+ curves for isotherms and gray curves with arrows for streamlines. Here, we take R2 = 2R1 = 2 m
573
+ and κ0 = 400 W m−1 K−1. The entire simulation domain is limited to a 6 m × 3 m rectangle.
574
adE1T4oBgHgl3EQfKQPQ/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
cNE_T4oBgHgl3EQf0Bwx/content/tmp_files/2301.08326v1.pdf.txt ADDED
@@ -0,0 +1,545 @@
 
1
+ arXiv:2301.08326v1 [physics.ed-ph] 19 Jan 2023
2
+ A low-cost confocal microscope for the undergraduate lab
3
+ A. Reguilon, W. Bethard, and E. Brekke∗
4
+ Department of Physics, St. Norbert College, De Pere, WI 54115
5
+ Abstract
6
+ We demonstrate a simple and cost-efficient scanning confocal microscope setup for use in ad-
7
+ vanced instructional physics laboratories. The setup is constructed from readily available com-
8
+ mercial products, and the implementation of a 3D-printed flexure stage allows for further cost
9
+ reduction and pedagogical opportunity. Experiments exploring the thickness of a microscope slide
10
+ and the surface of solid objects with height variation are presented as foundational components of
11
+ undergraduate laboratory projects, and demonstrate the capabilities of a confocal microscope. This
12
+ system allows observation of key components of a confocal microscope, including depth perception
13
+ and data acquisition via transverse scanning, making it an excellent pedagogical resource.
14
16
+ I. INTRODUCTION
18
+ The design and use of optical instruments is an area of broad interest, appealing par-
19
+ ticularly to students at the intersection of biological fields with physics and engineering.
20
+ With the large number of undergraduate physicists interested in careers in biomedical fields,
21
+ a confocal microscope provides an excellent opportunity to see how physics principles re-
22
+ late to state-of-the-art equipment used for image formation. The properties, advantages,
23
+ and applications of confocal microscopy have been carefully investigated in the biological
24
+ community.1–3
25
+ While many undergraduate optics courses spend significant time on the ideas of optical
26
+ instruments and traditional microscopes, confocal microscopy is an important application
27
+ that is often absent from the student experience. There has been a variety of designs for
28
+ homebuilt confocal microscopes in the lab,4–9 but these designs are often more focused on
29
+ the production of an image than on student understanding of the physics underlying the
30
+ operation of the instrument. Previous versions were often either too expensive, or left essen-
31
+ tial elements hidden in commercial components, making them nonideal for undergraduate
32
+ pedagogy.
33
+ In this paper we present a simple scanning confocal microscope experiment, ideal for
34
+ the undergraduate optics or advanced instructional lab environment. Rather than being
35
+ designed to minimize its size or acquire images on par with commercial confocal micro-
36
+ scopes, our instructional setup is designed to clearly demonstrate the physics involved in
37
+ the image acquisition method and to make the optical components easy to manipulate. Our
38
+ setup provides an excellent system to illustrate scanning confocal microscopy and also gives
39
+ experience with the calibration of a data acquisition process.
40
+ Our system is designed to be versatile, with potential to develop student experimental
41
+ skills in the areas of optical systems, electronics, and 3D printing. Projects can be adapted to
42
+ suit the context of an advanced lab setting, an optics or electronics course, an independent
43
+ student project, or a senior thesis. In the following sections, we outline an experimental
44
+ process that can be used as a whole or broken up into parts, depending on the pedagogical
45
+ purposes of the instructor. In the first experiment, a microscope slide is used to illustrate the
46
+ ability of a confocal setup to acquire depth information in a sample. A second experiment
47
+ involves using this setup to investigate height variations of a solid material. Finally, either of
48
50
+ these experiments can incorporate a 3D printed translation stage,10 which allows integration
51
+ of programming and electronic control.
52
+ II. DESIGN
54
70
+ FIG. 1. The experimental setup for the confocal microscope. A generic visible laser is used, and
71
+ expanded to fill a microscope objective lens. The objective lens focuses the light onto a sample,
72
+ which is mounted on a translation stage at the focus of the objective. The light reflected from the
73
+ sample is separated by a beam splitter, and focused onto an image plane, where an iris limits light
74
+ to that which came from the focal plane of the objective. This intensity is then monitored by a
75
+ photodiode.
76
+ Our confocal microscope experimental setup is shown in Fig. 1. The essential elements
77
+ of this system are readily available commercial components. We use a simple 650 nm laser
78
+ diode, with an optical power of 5 mW. This laser is expanded with a two-lens beam expander
+ to a beam waist of 5 mm to fill a commercial microscope objective lens. The beam
80
82
+ expander is constructed so that the distance between the lenses is the sum of the focal
83
+ lengths, which increases the beam size by a factor of f2/f1. An iris is placed at the focus
84
+ of the beam expander to make the beam more circular. Once the beam is collimated, the
85
+ distance between the expander and the objective, including the presence of a steering mirror,
86
+ is chosen for convenience. The sample is placed on a translation stage, which can be either
87
+ a commercially purchased or a 3D-printed stage. The objective focuses the light onto the
88
+ sample, where a portion is reflected back through the objective and a beamsplitter redirects
89
+ the light towards a photodetector.
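The expander geometry follows directly from the focal lengths. Assuming the 100 mm and 500 mm lenses of Table I form the expander (an inference from Fig. 1, and with a ~1 mm input beam assumed for illustration), the lens spacing and magnification work out as:

```python
# Keplerian beam expander: lenses separated by f1 + f2, magnification M = f2/f1.
# The 100 mm / 500 mm pairing and the 1 mm input beam are assumed values.
f1_mm, f2_mm = 100.0, 500.0
M = f2_mm / f1_mm                     # beam-size magnification
separation_mm = f1_mm + f2_mm         # lens spacing for a collimated output
input_beam_mm = 1.0
print(M, separation_mm, M * input_beam_mm)   # 5.0 600.0 5.0
```

A 5x expansion of a ~1 mm diode beam gives the 5 mm waist quoted above, enough to fill the back aperture of the objective.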
90
+ TABLE I. The parts required for construction of the confocal microscope.
91
+ Part                   | Specs                         | Supplier and part number | Price (US$)
+ Laser Diode            | 650 nm, 5 mW                  | Adafruit 1054            | 10.00
+ Diode Holder           |                               | Thorlabs RA90            | 10.30
+ Feet (10)              | 1′′ x 2.3′′ x 3/8′′           | Thorlabs BA1S (5 PACK)   | 48.20
+ Lens Holder (7)        | Ø1′′ Optics, 8-32 Tap         | Thorlabs LMR1            | 109.83
+ Mirror Mount           | Kinematic Mount for Ø1′′      | Thorlabs KM100           | 39.86
+ Mirror                 | Ø1′′ 400 - 750 nm             | Thorlabs BB1-E02         | 77.35
+ Objective Lens         | 10X infinite conjugate        | Amscope PL10X-INF-V300   | 66.99
+ Thread Adaptor         | SM1 to RMS                    | Thorlabs SM1A3           | 18.50
+ Posts (10)             | 2′′                           | Thorlabs TR2-P5          | 50.52
+ Post Holders (10)      | 2′′                           | Thorlabs PH2-P5          | 81.28
+ 150 mm Lens            | Ø1′′, AR Coating: 650–1050 nm | Thorlabs LB1437-B        | 35.50
+ 500 mm Lens            | Ø1′′, AR Coating: 650–1050 nm | Thorlabs LA1908-B        | 33.01
+ 100 mm Lens            | Ø1′′, AR Coating: 650–1050 nm | Thorlabs LA1509-B        | 34.39
+ Beamsplitter           | Economy Ø1′′ 50:50            | Thorlabs EBS1            | 35.78
+ Ring Actuated Iris (2) | Ø0.8 - Ø12 mm                 | Thorlabs SM1D12D         | 145.86
+ Photodetector          | 350 - 1100 nm                 | Thorlabs DET36A2         | 134.20
+ Ultralight Breadboard  | 24′′ x 24′′ x 0.98′′          | Thorlabs PBG2424F        | 842.93
+ 3-Axis RollerBlock     | Long-Travel Bearing Stage     | Thorlabs RB13M           | 1,566.15
+ Total                  |                               |                          | 3,340.65
167
169
+ If the sample is at the focal point of the objective, the light coming back through the
170
+ system will be collimated, and a final imaging lens will focus it through an iris before the
171
+ photodetector. Hence, if the sample is at exactly the focal length away from the objective,
172
+ the image plane will be at the iris location, and a large portion of the light will make it to
173
+ the photodetector. However, if the sample’s surface is not at the focal plane, the amount of
174
+ light at the photodetector will be greatly reduced.
175
+ Essential to the operation of the confocal microscope is the fact that the iris is in the
176
+ image plane at the light focus.
177
+ Thus, it effectively images only the part of the sample
178
+ at the focal plane of the objective lens. Unlike traditional microscopes, this allows depth
179
+ information in a sample.
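The sharpness of this depth discrimination can be illustrated with a simple model (not from the original paper; the focal spot radius is an assumed value). For a Gaussian beam, the signal passed by an iris at the image plane falls off with defocus on the scale of the Rayleigh range:

```python
import numpy as np

# Toy model of optical sectioning: with the iris at the image plane, the
# detected signal drops sharply once the surface leaves the focal plane.
wavelength_mm = 650e-6
w0_mm = 2e-3                              # assumed focal spot radius
zR = np.pi * w0_mm**2 / wavelength_mm     # Rayleigh range, ~0.02 mm
dz = np.linspace(-0.2, 0.2, 201)          # defocus of the sample surface
signal = 1.0 / (1.0 + (dz / zR) ** 2) ** 2   # normalized pinhole signal (model)
print(bool(signal.max() > 0.999), bool(signal[0] < 1e-3))   # True True
```

In this estimate the signal collapses within a few tens of microns of defocus, which is why the reflection peaks in a longitudinal scan are so well localized.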
180
+ The apparatus described here is a cost-effective means for demonstrating these ideas in
181
+ an undergraduate setting. A full list of the parts needed can be seen in Table I, where it
182
+ is assumed power supplies and an oscilloscope are available. The inclusion of the mirror
183
+ and lens focal lengths were chosen by convenience with an available breadboard, and can
184
+ be modified as desired. The total cost of the equipment comes to US $3,340.65. However,
185
+ the largest cost is the 3-axis translation stage, which can be replaced with a 3D-printed
186
+ flexure stage,10,11 reducing the cost to under US $2,000 and opening up new opportunities
187
+ for development.
188
+ This setup can be used to examine several simple objects, which provide excellent insight
189
+ into the confocal imaging process. A deeper understanding of depth information from the
190
+ microscope can come from examining standard microscope slides, especially those with con-
191
+ cave sample openings. The slide can be mounted on the translation stage in a number of
192
+ ways; in our experiment, it was clipped to a 3D-printed stand, such that the laser passed
193
+ through the slide with nothing behind it. The system can also be used to examine sur-
194
+ face variation of solid objects. We have found that 3D-printed objects with known height
195
+ variation work well, but a variety of other objects can be used.
196
+ III. RESULTS AND ANALYSIS
198
+ In order to demonstrate the capabilities of the confocal microscope, two experiments are
199
+ outlined here. The first involves observing the thickness of semitransparent materials, with
200
+ commercial microscope slides offering an excellent starting point. Using a microscope slide
201
203
+ to demonstrate how the confocal microscope works offers students the chance to determine
204
+ depth information on a sample.
205
+ As a starting point, a microscope slide with a concave
206
+ sample opening can be used. In this case two reflections occur, one off of the slide’s front
207
+ surface and the other off of its back surface.
208
+ As a result, adjusting the position of the
209
+ sample longitudinally along the direction of laser propagation allows the observation of
210
+ two successive peaks, as each of the slide’s front surface and back surface is in the focal
211
+ plane of the objective. An example of the data observed as the slide platform is adjusted
212
+ longitudinally (in the x-direction) is shown in Fig. 2. Opening the iris at the image plane
213
+ makes clear the way in which the confocal microscope causes a specific image plane for
214
+ different depths in the material. Without this iris, depth information is lost, as can be seen
215
+ in Fig. 2.
216
[Fig. 2 plot: photodiode signal (V) vs. x-axis position (mm), with curves for the slide edge, the slide center, and the setup without the iris.]
FIG. 2. As the distance between the objective lens and the slide is varied, intensity peaks are seen when either the front or the back surface of the slide is at the focal plane of the objective. This is shown both at the flat edge of the slide and at the curved center, where the slide is not as thick. The shifting of the second peak reveals the changing thickness of the slide. Removing the iris in the imaging plane reveals the necessity of limiting the detected light to that from a single image plane for depth perception.
By using a microscope slide with a concave sample opening, the thickness of the glass can be observed as a function of position along the slide. As a starting point, this can be done by moving the slide a small distance transversely (in the y-direction), then scanning longitudinally (in the x-direction) to observe the two peaks, stepping through successive positions. An example of the data obtained with this two-axis scan technique is shown in Fig. 3.
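The two-axis procedure above can be sketched in a few lines of Python. The stage and photodiode interfaces below are simulated stand-ins (a Lorentzian confocal peak over a hypothetical sloped surface), not the actual instrument-control code:

```python
def photodiode_voltage(x, y):
    # Simulated confocal signal: a Lorentzian peak in x whose center
    # (the surface position) varies with transverse position y.
    surface = 4.6 + 0.05 * y   # hypothetical surface profile (mm)
    gamma = 0.1                # peak half-width (mm)
    return 2.5 / (1.0 + ((x - surface) / gamma) ** 2)

def two_axis_scan(y_positions, x_positions):
    """For each transverse position, scan longitudinally and record
    the x position of maximum reflected signal."""
    profile = []
    for y in y_positions:
        best_x = max(x_positions, key=lambda x: photodiode_voltage(x, y))
        profile.append((y, best_x))
    return profile

ys = [i * 0.2 for i in range(5)]              # transverse steps (mm)
xs = [4.0 + i * 0.01 for i in range(101)]     # longitudinal scan grid (mm)
print(two_axis_scan(ys, xs))
```

With real hardware, `photodiode_voltage` would be replaced by a stage move plus an ADC read, but the peak-finding logic is unchanged.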
[Fig. 3 plot: position of highest reflection (mm) vs. distance along slide (mm), comparing the two-axis and one-axis scans.]
FIG. 3. The distance from the objective at which peak reflected light is observed, as a function of distance along the slide. The region where the slide is flat is visible before the concave opening begins at 4 mm; beyond this point the changing thickness of the concave sample opening can be observed. The two-axis scan involves moving the translation stage transversely a small amount before scanning longitudinally to find the maximum. The one-axis method involves only scanning transversely, using a known calibration of the photodiode voltage with depth.
This setup can also be used to demonstrate the usefulness of a scanning confocal microscope by acquiring the same depth information while moving the slide only transversely to the laser beam. This makes the image collection process much faster, but requires calibrating the system. As the slide is shifted transversely in the area where the depth varies, the different depths produce different voltages at the photodiode. To correlate these voltage changes with height changes, a fit for the voltage as a function of height in the scanned region is found from the longitudinal scan, as shown in Fig. 4.
[Fig. 4 plots: voltage (V) vs. distance (mm); (a) linear fit over a 0.066 mm range, (b) Lorentzian fit over a 0.21 mm range.]

FIG. 4. The intensity peak obtained as the slide is translated longitudinally can be used to provide a voltage vs. height calibration for the sample. An appropriate fitting function can be chosen to fit the peak over a particular height-variation region on a sample. (a) A linear fit to a portion of the peak, providing a simple calibration valid over 0.066 mm around the sample. (b) A Lorentzian fit valid over a 0.21 mm range around the sample.

The shape of the intensity peak is caused by the changing beam waist as it propagates through the image plane, with the intensity limited by the iris size.
This is a complex dependence, but it can be roughly fit with a Lorentzian over much of the peak. For a more precise calibration, one wants a function that fits the intensity peak well over the particular range of sample heights of interest. Using this calibration, voltage changes measured while scanning transversely can be converted to height changes in the sample. The simplest calibration is to use a portion of the graph that can be well approximated as linear, as shown in Fig. 4(a). However, this linear fit is only valid over a limited height variation, less than 100 µm. Over a larger height variation an exponential or polynomial fit may also work well; we illustrate a Lorentzian fit in Fig. 4(b). Allowing students to determine this calibration technique provides additional insight and experience in data analysis methods. Data for the slide thickness, taken again with a transverse scan calibrated using the Lorentzian fit, are shown in Fig. 3 and agree with the data collected point by point with motion in both dimensions. Further, this process allows students to understand how data can be collected and analyzed quickly in this 'scanning' configuration of a confocal microscope.
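For the linear calibration of Fig. 4(a), the fit and its inversion are simple enough to write directly. The voltage-height pairs below are made-up illustrative numbers, not the measured data:

```python
def fit_linear(xs, ys):
    # Least-squares slope and intercept for the roughly linear
    # flank of the intensity peak.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibration data: photodiode voltage at known
# longitudinal positions on the near-linear side of the peak.
heights = [0.000, 0.011, 0.022, 0.033, 0.044, 0.055, 0.066]  # mm
volts   = [0.50, 0.80, 1.10, 1.40, 1.70, 2.00, 2.30]         # V

slope, intercept = fit_linear(heights, volts)

def voltage_to_height(v):
    """Invert the linear calibration to map a voltage reading
    back to a height on the calibrated flank (mm)."""
    return (v - intercept) / slope

print(voltage_to_height(1.25))  # height corresponding to 1.25 V
```

A Lorentzian calibration follows the same pattern, with a nonlinear fit (e.g. least squares on V(h) = A / (1 + ((h - h0)/g)**2)) in place of `fit_linear`.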
The ability of the confocal microscope to determine height variations of a solid sample represents another key category of experiment that can be undertaken. To illustrate this point, stereolithography (SLA) 3D-printed objects of known height variation are used. SLA 3D printing, like fused deposition modeling (FDM) 3D printing, creates a model layer by layer; instead of extruding filament, a laser polymerizes resin at a specific point. This process allows a quantifiable amount of material to be deposited with a known volumetric resolution. This property was exploited to create steps of known thickness on the surface of a 3D-printed block, small enough to demonstrate the accuracy of the system. Variations on this experiment could be done with a variety of objects, as long as the object has height variation on the scale of hundreds of microns.
With these solid objects there are no longer multiple intensity peaks from different depths of the sample; instead, the location of the surface is given by the maximum of the intensity peak. Here the surface height variation can be investigated either by successively stepping transversely and longitudinally, or by scanning only transversely using a calibrated intensity vs. height graph, as discussed above. For our experiment two different surface variations were printed, one that increased 50 µm in height for each 200 µm shift in position, and another that increased 100 µm in height for each 200 µm shift in position. Figure 5 shows the measured surface location as a function of the distance along the sample for each of these materials using the two-axis scan technique, and illustrates the ability of the system to determine solid surface height variations. The precision of 3D printers can limit the resolution of features created using this method, but it is expected that the cost of high-resolution 3D printers will continue to decrease, allowing more intricate surface patterns to be investigated. Additional solid objects such as coins can also be examined, but their high reflectivity and angled surfaces can cause complex behavior in the signal intensity, and extra care is required.
IV. INCORPORATION OF 3D-PRINTED TRANSLATION STAGE
3D printing has developed into an extremely valuable tool for the undergraduate lab, and has been incorporated into a number of experimental designs.12–14 Due to recent advances and open-source development, it is now possible to print and program excellent open-source translation stages,10,11 and 3D printing has been incorporated into a variety of microscope designs.15–20 In the confocal microscopy setup presented here, 3D printing presents the possibility of replacing a commercial translation stage system with a homebuilt option. This replacement serves two main purposes: first, it can reduce the cost of the necessary equipment by a significant fraction; second, it allows meaningful student experience in design, 3D printing, electronics, and programming. This makes the confocal microscope project quite general, appealing to students interested in a wide range of engineering or data acquisition fields.

[Fig. 5 plot: material height (mm) vs. distance along printed sample (mm), for the 100 µm and 50 µm steps.]

FIG. 5. The height of a solid 3D-printed object, as measured with the confocal microscope. This object was designed and printed to show the capabilities of the microscope, and a number of similar objects can be used.
One option that incorporates 3-axis translation and excellent precision is the OpenFlexure Block stage.11 This stage is well documented elsewhere, and was incorporated here with little modification to the design presented. It is specified to have a step size in the x direction of 12.4 ± 0.2 nm and to translate 2 mm on any axis. While this translation distance is not large enough to scan over large portions of the slide, it does allow the same experimentation on slide thickness described above, with the data taken again as observed in Fig. 2. Having the slide-thickness data from both the commercial translation stage and the OpenFlexure Block stage provides an additional means of verifying the calibration for distance per step on the block stage. Using these data, our calibration was determined to be 12.1 ± 0.4 nm, consistent with previous measurements.11 The use of 3D-printed translation stages makes the scanning confocal microscope even more accessible, and would combine well with projects commonly undertaken in an undergraduate lab environment.
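The cross-calibration just described reduces to one division: the true travel between the slide's two reflection peaks, fixed by the commercial-stage measurement, divided by the number of block-stage steps between the same peaks. The numbers below are illustrative placeholders chosen to reproduce a value near the paper's 12.1 nm result, not the actual measurements:

```python
# Cross-calibration of the 3D-printed block stage (numbers hypothetical).
peak_separation_mm = 0.968     # travel between peaks, from commercial-stage scan
steps_between_peaks = 80_000   # step count between the same peaks on the block stage

# Convert mm to nm and divide by steps to get the step-size calibration.
nm_per_step = peak_separation_mm * 1e6 / steps_between_peaks
print(round(nm_per_step, 1))
```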
V. CONCLUSIONS AND FUTURE DIRECTIONS
The design for a homebuilt scanning confocal microscope presented here is ideal for an undergraduate instructional laboratory setting. Through the measurement of microscope slide thicknesses and surface variations, undergraduates develop a better sense of the methods of scanning confocal microscopy and the information it provides. The design provides a robust setup with understandable mechanisms at a low price, enabling easy implementation. Our setup is intended for pedagogical purposes rather than imaging of a quality competitive with commercial confocal microscopes, but it can be extended for more complete imaging. If desired, the OpenFlexure stage can be automated, allowing this project to be expanded into acquiring scanned three-dimensional images. This is especially appealing in the context of an independent project or senior thesis looking to incorporate additional programming.
An investigation of the resolution of the system2 can also be employed as an extension to the experiments presented here. Investigating the limiting resolution would require materials with much finer features: with this wavelength and numerical-aperture lens, the theoretical lateral resolution is expected to be near 1 µm, and the axial resolution on the order of 15 µm. Several factors could be investigated to determine the maximum resolution, including beam size, the minimal size and location of the iris, and the numerical aperture of the objective lens used.
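The resolution figures above can be estimated from the standard diffraction formulas (lateral ≈ 0.61 λ/NA, axial ≈ 2 λ n/NA²). The numerical aperture below is an assumed value for a typical 10X objective, since the paper does not state it:

```python
# Hedged order-of-magnitude resolution estimate from standard formulas.
wavelength_um = 0.650   # 650 nm laser
NA = 0.3                # assumed numerical aperture of the 10X objective
n = 1.0                 # imaging in air

lateral_um = 0.61 * wavelength_um / NA       # Rayleigh lateral resolution
axial_um = 2 * wavelength_um * n / NA ** 2   # axial (depth) resolution

print(f"lateral ~ {lateral_um:.1f} um, axial ~ {axial_um:.0f} um")
```

With these assumptions the estimates land near the 1 µm lateral and 15 µm axial values quoted in the text.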
The confocal microscope system described in this paper can easily be expanded, either to improve its quality or to examine further applications. Enhancements include the addition of a second beamsplitter and a camera, the use of achromatic doublets to decrease aberrations, improving the laser quality through fiber coupling before use, or a galvo-scanning mirror in place of the x-y translation stage. In addition, the experiments outlined here can easily be extended to simple biological samples or fluorescence microscopy, allowing further insight into the way the optical system is commonly applied in biomedical fields.
+
442
ACKNOWLEDGMENTS

We wish to acknowledge the contributions of Joseph Coonen and the programming insight of Michael Olson.
1. C. J. R. Sheppard and D. M. Schotton, Confocal Laser Scanning Microscopy (Springer-Verlag, Singapore, 1997).
2. A. D. Elliot, "Confocal Microscopy: Principles and Modern Practices," Curr. Protoc. Cytom. 92, e68 (2020).
3. J. C. Erie, J. W. McLaren, and S. V. Patel, "Confocal Microscopy in Ophthalmology," Am. J. Ophthalmol. 148, 639 (2009).
4. P. Xi, B. Rajwa, J. T. Jones, and J. P. Robinson, "The design and construction of a cost-efficient confocal laser scanning microscope," Am. J. Phys. 75, 203 (2007).
5. J. Hsu, S. Dhingra, and B. D'Urso, "Design and construction of a cost-efficient Arduino-based mirror galvanometer system for scanning optical microscopy," Am. J. Phys. 85, 68 (2017).
6. S. Arunkarthick, M. M. Bijeesh, A. S. Vetcha, N. Rastogi, P. Nandakumar, and G. K. Varier, "Design and construction of a confocal laser scanning microscope for biomolecular imaging," Current Science 107, 1965 (2014).
7. C. M. Jennings, J. B. King, and S. H. Parekh, "Low-cost, minimalist line-scanning confocal microscopy," Opt. Lett. 47, 4191 (2022).
8. P. K. Shakhi, M. M. Bijeesh, G. K. Varier, and P. Nandakumar, "An in-house constructed dual channel confocal fluorescence microscope for biomolecular imaging," OSA Continuum 4, 2177 (2021).
9. C. Gong, N. Kulkarni, W. Zhu, C. D. Nguyen, C. Curiel-Lewandrowski, and D. Kang, "Low-cost, high-speed near infrared reflectance confocal microscope," Biomed. Opt. Express 10, 3497 (2019).
10. J. P. Sharkey, C. C. W. Foo, A. Kabla, J. J. Baumberg, and R. W. Bowman, "A one-piece printed flexure stage for open-source microscopy," Rev. Sci. Instrum. 87, 025104 (2016).
11. Q. Meng, K. Harrington, J. Stirling, and R. Bowman, "The OpenFlexure Block Stage: sub-100 nm fibre alignment with a monolithic plastic flexure stage," Opt. Express 28, 4763 (2020).
12. M. Mantia and T. Bixby, "Optical measurements on a budget: A 3D printed ellipsometer," Am. J. Phys. 90, 445 (2022).
13. E. Brekke, T. Bennett, H. Rook, and E. L. Hazlett, "3D printing an external-cavity diode laser housing," Am. J. Phys. 88, 1170 (2020).
14. B. Schmidt, M. Pacholok, D. King, and J. Kariuki, "Application of 3D Printers to Fabricate Low-Cost Electrode Components for Undergraduate Experiments and Research," J. Chem. Educ. 99, 1160 (2022).
15. T. Matsui and D. Fujiwara, "Optical sectioning robotic microscopy for everyone: the structured illumination microscope with the OpenFlexure stages," Opt. Express 30, 23208 (2022).
16. J. T. Collins, J. Knapper, J. Stirling, J. Mduda, C. Mkindi, V. Mayagaya, G. A. Mwakajinga, P. T. Nyakyi, V. L. Sanga, D. Carbery, L. White, S. Dale, Z. J. Lim, J. J. Baumberg, P. Cicuta, S. McDermott, B. Vodenicharski, and R. Bowman, "Robotic microscopy for everyone: the OpenFlexure microscope," Biomed. Opt. Express 11, 2447 (2020).
17. B. Diedrich, R. Lachmann, B. Marsikova, H. Wang, X. Uwurukundo, A. S. Mosig, and R. Heintzmann, "A versatile and customizable low-cost 3D-printed open standard for microscopic imaging," Nat. Commun. 11, 5979 (2020).
18. M. Del Rosario, H. S. Heil, A. Mendes, V. Saggiomo, and R. Henriques, "The field guide to 3D printing in optical microscopy for life sciences," Adv. Biology 6, 2100994 (2022).
19. A. M. Chagas, L. L. Prieto-Godino, A. B. Arrenberg, and T. Baden, "The €100 lab: A 3D-printable open-source platform for fluorescence microscopy, optogenetics, and accurate temperature control during behaviour of zebrafish, Drosophila, and Caenorhabditis elegans," PLOS Biology 15, e2002702 (2017).
20. J. W. P. Brown, A. Bauer, M. E. Polinkovsky, A. Bhumkar, D. J. B. Hunter, K. Gaus, E. Sierecki, and Y. Gambin, "Single-molecule detection on a portable 3D-printed microscope," Nat. Commun. 10, 5662 (2019).
cNE_T4oBgHgl3EQf0Bwx/content/tmp_files/load_file.txt ADDED
arXiv:2301.08326v1 [physics.ed-ph] 19 Jan 2023

A low-cost confocal microscope for the undergraduate lab

A. Reguilon, W. Bethard, and E. Brekke∗
Department of Physics, St. Norbert College, De Pere, WI 54115

Abstract

We demonstrate a simple and cost-efficient scanning confocal microscope setup for use in advanced instructional physics laboratories. The setup is constructed from readily available commercial products, and the implementation of a 3D-printed flexure stage allows for further cost reduction and pedagogical opportunity. Experiments exploring the thickness of a microscope slide and the surface of solid objects with height variation are presented as foundational components of undergraduate laboratory projects, and demonstrate the capabilities of a confocal microscope. This system allows observation of key components of a confocal microscope, including depth perception and data acquisition via transverse scanning, making it an excellent pedagogical resource.

I. INTRODUCTION

The design and use of optical instruments is an area of broad interest, appealing particularly to students at the intersection of biological fields with physics and engineering. With the large number of undergraduate physicists interested in careers in biomedical fields, a confocal microscope provides an excellent opportunity to see how physics principles relate to state-of-the-art equipment used for image formation. The properties, advantages, and applications of confocal microscopy have been carefully investigated in the biological community.1–3 While many undergraduate optics courses spend significant time on the ideas of optical instruments and traditional microscopes, confocal microscopy is an important application that is often absent from the student experience. There have been a variety of designs for homebuilt confocal microscopes in the lab,4–9 but these designs are often more focused on the production of an image than on student understanding of the physics underlying the operation of the instrument. Previous versions were often either too expensive or left essential elements hidden in commercial components, making them nonideal for undergraduate pedagogy.

In this paper we present a simple scanning confocal microscope experiment, ideal for the undergraduate optics or advanced instructional lab environment. Rather than being designed to minimize its size or acquire images on par with commercial confocal microscopes, our instructional setup is designed to clearly demonstrate the physics involved in the image acquisition method and to make the optical components easy to manipulate. Our setup provides an excellent system to illustrate scanning confocal microscopy and also gives experience with the calibration of a data acquisition process. Our system is designed to be versatile, with potential to develop student experimental skills in the areas of optical systems, electronics, and 3D printing. Projects can be adapted to suit the context of an advanced lab setting, an optics or electronics course, an independent student project, or a senior thesis.

In the following sections, we outline an experimental process that can be used as a whole or broken up into parts, depending on the pedagogical purposes of the instructor. In the first experiment, a microscope slide is used to illustrate the ability of a confocal setup to acquire depth information in a sample. A second experiment involves using this setup to investigate height variations of a solid material. Finally, either of these experiments can incorporate a 3D-printed translation stage,10 which allows integration of programming and electronic control.

II. DESIGN

[Fig. 1 schematic: 650 nm laser, 100 mm and 500 mm beam-expander lenses, irises, mirror, beam splitter, 150 mm lens, objective lens, photodiode, and sample on an x-y translation stage.]

FIG. 1. The experimental setup for the confocal microscope. A generic visible laser is used and expanded to fill a microscope objective lens. The objective lens focuses the light onto a sample, which is mounted on a translation stage at the focus of the objective. The light reflected from the sample is separated by a beam splitter and focused onto an image plane, where an iris limits the light to that which came from the focal plane of the objective. This intensity is then monitored by a photodiode.

Our confocal microscope experimental setup is shown in Fig. 1. The essential elements of this system are readily available commercial components. We use a simple 650 nm laser diode with an optical power of 5 mW. This laser is expanded with a two-lens beam expander to a beam waist of 5 mm to fill a commercial microscope objective lens. The beam expander is constructed so that the distance between the lenses is the sum of the focal lengths, which increases the beam size by a factor of f2/f1. An iris is placed at the focus of the beam expander to make the beam more circular. Once the beam is collimated, the distance between the expander and the objective, including the presence of a steering mirror, is chosen for convenience. The sample is placed on a translation stage, which can be either a commercially purchased or a 3D-printed stage. The objective focuses the light onto the sample, where a portion is reflected back through the objective and a beamsplitter redirects the light towards a photodetector.

TABLE I. The parts required for construction of the confocal microscope.

Part                     Specs                               Supplier and part number    Price (US$)
Laser Diode              650 nm, 5 mW                        Adafruit 1054               10.00
Diode Holder                                                 Thorlabs RA90               10.30
Feet (10)                1′′ x 2.3′′ x 3/8′′                 Thorlabs BA1S (5 pack)      48.20
Lens Holder (7)          Ø1′′ optics, 8-32 tap               Thorlabs LMR1               109.83
Mirror Mount             Kinematic mount for Ø1′′            Thorlabs KM100              39.86
Mirror                   Ø1′′, 400–750 nm                    Thorlabs BB1-E02            77.35
Objective Lens           10X infinite conjugate              Amscope PL10X-INF-V300      66.99
Thread Adaptor           SM1 to RMS                          Thorlabs SM1A3              18.50
Posts (10)               2′′                                 Thorlabs TR2-P5             50.52
Post Holders (10)        2′′                                 Thorlabs PH2-P5             81.28
150 mm Lens              Ø1′′, AR coating: 650–1050 nm       Thorlabs LB1437-B           35.50
500 mm Lens              Ø1′′, AR coating: 650–1050 nm       Thorlabs LA1908-B           33.01
100 mm Lens              Ø1′′, AR coating: 650–1050 nm       Thorlabs LA1509-B           34.39
Beamsplitter             Economy Ø1′′ 50:50                  Thorlabs EBS1               35.78
Ring Actuated Iris (2)   Ø0.8–Ø12 mm                         Thorlabs SM1D12D            145.86
Photodetector            350–1100 nm                         Thorlabs DET36A2            134.20
Ultralight Breadboard    24′′ x 24′′ x 0.98′′                Thorlabs PBG2424F           842.93
3-Axis RollerBlock       Long-travel bearing stage           Thorlabs RB13M              1,566.
+ page_content='93 3-Axis RollerBlock Long-Travel Bearing Stage Thorlabs RB13M 1,566.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
69
+ page_content='15 Total 3,340.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
70
+ page_content='65 4 If the sample is at the focal point of the objective, the light coming back through the system will be collimated, and a final imaging lens will focus it through an iris before the photodetector.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
Hence, if the sample is exactly one focal length away from the objective, the image plane will be at the iris location, and a large portion of the light will make it to the photodetector. However, if the sample's surface is not at the focal plane, the amount of light at the photodetector will be greatly reduced. Essential to the operation of the confocal microscope is the fact that the iris is in the image plane at the light focus. Thus, it effectively images only the part of the sample at the focal plane of the objective lens. Unlike traditional microscopes, this provides depth information about a sample. The apparatus described here is a cost-effective means for demonstrating these ideas in an undergraduate setting. A full list of the parts needed is given in Table I, where it is assumed that power supplies and an oscilloscope are available. The mirror and the lens focal lengths were chosen for convenience with an available breadboard, and can be modified as desired. The total cost of the equipment comes to US $3,340.65. However, the largest cost is the 3-axis translation stage, which can be replaced with a 3D-printed flexure stage,10,11 reducing the cost to under US $2,000 and opening up new opportunities for development.
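The quoted total can be checked directly by summing the prices transcribed from Table I:

```python
# Prices (US$) transcribed from Table I; summing them confirms the quoted total.
prices = [
    10.00,    # Laser Diode
    10.30,    # Diode Holder
    48.20,    # Feet (10)
    109.83,   # Lens Holder (7)
    39.86,    # Mirror Mount
    77.35,    # Mirror
    66.99,    # Objective Lens
    18.50,    # Thread Adaptor
    50.52,    # Posts (10)
    81.28,    # Post Holders (10)
    35.50,    # 150 mm Lens
    33.01,    # 500 mm Lens
    34.39,    # 100 mm Lens
    35.78,    # Beamsplitter
    145.86,   # Ring Actuated Iris (2)
    134.20,   # Photodetector
    842.93,   # Ultralight Breadboard
    1566.15,  # 3-Axis RollerBlock stage
]
total = round(sum(prices), 2)
print(total)  # 3340.65
```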
This setup can be used to examine several simple objects, which provide excellent insight into the confocal imaging process. A deeper understanding of the depth information available from the microscope can come from examining standard microscope slides, especially those with concave sample openings. The slide can be mounted on the translation stage in a number of ways; in our experiment, it was clipped to a 3D-printed stand, such that the laser passed through the slide with nothing behind it. The system can also be used to examine surface variation of solid objects. We have found that 3D-printed objects with known height variation work well, but a variety of other objects can be used.
III. RESULTS AND ANALYSIS

In order to demonstrate the capabilities of the confocal microscope, two experiments are outlined here. The first involves observing the thickness of semitransparent materials, with commercial microscope slides offering an excellent starting point. Using a microscope slide to demonstrate how the confocal microscope works offers students the chance to determine depth information on a sample. As a starting point, a microscope slide with a concave sample opening can be used. In this case two reflections occur, one off of the slide's front surface and the other off of its back surface. As a result, adjusting the position of the sample longitudinally along the direction of laser propagation allows the observation of two successive peaks, as each of the slide's front and back surfaces passes through the focal plane of the objective. An example of the data observed as the slide platform is adjusted longitudinally (in the x-direction) is shown in Fig. 2. Opening the iris at the image plane makes clear the way in which the confocal microscope selects a specific image plane for each depth in the material. Without this iris, depth information is lost, as can be seen in Fig. 2.
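The two surface reflections appear as two local maxima in the photodiode signal as the stage moves longitudinally. A minimal sketch of locating them in a recorded scan, using hypothetical data and a naive local-maximum test (a real analysis might prefer a library routine such as scipy.signal.find_peaks):

```python
import math

def find_peaks_simple(signal, threshold):
    """Indices of local maxima above `threshold` (naive neighbor comparison)."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] >= signal[i - 1]
            and signal[i] > signal[i + 1]]

# Hypothetical longitudinal scan: stage position (mm) and photodiode voltage (V),
# with the front- and back-surface reflections modeled as two Gaussians.
x = [2.0 + 0.1 * i for i in range(36)]
v = [0.2
     + 3.0 * math.exp(-((xi - 2.8) / 0.08) ** 2)   # front-surface reflection
     + 2.0 * math.exp(-((xi - 4.3) / 0.08) ** 2)   # back-surface reflection
     for xi in x]

peak_positions = [x[i] for i in find_peaks_simple(v, threshold=1.0)]
# peak_positions holds two entries, near the front- and back-surface foci
```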
[Figure 2: photodiode signal (V) vs. x-axis position (mm), with curves for the slide edge, the slide center, and the scan without the iris.]

FIG. 2. As the distance between the objective lens and the slide is varied, intensity peaks are seen when either the front or the back surface of the slide is at the focal plane of the objective. This is shown both at the flat edge of the slide, and at the curved center where the slide is not as thick. The shifting of the second peak reveals the changing thickness of the slide. Removing the iris in the imaging plane reveals the necessity of limiting the light from a single image plane for depth perception.
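One subtlety when interpreting scans like Fig. 2: the separation between the two peaks is the stage displacement between the front- and back-surface foci, which underestimates the physical glass thickness because refraction shifts the focus inside the glass. In the paraxial approximation the physical thickness is roughly the peak separation multiplied by the refractive index. The excerpt does not state whether this correction is applied; the numbers below are illustrative assumptions:

```python
# Paraxial focal-shift correction for focusing through a glass slab:
# translating the stage by dx moves the focus inside the glass by ~n*dx,
# so the physical thickness is ~n times the measured peak separation.
n_glass = 1.52               # assumed refractive index of a soda-lime slide
peak_separation_mm = 0.66    # hypothetical stage travel between the two peaks

thickness_mm = n_glass * peak_separation_mm   # ~1.00 mm
```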
By using a microscope slide with a concave sample opening, the thickness of the glass can be observed as a function of the position along the slide. As a starting point, this can be done by moving the slide a small distance transversely (in the y-direction), then scanning longitudinally (in the x-direction) to observe the two peaks, stepping through positions. An example of the data obtained with this two-axis scan technique is shown in Fig. 3.
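The stepping procedure described above amounts to a nested raster: step transversely, then sweep longitudinally and record where the reflection is strongest. A schematic sketch, with a synthetic function standing in for the stage-and-photodiode hardware (all names and numbers here are illustrative, not the authors' software):

```python
import math

def two_axis_scan(y_positions, x_positions, read_signal):
    """For each transverse position y, sweep x longitudinally and record
    the x giving the strongest reflection (i.e., the surface location)."""
    return [(y, max(x_positions, key=lambda x: read_signal(x, y)))
            for y in y_positions]

# Synthetic stand-in for the photodiode: a surface that recedes linearly
# with y (all numbers illustrative, in mm).
def fake_signal(x, y, width=0.05):
    surface_x = 4.0 + 0.02 * y
    return math.exp(-((x - surface_x) / width) ** 2)

ys = [0.2 * i for i in range(10)]           # transverse steps
xs = [3.9 + 0.005 * i for i in range(40)]   # longitudinal sweep
profile = two_axis_scan(ys, xs, fake_signal)
# profile traces the receding surface as (y, x_of_peak) pairs
```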
[Figure 3: position of highest reflection (mm) vs. distance along slide (mm), comparing the two-axis and one-axis scans.]

FIG. 3. The distance from the objective where peak reflected light is observed, as a function of distance along the slide. The region where the slide is flat is visible, before the concave opening begins at 4 mm. Then the changing thickness where the concave sample opening begins can be observed. The two-axis scan involves moving the translation stage transversely a small amount before scanning longitudinally to find the maximum. The one-axis method involves only scanning transversely, using a known calibration of the photodiode voltage with depth.
This setup can also be used to demonstrate the usefulness of a scanning confocal microscope by acquiring the same depth information while only moving the slide transversely to the laser beam. This makes the image collection process much faster, but requires calibrating the system. As the slide is shifted transversely in the area where the depth varies, the different depths will cause different voltages in the photodiode. To correlate these voltage changes to height changes, a fit for the voltage as a function of height in the region scanned is found from the longitudinal scan, as shown in Fig. 4.
[Figure 4: voltage (V) vs. distance (mm) for the longitudinal intensity peak. Panel (a): linear fit, range 0.066 mm; panel (b): Lorentzian fit, range 0.21 mm.]

FIG. 4. The intensity peak obtained as the slide is translated longitudinally can be used to provide a voltage vs. height calibration for the sample. An appropriate fitting function can be chosen to fit the peak over a particular height variation region on a sample. a) A linear fit to a portion of the peak, providing a simple calibration valid over 0.066 mm around the sample. b) A Lorentzian fit valid over a 0.21 mm range around the sample.

The shape of the intensity peak is caused by the changing beam waist as it propagates through the image plane, with the intensity limited by the iris size. This is a complex dependence, but it can be roughly fit using a Lorentzian over much of the peak. For a more precise calibration, it is desirable to find a function that fits the intensity peak very well over a particular range of heights of the sample. Using this calibration, voltage changes when scanning transversely can be used to calculate height changes in the sample. The simplest calibration is to use a portion of the graph that can be well approximated as linear, as shown in Fig. 4(a). However, this linear fit is only valid over a limited height variation, less than 100 µm. Over a larger height variation, an exponential or polynomial fit may also work well; we illustrate a Lorentzian fit in Fig. 4(b). Allowing students to determine this calibration technique provides additional insight and experience in data analysis methods. The slide thickness data, taken again using a transverse scan calibrated with the Lorentzian fit, are shown in Fig. 3 and agree with those taken when collecting points individually with motion in both dimensions. Further, this process allows students to understand how data can be collected and analyzed quickly in this 'scanning' configuration of a confocal microscope.
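The calibration step can be sketched numerically: fit the near-linear flank of the longitudinal intensity peak, then invert the fit so that a voltage read during a transverse scan maps to a height. A minimal version using a linear fit on synthetic data (a Lorentzian calibration would swap in a different model function; all numbers are illustrative):

```python
import numpy as np

# Synthetic longitudinal scan: the near-linear flank of the intensity peak,
# with voltage falling as stage position z increases (illustrative numbers).
z = np.linspace(0.90, 1.00, 21)      # stage position (mm)
v = 2.5 - 18.0 * (z - 0.90)          # photodiode voltage (V)

# Calibrate: fit voltage as a linear function of height on the flank ...
slope, intercept = np.polyfit(z, v, 1)

# ... and invert it, so a voltage measured during a transverse scan
# maps directly to a surface height.
def height_from_voltage(voltage):
    return (voltage - intercept) / slope
```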
The ability of the confocal microscope to determine height variations of a solid sample represents another key category of experiment that can be undertaken. To illustrate this point, stereolithography (SLA) 3D-printed objects of known height variation are used. SLA 3D printing, like fused deposition modeling (FDM) 3D printing, creates a model layer by layer. Instead of extruding filament, however, a laser polymerizes resin at a specific point. This process allows a quantifiable amount of material to be deposited with a known volumetric resolution. This property was exploited to create steps of known thickness on the surface of a 3D-printed block, small enough to demonstrate the accuracy of the system. Variations on this experiment could be done with a variety of objects, as long as the object has height variation on the scale of 100s of microns. With these solid objects, there are no longer multiple intensity peaks from different depths of the sample. Instead, the location of the surface is given by the maximum of the intensity peak. Here, the surface height variation can be investigated either by successively stepping transversely and longitudinally, or by scanning only transversely using a calibrated intensity vs. height graph, as discussed above. For our experiment two different surface variations were printed: one that increased 50 µm in height for each 200 µm shift in position, and another that increased 100 µm in height for each 200 µm shift in position. Figure 5 shows the measured surface location as a function of the distance along the sample for each of these materials using the two-axis scan technique, and illustrates the ability of the system to determine solid surface height variations. The precision of 3D printers can limit the resolution of features created using this method, but it is expected that the cost of high-resolution 3D printers will continue to decrease, allowing more intricate surface patterns to be investigated. Additional solid objects such as coins can also be examined, but their high reflectivity and angled surfaces can cause complex behavior in the signal intensity, and extra care is required.
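For the stepped samples, the one-axis method reduces to passing each measured voltage through the calibration and looking for jumps in the recovered height profile. A small sketch of that reduction (all numbers hypothetical: calibration slope in V/µm, heights in µm):

```python
# Map a transverse voltage trace through a linear voltage-vs-height
# calibration and look for jumps in the recovered height profile.
slope_v_per_um = -0.005      # hypothetical calibration slope (V/um)
v_ref, h_ref = 2.0, 0.0      # voltage measured at the reference height

def height_um(voltage):
    return h_ref + (voltage - v_ref) / slope_v_per_um

# Hypothetical trace across two 50-um steps (each step shifts V by -0.25 V).
trace = [2.0, 2.0, 1.75, 1.75, 1.5, 1.5]
heights = [height_um(v) for v in trace]
steps = [round(b - a) for a, b in zip(heights, heights[1:]) if abs(b - a) > 10]
print(steps)  # [50, 50]
```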
IV. INCORPORATION OF 3D-PRINTED TRANSLATION STAGE

3D printing has developed into an extremely valuable tool for the undergraduate lab, and has been incorporated into a number of experimental designs.12–14 Due to recent advances and open-source development, it is now possible to print and program excellent open-source translation stages,10,11 and 3D printing has been incorporated into a variety of microscope designs.15–20 In the confocal microscopy setup presented here, 3D printing presents the pos-
210
+ page_content='8 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
211
+ page_content='6 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
212
+ page_content='4 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
213
+ page_content='2 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
214
+ page_content='0 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
215
+ page_content='8 Material Height (mm) 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
216
+ page_content='5 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
217
+ page_content='0 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
218
+ page_content='5 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
219
+ page_content='0 Distance along printed sample (mm) 100 micron step 50 micron step FIG.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
220
+ page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
221
+ page_content=' The height of a solid 3D-printed object, as measured with the confocal microscope.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
222
+ page_content=' This object was designed and printed to show the capabilities of the microscope, and a number of similar objects can be used.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
223
+ page_content=' sibility of replacing a commercial translation stage system with a homebuilt option.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
224
This replacement serves two main purposes. First, it can reduce the cost of the necessary equipment by a significant fraction. Second, it gives students meaningful experience in design, 3D printing, electronics, and programming. This makes the confocal microscope project quite general, appealing to students interested in a wide range of engineering and data acquisition fields.

One option that incorporates three-axis translation and excellent precision is the OpenFlexure Block stage.11 This setup is well documented elsewhere, and it was incorporated here with little modification to the design presented. The stage is specified to have a step size in the x direction of 12.4 ± 0.2 nm and to translate 2 mm on any axis. While this translation distance is not large enough to scan over large portions of the slide, it does allow the same experimentation on slide thickness described above, with the data taken again as observed in Fig. 2. Having slide-thickness data from both the commercial translation stage and the OpenFlexure Block stage provides an additional means of verifying the calibration for distance per step on the block stage. Using these data, our calibration was determined to be 12.1 ± 0.4 nm per step, consistent with previous measurements.11 The use of 3D-printed translation stages makes the scanning confocal microscope even more accessible, and combines well with projects commonly undertaken in an undergraduate lab environment.
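This cross-check amounts to dividing a feature's size measured in physical units (with the calibrated commercial stage) by the number of block-stage steps needed to traverse the same feature. With purely illustrative numbers, not the data from this experiment, the arithmetic looks like:

```python
# Hypothetical values for illustration only -- not the measured data.
thickness_mm = 1.45        # feature size from the commercial-stage scan
steps = 120_000            # block-stage steps spanning the same feature
nm_per_step = thickness_mm * 1e6 / steps   # 1 mm = 1e6 nm
print(f"calibration: {nm_per_step:.1f} nm/step")   # -> calibration: 12.1 nm/step
```

Propagating the uncertainties of both measurements through this ratio yields the quoted error bar on the calibration.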
239
V. CONCLUSIONS AND FUTURE DIRECTIONS

The design for a homebuilt scanning confocal microscope presented here is ideal for an undergraduate instructional laboratory setting. Through the measurement of microscope slide thicknesses and surface variations, undergraduates develop a better sense of the methods of scanning confocal microscopy and the information it provides. The design offers a robust setup with understandable mechanisms at a low price, enabling easy implementation. Our setup is intended for pedagogical purposes rather than imaging competitive with commercial confocal microscopes, but it can be extended for more complete imaging. If desired, the OpenFlexure stage can be automated, allowing the project to expand into acquiring scanned three-dimensional images. This is especially appealing in the context of an independent project or senior thesis looking to incorporate additional programming. An investigation of the resolution of the system2 can also be employed as an extension of the experiments presented here. Investigating the limiting resolution would require materials with much finer features: with this wavelength and numerical aperture, the theoretical lateral resolution is expected to be near 1 µm and the axial resolution on the order of 15 µm. Several factors could be investigated to determine the maximum resolution, including the beam size, the minimum size and location of the iris, and the numerical aperture of the objective lens used.
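Such estimates follow from the standard confocal resolution expressions, lateral ≈ 0.61λ/NA and axial ≈ 2nλ/NA² (prefactors vary by convention). A sketch with assumed parameters; the 650 nm wavelength and NA of 0.3 are illustrative values, not specifications of this setup:

```python
# Assumed values for illustration (not measured parameters of this setup):
wavelength_um = 0.65   # 650 nm red diode laser, assumed
NA = 0.3               # objective numerical aperture, assumed
n = 1.0                # refractive index of the sample medium (air)

lateral_um = 0.61 * wavelength_um / NA        # Rayleigh-type lateral estimate
axial_um = 2 * n * wavelength_um / NA ** 2    # axial (depth) estimate
print(f"lateral ~ {lateral_um:.1f} um, axial ~ {axial_um:.0f} um")
```

The strong NA dependence of the axial term shows why swapping the objective lens is the most direct way to probe the resolution limit.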
249
The confocal microscope system described in this paper can easily be expanded, either to improve its quality or to examine further applications. Enhancements include the addition of a second beamsplitter and camera, the use of achromatic doublets to decrease aberrations, fiber-coupling the laser to improve beam quality, or a galvo-scanning mirror in place of the x-y translation stage. In addition, the experiments outlined here can easily be extended to simple biological samples or fluorescence microscopy, allowing further insight into the way the optical system is commonly applied in biomedical fields.

ACKNOWLEDGMENTS

We wish to acknowledge the contributions of Joseph Coonen and the programming insight of Michael Olson.
253
∗ erik.brekke@snc.edu
1 C. J. R. Sheppard and D. M. Schotton, Confocal Laser Scanning Microscopy (Springer-Verlag, Singapore, 1997).
2 A. D. Elliot, “Confocal Microscopy: Principles and Modern Practices,” Curr. Protoc. Cytom. 92, e68 (2020).
3 J. C. Erie, J. W. McLaren, and S. V. Patel, “Confocal Microscopy in Ophthalmology,” Am. J. Ophthalmol. 148, 639 (2009).
4 P. Xi, B. Rajwa, J. T. Jones, and J. P. Robinson, “The design and construction of a cost-efficient confocal laser scanning microscope,” Am. J. Phys. 75, 203 (2007).
5 J. Hsu, S. Dhingra, and B. D’Urso, “Design and construction of a cost-efficient Arduino-based mirror galvanometer system for scanning optical microscopy,” Am. J. Phys. 85, 68 (2017).
6 S. Arunkarthick, M. M. Bijeesh, A. S. Vetcha, N. Rastogi, P. Nandakumar, and G. K. Varier, “Design and construction of a confocal laser scanning microscope for biomolecular imaging,” Current Science 107, 1965 (2014).
7 C. M. Jennings, J. B. King, and S. H. Parekh, “Low-cost, minimalist line-scanning confocal microscopy,” Opt. Lett. 47, 4191 (2022).
8 P. K. Shakhi, M. M. Bijeesh, G. K. Varier, and P. Nandakumar, “An in-house constructed dual channel confocal fluorescence microscope for biomolecular imaging,” OSA Continuum 4, 2177 (2021).
9 C. Gong, N. Kulkarni, W. Zhu, C. D. Nguyen, C. Curiel-Lewandrowski, and D. Kang, “Low-cost, high-speed near infrared reflectance confocal microscope,” Biomed. Opt. Express 10, 3497 (2019).
10 J. P. Sharkey, C. C. W. Foo, A. Kabla, J. J. Baumberg, and R. W. Bowman, “A one-piece printed flexure stage for open-source microscopy,” Rev. Sci. Instrum. 87, 025104 (2016).
11 Q. Meng, K. Harrington, J. Stirling, and R. Bowman, “The OpenFlexure Block Stage: sub-100 nm fibre alignment with a monolithic plastic flexure stage,” Opt. Express 28, 4763 (2020).
12 M. Mantia and T. Bixby, “Optical measurements on a budget: A 3D printed ellipsometer,” Am. J. Phys. 90, 445 (2022).
13 E. Brekke, T. Bennett, H. Rook, and E. L. Hazlett, “3D printing an external-cavity diode laser housing,” Am. J. Phys. 88, 1170 (2020).
14 B. Schmidt, M. Pacholok, D. King, and J. Kariuki, “Application of 3D Printers to Fabricate Low-Cost Electrode Components for Undergraduate Experiments and Research,” J. Chem. Educ. 99, 1160 (2022).
15 T. Matsui and D. Fujiwara, “Optical sectioning robotic microscopy for everyone: the structured illumination microscope with the OpenFlexure stages,” Opt. Express 30, 23208 (2022).
16 J. T. Collins, J. Knapper, J. Sterling, J. Mduda, C. Mkindi, V. Mayagaya, G. A. Mwakajinga, P. T. Nyakyi, V. L. Sanga, D. Carbery, L. White, S. Dale, Z. J. Lim, J. J. Baumberg, P. Cicuta, S. McDermott, B. Vodenicharski, and R. Bowman, “Robotic microscopy for everyone: the OpenFlexure microscope,” Biomed. Opt. Express 11, 2447 (2020).
17 B. Diedrich, R. Lachmann, B.
405
+ page_content=' Marsikova, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
406
+ page_content=' Wang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
407
+ page_content=' Uwurukundo, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
408
+ page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
409
+ page_content=' Mosig, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
410
+ page_content=' Heintz- mann, “A versatile and customizable low-cost 3D-printed open standard for microscopic imag- ing,” Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
411
+ page_content=' Comm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
412
+ page_content=' 11, 5979 (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
413
+ page_content=' 18 M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
414
+ page_content=' Del Rosario, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
415
+ page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
416
+ page_content=' Heil, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
417
+ page_content=' Mendes, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
418
+ page_content=' Saggiomo, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
419
+ page_content=' Henriques, “The field guide to 3D printing in optical microscopy for life sciences,” Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
420
+ page_content=' Biology 6, 2100994 (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
421
+ page_content=' 19 A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
422
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
423
+ page_content=' Chagas, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
424
+ page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
425
+ page_content=' Prieto-Godino, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
426
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
427
+ page_content=' Arrenberg, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
428
+ page_content=' Baden, “The e100 lab: A 3D-printable open-source platform for fluorescence microscopy, optgenetics, and accurate temperature control during behaviour of zebrafish, Drosophila, and Caenorhabdtis elegans,” PLOS Biology 15, e2002702 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
429
+ page_content=' 20 J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
430
+ page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
431
+ page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
432
+ page_content=' Brown, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
433
+ page_content=' Bauer, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
434
+ page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
435
+ page_content=' Polinkovsky, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
436
+ page_content=' Bhumkar, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
437
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
438
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
439
+ page_content=' Hunter, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
440
+ page_content=' Gaus, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
441
+ page_content=' Sierecki, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
442
+ page_content=' Gambin, “Single-molecule detection on a portable 3D-printed microscope,” Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
443
+ page_content=' Comm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
444
+ page_content=' 10, 5662 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
445
+ page_content=' 13' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/cNE_T4oBgHgl3EQf0Bwx/content/2301.08326v1.pdf'}
dNAzT4oBgHgl3EQfnv3m/content/tmp_files/2301.01587v1.pdf.txt ADDED
Condensed Matter Physics, 2022, Vol. 25, No. 4, 43704: 1–11
DOI: 10.5488/CMP.25.43704
http://www.icmp.lviv.ua/journal

Temperature dependence of dielectric permittivity in incommensurately modulated phase of ammonium fluoroberyllate

B. I. Horon 1,2∗, O. S. Kushnir 2, P. A. Shchepanskyi 1, V. Yo. Stadnyk 1

1 General Physics Department, Ivan Franko National University of Lviv, 23 Drahomanov Street, 79005 Lviv, Ukraine
2 Optoelectronics and Information Technologies Department, Ivan Franko National University of Lviv, 107 gen. Tarnavskyi Street, 79013 Lviv, Ukraine

Received July 15, 2022, in final form September 28, 2022
We study the temperature dependence of dielectric permittivity along the polar axis for the ferroelectric ammonium fluoroberyllate (AFB) crystal in the vicinity of its phase transition points. The experimental data within the incommensurately modulated phase of AFB are compared with the predictions of phenomenological models known from the literature: the Curie-Weiss (CW) law, the generalized Curie-Weiss (GCW) law, and the models by Levanyuk and Sannikov (LS) and by Prelovšek, Levstik and Filipič (PLF) suggested for improper ferroelectrics. It is shown that the LS approach describes the temperature behavior of the dielectric permittivity for the AFB crystal better than the CW, GCW and PLF models. The main physical reasons for this situation are elucidated.

Key words: phase transitions, incommensurate phases, improper ferroelectrics, dielectric permittivity, ammonium fluoroberyllate
1. Introduction

Ammonium fluoroberyllate (NH4)2BeF4 (or AFB) is an improper ferroelectric crystal that belongs to a large A2BX4 family. It undergoes two phase transitions (PTs) approximately at the temperatures 𝑇C ≈ 177 K and 𝑇i ≈ 183 K [1–6], which separate a low-temperature ferroelectric phase, an intermediate incommensurate phase and a high-temperature paraelectric phase. Although the AFB crystals have been thoroughly studied for decades (see, e.g., [6–11]), some problems of their PTs and critical phenomena still remain a matter of dispute.

In particular, AFB reveals an intriguing temperature dependence of its dielectric permittivity: unlike the optical birefringence and many other characteristics, the dielectric anomaly at 𝑇i is in fact absent, while the 𝑇C point is marked by only a weak dielectric peak [2–5, 12]. The dielectric properties of AFB have been the main subject of theoretical studies by Levanyuk and Sannikov [3] and by Prelovšek, Levstik and Filipič [5] (abbreviated respectively as LS and PLF), which are both based upon the hypothesis of improper ferroelectricity in AFB. In spite of this fact, the final expressions obtained in [3, 5] turn out to be different in many respects.

The other notable fact is that there has been no study in which an experimental temperature dependence of the dielectric permittivity for the AFB crystals would be simultaneously compared with different theoretical formulae in order to estimate the advantages and shortcomings of the latter. The only exception, our recent work [13], represents a short technical report based upon contemporary methods of nonlinear fitting and statistical techniques (see the works [14, 15]). Although a number of weak methodical points

∗Corresponding author: [email protected]

This work is licensed under a Creative Commons Attribution 4.0 International License. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI.

43704-1

arXiv:2301.01587v1 [cond-mat.mtrl-sci] 4 Jan 2023
[Figure 1: two panels (a, b) plotting 𝜀 against 𝑇 − 𝑇C (K); curve labels: Curie-Weiss and power law in panel a; LS and PLF in panel b. The plotted data points are not recoverable from the text extraction.]

Figure 1. (Colour online) Experimental temperature dependence 𝜀(𝑇) of dielectric permittivity for the AFB crystals (circles) and its fitting (lines) with the theoretical models (1), (2) (panel a) and (3), (4) (panel b) within the incommensurate phase.
associated with fitting [16–18] are omitted in that work, no physical reasoning or data interpretation has been offered there.

In the present study we compare all of the available phenomenological approaches which can, in principle, be applied to describe the dielectric properties of the AFB crystals, and explain why the LS theory [3] exceeds the performance of the other approaches and fits the experimental data for the dielectric permittivity almost perfectly.
2. Experimental data and short description of theoretical models

A single crystal of AFB for our studies was grown from an aqueous solution of a stoichiometric mixture of NH4F and BeF2, using a standard method of slow cooling. The dielectric permittivity was measured along the polar axis with an automated capacitive apparatus (the temperature region 170–200 K, the tolerance of temperature measurement ∼0.1 K, and the working frequency 1 kHz). Figure 1 displays the experimental temperature dependence of the dielectric permittivity for the AFB crystals. As seen from figure 1, no anomaly is visible at 𝑇i, in compliance with the main bulk of experimental data known from the literature.

Note also that the maximum dielectric permittivity detected by us (𝜀max ≈ 55) correlates well with the data obtained in the earlier measurements for the improper AFB crystals (𝜀max ≈ 35–160 [1–5, 12, 19]). This is contrary to the proper ferroelectrics, where the values 𝜀max ∼ 10³–10⁵ are often detected (see [10]). Such a small 𝜀max peak can indeed be successfully interpreted using the idea that the dielectric anomaly in improper ferroelectrics is a secondary effect, while the true order parameter has a symmetry different from that of the spontaneous electric polarization. For the same or somewhat different reasons, weak dielectric anomalies are also typical of ferroelastics [20], multiferroics [21] and ferroelectrics with noticeable amounts of structural defects [14, 15, 22].

Now we proceed to the phenomenological consideration of the dielectric properties of AFB. Since neither the LS [3] nor the PLF [5] model deals with the 𝜀(𝑇) function within the ferroelectric phase, we analyze the dielectric data only within the incommensurate and paraelectric phases. Another important point is a so-called ‘background’ dielectric permittivity 𝜀b, which can be independent of the PTs. The problem of the background versus the PT-driven anomaly is familiar from examining the specific heat of ferroics, since the appropriate anomaly is often comparable with the lattice contributions (see, e.g., [23, 24]). However, this is not so in the field of dielectric studies of proper ferroelectrics, where huge anomalous peaks are mostly observed, so that neglecting the background does not hinder the possibility of obtaining highly accurate fitting data. As a consequence, even a constant background term 𝜀b has rarely been considered in the dielectric studies of ferroelectrics, not to mention a temperature-dependent background 𝜀b(𝑇). Still, a few relevant exceptions are known from the literature [14, 25]. Of course, consideration of 𝜀b can become very important when improper ferroelectrics like AFB are addressed.

Finally, both the LS and PLF models [3, 5] treat the dielectric function as temperature-independent within the paraelectric phase, and this is consistent both with our experimental results and with the whole bulk of the literature data (especially those obtained in a broad temperature range [20]). Therefore, we restrict ourselves to the simplest assumption 𝜀b = const.

Next, the dielectric permittivity within the incommensurate phase can, in principle, be described by one of the following theoretical models.
Model (1). A canonical Curie-Weiss law with a constant background 𝜀b and a Curie constant 𝐶CW:

    \varepsilon(T) = \varepsilon_{\mathrm{b}} + \frac{C_{\mathrm{CW}}}{T - T_{\mathrm{C}}}.    (2.1)

Model (2). A power law representing a generalization of the Curie-Weiss formula (2.1), with the exponent 𝛾 > 1:

    \varepsilon(T) = \varepsilon_{\mathrm{b}} + \frac{C_{\gamma}}{(T - T_{\mathrm{C}})^{\gamma}}.    (2.2)
Model (3). The LS model [3] for the incommensurate phase, which in fact states that

    \varepsilon(T) = \varepsilon_{\mathrm{b}} + \varepsilon_{\mathrm{b}}^{2} A\,\frac{t\,(6 + t)}{4 - t},    (2.3)

where 𝐴 is a constant and 𝑡 implies the reduced temperature

    t = \frac{T_{\mathrm{i}} - T}{T_{\mathrm{i}} - \theta},

with 𝜃 (𝑇C < 𝜃 < 𝑇i) being an instability point for the order parameter. It can be defined in terms of the distance Δ𝑇 from the 𝑇C point (𝜃 = 𝑇C + Δ𝑇). Note that Δ𝑇 can be expressed in terms of the free-energy expansion [3] (see section 4).

According to formula (2.3), the 𝜀(𝑇) function diverges at 𝑡 = 4 and tends to 𝜀 = 𝜀b at 𝑇 = 𝑇i. Since the model predicts the same constant value 𝜀 = 𝜀b in the paraelectric phase, the dielectric function is continuous at the incommensurate–paraelectric PT, while the slope of the 𝜀(𝑇) curve suffers an abrupt change at 𝑇i. Although the authors [3] themselves have defined the applicability limits of formula (2.3) as a narrow temperature region below the 𝑇i point (i.e., as a region of small 𝑡 in the terms adopted in this work), we have checked the model (3) in the overall range between the temperatures 𝑇C and 𝑇i.
Model (4). The PLF model [5] for the incommensurate phase:

    \varepsilon(T) = \varepsilon_{\mathrm{b}} + \frac{\varepsilon_{\mathrm{b}}}{c} \left[ \frac{E(\tau)}{(1 - \tau^{2})\,K(\tau)} - 1 \right],    (2.4)

where 𝑐 is a constant, while 𝐾(𝜏) and 𝐸(𝜏) denote the complete elliptic integrals of the first and second kinds, respectively. The authors [5] have not linked the elliptic modulus 𝜏 with the PT parameters. Nonetheless, the relation 𝜏 = (𝑇i − 𝑇)/(𝑇i − 𝑇C) can be postulated due to the properties 𝜏 → 0 and 𝜏 → 1 holding respectively at 𝑇 → 𝑇i and 𝑇 → 𝑇C [5]. According to the PLF approach [5], formula (2.4) can be assumed to be applicable within the entire incommensurate phase. A criticality at 𝑇C in formula (2.4) is due to the behaviour of the terms (1 − 𝜏²) and 𝐾(𝜏) at 𝜏 → 1. Since the equality 𝐾(𝜏) = 𝐸(𝜏) takes place at 𝜏 = 0, we have 𝜀 = 𝜀b at 𝑇 = 𝑇i. Finally, the 𝑇i point, which corresponds to the anomaly found experimentally in the specific heat, is hardly detectable in the dielectric permittivity. Similarly to the LS model, the only track of this PT is a change in the 𝜀(𝑇) slope, which can be detected in a smoothed temperature dependence d𝜀(𝑇)/d𝑇 (not shown in figure 1).
It is worth noting that, contrary to the models (1) and (2), the temperature-independent background 𝜀b is introduced into the formulae (2.3) and (2.4) directly from the expansion of the free energy Φ (in fact, via the relation Φ ∼ 𝑃²/(2𝜀b), with 𝑃 being the electric polarization [3] — see section 4). Notice also that, in the case of AFB, we have evidently different experimental background levels in the paraelectric and ferroelectric phases (see figure 1). Finally, theoretical considerations [3] testify that it is unnecessary to retain any temperature-dependent 𝜀b(𝑇) terms.
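For concreteness, the four model functions (2.1)–(2.4) can be transcribed into code. The sketch below is illustrative only, not the authors' software; the transition temperatures 𝑇C and 𝑇i are the values quoted later in section 3, and note that SciPy's elliptic-integral routines take the parameter 𝑚 = 𝜏² rather than the modulus 𝜏.

```python
import numpy as np
from scipy.special import ellipe, ellipk  # complete elliptic integrals E, K

T_C, T_I = 177.64, 183.19  # transition temperatures quoted in section 3, K

def eps_cw(T, eps_b, C_cw):
    """Model (1): Curie-Weiss law with a constant background, eq. (2.1)."""
    return eps_b + C_cw / (T - T_C)

def eps_gcw(T, eps_b, C_g, gamma):
    """Model (2): generalized Curie-Weiss power law, eq. (2.2)."""
    return eps_b + C_g / (T - T_C) ** gamma

def eps_ls(T, eps_b, A, dT):
    """Model (3): Levanyuk-Sannikov form, eq. (2.3), with theta = T_C + dT."""
    t = (T_I - T) / (T_I - (T_C + dT))  # reduced temperature
    return eps_b + eps_b**2 * A * t * (6.0 + t) / (4.0 - t)

def eps_plf(T, eps_b, c):
    """Model (4): Prelovsek-Levstik-Filipic form, eq. (2.4)."""
    tau = (T_I - T) / (T_I - T_C)  # elliptic modulus: 0 at T_i, 1 at T_C
    m = tau**2                     # SciPy convention: parameter m = tau^2
    return eps_b + (eps_b / c) * (ellipe(m) / ((1.0 - m) * ellipk(m)) - 1.0)
```

By construction, models (3) and (4) both return 𝜀 = 𝜀b at 𝑇 = 𝑇i, which is the continuity property discussed above.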
3. Fitting the results and their discussion

Now we fit our experimental data 𝜀(𝑇) using the phenomenological models (1)–(4), determine the best model, and explain in detail the practical advantages and disadvantages of those models. The procedures of nonlinear fitting are implemented according to a standard Levenberg–Marquardt algorithm. The goodness-of-fit is evaluated with the 𝜒² and Wald–Wolfowitz statistical tests. Finally, the error margins for the model parameters are found with a bootstrap technique, using 2000 synthetic datasets. The appropriate details are elucidated elsewhere [13]. Figure 1 illustrates the fitting results and table 1 displays a short account of the main model parameters.
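The fitting pipeline just described (Levenberg–Marquardt minimization plus a residual-resampling bootstrap for the error margins) can be sketched as follows. This is a minimal illustration on synthetic data, since the measured 𝜀(𝑇) points are not tabulated here; the curve is generated from the LS parameters reported below, and only 200 bootstrap replicas are drawn instead of the paper's 2000.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
T_C, T_I = 177.64, 183.19  # transition temperatures from the text, K

def model_ls(T, eps_b, A, dT):
    """LS model (2.3) with theta = T_C + dT."""
    t = (T_I - T) / (T_I - (T_C + dT))
    return eps_b + eps_b**2 * A * t * (6.0 + t) / (4.0 - t)

# Synthetic stand-in for the measured points, built from the reported
# LS parameters plus Gaussian noise.
T = np.linspace(T_C + 0.3, T_I - 0.1, 40)
eps_obs = model_ls(T, 7.12, 0.0094, 4.01) + rng.normal(0.0, 0.3, T.size)

# Levenberg-Marquardt fit (method="lm" is SciPy's LM implementation).
popt, pcov = curve_fit(model_ls, T, eps_obs, p0=(7.0, 0.01, 4.0), method="lm")

# Bootstrap error bars: resample the residuals, refit, collect parameters.
resid = eps_obs - model_ls(T, *popt)
boot = []
for _ in range(200):
    fake = model_ls(T, *popt) + rng.choice(resid, size=resid.size, replace=True)
    try:
        p, _ = curve_fit(model_ls, T, fake, p0=popt, method="lm")
        boot.append(p)
    except RuntimeError:
        pass  # skip the rare replica on which LM fails to converge
boot = np.asarray(boot)
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5], axis=0)  # 95% intervals
```

The percentile spread of the bootstrap ensemble plays the role of the confidence intervals quoted later for 𝐴 and Δ𝑇.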
Table 1. Some of the fitting parameters of the 𝜀(𝑇) dependence corresponding to the phenomenological models (1)–(4).

Model   Parameter   Value
(1)     𝐶CW        0.179
(2)     𝐶𝛾         7.392
        𝛾          0.326
(3)     𝐴          0.0094
        Δ𝑇, K      4.01
(4)     𝑐          30.373
The Curie-Weiss law underestimates the experimental 𝜀(𝑇) curve at the temperatures more or less distant from the PT (figure 1a) and so fails to fit the dielectric permittivity appropriately. Moreover, the Curie-Weiss fit reveals too large a Z-score (see table 2). The PLF model (4) has characteristics similar to those of the Curie-Weiss law (see figure 1b and table 2). A different pattern takes place for the model (2), which corresponds to the generalized power law for the temperature dependence 𝜀(𝑇). Here, the fitting function overestimates most of the experimental data points and fails to catch the background (see figure 1a), whereas the Z-score is just as large as those for the other models mentioned above. Moreover, the model provides a 𝛾 value noticeably less than unity (see table 1), which is a physically unsound result. Hence, the generalized power law for the dielectric permittivity of AFB is also insufficient.
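The Z-scores quoted in table 2 follow from the Wald–Wolfowitz runs test applied to the signs of the fit residuals: a model of the wrong shape leaves long same-sign runs and hence a Z far from zero. A minimal implementation of the test statistic (our reading of the standard test, not the authors' code) is:

```python
import numpy as np

def runs_z_score(residuals):
    """Wald-Wolfowitz runs test on the signs of fit residuals.

    Too few sign runs (a systematically mis-shaped fit) give a negative Z;
    residuals scattering randomly about zero give Z near 0.
    """
    signs = np.sign(residuals)
    signs = signs[signs != 0]                  # discard exact zeros
    n_pos = int(np.sum(signs > 0))
    n_neg = int(np.sum(signs < 0))
    n = n_pos + n_neg
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))
    mu = 2.0 * n_pos * n_neg / n + 1.0         # expected number of runs
    var = (mu - 1.0) * (mu - 2.0) / (n - 1.0)  # its variance
    return (runs - mu) / np.sqrt(var)

# Two long same-sign blocks (2 runs only): the signature of a biased fit.
z_bad = runs_z_score(np.array([0.1, 0.2, 0.3, 0.4, -0.1, -0.2, -0.3, -0.4]))
# Perfectly alternating signs (8 runs): more runs than expected, positive Z.
z_good = runs_z_score(np.array([0.1, -0.2, 0.15, -0.1, 0.2, -0.3, 0.1, -0.1]))
```

Here z_bad comes out negative and z_good positive, which is the sense in which the large negative Z-scores of models (1), (2) and (4) in table 2 signal systematic misfit.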
Table 2. Results of the 𝜒² and Wald–Wolfowitz tests for the phenomenological models (1)–(4).

Model   Parameter            Value
(1)     𝜒²                  5252.33
        Reduced 𝜒²          165.22
        Z-score              −3.45
(2)     𝜒²                  1525.25
        Reduced 𝜒²          47.66
        Z-score              −4.85
        Correlation(𝐶𝛾, 𝛾)  −0.87
(3)     𝜒²                  320.87
        Reduced 𝜒²          9.72
        Z-score              0.34
        Correlation(𝐴, Δ𝑇)  −0.95
(4)     𝜒²                  2189.92
        Reduced 𝜒²          72.99
        Z-score              −4.39
276
+ 1
277
+ 2
278
+ 3
279
+ 4
280
+ 5
281
+ T
282
+
283
+ T
284
+ C
285
+ , K
286
+ 0
287
+ 10
288
+ 20
289
+ 30
290
+ 1
291
+ ε
292
+
293
+ ε
294
+ b
295
+ PLF
296
+ LS
297
+ CW
298
+ a
299
+ 10
300
+ −2
301
+ 10
302
+ −1
303
+ 10
304
+ 0
305
+ T
306
+
307
+ T
308
+ C
309
+ , K
310
+ 10
311
+ −1
312
+ 10
313
+ 0
314
+ 10
315
+ 1
316
+ ε
317
+
318
+ ε
319
+ b
320
+ PLF
321
+ LS
322
+ CW
323
+ b
324
+ Figure 2. Temperature dependences of reciprocal dielectric permittivity (a) and log-log plots of dielectric
325
+ permittivity (b). Circles correspond to experimental data and lines correspond to different theoretical
326
+ models: Curie-Weiss (CW), LS and PLF (see the legend). In the both cases, the background term 𝜀𝑏 is
327
+ extracted from the experimental data.
On the contrary, the theoretical curve referred to the LS model (3) fits the experimental data fairly well, and the appropriate statistical tests provide quite satisfactory results (see figure 1b, table 1 and table 2). Moreover, it becomes evident that the model (3) can in fact be applied in the entire temperature range under study, contrary to the cautions of the authors [3]. Finally, the term 𝜀b = 7.12 found from the LS fitting (not shown in table 1) turns out to be very close to the experimental dielectric background averaged over the paraelectric phase. In other words, the LS phenomenology obviously exceeds the performance of the other models. For completeness, we list the PT points derived with the model (3), which are not displayed in table 1 for the sake of brevity: 𝑇C = 177.64 K (found from the dielectric peak), 𝑇i = 183.19 K and 𝜃 = 181.65 K. Finally, the confidence intervals for the model parameters 𝐴 and Δ𝑇 are given respectively by −0.0053–0.0272 and 3.668–4.366 (cf. the data of table 1).
Now, we wish to clarify more scrupulously why, contrary to the model (3), the models (1), (2) and (4) agree worse with the experimental results for the AFB crystals. For this purpose, we display both the experimental and theoretical data 𝜀(𝑇) either in the ‘Curie-Weiss’ coordinates (𝜀 − 𝜀b)⁻¹ vs. (𝑇 − 𝑇C) (see figure 2a) or on the double logarithmic scale log(𝜀 − 𝜀b) vs. log(𝑇 − 𝑇C) (see figure 2b). To prevent overloading of these figures, we do not show the data obtained with the generalized power law (2.2). The results for this model differ only in insignificant details from those illustrated in figure 2a and figure 2b for the models (1) and (4).

Although the data of figure 2 can hardly be used for a thorough quantitative interpretation (see the discussion in section 2 and [16–18]), they illustrate well the main practical tendencies for the above models. The Curie-Weiss law for the dielectric permittivity implies a straight line in the (𝜀 − 𝜀b)⁻¹ vs. (𝑇 − 𝑇C) coordinates and a straight line with the slope −1 on the log-log scale (see the dotted lines in figure 2a and figure 2b). A close examination of the behavior of the theoretical PLF function (2.4) involving the elliptic integrals testifies that there exists a temperature region where the model (4) can be approximately reduced to an inverse power law, i.e., the Curie-Weiss relation. Namely, this is an intermediate region above the PT temperature 𝑇C given by 𝑇 − 𝑇C ≈ 10⁻¹–10⁰ K (see the dashed line in figure 2a and, especially, in figure 2b). It corresponds to moderately large reduced temperatures 𝜏 (𝜏 ≈ 0.82–0.98). Note that formula (2.4) was used in the work [26] for interpretation of the dielectric properties of incommensurate Rb2ZnCl4 crystals, and the authors [26] actually confirmed that, in the region of intermediate relative temperatures (𝑇 − 𝑇C), the relation (2.4) yields results very close to the Curie-Weiss law.

At the temperatures more distant from 𝑇C (i.e., in the region defined by the inequality 𝑇 − 𝑇C > 1 K, or at 𝜏 ≈ 0–0.82), one can observe severe deviations of the PLF model from the inverse power law (see figure 2a). Finally, the predictions of the PLF model become progressively different from those of the Curie-Weiss law also in the region given by 𝑇 − 𝑇C < 0.1 K, i.e., at 𝜏 > 0.98 (see figure 2b). Eventually, neither the Curie-Weiss nor the PLF model claims to consider the fluctuation corrections in the critical region, since they correspond to the mean-field theory, which is inapplicable in the closest vicinity of the PT points.
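The statement that the PLF expression mimics the Curie-Weiss law at intermediate 𝜏 can be checked numerically: a pure Curie-Weiss anomaly has the log-log slope exactly −1, so one may evaluate the local slope d log(𝜀 − 𝜀b)/d log(𝑇 − 𝑇C) of formula (2.4). The sketch below uses the 𝑇C and 𝑇i values quoted in this paper together with illustrative 𝜀b and 𝑐.

```python
import numpy as np
from scipy.special import ellipe, ellipk

T_C, T_I = 177.64, 183.19  # transition temperatures, K

def plf_anomaly(T, eps_b=7.12, c=30.373):
    """Anomalous part eps - eps_b of the PLF formula (2.4)."""
    tau = (T_I - T) / (T_I - T_C)
    m = tau**2  # SciPy elliptic integrals take the parameter m = tau^2
    return (eps_b / c) * (ellipe(m) / ((1.0 - m) * ellipk(m)) - 1.0)

# Local log-log slope of the anomaly versus T - T_C; a pure Curie-Weiss
# law would give -1 everywhere.
dT = np.logspace(-2.0, 0.0, 400)  # T - T_C from 0.01 K to 1 K
slope = np.gradient(np.log(plf_anomaly(T_C + dT)), np.log(dT))
```

In a window of intermediate 𝑇 − 𝑇C the slope stays close to −1, while it drifts away both toward 𝑇C and toward larger 𝑇 − 𝑇C, in line with the discussion above.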
Now that we have ascertained the main formal differences among the theoretical models, we are in a position to compare the experiment with all the theoretical predictions more closely. As seen from figure 2a, the experimental dependence 𝜀(𝑇) without the background deviates significantly from the Curie-Weiss law and, moreover, from any other inverse power law. The latter fact is also evident from figure 2b, since the slope of the experimental curve on the log-log scale changes continuously. Of course, the PLF model predicting a nearly inverse power law would fail in describing such data. On the contrary, the LS model (see the dash-dot lines in figure 2) is governed by a combination of terms linear in temperature, which enter both the numerator and the denominator of formula (2.3). This mathematical structure provides a gradual change in the slope of the LS curve in both figure 2a and figure 2b and so satisfactorily describes the experimental data.
4. Comparison of the LS and PLF models for the dielectric permittivity

The fact that the LS model describes the dielectric permittivity of the AFB crystals better than the PLF model does is unexpected and even counter-intuitive. Indeed, the LS approach looks simpler than the PLF model [5], which was developed later; the authors [5] may have been familiar with the LS results [3] and, moreover, they supposed their model to be applicable in a wider temperature region of the incommensurate phase than the LS model. To understand these problems better, we elucidate in brief the main differences in the physical assumptions underlying the two models. To do that, we outline the main points of the derivation of formulae (2.3) and (2.4).
The authors [3, 5] started from the same free-energy expansion in the framework of the mean-field theory. In polar coordinates, it can be written as follows:

    \Phi = \frac{\alpha}{2}\rho^{2} + \frac{\beta_{1}}{4}\rho^{4} + \frac{\beta_{2}}{4}\rho^{4}\cos 4\varphi - \sigma\rho^{2}\frac{\mathrm{d}\varphi}{\mathrm{d}z} + \frac{\delta}{2}\left[\left(\frac{\mathrm{d}\rho}{\mathrm{d}z}\right)^{2} + \rho^{2}\left(\frac{\mathrm{d}\varphi}{\mathrm{d}z}\right)^{2}\right] - EP + \frac{\kappa}{2}P^{2} + a\rho^{2}P\cos 2\varphi.    (4.1)
403
Here, 𝛼 = 𝛼′(𝑇 − 𝜃), 𝜌 and 𝜑 are respectively the amplitude and the phase of the order parameter, 𝐸 is the electric field, 𝑃 is the polarization, 𝜃 is the temperature point of structural instability, and 𝛼′, 𝛽₁, 𝛽₂, 𝜎, 𝛿, 𝜅 and 𝑎 denote temperature-independent constants (see also section 2). The most fundamental fact is the presence in formula (4.1) of a Lifshitz invariant proportional to 𝜎, which is symmetry-allowed for incommensurately modulated media. Due to the nonzero 𝜎 term in (4.1), the initial high-temperature phase does not lose its stability under the condition 𝛼 = 0 (i.e., at 𝑇 = 𝜃). Instead, it loses stability with respect to inhomogeneous displacements at another value 𝛼 = 𝛼₀ = 𝜎²/𝛿, corresponding to a higher paraelectric–incommensurate temperature 𝑇i.
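As a short consistency check, which uses only the definitions already given above, the instability condition fixes the transition temperature:

```latex
% With \alpha = \alpha'(T - \theta), the condition
% \alpha = \alpha_0 = \sigma^2/\delta yields
\alpha'(T_{\mathrm{i}} - \theta) = \frac{\sigma^2}{\delta}
\quad\Longrightarrow\quad
T_{\mathrm{i}} = \theta + \frac{\sigma^2}{\alpha'\delta} > \theta ,
```

so the paraelectric–incommensurate transition indeed occurs above 𝜃 whenever 𝜎 ≠ 0.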
Note that the 𝛽₂ term in formula (4.1), which is associated with the spatial anisotropy of the order parameter, notably complicates the problem of finding a steady-state solution for the free energy. Since the above approach focuses primarily on some (small but not too small) vicinity of 𝑇i, as is always the case with the mean-field theory, strongly anisotropic higher-order terms in the order parameter are omitted in (4.1). They drive a lock-in commensurate PT at 𝑇C rather than the incommensurate PT at 𝑇i (see, e.g., the work [26]).
Probably, the main difference between the LS and PLF models [3, 5] lies in the approaches adopted by the authors to solve the system of Euler–Lagrange equations

𝜕Φ/𝜕𝜌 − (d/d𝑧)(𝜕Φ/𝜕𝜌′) = 0,   (4.2)

𝜕Φ/𝜕𝜑 − (d/d𝑧)(𝜕Φ/𝜕𝜑′) = 0,   (4.3)

where 𝜌′ = d𝜌/d𝑧 and 𝜑′ = d𝜑/d𝑧. This system of equations allows one to find the values of 𝜌 and 𝜑 corresponding to the stationary state. Following the work [27], PLF use a constant-amplitude approximation,
Temperature dependence of dielectric permittivity in incommensurately modulated phase of ammonium fluoroberyllate
according to which the amplitude of the order parameter does not depend on the coordinates [𝜌(𝑧) = 𝜌₀]:

𝜌₀² = (𝛼′/𝛽₁)(𝑇i − 𝑇).   (4.4)
This enables the authors [5] to arrive at something like a time-independent sine-Gordon equation,

d²𝜑/d𝑧² = (𝛽₂/4𝛿) 𝜌₀² sin 4𝜑.   (4.5)
The explicit solution of this equation is found for the temperature behavior of the phase 𝜑 (see also [28–30]):

2𝜑 = am(2𝑞𝑧, 𝜖),   (4.6)

with 𝑞² = 2𝛽₂𝜌₀²/(𝛿𝜖²), 𝜖² = 2𝛽₂𝜌₀⁴/(𝐶 + 𝛽₂𝜌₀⁴) and 𝐶 being an integration constant. Here, am(2𝑞𝑧, 𝜖) represents the Jacobi elliptic amplitude function with the modulus 𝜖 (0 ⩽ 𝜖 ⩽ 1). Such an approach implies a full-scale consideration of the anisotropy 𝛽₂.
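The solution (4.6) is straightforward to evaluate numerically. The sketch below uses illustrative parameter values (not fitted AFB constants) and scipy.special.ellipj, whose fourth return value is the Jacobi amplitude am(u, m); note that scipy takes the parameter m = 𝜖² rather than the modulus 𝜖. In the limit 𝜖 → 0 the phase reduces to the linear plane-wave form of equation (4.7):

```python
import numpy as np
from scipy.special import ellipj

def phase_profile(z, q, eps):
    """Order-parameter phase from eq. (4.6): 2*phi = am(2*q*z, eps)."""
    # scipy's ellipj expects the parameter m = eps**2, not the modulus eps
    _, _, _, am = ellipj(2.0 * q * z, eps**2)
    return 0.5 * am

z = np.linspace(0.0, 10.0, 5)
q = 0.3  # illustrative wave-vector scale

# eps -> 0: am(u, 0) = u, hence phi(z) = q*z, the plane-wave regime (4.7)
print(phase_profile(z, q, 0.0))
# eps close to 1: phi(z) develops a staircase-like (soliton) profile
print(phase_profile(z, q, 0.999))
```

For 𝜖 close to 1 the profile consists of nearly commensurate domains separated by phase solitons, which is the regime the PLF model is meant to capture.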
LS treat the same problem in a different manner. At the initial stage, they completely neglect the anisotropy (𝛽₂ = 0), so that equation (4.5) simplifies to d²𝜑/d𝑧² = 0. This yields a standard formula for the plane-wave region of the incommensurate phase [31]:

𝜑 = 𝑘₀𝑧,   (4.7)

with the wave vector 𝑘₀ = |𝜎|/𝛿. In fact, LS also start from the constant-amplitude approximation (4.4), under the condition 𝛽₂ = 0. As a result, their approach seems notably simpler than that of PLF. However, LS then take into account higher-order corrections to formula (4.7) given by a power series in 𝛽₂ (more exactly, in the parameter Δ = [𝛼′𝛿|𝛽₂|/(𝜎²𝛽₁)] (𝑇i − 𝑇) = (|𝛽₂|/𝛽₁) 𝑡 ≪ 1), the lowest-order term of which is proportional to Δ² [3]. This corresponds to what can be termed a 'weak-anisotropy approximation', which eventually affects the final solution for the amplitude, too.
Since the dielectric permittivity 𝜀 is defined as 𝜀 = d𝑃/d𝐸, we obtain

𝜀 = 1/𝜅 − (2𝑎𝜌/𝜅) [(𝜕𝜌/𝜕𝐸) cos 2𝜑 − (𝜕𝜑/𝜕𝐸) 𝜌 sin 2𝜑].   (4.8)
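For completeness, formula (4.8) follows directly from the equilibrium condition 𝜕Φ/𝜕𝑃 = 0 applied to the free energy (4.1):

```latex
% Equilibrium polarization from \partial\Phi/\partial P = 0:
-E + \kappa P + a\rho^2\cos 2\varphi = 0
\quad\Longrightarrow\quad
P = \frac{E - a\rho^2\cos 2\varphi}{\kappa} ,
% and differentiation with respect to E reproduces eq. (4.8):
\varepsilon = \frac{\mathrm{d}P}{\mathrm{d}E}
= \frac{1}{\kappa} - \frac{2a\rho}{\kappa}
\left(\frac{\partial\rho}{\partial E}\cos 2\varphi
- \frac{\partial\varphi}{\partial E}\,\rho\sin 2\varphi\right).
```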
It is obvious that, unlike in the PLF model, the phase (4.7) does not depend on temperature in the approximation Δ = 0. Taking the derivatives in (4.8), we arrive at a temperature-independent expression for 𝜀(𝑇) which coincides with that obtained for the commensurate phase [3]:

𝜀com(𝑇) = 1/𝜅 + 2𝑎²/[𝜅²(𝛽′₁ − |𝛽′₂|)],   (4.9)
with 𝛽′₁ and 𝛽′₂ being renormalized coefficients (𝛽′₁ = 𝛽₁ − 2𝑎²/𝜅 and 𝛽′₂ = 𝛽₂ − 2𝑎²/𝜅). Going beyond this zeroth approximation, one can obtain a more complex expression in some vicinity of the incommensurate PT (at Δ ≪ 1) (cf. also the earlier, less accurate formula in the work [32]):

𝜀LS(𝑇) = 1/𝜅 + [𝑎²/(𝜅²𝛽′₁)] 𝑡(6 + 𝑡)/(4 − 𝑡).   (4.10)
Formula (4.10) coincides with (2.3) under the notation 𝜀b = 1/𝜅 and 𝐴 = 𝑎²/𝛽′₁. Finally, formulae (4.4), (4.6) and (4.8), obtained in the framework of the PLF model, result in

𝜀PLF(𝑇) = 1/𝜅 + [𝑎²/(𝜅²𝛽′₁)] {𝐸(𝜏)/[(1 − 𝜏²)𝐾(𝜏)] − 1},   (4.11)

where 𝐾(𝜏) and 𝐸(𝜏) denote the complete elliptic integrals of the first and second kind, and the substitutions 𝜀b = 1/𝜅 and 𝑐 = 𝜅𝛽′₁/𝑎² lead to formula (2.4).
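Both final expressions are simple to evaluate numerically. The sketch below uses illustrative coefficients (𝜀b = 1, 𝐴 = 1, not the fitted AFB constants) and scipy's complete elliptic integrals, which take the parameter m = 𝜏²; note that the modulus 𝜏 in (4.11) is itself an implicit function of temperature:

```python
from scipy.special import ellipk, ellipe

def eps_LS(t, eps_b=1.0, A=1.0):
    """LS permittivity, eq. (4.10); t is the reduced temperature."""
    return eps_b + A * t * (6.0 + t) / (4.0 - t)

def eps_PLF(tau, eps_b=1.0, A=1.0):
    """PLF permittivity, eq. (4.11); tau (0 <= tau < 1) is the modulus
    of the elliptic integrals, an implicit function of temperature."""
    m = tau**2  # scipy's ellipk/ellipe take the parameter m = tau**2
    return eps_b + A * (ellipe(m) / ((1.0 - m) * ellipk(m)) - 1.0)

# Plane-wave limits: both anomalies vanish and the permittivity tends
# to the background value eps_b
print(eps_LS(0.0))
print(eps_PLF(1e-8))
```

In particular, 𝐸(0) = 𝐾(0) = π/2, so the PLF anomaly vanishes at 𝜏 → 0, in full analogy with the LS result at 𝑡 → 0.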
B. I. Horon, O. S. Kushnir, P. A. Shchepanskyi, V. Yo. Stadnyk
Now we are in a position to compare the physical backgrounds of the LS and PLF models for the dielectric properties of AFB. At first glance, the PLF result (4.6), which underlies formula (4.11), looks stronger than formula (4.7) obtained by LS, since the latter is limited to a narrow vicinity of the paraelectric–incommensurate PT. However, as pointed out in the work [30], any phenomenological model like those suggested by LS and PLF [3, 5] is in any case applicable only near the PT point 𝑇i, where the spatial anisotropy is small enough. In this sense, the approximations of constant amplitude and weak anisotropy have close applicability regions. Then, the decision of PLF to retain the exact solution for the phase 𝜑 and, at the same time, restrict themselves to the limit 𝜌(𝑧) = 𝜌₀ can prove to be partly inconsistent, as if one exceeded the accuracy of the underlying approximation. Probably, this is the main reason why the PLF formula is less accurate in describing the 𝜀(𝑇) function of the AFB crystals.

On the other hand, the fact that the LS model has turned out to work fairly well over the whole range of the incommensurate phase can be explained, at least partly, by the following circumstance: in terms of the variable (𝑇i − 𝑇C)/𝑇i characterizing the temperature width of this phase, the latter is very narrow (∼ 0.03). Eventually, this factor also degrades a potential advantage of the PLF model associated with the consideration of spatial anisotropy, which would have played a more significant part in a wider temperature region. In this respect, we suppose that an LS-like model could hardly succeed in describing the dielectric properties of Rb2ZnCl4, where the incommensurate phase is very wide ((𝑇i − 𝑇C)/𝑇i ∼ 0.36) and, moreover, the experimental data in the vicinity of 𝑇i are scarce [26].
5. Potential influence of fluctuations and structural defects

It is well known that the incommensurate phases in A2BX4 crystals are highly sensitive to any structural imperfections, e.g., due to pinning of the phase of the order parameter [33]. This implies that the dielectric permittivity can manifest some dependence on crystal samples or experimental conditions (heating or cooling run, temperature change rate, etc.). This poses the question of the potential influence of these phenomena on our data and conclusions. The next question is associated with the effect of the order-parameter fluctuations on the dielectric data.
As stressed above, both the LS and PLF models represent mean-field approaches. The temperature region 𝛿𝑇 = 𝑇 − 𝑇C (or 𝛿𝜏′ in terms of a redefined reduced temperature, 𝛿𝜏′ = 𝛿𝑇/𝑇C) around the phase-transition point, where the fluctuations and the critical phenomena begin to dominate so that the Landau theory can no longer be employed, is given by the so-called Ginzburg parameter 𝐺: 𝛿𝜏′ ≪ 𝐺 or, at least, 𝛿𝜏′ < 𝐺 (see, e.g., [34, 35]). The corresponding results derived by us for AFB with a highly sensitive optical-birefringence technique (see [36]) will be reported elsewhere. Here, we only state that they yield 𝐺 ≈ 0.0026. Then, we have the conditions 𝛿𝑇 ≪ 0.5 K or 𝛿𝑇 < 0.5 K. Inspection of the data in figure 1 (or, better, in figure 2b) testifies that only some eight data points fall within the region 0.5 K above 𝑇C, and only three data points within the region 0.1 K above 𝑇C. In other words, our study does not directly address the scaling region and includes, at the most, a region where the mean-field theory can be applied with small fluctuation corrections. Hence, the order-parameter fluctuations can hardly affect our results.
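The estimate 𝛿𝑇 ≈ 0.5 K quoted above is simple arithmetic; in the sketch below the value of 𝑇C is only an approximate figure of the order of the AFB lock-in temperature, taken for illustration rather than as a measured result of this work:

```python
# Width of the fluctuation-dominated region from the Ginzburg criterion
# delta_T / T_C < G  =>  delta_T < G * T_C
G = 0.0026    # Ginzburg parameter quoted in the text
T_C = 180.0   # K, approximate lock-in temperature (assumed for illustration)
delta_T = G * T_C
print(f"delta_T ~ {delta_T:.2f} K")  # about 0.5 K
```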
The next point concerns 'frozen-in fluctuations', i.e., structural defects (see [37]). Having no direct facilities for estimating the defect state of our sample, we rely upon indirect methods. Namely, it is known that the influence of structural defects can disguise itself as a fluctuation effect in a close vicinity of the PT. Therefore, the defects usually widen the 'fluctuation' region, i.e., they contribute additively to the Ginzburg parameter (see [14, 36]). This enables us to perform a rough comparison of the structural perfection of different samples of a given crystal: the larger the Ginzburg number obtained for a crystal sample, the higher the concentration of its defects. The following fact is worthwhile in this respect. When comparing our results with the corresponding data for the other A2BX4 crystals [36], one observes that the Ginzburg parameter for our AFB crystal (𝐺 ∼ 0.003) is relatively small (although of the same order of magnitude). This indirectly indicates that the structural imperfection typical of our crystal sample is not high enough to dominate the temperature dependences of its physical properties.
Moreover, defects at heavy concentrations can even 'smear' the divergent-like anomalies detected at the PT points. However, we observe no such situation with our sample, thus confirming again that the effects studied by us are not defect-driven. Another similar argument against a significant contribution of the defects to the dielectric behavior of our AFB crystal is as follows. One of the
common consequences of a strong influence of structural defects is a decrease in the dielectric peak 𝜀max. However, our value 𝜀max ≈ 55 is very close to the average value 𝜀max ≈ 57 found from the works [1–5, 12, 19] at comparable electric-field frequencies. This is further evidence that the structural defects should play only a secondary role in the dielectric behavior of our crystal sample.
We would also like to emphasize that, in other terms, our main conclusion is that the temperature anomaly of the dielectric permittivity in the incommensurate phase of the improper ferroelectric AFB near the 𝑇C point is 'slower' than that predicted by the inverse power law (see, e.g., the gradual decrease in the slope, i.e., the power-law 'exponent', on approaching 𝑇C, which is seen in the double-logarithmic plot in figure 2b), although this law is a common regularity known in the theory of PTs. A (very loose) analogy with the situation occurring in proper uniaxial ferroelectrics can be mentioned: therein, the leading temperature-dependent terms are also 'slower' than those given by the inverse power law, being described by logarithmic corrections. However, there is still no theory predicting such a 'slow-down' of the dielectric divergence near the PT point as a result of structural defects.
Finally, an important question arises in view of a potential effect of structural defects on the dielectric properties of the AFB crystals: is the LS model universally better than the PLF model, or could some experimental data be found in the literature which favour the latter model? Since we cannot completely rule out the sample dependence of the dielectric permittivity, it would be difficult to expect a straightforward answer. However, such a situation seems quite unlikely because both the LS and PLF models refer to defect-free crystals. Then, the application of these models to essentially imperfect crystal samples would more likely result in the failure of both models than in a change of the balance of their efficiencies [13].
6. Conclusions

We have studied the dielectric properties of improper ferroelectric AFB crystals in their paraelectric, incommensurately modulated and commensurate ferroelectric phases. Similarly to the previous experimental studies, the dielectric permittivity of AFB is not affected by the incommensurate PT at 𝑇i but reveals a weak peak at the commensurate PT point 𝑇C. The experimental results for the incommensurate phase of AFB are compared with the predictions of four phenomenological theories: the Curie-Weiss and generalized Curie-Weiss laws and the LS and PLF models [3, 5]. It is ascertained that the PLF model provides results very similar to those of the inverse power laws given by the Curie-Weiss and generalized Curie-Weiss formulae. According to the results of rigorous statistical tests, all of these models provide a much worse fit to the experimental data than the LS model. In addition, the latter model can be efficiently applied within the overall temperature range of the incommensurate phase.
The analysis of the experimental data shows that the temperature slopes of both the reciprocal permittivity with the dielectric background subtracted and the permittivity plotted on the double logarithmic scale change continuously with temperature. However, any inverse power law would have implied a constant slope. This is a formal reason why the models (1), (2) and (4) fail to describe the experimental results. On the contrary, the temperature dependence of the permittivity within the LS model (3) is governed by a combination of terms linear in temperature, including a divergent term in the denominator. The peak at the PT point 𝑇C is then 'damped' by the temperature-dependent terms in the numerator. This mathematical structure provides the necessary change in the slope and so appropriately describes the experimental data.
In order to compare the different phenomenological models in more detail, the main physical hypotheses underlying the LS and PLF approaches are elucidated. In particular, it is stressed that the LS model is based upon the approximation of weak spatial anisotropy of the order parameter and small corrections to the approximation of constant amplitude of the order parameter. These approximations are fully justified only within the plane-wave region of the incommensurate phase. On the other hand, the PLF model employs the constant-amplitude approximation and finds an exact solution for the phase of the order parameter, thus not relying on the assumption of weak anisotropy. However, the two approximations partly contradict each other, which may be the reason for the lower efficiency of the PLF model compared to the LS model. Most likely, the LS model remains applicable within the entire incommensurate phase in AFB due to the very narrow temperature range of the latter. This fact also undermines a potential
advantage of the PLF approach, namely its applicability outside the plane-wave region, when it is applied to incommensurate crystals like AFB.

Possible contributions of the structural defects and the critical fluctuations to the 𝜀(𝑇) function of our AFB crystals are discussed. It is shown that the influence of the defects can hardly be decisive, while the fluctuations typical of a very close vicinity of the PT point are beyond the scope of our study and so cannot affect its main conclusions.
Acknowledgements

This study has been supported by the Ministry of Education and Science of Ukraine (the Project #0120U102320).
References

1. Strukov B. A., Skomorokhova T. L., Koptsik V. A., Boiko A. A., Izrailenko A. N., Kristallografiya, 1973, 18, 143–146 (in Russian).
2. Gesi K., Ozawa K., J. Phys. Soc. Jpn., 1974, 36, 1496, doi:10.1143/JPSJ.36.1496.
3. Levanyuk A. P., Sannikov D. G., Fiz. Tverd. Tela, 1976, 18, 423–428 (in Russian).
4. Strukov B. A., Arutyunova V. M., Uesu I., Fiz. Tverd. Tela, 1982, 24, 3061–3067 (in Russian).
5. Prelovšek P., Levstik A., Filipič C., Phys. Rev. B, 1983, 28, 6610–6612, doi:10.1103/PhysRevB.28.6610.
6. Srivastava R. C., Klooster W. T., Koetzle T. F., Acta Cryst. B, 1999, 55, 17–23, doi:10.1107/S010876819800737X.
7. Strukov B. A., Smirnov P. S., Ferroelectrics, 1986, 66, 85–88, doi:10.1080/00150198608227875.
8. Palatinus L., Smaalen S. V., Ferroelectrics, 2004, 305, 49–52, doi:10.1080/00150190490462388.
9. Brik M. G., Kityk I. V., Solid State Commun., 2007, 143, 326–330, doi:10.1016/j.ssc.2007.05.042.
10. Strukov B. A., Levanyuk A. P., Ferroelectric Phenomena in Crystals: Physical Foundations, Springer-Verlag, Berlin, Heidelberg, 1998.
11. Palatinus L., Amami M., van Smaalen S., Acta Cryst. B, 2004, 60, 127–137, doi:10.1107/S0108768104000874.
12. Jakubas R., Czpala Z., Solid State Commun., 1984, 51, 617–619, doi:10.1016/0038-1098(84)91072-X.
13. Horon B. I., Kushnir O. S., Stadnyk V. Y., Kashuba A. I., 12th IEEE Int. Conf. on Electron. and Inf. Technol., 2020, 261–264, doi:10.1109/ELIT53502.2021.9501126.
14. Kushnir O. S., Shopa R. Y., Vlokh R. O., Ukr. J. Phys. Opt., 2008, 9, 169–181, doi:10.3116/16091833/9/3/169/2008.
15. Girnyk I. S., Klymovych Y. G., Kushnir O. S., Shopa R. Y., Ferroelectrics, 2014, 462, 55–63, doi:10.1080/00150193.2014.890856.
16. Goldstein M. L., Morris S. A., Yen G. G., Eur. Phys. J. B, 2004, 41, 255–258, doi:10.1140/epjb/e2004-00316-5.
17. Bauke H., Eur. Phys. J. B, 2007, 58, 167–173, doi:10.1140/epjb/e2007-00219-y.
18. Perline R., Stat. Sci., 2005, 20, 68–88, doi:10.1214/088342304000000215.
19. Hoshino S., Vedam K., Okaya Y., Pepinsky R., Phys. Rev., 1958, 112, 405–412, doi:10.1103/PhysRev.112.405.
20. Kushnir O. S., Shopa Y. I., Polovynko I. I., Phase Transitions, 2007, 80, 89–94, doi:10.1080/01411590601092761.
21. Kundys B., Lappas A., Viret M., Kapustianyk V., Rudyk V., Semak S., Simon C., Bakaimi I., Phys. Rev. B, 2010, 81, 224434, doi:10.1103/PhysRevB.81.224434.
22. Otko A. I., Zapart W., Zapart M. B., Kapustianyk V. B., Kusznir O., Ferroelectrics, 1993, 141, 43–48, doi:10.1080/00150199308008418.
23. Uetani M., Yamamuro O., Inaba I., Matsuo T., Ichikawa M., J. Korean Phys. Soc., 1998, 32, S397–S399.
24. Matsuo T., Tanaka N., Fukai M., Yamamuro O., Inaba A., Ichikawa M., Thermochim. Acta, 2003, 403, 137–151, doi:10.1016/S0040-6031(03)00150-3.
25. Sandvold E., Courtens E., Phys. Rev. B, 1983, 27, 5660–5668, doi:10.1103/PhysRevB.27.5660.
26. Levstik A., Prelovšek P., Filipič C., Žekš B., Phys. Rev. B, 1982, 25, 3416–3419, doi:10.1103/PhysRevB.25.3416.
27. Ishibashi Y., Ferroelectrics, 1980, 24, 119–126, doi:10.1080/00150198008238630.
28. Sannikov D. G., Fiz. Tverd. Tela, 1981, 23, 953–958 (in Russian).
29. Sannikov D. G., Kristallografiya, 1982, 27, 5–10 (in Russian).
30. Sannikov D. G., Solid State Commun., 1985, 54, 173–175, doi:10.1016/0038-1098(85)91145-7.
31. Kushnir O. S., J. Phys.: Condens. Matter, 1997, 9, 9259–9273, doi:10.1088/0953-8984/9/43/011.
32. Levanyuk A. P., Sannikov D. G., Ferroelectrics, 1976, 14, 643–645, doi:10.1080/00150197608236689.
33. Cummins H. Z., Phys. Rep., 1990, 185, 211–409, doi:10.1016/0370-1573(90)90058-A.
34. Patashinskii A. Z., Pokrovsky V. L., Fluctuation Theory of Critical Phenomena, Pergamon, Oxford, 1979.
35. Ivanov N. R., Levanyuk A. P., Minyukov S. A., Kroupa J., Fousek J., J. Phys.: Condens. Matter, 1990, 2, 5777–5786, doi:10.1088/0953-8984/2/26/015.
36. Kushnir O. S., Kityk A. V., Dzyubanski V. S., Shopa R. Y., J. Phys.: Condens. Matter, 2011, 23, 225403, doi:10.1088/0953-8984/23/22/225403.
37. Levanyuk A. P., Sigov A. S., Defects and Structural Phase Transitions, Gordon and Breach, New York, 1988.
Temperature dependence of dielectric permittivity in the incommensurately modulated phase of ammonium fluoroberyllate

B. I. Horon 1,2, O. S. Kushnir 2, P. A. Shchepanskyi 1, V. Yo. Stadnyk 1

1 Department of General Physics, Ivan Franko National University of Lviv, 23 Drahomanov St., 79005 Lviv, Ukraine
2 Department of Optoelectronics and Information Technologies, Ivan Franko National University of Lviv, 107 Gen. Tarnavskyi St., 79013 Lviv, Ukraine

The temperature dependence of the dielectric permittivity along the polar axis of a ferroelectric ammonium fluoroberyllate (AFB) crystal is studied in the vicinity of its phase-transition points. The experimental data for the incommensurately modulated phase of AFB are compared with the predictions of phenomenological models known from the literature: the Curie-Weiss (CW) law, the generalized Curie-Weiss (GCW) law, and the models of Levanyuk and Sannikov (LS) and of Prelovšek, Levstik and Filipič (PLF) suggested for improper ferroelectrics. It is shown that the LS approach describes the temperature behavior of the dielectric permittivity of the AFB crystal better than the CW, GCW and PLF models. The main physical reasons for this situation are elucidated.

Keywords: phase transitions, incommensurate phases, improper ferroelectrics, dielectric permittivity, ammonium fluoroberyllate