jackkuo committed · Commit 9ad2ba7 · verified · Parent(s): 99b427f

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. -tAzT4oBgHgl3EQfvf0n/content/tmp_files/2301.01706v1.pdf.txt +1697 -0
  2. -tAzT4oBgHgl3EQfvf0n/content/tmp_files/load_file.txt +0 -0
  3. .gitattributes +48 -0
  4. 0tE4T4oBgHgl3EQfZwwm/content/tmp_files/2301.05058v1.pdf.txt +1527 -0
  5. 0tE4T4oBgHgl3EQfZwwm/content/tmp_files/load_file.txt +0 -0
  6. 1NE0T4oBgHgl3EQf_gL8/content/2301.02829v1.pdf +3 -0
  7. 1NE0T4oBgHgl3EQf_gL8/vector_store/index.faiss +3 -0
  8. 1NE0T4oBgHgl3EQf_gL8/vector_store/index.pkl +3 -0
  9. 1tAyT4oBgHgl3EQfPfba/content/2301.00027v1.pdf +3 -0
  10. 2tAyT4oBgHgl3EQfPvai/content/2301.00031v1.pdf +3 -0
  11. 2tAyT4oBgHgl3EQfPvai/vector_store/index.faiss +3 -0
  12. 2tAyT4oBgHgl3EQfPvai/vector_store/index.pkl +3 -0
  13. 3NAzT4oBgHgl3EQf9P6r/content/tmp_files/2301.01917v1.pdf.txt +1398 -0
  14. 3NAzT4oBgHgl3EQf9P6r/content/tmp_files/load_file.txt +0 -0
  15. 3dE3T4oBgHgl3EQfPwmd/content/2301.04406v1.pdf +3 -0
  16. 49AyT4oBgHgl3EQfcPf_/content/2301.00281v1.pdf +3 -0
  17. 49AyT4oBgHgl3EQfcPf_/vector_store/index.faiss +3 -0
  18. 49AyT4oBgHgl3EQfcPf_/vector_store/index.pkl +3 -0
  19. 4NFAT4oBgHgl3EQfExwU/vector_store/index.faiss +3 -0
  20. 4NFAT4oBgHgl3EQfExwU/vector_store/index.pkl +3 -0
  21. 4dE2T4oBgHgl3EQfjwe5/content/tmp_files/2301.03972v1.pdf.txt +1160 -0
  22. 4dE2T4oBgHgl3EQfjwe5/content/tmp_files/load_file.txt +0 -0
  23. 59E3T4oBgHgl3EQfQwmQ/content/2301.04416v1.pdf +3 -0
  24. 59E3T4oBgHgl3EQfQwmQ/vector_store/index.faiss +3 -0
  25. 59E3T4oBgHgl3EQfQwmQ/vector_store/index.pkl +3 -0
  26. 79E1T4oBgHgl3EQfTwOd/vector_store/index.faiss +3 -0
  27. 79E1T4oBgHgl3EQfTwOd/vector_store/index.pkl +3 -0
  28. 7NE4T4oBgHgl3EQfcgzB/content/tmp_files/2301.05084v1.pdf.txt +0 -0
  29. 7NE4T4oBgHgl3EQfcgzB/content/tmp_files/load_file.txt +0 -0
  30. 8dFLT4oBgHgl3EQftC_m/content/2301.12150v1.pdf +3 -0
  31. 8dFLT4oBgHgl3EQftC_m/vector_store/index.faiss +3 -0
  32. 8dFLT4oBgHgl3EQftC_m/vector_store/index.pkl +3 -0
  33. 9dAzT4oBgHgl3EQfFPqV/content/tmp_files/2301.01008v1.pdf.txt +1078 -0
  34. 9dAzT4oBgHgl3EQfFPqV/content/tmp_files/load_file.txt +0 -0
  35. 9dFLT4oBgHgl3EQfuS8F/vector_store/index.faiss +3 -0
  36. 9tE4T4oBgHgl3EQfDQub/content/2301.04868v1.pdf +3 -0
  37. 9tE4T4oBgHgl3EQfDQub/vector_store/index.faiss +3 -0
  38. 9tE4T4oBgHgl3EQfDQub/vector_store/index.pkl +3 -0
  39. A9E0T4oBgHgl3EQfxwJo/content/tmp_files/2301.02650v1.pdf.txt +1694 -0
  40. A9E0T4oBgHgl3EQfxwJo/content/tmp_files/load_file.txt +0 -0
  41. ANFKT4oBgHgl3EQfVi5k/content/2301.11788v1.pdf +3 -0
  42. ANFKT4oBgHgl3EQfVi5k/vector_store/index.faiss +3 -0
  43. AtFLT4oBgHgl3EQfFC_E/content/2301.11986v1.pdf +3 -0
  44. AtFLT4oBgHgl3EQfFC_E/vector_store/index.pkl +3 -0
  45. CNE1T4oBgHgl3EQfDwNk/content/tmp_files/2301.02881v1.pdf.txt +1204 -0
  46. CNE1T4oBgHgl3EQfDwNk/content/tmp_files/load_file.txt +0 -0
  47. D9AzT4oBgHgl3EQfif2Z/vector_store/index.pkl +3 -0
  48. DNE2T4oBgHgl3EQfoQhP/content/tmp_files/2301.04016v1.pdf.txt +0 -0
  49. DNE2T4oBgHgl3EQfoQhP/content/tmp_files/load_file.txt +0 -0
  50. EtE1T4oBgHgl3EQfEgOL/content/tmp_files/2301.02891v1.pdf.txt +1149 -0
-tAzT4oBgHgl3EQfvf0n/content/tmp_files/2301.01706v1.pdf.txt ADDED
@@ -0,0 +1,1697 @@
On-chip Hong-Ou-Mandel interference from separate quantum dot emitters in an integrated circuit

Łukasz Dusanowski,1,2,* Dominik Köck,1 Christian Schneider,1,3 and Sven Höfling1

1 Technische Physik and Würzburg-Dresden Cluster of Excellence ct.qmat, University of Würzburg, Physikalisches Institut and Wilhelm-Conrad-Röntgen-Research Center for Complex Material Systems, Am Hubland, D-97074 Würzburg, Germany
2 currently at: Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
3 Institute of Physics, University of Oldenburg, D-26129 Oldenburg, Germany

(Dated: January 5, 2023)
Scalable quantum photonic technologies require low-loss integration of many identical single-photon sources with photonic circuitry on a chip. Relatively complex quantum photonic circuits have already been demonstrated; however, the sources used so far relied on parametric down-conversion. Hence, their efficiency and scalability are intrinsically limited by the probabilistic nature of the sources. Quantum-emitter-based single-photon sources are free of this limitation, but frequency matching of multiple emitters within a single circuit remains a challenge. In this work, we demonstrate a key component in this regard in the form of a fully monolithic GaAs circuit combining two frequency-matched quantum dot single-photon sources interconnected via single-mode ridge waveguides with a low-loss on-chip beamsplitter. This device enabled us to perform a two-photon interference experiment on-chip with a visibility reaching 66%, limited by the coherence of the emitters. Our device could be further scaled up, providing a clear path to increasing the complexity of quantum circuits toward fully scalable integrated quantum technologies.
Optical quantum computing and communication applications with single photons and linear optics rely critically on the quantum interference of two photons on a beamsplitter [1]. This process, known as the Hong-Ou-Mandel (HOM) effect, occurs when two identical single photons enter a 50:50 beamsplitter, one in each input port. When the photons are indistinguishable, they coalesce into a two-photon Fock state [2], in which they exit the same, but random, output port. This process underlies the simplest non-trivial path-entangled NOON-state generation and introduces an optical non-linearity which forms the basis for the implementation of more complex photonic gates and protocols.
Consequently, scalable optical quantum information technologies will require integrating many identical indistinguishable single-photon sources with reliable photonic circuits consisting of beamsplitters. Utilizing well-developed integrated photonics technology is particularly appealing in this regard, as it dramatically reduces the footprint of quantum devices. Furthermore, it allows controlling photon states with high fidelity thanks to the intrinsic sub-wavelength stability of the path lengths, low losses, and near-perfect mode overlap at an integrated beamsplitter for high-fidelity quantum interference [3–5].
Advances in integrated photonic technology have already allowed realizations of relatively complex quantum circuits demonstrating CNOT-gate operation [6, 7], boson sampling [7, 8], quantum walks [9], some simple quantum algorithms [7, 10], and chip-to-chip quantum teleportation [11]. A combination of integrated photonic circuits with spontaneous four-wave-mixing photon sources has also been achieved [11–13]. However, due to the probabilistic nature of the sources used, their efficiency and scalability are intrinsically limited. Quantum-emitter-based single-photon sources are free of this limitation and have recently been shown to outperform spontaneous four-wave-mixing and down-conversion photon sources in simultaneously reaching high levels of photon indistinguishability and brightness [14–18]. Moreover, remote interference between two quantum emitters has already been demonstrated using trapped ions [19, 20], quantum dots [21–30], organic molecules [31, 32], or vacancy centers in diamond [33–36]. The vast majority of those experiments have been performed in free space as proof-of-principle demonstrations. Performing similar experiments on-chip, taking advantage of a photonic circuit consisting of fully integrated quantum emitters and a beamsplitter, had not been achieved yet, and it is still a missing component towards scaling up the aforementioned quantum technologies.
In this work, we demonstrate a crucial component in this regard in the form of a fully monolithic GaAs circuit combining two frequency-matched quantum dot single-photon sources interconnected with an on-chip beamsplitter via single-mode ridge waveguides. This device enabled performing two-photon interference experiments on-chip with a visibility limited by the coherence of our emitters.
Our semiconductor photonic device is schematically presented in Fig. 1a and b. It is based on InAs/GaAs distributed Bragg-reflector ridge waveguides, which have been proven to facilitate high-optical-quality quantum dot single-photon sources [37]. The central part of the device consists of a single-mode directional coupler (DC), which is the integrated optical analog of the bulk beamsplitter. In the two input arms of the DC, two frequency-matched quantum dots (QDs) are located. For single-photon generation, the QDs are excited non-resonantly from the top using two separate picosecond pulsed laser beams. Photons interfered on the DC are finally collected off the chip using inverse-taper out-couplers. Spectral filtering and detection are performed off-chip using a monochromator and two superconducting single-photon detectors.

arXiv:2301.01706v1 [quant-ph] 4 Jan 2023

FIG. 1. On-chip two-photon interference circuit and beam splitting operation. a, Schematic representation of the photonic circuit based on a directional coupler (DC) interconnected with two input waveguides with coupled quantum dots and two output arms with inverted tapers for photon collection. b, Ridge waveguide cross-section with marked layer structure. c, Demonstration of the beam splitting operation of the fabricated DC. The photoluminescence signal from QD1 and QD2 is recorded for Output Arms 1 and 2, respectively. QD1 and QD2 are frequency matched with a precision of 5 µeV.
To find two quantum dots with matching optical transition energies, the position of the excitation beam spot on each input arm of the DC was scanned using an automated translation stage. Within such a scanning routine, we localized two matching emission lines at 1.3931 eV energy, originating from QDs located in two individual input arms of the DC and separated spatially by around 200 µm. In Fig. 1c, photoluminescence (PL) spectra from QD1 and QD2 recorded from DC output arms 1 and 2 are presented at a temperature of 4.5 K. Single well-resolved emission lines matching within a 5 µeV fit precision are visible. Comparing the amplitudes of the QD1 and QD2 emission peaks visible within both output arms, a beam splitting ratio of 48:52 is derived (including uneven transmission through the out-coupling arms; more details in Supplementary Section 8).
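The splitting-ratio extraction can be sketched as follows. In a simple factorized model (a sketch, not necessarily the procedure of Supplementary Section 8), each measured peak amplitude is the product of a QD brightness, a DC coupling fraction, and an output-arm transmission; combining all four amplitudes cancels the unknown brightnesses and transmissions. All numerical values below are made up for illustration.

```python
import math

def dc_bar_fraction(m11, m12, m21, m22):
    """Estimate the directional-coupler bar-coupling fraction t from four
    peak amplitudes m[qd][arm], assuming the factorized model
    m11 = S1*t*T1, m12 = S1*(1-t)*T2, m21 = S2*(1-t)*T1, m22 = S2*t*T2.
    Brightnesses S and arm transmissions T cancel in the combined ratio."""
    r = math.sqrt((m11 / m12) * (m22 / m21))  # r = t / (1 - t)
    return r / (1.0 + r)

# Synthetic check: build amplitudes from known (hypothetical) parameters
S1, S2 = 1.0, 0.7      # QD brightnesses
T1, T2 = 0.9, 0.6      # output-arm transmissions
t_true = 0.48          # DC bar-coupling fraction (a 48:52 split)
m11, m12 = S1 * t_true * T1, S1 * (1 - t_true) * T2
m21, m22 = S2 * (1 - t_true) * T1, S2 * t_true * T2

t_est = dc_bar_fraction(m11, m12, m21, m22)
print(round(t_est, 3))  # → 0.48: the brightness/transmission imbalance cancels
```

The cross-ratio (m11/m12)·(m22/m21) equals t²/(1−t)² in this model, which is why the unknown prefactors drop out.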
To show that optical excitation of our QDs leads to the generation of single photons, we analyzed the photon emission statistics of the separate QDs by performing second-order correlation experiments in the Hanbury Brown and Twiss (HBT) configuration. For that purpose, the QDs were excited non-resonantly from the top by an 813 nm wavelength train of picosecond pulses at a repetition rate of 76 MHz. Photons emitted by the QDs were then coupled into the circuit input-arm waveguides and guided into the directional coupler, where the signal was divided between the two output arms. Next, photons were collected off-chip from the side of the sample using the out-couplers and subsequently filtered spectrally by a monochromator (70 µeV width) and coupled into two single-mode fibres connected with superconducting single-photon detectors (SSPD). Finally, the photon correlation statistics were acquired by a multichannel picosecond event timer. Data have been recorded under excitation powers corresponding to half of the QD saturation intensity.
Fig. 2a and c present the second-order autocorrelation function g(2)_HBT(τ) measurement recorded for each QD individually. For both QDs, a clear suppression of the central peak counts is visible, proving single-photon emission. To quantitatively evaluate the probability of multi-photon emission, g(2)_HBT(0) values were calculated by integrating the residual counts of the zero-delay peak with respect to the neighboring six peaks, resulting in g(2)_HBT(0) = 0.35 ± 0.08 and g(2)_HBT(0) = 0.15 ± 0.02 for QD1 and QD2, respectively.
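The normalization described above can be sketched in a few lines: integrate the zero-delay peak of the pulsed correlation histogram and divide by the mean area of the six neighboring peaks on each side. The histogram, bin width, and window values below are hypothetical, not the recorded data.

```python
def g2_zero(counts, bin_ns, rep_ns, window_ns, n_side=6):
    """Estimate g2(0) from a pulsed correlation histogram (counts per bin,
    zero delay at the central bin) by integrating the zero-delay peak and
    normalizing by the mean area of n_side neighboring peaks per side."""
    c = len(counts) // 2                  # bin index of zero delay
    half = int(window_ns / bin_ns / 2)    # half-width of integration window
    period = round(rep_ns / bin_ns)       # bins between laser pulses

    def area(center):
        return sum(counts[center - half:center + half + 1])

    side = [area(c + k * period) for k in range(-n_side, n_side + 1) if k != 0]
    return area(c) / (sum(side) / len(side))

# Synthetic histogram: side peaks of area 100, central peak of area 35
rep_ns, bin_ns = 13.1, 0.1                # ~76 MHz repetition, 100 ps bins
n_bins = 2 * int(8 * rep_ns / bin_ns) + 1
counts = [0] * n_bins
c, period = n_bins // 2, round(rep_ns / bin_ns)
for k in range(-7, 8):
    counts[c + k * period] = 35 if k == 0 else 100

print(round(g2_zero(counts, bin_ns, rep_ns, window_ns=3.0), 2))  # → 0.35
```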
In Fig. 2b and d, time-resolved photoluminescence traces of the QD1 and QD2 emission are shown. In this case, the repetition rate of the laser was reduced to 19 MHz using a pulse picker. Clear bi-exponential signal decays are visible, with fast and slow time constants of 720±5 ps and 12±1 ns for QD1 and 600±5 ps and 22±1 ns for QD2. We attribute the fast decay to the spontaneous recombination of electron-hole pairs in the QD (T1), while the slow one, which corresponds to about 2% (1.2%) of the total QD1 (QD2) line intensity, is tentatively interpreted as recapturing of the carriers by the QD. Using fit parameters obtained from the time-resolved experiments, the g(2)_HBT(τ) correlation histograms have been fitted with a double-sided bi-exponential decay convoluted with an 80 ps width Gaussian instrumental response function (black dashed lines).

FIG. 2. Single-photon generation and emission dynamics. a,c Second-order auto-correlation histograms of QD1 and QD2 emission under pulsed 76 MHz repetition rate excitation. Data have been recorded in the HBT configuration using an on-chip beamsplitter. b,d Time-resolved PL traces revealing bi-exponential decays with fast (slow) time constants of 720 ps (12 ns) and 600 ps (22 ns) for QD1 and QD2, respectively.
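The bi-exponential model used for the TRPL fits can be sketched as below. The convolution with the 80 ps Gaussian IRF is omitted, and the amplitudes are hypothetical; the ~2% / 1.2% slow-component fractions quoted in the text come from the actual fit parameters.

```python
import math

def biexp(t, a_fast, tau_fast, a_slow, tau_slow):
    """Bi-exponential decay model for the TRPL traces (t >= 0)."""
    return a_fast * math.exp(-t / tau_fast) + a_slow * math.exp(-t / tau_slow)

def slow_fraction(a_fast, tau_fast, a_slow, tau_slow):
    """Fraction of the time-integrated intensity in the slow component;
    the integral of A*exp(-t/tau) over t >= 0 is A*tau."""
    fast_area, slow_area = a_fast * tau_fast, a_slow * tau_slow
    return slow_area / (fast_area + slow_area)

# QD1 time constants from the text, with made-up fit amplitudes
tau_fast, tau_slow = 0.72, 12.0   # ns
a_fast, a_slow = 1.0, 0.0012      # hypothetical amplitudes
print(round(slow_fraction(a_fast, tau_fast, a_slow, tau_slow), 3))  # → 0.02
```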
To demonstrate on-chip two-photon interference, the single QDs in both input arms of the DC are excited using the picosecond pulses. For that, the laser beam is divided into two independently controllable optical excitation axes and synchronized in advance to ensure optimal temporal overlap of the emitted photons on the DC. This is performed by sending the emission from each QD separately through the on-chip DC and using time-resolved detection to eliminate the time-delay difference between the independently generated single photons. The same technique is used to introduce an intentional 0.5 ns time delay for reference measurements. The excitation laser powers for each QD are adjusted such that their emission intensities are the same (around half of the QD1 saturated intensity). As we utilize on-chip beam splitting using a DC with single-mode inputs and outputs, we expect a very high spatial mode overlap of our interferometer. To test this, we send a continuous-wave laser simultaneously into both DC input arms and record classical interference fringes with 98±1% visibility. In earlier experiments performed on ridge waveguide structures, we observed that the QD emission couples into the well-defined transverse-electric mode of the WG with a close-to-unity degree of linear polarization [37]. In the case of the investigated device, the polarization of the emitted photons was analyzed after passing the whole circuit consisting of bend regions, the DC itself, and the out-couplers (more details in Supplementary Section 6). We found that for both QDs, the degree of polarization is above 95%, suggesting optimal polarization alignment for the interference experiments.
Given the above-mentioned prerequisites, the two-photon interference should be mainly limited by the coherence of our single-photon emitters. To access the coherence times of our QDs, we performed a high-resolution measurement of the emission linewidths using a scanning Fabry-Perot interferometer. We extract full-widths at half-maximum of 13.5±2.5 µeV and 3.0±0.2 µeV by Lorentzian fits for QD1 and QD2, respectively (see Supplementary Section 7). The coherence times calculated based on the observed broadenings are T2^QD1 = 100±20 ps and T2^QD2 = 440±30 ps. As the measurements are performed on the tens-of-seconds timescale, we speculate that the recorded coherence times might be limited by charge and spin noise [38, 39]. Following Ref. [40], we calculated the expected interference visibility of our two independent emitters and derived a theoretical visibility in the range of V_theory = 10–15%.
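The linewidth-to-coherence-time conversion can be checked numerically: for a Lorentzian line, T2 = 2ħ/Γ_FWHM. The per-emitter indistinguishability bound T2/(2T1) quoted later in the text follows from the same numbers. This is a sketch of that arithmetic, not the full two-emitter model of Ref. [40].

```python
HBAR_UEV_PS = 658.2119569          # reduced Planck constant in µeV*ps

def t2_from_fwhm(fwhm_uev):
    """Coherence time T2 (ps) of a Lorentzian line of given FWHM (µeV)."""
    return 2.0 * HBAR_UEV_PS / fwhm_uev

t2_qd1 = t2_from_fwhm(13.5)        # ≈ 97.5 ps (text: 100 ± 20 ps)
t2_qd2 = t2_from_fwhm(3.0)         # ≈ 438.8 ps (text: 440 ± 30 ps)

# Single-emitter indistinguishability bound T2/(2*T1), using the fast
# decay constants T1 = 720 ps (QD1) and 600 ps (QD2) from the text
i_qd1 = t2_qd1 / (2 * 720.0)       # ≈ 0.07
i_qd2 = t2_qd2 / (2 * 600.0)       # ≈ 0.37
print(round(t2_qd1, 1), round(t2_qd2, 1), round(i_qd1, 2), round(i_qd2, 2))
```

The two-emitter visibility of Ref. [40] combines both linewidths and decay rates; consistently, the quoted 10–15% range falls between these two single-emitter bounds.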
Figure 3a shows the two-photon interference data in the form of a second-order HOM cross-correlation between photons exiting the two output arms of the on-chip beamsplitter. The height of the central peak is clearly below half the intensity of the neighboring peaks, proving that photons emitted by the two separate QDs indeed interfere on the DC. Another interference signature is the presence of a coincidence dip superimposed on the central peak around zero time delay. The depth of this dip corresponds to interference events where photons arrive simultaneously at the DC, giving rise to narrow-time-window post-selected coalescence. In our case, the exact value of g(2)_HOM(τ) at τ = 0 is equal to 0.17 for background-corrected data and 0.31 for as-measured data. The same type of time post-selected interference can be observed for cw HOM correlations. Figure 3b shows the non-corrected cw and pulsed HOM interference histograms overlapped on each other (the corresponding cw g(2)_HBT(τ) graphs are shown in Supplementary Section 10). Similar to the pulsed case, the cw correlation shows a clear suppression of coincident counts at zero time delay, with a time post-selected g(2)_HOM(0) of 0.35, close to the pulsed as-measured value of 0.31.
To evaluate the photons' full wave-packet interference probability (non-post-selected), we calculate the pulsed HOM correlation central peak area normalized by the average area of the neighboring six peaks. For an integration window ∆t of 3 ns, we obtain g(2)_HOM(0, ∆t) = 0.459±0.002 for background-corrected data and g(2)_HOM(0, ∆t) = 0.587±0.002 for raw data, where the uncertainty is based on the standard deviation of the non-central peak areas. In the case of background-corrected data, we reach a value below the 0.5 classical limit. It needs to be noted that the derived g(2)_HOM(0, ∆t) and g(2)_HOM(0) values are partially influenced by the non-zero multi-photon emission extent observed in the HBT measurements.

FIG. 3. On-chip two-photon interference from separate quantum emitters. a, Two-photon Hong-Ou-Mandel interference measurement between QD1 and QD2 showing the normalized HOM coincidences versus the delay time. The central peak area is suppressed with respect to neighboring peaks. Inset: Magnified view of the central peak area. b, Raw HOM interference measurement recorded under cw (red points) and pulsed (blue points) excitation. c, Integrated counts of the central eleven peaks (∆t = 3 ns integration window) of the HOM correlation in the case of synchronized (blue bars) and 0.5 ns delayed (red bars) photons from QD1 and QD2. All presented data are recorded using an on-chip beamsplitter.
As has been recently pointed out in Refs. [41, 42], to correctly estimate the two-photon interference visibility for remote emitters, it is necessary to perform reference HOM measurements with distinguishable photons, due to a possible blinking effect. Since polarization rotation is impossible within the fabricated circuit, to unambiguously confirm the two-photon interference and properly evaluate the visibility, photons were made distinguishable by introducing a 0.5 ns time delay between the excitation pulses. Such a delay should be sufficient to lose the temporal photon overlap on the DC within the emitters' coherence times and record reference data.
Figure 3c demonstrates the normalized histogram of the central eleven peak areas (∆t = 3 ns) of the HOM second-order cross-correlation in the case of synchronized, indistinguishable (red bars) and 0.5 ns delayed, distinguishable (grey bars) photons. The central peak area in the case of unsynchronized photons is equal to g(2)_HOMd(0, ∆t) = 0.558±0.002, which is slightly above the theoretically expected 0.5 value. We relate this discrepancy to the non-zero multi-photon emission extent. Finally, we calculate the remote-source two-photon interference visibility V following V = [g(2)_HOMd(0, ∆t) − g(2)_HOM(0, ∆t)]/g(2)_HOMd(0, ∆t), resulting in V = 17.8±0.7% for background-corrected data. This value is relatively close to the theoretically expected visibility and even partially exceeds it, suggesting that it is limited solely by the coherence of the emitters (more details in Supplementary Section 9). Using background-corrected data for the pulsed case gives a post-selected visibility of V′_p = 66%. The probability of the time post-selected interference is known to depend on the ratio of the emitters' coherence times to the setup timing resolution [21, 22, 43], thus possibly even higher V′ values could be achieved with faster detectors.
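The visibility arithmetic above is easy to verify, using the background-corrected peak areas quoted in the text:

```python
def hom_visibility(g2_dist, g2_indist):
    """Remote two-photon interference visibility from the distinguishable
    (delayed) and indistinguishable (synchronized) HOM central peak areas."""
    return (g2_dist - g2_indist) / g2_dist

v = hom_visibility(0.558, 0.459)   # background-corrected values from the text
print(round(100 * v, 1))           # → 17.7, matching the quoted 17.8 ± 0.7%
                                   # within rounding of the peak areas
```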
While our results provide clear scientific evidence for on-chip generation and interference of on-demand single photons in a circuit, the recorded visibility values need to be improved for future practical applications. In the current device architecture, the indistinguishability of the interfered photons is limited by the T2/(2T1) ratio of the particular QDs. We propose a few strategies for improving this ratio. Firstly, the QD charge environment could be stabilized via passivation [44], weak optical illumination [45], or gating [15, 46]. At the same time, by embedding the QDs into optical cavities, the Purcell effect might be used to enhance the radiative emission rate 1/T1 [14–16, 46, 47]. Recently, we demonstrated a QD circuit with ring cavities allowing us to significantly increase the QD coupling efficiency into the WG mode and to decrease T1 below 200 ps [48]. Finally, by applying resonant excitation, the photon emission time jitter could be minimized and a strong suppression of multi-photon emission events achieved [14–16, 37, 46, 48]. With such circuit and excitation improvements, two-photon interference with near-unity visibility seems to be within reach.
To realize circuits combining multiple QD sources coupled to cavities, deterministic fabrication technologies such as in-situ electron-beam lithography or imaging will be required. This will allow preselecting emitters with identical spectral characteristics, building cavities around them, and combining them within a single functional photonic circuit. In principle, since QD imaging could be performed in an automated manner, a very large number of emitters could be combined on a single chip. At such a stage of complexity, separate control over the QD emission energies might also be desired. This could be directly implemented by a local laser drive via the AC Stark effect [49] or by adapting the circuit for electric-field [21, 46] and strain [22, 50–52] control. Ultimately, a practical quantum photonic chip will require additional functionalities such as single-photon detectors and phase shifters. Fortunately, the GaAs circuits are compatible with superconducting detector technology [53–55], and thanks to the large χ(2) nonlinear coefficient of GaAs, electro-optical phase shifters have already been demonstrated [56]. Such an envisioned fully functional QD-GaAs circuit is shown schematically in Fig. 4.

FIG. 4. Envisioned fully integrated quantum photonic circuit. Draft of a possible circuit design with multiple quantum dot-based single-photon sources coupled to ring cavities, interconnected with ridge waveguides, directional couplers, phase shifters, and superconducting detectors.
In conclusion, we have shown that two identical QD single-photon sources can be integrated monolithically in a waveguide circuit and made to interfere with a visibility limited by the coherence of those sources. We pointed out potential strategies to improve the QD performance by employing deterministic fabrication and cavity enhancement. The implemented integrated system could potentially be further extended to facilitate more complex circuits and fully on-chip operation. The results shown in this article, along with a clearly outlined path for future improvements, take us one step closer to scalable integrated quantum circuits based on quantum emitters capable of generating and manipulating large photonic states.
619
Methods

Sample description. To fabricate our integrated single-photon source waveguide device, we use a semiconductor sample that contains self-assembled In(Ga)As QDs grown by the Stranski-Krastanow method at the center of a planar GaAs microcavity. The lower and upper cavity mirrors contain 24 and 5 pairs of Al0.9Ga0.1As/GaAs λ/4 layers, respectively, yielding a quality factor of ∼200. A δ-doping layer of Si donors with a surface density of roughly ∼10^10 cm^-2 was grown 10 nm below the layer of QDs to dope them probabilistically. To fabricate the ridge waveguide devices, the top mirror layer along with the GaAs cavity is etched down, forming a ridge with a width of ∼0.6 µm and a height of ∼1.3 µm. The cross-section of the WG with the layer structure is shown in Figure 1b (see also Supplementary Section 1). The ridges were defined by e-beam lithography and reactive ion etching. After processing, the sample was cleaved perpendicularly to the WGs, around 30 µm away from the tapered out-coupler edges, to provide clear side access.
Integrated circuit design. We designed and fabricated GaAs directional couplers with different coupling lengths and gaps. A directional coupler with a near 50:50 coupling ratio at around 1.3931 eV was obtained when the gap distance was set to 120 nm and the coupling length to 30 µm (see Supplementary Section 2 for the layout scheme). The total length of the device was about 1 mm, including four S-bends with a radius of 60 µm and the input/output waveguides.
Experimental setup. For all experiments, the sample is kept in a low-vibration closed-cycle cryostat (attoDry800) at a temperature of ∼4.5 K. The cryostat is equipped with two optical windows allowing for access from the side and the top of the sample. A spectroscopic setup consisting of two independent, perpendicularly aligned optical paths is employed (see Supplementary Section 3 for more details). QDs embedded in the WGs are excited from the top through a first microscope objective with NA = 0.26, while the emission signal is detected from a side facet of the WG with a second objective with NA = 0.4. The photoluminescence signal, simultaneously collected from both output arms of the DC, is then passed through a Dove prism to rotate the sample image plane from a horizontal into a vertical direction to fit the monochromator slit orientation. For PL analysis, the signal is then spectrally dispersed by a 75 cm focal-length monochromator and focused on a low-noise liquid-nitrogen-cooled CCD camera (around 40 µeV spectral resolution), allowing the signal from both DC output arms to be resolved spatially. For HBT and HOM experiments, the monochromator serves as a spectral filter with 70 µeV width, and the signal from both DC outputs is coupled into separate single-mode optical fibres connected to superconducting single-photon counting detectors (30 ps time response).
Integrated beamsplitter visibility. To test the classical visibility of the DC device, we simultaneously send continuous-wave laser light, tuned to the energy of the QD transitions, into both input arms using circular reflectors placed at the ends of the input-arm waveguides (see Supplementary Section 8). The power of the laser coupled into both arms was adjusted such that the intensity from both input arms was the same. Next, we focused on the signal passing through the DC and out-coupled from one output arm. We observed an intensity modulation as a function of time, related to small path-length difference fluctuations, allowing us to record the interference pattern and calculate the interferometer visibility. For the investigated device, a classical visibility of 98±1% was extracted.
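The fringe analysis above reduces to the standard interferometric visibility V = (Imax - Imin)/(Imax + Imin), evaluated on the slowly drifting intensity trace. A minimal sketch, where the trace values are illustrative placeholders rather than the measured data:

```python
def fringe_visibility(intensities):
    """Classical interference visibility from a time trace of
    out-coupled intensities: V = (Imax - Imin) / (Imax + Imin)."""
    i_max, i_min = max(intensities), min(intensities)
    return (i_max - i_min) / (i_max + i_min)

# Illustrative fringe trace drifting through constructive and
# destructive interference; V close to 1 indicates a well-balanced
# coupler with good mode overlap.
trace = [0.99, 0.60, 0.21, 0.02, 0.20, 0.61, 0.98]
print(round(fringe_visibility(trace), 2))  # → 0.96
```

In practice the extrema would be taken from a sinusoidal fit to the fringe rather than from raw samples, which suppresses detector noise.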
Correlation histogram analysis. For the time post-selected visibility V' analysis, we assume that for distinguishable photons g(2)_HOMd(0) is equal to 0.5, as a reference measurement is not possible. This allows us to calculate V' according to V' = [0.5 - g(2)_HOM(0)]/0.5. The data from Fig. 3b lead to raw visibilities of V'_cw = 30% and V'_p = 38% for cw and pulsed excitation, respectively. For the evaluation of the g(2)_HBT(τ) and g(2)_HOM(τ) correlation functions, we take into account the presence of a time-independent background offset in the recorded histograms (it accounts for around 15-20% of the coincidences), which we relate to the dark counts of the SSPDs (100-500 cps). Non-background-corrected HOM graphs can be found in Fig. 3c and Supplementary Section 9.
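The post-selected visibility defined above is a one-line computation. The sketch below uses g(2)_HOM(0) values inferred from the visibilities quoted in this section (0.35 and 0.31 follow from V' = 30% and 38%; they are back-calculated, not read from the histograms):

```python
def hom_visibility(g2_hom_zero, g2_dist_zero=0.5):
    """Post-selected HOM visibility V' = [g2_d(0) - g2_HOM(0)] / g2_d(0),
    assuming g2_d(0) = 0.5 for fully distinguishable photons."""
    return (g2_dist_zero - g2_hom_zero) / g2_dist_zero

# Back-calculated from the visibilities quoted in the text:
print(round(hom_visibility(0.35), 2))  # cw excitation     → 0.3
print(round(hom_visibility(0.31), 2))  # pulsed excitation → 0.38
```

Any residual background offset would be subtracted from the coincidence histogram before g(2)(0) is extracted, as described above.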
The authors thank Silke Kuhn for fabricating the structures. Ł.D. acknowledges financial support from the Alexander von Humboldt Foundation. We acknowledge financial support by the German Ministry of Education and Research (BMBF) within the project "Q.Link.X" (FKZ: 16KIS0871). We are furthermore grateful for the support by the State of Bavaria.
[1] Kok, P.; Lovett, B. Introduction to Optical Quantum Information Processing; Cambridge University Press, 2010.
[2] Hong, C. K.; Ou, Z. Y.; Mandel, L. Measurement of subpicosecond time intervals between two photons by interference. Physical Review Letters 1987, 59, 2044-2046.
[3] Bonneau, D.; Silverstone, J. W.; Thompson, M. G. Topics in Applied Physics; Springer, 2016; pp 41-82.
[4] Dietrich, C. P.; Fiore, A.; Thompson, M. G.; Kamp, M.; Höfling, S. GaAs integrated quantum photonics: Towards compact and multi-functional quantum photonic integrated circuits. Laser & Photonics Reviews 2016, 10, 870.
[5] Wang, J.; Sciarrino, F.; Laing, A.; Thompson, M. G. Integrated photonic quantum technologies. Nature Photonics 2019.
[6] Crespi, A.; Ramponi, R.; Osellame, R.; Sansoni, L.; Bongioanni, I.; Sciarrino, F.; Vallone, G.; Mataloni, P. Integrated photonic quantum gates for polarization qubits. Nature Communications 2011, 2, 566.
[7] Carolan, J. et al. Universal linear optics. Science 2015, 349, 711-716.
[8] Crespi, A.; Osellame, R.; Ramponi, R.; Brod, D. J.; Galvão, E. F.; Spagnolo, N.; Vitelli, C.; Maiorino, E.; Mataloni, P.; Sciarrino, F. Integrated multimode interferometers with arbitrary designs for photonic boson sampling. Nature Photonics 2013, 7, 545-549.
[9] Peruzzo, A.; Lobino, M.; Matthews, J. C. F.; Matsuda, N.; Politi, A.; Poulios, K.; Zhou, X.-Q.; Lahini, Y.; Ismail, N.; Worhoff, K.; Bromberg, Y.; Silberberg, Y.; Thompson, M. G.; O'Brien, J. L. Quantum Walks of Correlated Photons. Science 2010, 329, 1500-1503.
[10] Politi, A.; Matthews, J. C. F.; O'Brien, J. L. Shor's Quantum Factoring Algorithm on a Photonic Chip. Science 2009, 325, 1221-1221.
[11] Llewellyn, D. et al. Chip-to-chip quantum teleportation and multi-photon entanglement in silicon. Nature Physics 2019.
[12] Silverstone, J. W.; Bonneau, D.; Ohira, K.; Suzuki, N.; Yoshida, H.; Iizuka, N.; Ezaki, M.; Natarajan, C. M.; Tanner, M. G.; Hadfield, R. H.; Zwiller, V.; Marshall, G. D.; Rarity, J. G.; O'Brien, J. L.; Thompson, M. G. On-chip quantum interference between silicon photon-pair sources. Nature Photonics 2013, 8, 104-108.
[13] Wang, J. et al. Multidimensional quantum entanglement with large-scale integrated optics. Science 2018, 360, 285-291.
[14] Ding, X.; He, Y.; Duan, Z. C.; Gregersen, N.; Chen, M. C.; Unsleber, S.; Maier, S.; Schneider, C.; Kamp, M.; Höfling, S.; Lu, C.-Y.; Pan, J.-W. On-Demand Single Photons with High Extraction Efficiency and Near-Unity Indistinguishability from a Resonantly Driven Quantum Dot in a Micropillar. Physical Review Letters 2016, 116, 020401.
[15] Somaschi, N. et al. Near-optimal single-photon sources in the solid state. Nature Photonics 2016, 10, 340-345.
[16] Unsleber, S.; He, Y.-M.; Maier, S.; Gerhardt, S.; Lu, C.-Y.; Pan, J.-W.; Kamp, M.; Schneider, C.; Höfling, S. Highly indistinguishable on-demand resonance fluorescence photons from a deterministic quantum dot micropillar device with 75% extraction efficiency. Optics Express 2016, 24, 8539-8546.
[17] Aharonovich, I.; Englund, D.; Toth, M. Solid-state single-photon emitters. Nature Photonics 2016, 10, 631-641.
[18] Senellart, P.; Solomon, G.; White, A. High-performance semiconductor quantum-dot single-photon sources. Nature Nanotechnology 2017, 12, 1026-1039.
[19] Beugnon, J.; Jones, M. P.; Dingjan, J.; Darquié, B.; Messin, G.; Browaeys, A.; Grangier, P. Quantum interference between two single photons emitted by independently trapped atoms. Nature 2006, 440, 779-782.
[20] Maunz, P.; Moehring, D. L.; Olmschenk, S.; Younge, K. C.; Matsukevich, D. N.; Monroe, C. Quantum interference of photon pairs from two remote trapped atomic ions. Nature Physics 2007, 3, 538-541.
[21] Patel, R. B.; Bennett, A. J.; Farrer, I.; Nicoll, C. A.; Ritchie, D. A.; Shields, A. J. Two-photon interference of the emission from electrically tunable remote quantum dots. Nature Photonics 2010, 4, 632-635.
[22] Flagg, E.; Muller, A.; Polyakov, S.; Ling, A.; Migdall, A.; Solomon, G. Interference of Single Photons from Two Separate Semiconductor Quantum Dots. Physical Review Letters 2010, 104, 137401.
[23] Konthasinghe, K.; Peiris, M.; Yu, Y.; Li, M. F.; He, J. F.; Wang, L. J.; Ni, H. Q.; Niu, Z. C.; Shih, C. K.; Muller, A. Field-Field and Photon-Photon Correlations of Light Scattered by Two Remote Two-Level InAs Quantum Dots on the Same Substrate. Physical Review Letters 2012, 109, 267402.
[24] Gold, P.; Thoma, A.; Maier, S.; Reitzenstein, S.; Schneider, C.; Höfling, S.; Kamp, M. Two-photon interference from remote quantum dots with inhomogeneously broadened linewidths. Physical Review B 2014, 89, 035313.
[25] Kim, J. H.; Richardson, C. J. K.; Leavitt, R. P.; Waks, E. Two-Photon Interference from the Far-Field Emission of Chip-Integrated Cavity-Coupled Emitters. Nano Letters 2016, 16, 7061-7066.
[26] Reindl, M.; Jöns, K. D.; Huber, D.; Schimpf, C.; Huo, Y.; Zwiller, V.; Rastelli, A.; Trotta, R. Phonon-Assisted Two-Photon Interference from Remote Quantum Emitters. Nano Letters 2017, 17, 4090-4095.
[27] Ellis, D. J. P.; Bennett, A. J.; Dangel, C.; Lee, J. P.; Griffiths, J. P.; Mitchell, T. A.; Paraiso, T.-K.; Spencer, P.; Ritchie, D. A.; Shields, A. J. Independent indistinguishable quantum light sources on a reconfigurable photonic integrated circuit. Applied Physics Letters 2018, 112, 211104.
[28] Weber, J. H.; Kambs, B.; Kettler, J.; Kern, S.; Maisch, J.; Vural, H.; Jetter, M.; Portalupi, S. L.; Becher, C.; Michler, P. Two-photon interference in the telecom C-band after frequency conversion of photons from remote quantum emitters. Nature Nanotechnology 2019, 14, 23-26.
[29] Zhai, L.; Nguyen, G. N.; Spinnler, C.; Ritzmann, J.; Löbl, M. C.; Wieck, A. D.; Ludwig, A.; Javadi, A.; Warburton, R. J. Quantum interference of identical photons from remote GaAs quantum dots. Nature Nanotechnology 2022, 17, 829-833.
[30] You, X. et al. Quantum interference between independent solid-state single-photon sources separated by 300 km fiber. 2021; http://arxiv.org/abs/2106.15545, arXiv:2106.15545 [cond-mat, physics:quant-ph].
[31] Lettow, R.; Rezus, Y. L.; Renn, A.; Zumofen, G.; Ikonen, E.; Götzinger, S.; Sandoghdar, V. Quantum interference of tunably indistinguishable photons from remote organic molecules. Physical Review Letters 2010, 104, 26-29.
[32] Duquennoy, R.; Colautti, M.; Emadi, R.; Majumder, P.; Lombardi, P.; Toninelli, C. Real-time two-photon interference from distinct molecules on the same chip. Optica 2022, 9, 731.
[33] Bernien, H.; Childress, L.; Robledo, L.; Markham, M.; Twitchen, D.; Hanson, R. Two-Photon Quantum Interference from Separate Nitrogen Vacancy Centers in Diamond. Physical Review Letters 2012, 108, 043604.
[34] Sipahigil, A.; Goldman, M. L.; Togan, E.; Chu, Y.; Markham, M.; Twitchen, D. J.; Zibrov, A. S.; Kubanek, A.; Lukin, M. D. Quantum Interference of Single Photons from Remote Nitrogen-Vacancy Centers in Diamond. Physical Review Letters 2012, 108, 143601.
[35] Sipahigil, A.; Jahnke, K. D.; Rogers, L. J.; Teraji, T.; Isoya, J.; Zibrov, A. S.; Jelezko, F.; Lukin, M. D. Indistinguishable Photons from Separated Silicon-Vacancy Centers in Diamond. Physical Review Letters 2014, 113, 113602.
[36] Stolk, A. et al. Telecom-Band Quantum Interference of Frequency-Converted Photons from Remote Detuned NV Centers. PRX Quantum 2022, 3, 020359.
[37] Dusanowski, Ł.; Kwon, S.-H.; Schneider, C.; Höfling, S. Near-Unity Indistinguishability Single Photon Source for Large-Scale Integrated Quantum Optics. Physical Review Letters 2019, 122, 173602.
[38] Kuhlmann, A. V.; Houel, J.; Ludwig, A.; Greuter, L.; Reuter, D.; Wieck, A. D.; Poggio, M.; Warburton, R. J. Charge noise and spin noise in a semiconductor quantum device. Nature Physics 2013, 9, 570-575.
[39] Makhonin, M. N.; Dixon, J. E.; Coles, R. J.; Royall, B.; Luxmoore, I. J.; Clarke, E.; Hugues, M.; Skolnick, M. S.; Fox, A. M. Waveguide coupled resonance fluorescence from on-chip quantum emitter. Nano Letters 2014, 14, 6997-7002.
[40] Kambs, B.; Becher, C. Limitations on the indistinguishability of photons from remote solid state sources. New Journal of Physics 2018, 20, 115003.
[41] Jöns, K. D.; Stensson, K.; Reindl, M.; Swillo, M.; Huo, Y.; Zwiller, V.; Rastelli, A.; Trotta, R.; Björk, G. Two-photon interference from two blinking quantum emitters. Physical Review B 2017, 96, 075430.
[42] Weber, J. H.; Kettler, J.; Vural, H.; Müller, M.; Maisch, J.; Jetter, M.; Portalupi, S. L.; Michler, P. Overcoming correlation fluctuations in two-photon interference experiments with differently bright and independently blinking remote quantum emitters. Physical Review B 2018, 97, 195414.
[43] Kiraz, A.; Atatüre, M.; Imamoğlu, A. Quantum-dot single-photon sources: Prospects for applications in linear optics quantum-information processing. Physical Review A 2004, 69, 032305.
[44] Press, D.; De Greve, K.; McMahon, P. L.; Ladd, T. D.; Friess, B.; Schneider, C.; Kamp, M.; Höfling, S.; Forchel, A.; Yamamoto, Y. Ultrafast optical spin echo in a single quantum dot. Nature Photonics 2010, 4, 367.
[45] Majumdar, A.; Kim, E. D.; Vučković, J. Effect of photogenerated carriers on the spectral diffusion of a quantum dot coupled to a photonic crystal cavity. Physical Review B 2011, 84, 195304.
[46] Liu, F.; Brash, A. J.; O'Hara, J.; Martins, L. M. P. P.; Phillips, C. L.; Coles, R. J.; Royall, B.; Clarke, E.; Bentham, C.; Prtljaga, N.; Itskevich, I. E.; Wilson, L. R.; Skolnick, M. S.; Fox, A. M. High Purcell factor generation of indistinguishable on-chip single photons. Nature Nanotechnology 2018, 13, 835.
[47] Iles-Smith, J.; McCutcheon, D. P. S.; Nazir, A.; Mørk, J. Phonon scattering inhibits simultaneous near-unity efficiency and indistinguishability in semiconductor single-photon sources. Nature Photonics 2017, 11, 521-526.
[48] Dusanowski, Ł.; Köck, D.; Shin, E.; Kwon, S.-H.; Schneider, C.; Höfling, S. Purcell-Enhanced and Indistinguishable Single-Photon Generation from Quantum Dots Coupled to On-Chip Integrated Ring Resonators. Nano Letters 2020, 20, 6357-6363.
[49] Dusanowski, Ł.; Gustin, C.; Hughes, S.; Schneider, C.; Höfling, S. All-Optical Tuning of Indistinguishable Single Photons Generated in Three-Level Quantum Systems. Nano Letters 2022, 22, 3562-3568.
[50] Beetz, J.; Braun, T.; Schneider, C.; Höfling, S.; Kamp, M. Anisotropic strain-tuning of quantum dots inside a photonic crystal cavity. Semiconductor Science and Technology 2013, 28, 122002.
[51] Elshaari, A. W.; Büyüközer, E.; Zadeh, I. E.; Lettner, T.; Zhao, P.; Schöll, E.; Gyger, S.; Reimer, M. E.; Dalacu, D.; Poole, P. J.; Jöns, K. D.; Zwiller, V. Strain-Tunable Quantum Integrated Photonics. Nano Letters 2018, 18, 7969-7976.
[52] Moczała-Dusanowska, M.; Dusanowski, Ł.; Gerhardt, S.; He, Y. M.; Reindl, M.; Rastelli, A.; Trotta, R.; Gregersen, N.; Höfling, S.; Schneider, C. Strain-Tunable Single-Photon Source Based on a Quantum Dot-Micropillar System. ACS Photonics 2019, 6, 2025-2031.
[53] Sprengers, J. P.; Gaggero, A.; Sahin, D.; Jahanmirinejad, S.; Frucci, G.; Mattioli, F.; Leoni, R.; Beetz, J.; Lermer, M.; Kamp, M.; Höfling, S.; Sanjines, R.; Fiore, A. Waveguide superconducting single-photon detectors for integrated quantum photonic circuits. Applied Physics Letters 2011, 99, 181110.
[54] Reithmaier, G.; Kaniber, M.; Flassig, F.; Lichtmannecker, S.; Müller, K.; Andrejew, A.; Vučković, J.; Gross, R.; Finley, J. J. On-Chip Generation, Routing, and Detection of Resonance Fluorescence. Nano Letters 2015, 15, 5208-5213.
[55] Schwartz, M.; Schmidt, E.; Rengstl, U.; Hornung, F.; Hepp, S.; Portalupi, S. L.; Llin, K.; Jetter, M.; Siegel, M.; Michler, P. Fully On-Chip Single-Photon Hanbury-Brown and Twiss Experiment on a Monolithic Semiconductor-Superconductor Platform. Nano Letters 2018, 18, 6892-6897.
[56] Wang, J. et al. Gallium arsenide (GaAs) quantum photonic waveguide circuits. Optics Communications 2014, 327, 49-55.
Supporting Information:
On-chip Hong-Ou-Mandel interference from separate quantum dot emitters in an integrated circuit

Łukasz Dusanowski,1,2,* Dominik Köck,1 Christian Schneider,1,3 and Sven Höfling1
1Technische Physik and Würzburg-Dresden Cluster of Excellence ct.qmat, University of Würzburg, Physikalisches Institut and Wilhelm-Conrad-Röntgen-Research Center for Complex Material Systems, Am Hubland, D-97074 Würzburg, Germany
2currently at: Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
3Institute of Physics, University of Oldenburg, D-26129 Oldenburg, Germany
(Dated: January 5, 2023)

arXiv:2301.01706v1 [quant-ph] 4 Jan 2023
S1: SAMPLE STRUCTURE

The full layer structure of the sample is shown in Figure S1.

FIG. S1. Planar sample scanning electron microscope cross-section image with visible layers (GaAs substrate; bottom DBR mirror: 24x Al0.9Ga0.1As/GaAs; GaAs λ-cavity with In(Ga)As QDs and wetting layer; top DBR mirror: 5x Al0.9Ga0.1As/GaAs) and schematically marked areas for etching.

The quantum dot layer is placed at the center of a λ cavity sandwiched between two distributed Bragg reflectors consisting of 5/24 alternating λ/4-thick layers of Al0.9Ga0.1As and GaAs.
S2: INTEGRATED CIRCUIT LAYOUT

Figure S2 shows the layout scheme of the fabricated GaAs device. It is based on 600 nm wide single-mode ridge waveguides. The central part of the circuit is a directional coupler (DC) formed by two WGs separated by a 120 nm gap along a 30 µm long coupling region. The WGs were brought together using two circular bend regions with a radius of 60 µm. The waveguides on the right end of the circuit are terminated by inverse-taper (30 µm length) out-couplers, minimizing reflection and optimized for better light extraction out of the chip. The left side of the DC consists of 1.2 mm long straight WG sections designated for searching for two quantum dots with the same transition frequencies. The WGs on the left side of the circuit are terminated with circular Bragg grating mirrors optimized for increased reflectivity (around 80% expected at 900 nm). Figure S3 shows scanning electron microscope images of the fabricated integrated circuits.

FIG. S2. Integrated circuit layout. Scheme of the fabricated GaAs directional coupler (waveguide width W = 600 nm, gap = 120 nm, 30 µm coupling length, 1200 µm long input arms), including circular Bragg reflectors located at the ends of the input WG arms and inverse-taper out-couplers at the end of the output arms. Bending regions are based on circular profiles with a radius of 60 µm.

FIG. S3. Scanning electron microscope images of the fabricated devices. a,b, DCs with different coupling lengths. c,d, The DC with a 30 µm long coupling region. e, An inverse-taper out-coupler. f, A circular Bragg grating reflector.
S3: OPTICAL SET-UP

For all experiments, the sample is kept in a low-vibration closed-cycle cryostat (attoDry800) at a temperature of ∼4.5 K. The cryostat is equipped with two optical windows allowing for access from both the side and the top of the sample. A spectroscopic setup consisting of two independent, perpendicularly aligned optical paths is employed, as shown schematically in Figure S4. Additionally, the excitation path allows for the separate routing of two laser beams for the simultaneous excitation of two spots on the sample. For HOM and HBT experiments, a tunable Ti:Si picosecond pulsed laser is used. QDs embedded in the two input arms of the DC are excited from the top through a first microscope objective with x10 magnification and NA = 0.26, while the emission signal from both DC output arms is detected simultaneously from the side facet of the sample with a second objective with x20 magnification and NA = 0.4. The photoluminescence signal from both arms is then passed through a spatial filter (lenses L1 and L2) and polarization optics. For light-polarization analysis, a half-wave plate (HWP) combined with a linear polarizer (LP) is used. The sample image plane is rotated from a horizontal into a vertical direction using a Dove prism, which allows simultaneous coupling of the signals from both DC output arms into the monochromator. The collected light is analyzed by a high-resolution monochromator equipped with a liquid-nitrogen-cooled low-noise charge-coupled device detector (CCD), featuring a spectral resolution of ∼40 µeV. Taking advantage of the spatial separation of the DC output arms, the spectrum from both WG arms is resolved spatially on the CCD camera. For HBT and HOM experiments, the monochromator is used as a spectral filter with 70 µeV width. Next, the signal from both arms is separated spatially using a knife-edge mirror and coupled into single-mode fibres interconnected with superconducting single-photon detectors (SSPD). The time-correlated measurements are acquired using a stand-alone time-tagger.

FIG. S4. Optical setup. Scheme of the experimental configuration used for top-excitation (blue path) and side-detection (red path) photoluminescence and resonance fluorescence measurements. In the case of two-photon interference experiments, a QD was excited twice every laser pulse cycle with a delay of 3 ns, and the subsequently emitted photons were spatially and temporally overlapped in an unbalanced Mach-Zehnder interferometer (dashed lines) utilizing polarization-maintaining (PM) fibers and beam-splitters (BS). For signal detection, two avalanche photo-diodes (APD) with 350 ps response time were used. For polarization control in free space, a half-wave plate (HWP) combined with a linear polarizer (LP) was used, while for polarization rotation (PR) in the fiber-based HOM interferometer, ceramic sleeve connectors between two fiber facets were used, allowing alignment of the fast and slow axes at the desired angle.
S4: POWER-RESOLVED PL

In Fig. S5 the QD1 and QD2 emission intensity as a function of excitation power is shown. The almost linear dependence of the emission intensity on excitation power suggests that the analyzed lines originate from the recombination of neutral or charged excitonic complexes.

FIG. S5. Photoluminescence intensity vs incident excitation power for QD1 (I ∝ P^(0.90±0.05)) and QD2 (I ∝ P^(0.93±0.05)). Solid red/blue curves: fits with a power function revealing a linear dependence of the emission intensity on excitation power.
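The exponents quoted in Fig. S5 come from fitting I = A·P^k, which is a straight line in log-log space. A minimal sketch of such a fit; the data points below are synthetic illustrations, not the measured values:

```python
import math

def power_law_exponent(powers, intensities):
    """Least-squares slope of log(I) vs log(P), i.e. the exponent k
    in I = A * P**k (a straight line in log-log coordinates)."""
    xs = [math.log(p) for p in powers]
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic data following I = 2 * P**0.9 exactly:
powers = [0.01, 0.03, 0.1, 0.3, 1.0]
intensities = [2 * p ** 0.9 for p in powers]
print(round(power_law_exponent(powers, intensities), 2))  # → 0.9
```

An exponent near 1 is consistent with single-exciton recombination, while values near 2 would point to biexcitonic complexes.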
S5: WAVEGUIDE TRANSMISSION LOSSES

To estimate the quality of the etched ridge waveguides, the optical WG transmission losses were determined. For that purpose, the sample was excited with very high pumping power, allowing us to observe spectrally broad QD ensemble emission. The beam spot was scanned along DC input arm 1/2, and the emission was detected from the side through waveguide arm 1. Figures S6a and b show the corresponding attenuation of the measured intensities at 890 nm plotted as a function of the distance to the DC bends for input arms 1 and 2, respectively. Input arm 1 exhibits transmission losses on the level of 6.5±0.5 dB/mm and arm 2 of 5.0±0.6 dB/mm. The waveguide transmission characteristics are limited by ridge sidewall imperfections, which could potentially be further improved by optimizing the etching process.

FIG. S6. Waveguide transmission losses. Attenuation of the side-detected ensemble PL signal as a function of the distance from the DC bend regions, for input arm 1 (losses: 6.5±0.5 dB/mm) and input arm 2 (losses: 5.0±0.6 dB/mm).
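The dB/mm figure is simply the slope of attenuation (in dB) versus propagation distance (in mm); exponential intensity decay is linear on a dB scale. A minimal two-point sketch with illustrative numbers lying on a 6.5 dB/mm line (not the measured data):

```python
def loss_db_per_mm(d1_mm, a1_db, d2_mm, a2_db):
    """Two-point estimate of propagation loss: rise over run of the
    attenuation (dB) vs distance (mm) line. Exponential decay of the
    intensity makes this relation linear in dB."""
    return (a2_db - a1_db) / (d2_mm - d1_mm)

# Illustrative: 1.3 dB of extra attenuation over 0.2 mm → 6.5 dB/mm
print(round(loss_db_per_mm(0.2, 1.3, 0.4, 2.6), 2))  # → 6.5
```

With many scan positions, as in Fig. S6, a least-squares fit over all points would replace this two-point estimate and also yield the quoted uncertainty.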
S6: POLARIZATION-RESOLVED PL

The side-detected emission from both studied emission lines shows a high degree of linear polarization (DOLP) of around 95%, oriented in the sample plane, as shown in Fig. S7. The high DOLP and its direction are related to the QD dipole moments, which are mainly in-plane oriented, so that the emitted photons mostly couple to and propagate in the TE waveguide mode. It should be noted that the high DOLP is maintained after passing through the whole circuit consisting of bends, the DC, and the out-coupler, any of which could potentially spoil the detected polarization contrast. The same polarization level is observed for both output arms of the DC.

FIG. S7. Polarization characteristics of the QD1 and QD2 emission coupled to the input arms of the DC and detected from the side-facet output arms 1 and 2. The PL emission of both QDs is strongly linearly polarized, with around a 95% degree of linear polarization (sine fits yield DOLP = 95±2%, 95±2%, 94±2%, and 95±2% for the four QD/output-arm combinations).
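The DOLP quoted above is extracted from the polarization-angle scan as (Imax - Imin)/(Imax + Imin), with the extrema taken from the sinusoidal (Malus-law) fit. A minimal sketch on an ideal cos² trace with a small unpolarized background; the trace values are illustrative only:

```python
import math

def dolp(i_max, i_min):
    """Degree of linear polarization from fitted fringe extrema."""
    return (i_max - i_min) / (i_max + i_min)

# Ideal Malus-law trace: I(θ) = unpolarized/2 + polarized * cos²(θ)
angles = [math.radians(a) for a in range(0, 360, 10)]
trace = [0.025 + 0.975 * math.cos(t) ** 2 for t in angles]
print(round(dolp(max(trace), min(trace)), 3))  # → 0.951
```

On noisy data the extrema would come from the fitted sine curve rather than from the raw samples, which is why the quoted uncertainties reflect the fit precision.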
+ S7: EMITTERS' TRANSITION LINEWIDTHS
+ Figure S8 shows high-resolution spectra of the QD1 and QD2 emission recorded under
+ cw 660 nm excitation. The measurements are performed using a scanning Fabry-Perot interferometer
+ with a 3 µeV Lorentzian profile linewidth. Both spectra are fit using Lorentzian
+ functions with full-widths at half-maximum (FWHM) of 16.5±2.5 µeV and 6.0±0.2 µeV for
+ QD1 and QD2, respectively (errors related to the fit precision). It can be shown that the
+ convolution of two Lorentzian profiles with widths FWHM1 and FWHM2 is also a Lorentzian profile
+ with a broadening of FWHM1 + FWHM2. Following the above, we can correct the recorded optical
+ linewidths for the finite resolution of our setup by simply subtracting its linewidth. The
+ deconvoluted linewidths are 13.5±2.5 µeV and 3.0±0.2 µeV for QD1 and QD2, respectively.
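As a quick check, the deconvolution above is just a linewidth subtraction, since Lorentzian widths add under convolution (a minimal sketch using the values quoted in the text, not the actual fitting code):

```python
def deconvolve_lorentzian(fwhm_measured, fwhm_instrument):
    """The convolution of two Lorentzians has FWHM1 + FWHM2, so the
    intrinsic linewidth is recovered by simple subtraction."""
    return fwhm_measured - fwhm_instrument

# Measured FWHMs (µeV) and the 3 µeV interferometer resolution
print(deconvolve_lorentzian(16.5, 3.0))  # QD1 -> 13.5
print(deconvolve_lorentzian(6.0, 3.0))   # QD2 -> 3.0
```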
+ [Figure S8: panels a (QD1) and b (QD2) show the intensity (arb. units) versus detuning (µeV)
+ over roughly ±60 µeV, with the experimental data, Lorentzian fits, and the fitted and
+ deconvoluted FWHM values quoted in the text.]
+ FIG. S8. High-resolution PL spectra of QD1 and QD2, obtained using a home-built Fabry-Perot
+ scanning cavity with a resolution of 3 µeV (FWHM) and a free spectral range of 140 µeV at
+ 890 nm. Solid lines are Lorentzian fits.
+ S8: DIRECTIONAL COUPLER CHARACTERISTICS
+ To extract the DC splitting ratio r : t while accounting for the different performance of the two
+ output arms due to fabrication imperfections, the following procedure has been used.
+ First, the QD located in input arm 1 was excited, and the PL signals from output arm 1, I^{I1}_{O1},
+ and output arm 2, I^{I1}_{O2}, were collected. Next, the same measurement was repeated for the QD
+ located in arm 2, yielding I^{I2}_{O1} and I^{I2}_{O2}. If the uneven outcoupling efficiency between
+ the two output arms is quantified as a constant x (the transmission ratio between the two arms),
+ the following set of equations applies:
+ r + t = 1,    x (r/t) = I^{I1}_{O1} / I^{I1}_{O2},    x (t/r) = I^{I2}_{O1} / I^{I2}_{O2}.    (1)
+ Based on that, the DC splitting ratio r : t can be derived, accounting for the imbalanced
+ outcoupling:
+ r : t = sqrt[ (I^{I1}_{O1} / I^{I1}_{O2}) · (I^{I2}_{O2} / I^{I2}_{O1}) ].    (2)
+ In the case of the investigated DC, with I^{I1}_{O1}/I^{I1}_{O2} of 51:49 and
+ I^{I2}_{O2}/I^{I2}_{O1} of 46:54, we ended up with an r : t ratio of 48:52, which is very close to
+ the desired 50:50.
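Eq. (2) with the quoted intensity ratios reproduces the reported splitting ratio (an illustrative numerical check, not the analysis code used for the experiment):

```python
import math

# Measured intensity ratios quoted in the text
ratio_1 = 51 / 49   # I^{I1}_{O1} / I^{I1}_{O2}
ratio_2 = 46 / 54   # I^{I2}_{O2} / I^{I2}_{O1}

# Eq. (2): r/t is the geometric mean of the two ratios, which
# cancels the unknown outcoupling imbalance x between the arms.
r_over_t = math.sqrt(ratio_1 * ratio_2)
r = r_over_t / (1 + r_over_t)   # using r + t = 1
t = 1 - r
print(f"r:t = {100*r:.0f}:{100*t:.0f}")  # r:t = 48:52
```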
+ [Figure S9 plot: intensity (cps) versus time frame for the cw laser coupled to input arm 1, input
+ arm 2, and arms 1+2 (interference); the classical visibility at 890 nm is
+ V = (Imax − Imin)/(Imax + Imin) = 98±1%.]
+ FIG. S9. Intensity fluctuations of the cw laser light transmitted simultaneously through the two
+ input arms of the DC. The fluctuations in time are related to off-chip path-length difference
+ fluctuations of the signals interfering on the directional coupler.
+ To test the classical visibility of the DC device, we simultaneously sent cw laser light,
+ tuned to the energy of the QD transitions (890 nm), into both input arms using the circular
+ reflectors placed at the ends of the input-arm waveguides. The power of the laser coupled into
+ each arm was adjusted so that the intensity from both input arms was the same. Next, we focused
+ on the signal passing through the DC and out-coupled from one of the output arms. We observed an
+ intensity modulation as a function of time, related to small path-length difference fluctuations,
+ allowing us to record the interference pattern shown in Fig. S9. The classical DC visibility was
+ obtained by calculating the intensity contrast
+ V_DC = (Imax − Imin) / (Imax + Imin).    (3)
+ In the case of the investigated DC device, a classical visibility of 98±1% was extracted.
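Eq. (3) can be evaluated directly on a recorded intensity trace; the sketch below uses synthetic numbers (not the measured data) purely to illustrate the contrast calculation:

```python
def visibility(trace):
    """Classical interference visibility of an intensity trace,
    V = (Imax - Imin) / (Imax + Imin)."""
    i_max, i_min = max(trace), min(trace)
    return (i_max - i_min) / (i_max + i_min)

# Synthetic trace oscillating between 1 and 99 counts
print(visibility([1, 25, 50, 75, 99, 75, 50, 25]))  # 0.98
```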
+ S9: RAW HOM CORRELATION DATA
+ Figure S10a shows the non-corrected result of the two-photon Hong-Ou-Mandel interference
+ experiment between QD1 and QD2, performed under 76 MHz and 19 MHz pulsed-laser repetition
+ rates. In both graphs, the same time-independent background offset is visible, corresponding to
+ around 15% of the normalized peak intensity. Since the background level is the same for the HOM
+ graphs at different laser repetition rates, we exclude a long-decay contribution of the emitters
+ to the observed background. We attribute the observed cw offset to the SSPD dark counts
+ (100-500 cps). Figure S10b shows the non-corrected normalized histogram of the central eleven
+ peak areas (∆t = 3 ns integration window) of the HOM second-order cross-correlation for
+ synchronized, indistinguishable (red bars) and 0.5 ns delayed, distinguishable (grey bars)
+ photons. The non-corrected central-peak area in the case of synchronized photons is equal to
+ g^(2)_HOM(0, ∆t) = 0.587±0.002, while for unsynchronized photons
+ g^(2)_HOMd(0, ∆t) = 0.680±0.002. In this case, the non-corrected two-photon interference
+ visibility yields Vraw = 12.1±0.3%, in correspondence with the 17.8±0.7% visibility obtained for
+ the background-corrected graphs (see Fig. 3c in the main text).
+ [Figure S10: a, raw normalized HOM coincidences g^(2)_HOM(t) versus delay time (ns) for the
+ 19 MHz and 76 MHz repetition rates, with the cw background level indicated; b, raw normalized
+ HOM peak areas g^(2)_HOM(t, ∆t) for the central eleven peaks with ∆t_int = 3 ns,
+ Vraw = 12.1±0.3%, and g^(2)_HOM(0, ∆t) = 0.587±0.002.]
+ FIG. S10. a, Two-photon Hong-Ou-Mandel interference measurement between QD1 and QD2,
+ performed using an on-chip beamsplitter, showing the normalized coincidences versus the delay
+ time with no background-counts correction. The experiment performed under 76 MHz and 19 MHz
+ pulsed-laser repetition rates yields the same cw background. b, Non-corrected integrated counts
+ of the central eleven peaks (3 ns integration window) of the HOM correlation for synchronized
+ (blue bars) and 0.5 ns delayed (red bars) photons from QD1 and QD2 under pulsed 76 MHz excitation.
+ 
+ S10: SINGLE PHOTON EMISSION UNDER CW EXCITATION
+ In Figure S11, HBT second-order correlation histograms recorded under cw (660 nm) excitation for
+ QD1 and QD2 are shown. The data in Fig. S11 are fit with the function
+ g^(2)_HBT(τ) = 1 − (1 − g^(2)_HBT(0)) exp(−|τ|/τd), convoluted with a Gaussian instrumental
+ response function of 80 ps width, where τ is the delay time between detection events,
+ g^(2)_HBT(0) is the probability of two-photon emission events, and τd is the decay time constant
+ corresponding to the sum of the spontaneous emission rate 1/T1 and the pump rate G of the source.
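Ignoring the instrumental-response convolution for brevity, the fit model can be written as follows (an illustrative sketch with example parameters, not the fitting code actually used):

```python
import math

def g2_hbt(tau_ns, g2_zero, tau_d_ns):
    """Antibunching model: dips to g2(0) at zero delay and recovers
    to 1 on the timescale tau_d = 1/(1/T1 + G)."""
    return 1.0 - (1.0 - g2_zero) * math.exp(-abs(tau_ns) / tau_d_ns)

# Example: g2(0) = 0.08 (as fitted for one of the dots), tau_d = 1 ns
print(round(g2_hbt(0.0, 0.08, 1.0), 6))   # 0.08 at zero delay
print(round(g2_hbt(20.0, 0.08, 1.0), 3))  # ~1.0 far from zero delay
```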
+ [Figure S11: panels a (QD1) and b (QD2) show g^(2)_HBT(τ) and the raw coincidences versus the
+ delay time (ns) over ±20 ns, with fitted values of g^(2)(0) = 0.08±0.01 and
+ g^(2)(0) = 0.16±0.01.]
+ FIG. S11. Second-order auto-correlation histograms of a QD1 and b QD2 emission under non-
+ resonant (660 nm) cw excitation. Data have been recorded in the HBT configuration using an
+ on-chip beamsplitter. The presented data are shown as measured, with no background subtraction
+ or other corrections.
+ 
-tAzT4oBgHgl3EQfvf0n/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
.gitattributes CHANGED
@@ -784,3 +784,51 @@ LNE4T4oBgHgl3EQf8A7c/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -tex
784
  AtAzT4oBgHgl3EQfF_uW/content/2301.01021v1.pdf filter=lfs diff=lfs merge=lfs -text
785
  AtFAT4oBgHgl3EQfrx4U/content/2301.08654v1.pdf filter=lfs diff=lfs merge=lfs -text
786
  sNAzT4oBgHgl3EQfBPpp/content/2301.00939v1.pdf filter=lfs diff=lfs merge=lfs -text
787
+ 49AyT4oBgHgl3EQfcPf_/content/2301.00281v1.pdf filter=lfs diff=lfs merge=lfs -text
788
+ Z9E5T4oBgHgl3EQfDg58/content/2301.05406v1.pdf filter=lfs diff=lfs merge=lfs -text
789
+ xtFST4oBgHgl3EQfSjhy/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
790
+ y9E3T4oBgHgl3EQfPglf/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
791
+ INAyT4oBgHgl3EQfffiq/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
792
+ 49AyT4oBgHgl3EQfcPf_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
793
+ sNAzT4oBgHgl3EQfBPpp/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
794
+ INAyT4oBgHgl3EQfffiq/content/2301.00342v1.pdf filter=lfs diff=lfs merge=lfs -text
795
+ ZNAyT4oBgHgl3EQfifg0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
796
+ 3dE3T4oBgHgl3EQfPwmd/content/2301.04406v1.pdf filter=lfs diff=lfs merge=lfs -text
797
+ 59E3T4oBgHgl3EQfQwmQ/content/2301.04416v1.pdf filter=lfs diff=lfs merge=lfs -text
798
+ 9dFLT4oBgHgl3EQfuS8F/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
799
+ Q9E3T4oBgHgl3EQfyQvd/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
800
+ ZNAyT4oBgHgl3EQfifg0/content/2301.00395v1.pdf filter=lfs diff=lfs merge=lfs -text
801
+ ANFKT4oBgHgl3EQfVi5k/content/2301.11788v1.pdf filter=lfs diff=lfs merge=lfs -text
802
+ ANFKT4oBgHgl3EQfVi5k/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
803
+ 2tAyT4oBgHgl3EQfPvai/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
804
+ 59E3T4oBgHgl3EQfQwmQ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
805
+ s9E5T4oBgHgl3EQfKg7i/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
806
+ 1tAyT4oBgHgl3EQfPfba/content/2301.00027v1.pdf filter=lfs diff=lfs merge=lfs -text
807
+ 1NE0T4oBgHgl3EQf_gL8/content/2301.02829v1.pdf filter=lfs diff=lfs merge=lfs -text
808
+ xtFST4oBgHgl3EQfSjhy/content/2301.13766v1.pdf filter=lfs diff=lfs merge=lfs -text
809
+ 1NE0T4oBgHgl3EQf_gL8/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
810
+ ndE5T4oBgHgl3EQfjg9a/content/2301.05656v1.pdf filter=lfs diff=lfs merge=lfs -text
811
+ 9tE4T4oBgHgl3EQfDQub/content/2301.04868v1.pdf filter=lfs diff=lfs merge=lfs -text
812
+ 2tAyT4oBgHgl3EQfPvai/content/2301.00031v1.pdf filter=lfs diff=lfs merge=lfs -text
813
+ 4NFAT4oBgHgl3EQfExwU/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
814
+ _dFAT4oBgHgl3EQfqx1t/content/2301.08649v1.pdf filter=lfs diff=lfs merge=lfs -text
815
+ 79E1T4oBgHgl3EQfTwOd/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
816
+ 8dFLT4oBgHgl3EQftC_m/content/2301.12150v1.pdf filter=lfs diff=lfs merge=lfs -text
817
+ 8dFLT4oBgHgl3EQftC_m/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
818
+ 9tE4T4oBgHgl3EQfDQub/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
819
+ WtFPT4oBgHgl3EQfrjVl/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
820
+ i9AyT4oBgHgl3EQfkfhZ/content/2301.00434v1.pdf filter=lfs diff=lfs merge=lfs -text
821
+ ndE5T4oBgHgl3EQfjg9a/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
822
+ jdE1T4oBgHgl3EQf0AWa/content/2301.03451v1.pdf filter=lfs diff=lfs merge=lfs -text
823
+ _dFAT4oBgHgl3EQfqx1t/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
824
+ H9AyT4oBgHgl3EQfTPcY/content/2301.00100v1.pdf filter=lfs diff=lfs merge=lfs -text
825
+ ydFKT4oBgHgl3EQf7S6i/content/2301.11944v1.pdf filter=lfs diff=lfs merge=lfs -text
826
+ qdE0T4oBgHgl3EQfaQC1/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
827
+ i9AyT4oBgHgl3EQfkfhZ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
828
+ Y9E3T4oBgHgl3EQfcgoK/content/2301.04525v1.pdf filter=lfs diff=lfs merge=lfs -text
829
+ edE3T4oBgHgl3EQfewrM/content/2301.04547v1.pdf filter=lfs diff=lfs merge=lfs -text
830
+ PdE3T4oBgHgl3EQfCglF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
831
+ bNAzT4oBgHgl3EQfLPu9/content/2301.01112v1.pdf filter=lfs diff=lfs merge=lfs -text
832
+ AtFLT4oBgHgl3EQfFC_E/content/2301.11986v1.pdf filter=lfs diff=lfs merge=lfs -text
833
+ H9AyT4oBgHgl3EQfTPcY/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
834
+ Y9E3T4oBgHgl3EQfcgoK/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
0tE4T4oBgHgl3EQfZwwm/content/tmp_files/2301.05058v1.pdf.txt ADDED
@@ -0,0 +1,1527 @@
+ Sparse Coding in a Dual Memory System for Lifelong Learning
+ Fahad Sarfraz*1, Elahe Arani*1,2, Bahram Zonooz1,2
+ 1Advanced Research Lab, NavInfo Europe, The Netherlands
+ 2Department of Mathematics and Computer Science, Eindhoven University of Technology, The Netherlands
+ Abstract
+ Efficient continual learning in humans is enabled by a rich set of neurophysiological mechanisms
+ and interactions between multiple memory systems. The brain efficiently encodes information in
+ non-overlapping sparse codes, which facilitates the learning of new associations faster with
+ controlled interference with previous associations. To mimic sparse coding in DNNs, we enforce
+ activation sparsity along with a dropout mechanism which encourages the model to activate similar
+ units for semantically similar inputs and have less overlap with the activation patterns of
+ semantically dissimilar inputs. This provides us with an efficient mechanism for balancing the
+ reusability and interference of features, depending on the similarity of classes across tasks.
+ Furthermore, we employ sparse coding in a multiple-memory replay mechanism. Our method maintains
+ an additional long-term semantic memory that aggregates and consolidates information encoded in
+ the synaptic weights of the working model. Our extensive evaluation and characteristics analysis
+ show that, equipped with these biologically inspired mechanisms, the model can further mitigate
+ forgetting1.
+ 1
28
+ Introduction
29
+ The ability to continually acquire, consolidate, and retain
30
+ knowledge is a hallmark of intelligence. Particularly, as we
31
+ look to deploy deep neural networks (DNNs) in the real
32
+ world, it is essential that learning agents continuously inter-
33
+ act and adapt to the ever-changing environment. However,
34
+ standard DNNs are not designed for lifelong learning and
35
+ exhibit catastrophic forgetting of previously learned knowl-
36
+ edge (McCloskey and Cohen 1989) when required to learn
37
+ tasks sequentially from a stream of data (McCloskey and
38
+ Cohen 1989).
39
+ The core challenge in continual learning (CL) in DNNs
40
+ is to maintain an optimal balance between plasticity and
41
+ the stability of the model. Ideally, the model should be sta-
42
+ ble enough to retain previous knowledge while also plastic
43
+ enough to acquire and consolidate new knowledge. Catas-
44
+ trophic forgetting in DNNs can be attributed to the lack of
45
+ stability, and multiple approaches have been proposed to ad-
46
+ dress it. Among them, Rehearsal-based methods, (Riemer
47
+ *These authors contributed equally.
48
+ Copyright © 2023, Association for the Advancement of Artificial
49
+ Intelligence (www.aaai.org). All rights reserved.
50
+ 1Code available at https://github.com/NeurAI-Lab/SCoMMER
51
+ et al. 2018; Aljundi et al. 2019b) which aim to reduce for-
52
+ getting by continual rehearsal of previously seen tasks, have
53
+ proven to be an effective approach in challenging CL tasks
54
+ (Farquhar and Gal 2018). They attempt to approximate the
55
+ joint distribution of all the observed tasks by saving samples
56
+ from previous tasks in a memory buffer and intertwine the
57
+ training of the new task with samples from memory. How-
58
+ ever, due to the limited buffer size, it is difficult to approx-
59
+ imate the joint distribution with the samples alone. There
60
+ is an inherent imbalance between the samples of previous
61
+ tasks and the current task. This results in the network update
62
+ being biased towards the current task, leading to forgetting
63
+ and recency bias in predictions. Therefore, more informa-
64
+ tion from the previous state of the model is needed to better
65
+ approximate the joint distribution and constrain the update
66
+ of the model to preserve the learned knowledge. However, it
67
+ is still an open question what the optimal information is for
68
+ replay and how to extract and preserve it.
69
+ The human brain provides an existence proof for success-
70
+ ful CL in complex dynamic environments without intransi-
71
+ gence or forgetting. Therefore, it can provide insight into
72
+ the design principles and mechanisms that can enable CL
73
+ in DNNs. The human brain maintains a delicate balance
74
+ between stability and plasticity through a complex set of
75
+ neurophysiological mechanisms (Parisi et al. 2019; Zenke
76
+ et al. 2017) and the effective use of multiple memory sys-
77
+ tems (Hassabis et al. 2017). In particular, evidence suggests
78
+ that the brain employs Sparse Coding, that the neural code is
79
+ characterized by strong activations of a relatively small set
80
+ of neurons. The efficient utilization of sparsity for informa-
81
+ tion representation enables new associations to be learned
82
+ faster with controlled interference with previous associa-
83
+ tions while maintaining sufficient representation capacity.
84
+ Furthermore, complementary learning systems (CLS) the-
85
+ ory posits that effective learning requires two complemen-
86
+ tary learning systems. The hippocampus rapidly encodes
87
+ episodic information into non-overlapping representations,
88
+ which are then gradually consolidated into the structural
89
+ knowledge representation in the neocortex through the re-
90
+ play of neural activities.
91
+ Inspired by these mechanisms in the brain, we hypothe-
92
+ size that employing a mechanism to encourage sparse cod-
93
+ ing in DNNs and mimic the interplay of multiple memory
94
+ systems can be effective in maintaining a balance between
95
+ arXiv:2301.05058v1 [cs.NE] 28 Dec 2022
96
+
97
+ Long-term
98
+ Memory
99
+
100
+
101
+
102
+
103
+ Working
104
+ Memory
105
+ Episodic
106
+ Memory
107
+ Consolidation
108
+ Data Stream
109
+ Rehearsal
110
+ Layer
111
+ c: 1 2 3 4
112
+ Activation Count
113
+ Semantic Dropout
114
+ Knowledge Retrieval
115
+ Activation
116
+ k-WTA
117
+ Figure 1: SCoMMER employs sparse coding in a multi-memory experience replay mechanism. In addition to the instance-based
118
+ episodic memory, we maintain a long-term memory that consolidates the learned knowledge in the working memory throughout
119
+ training. The long-term memory interacts with the episodic memory to enforce consistency in the functional space of working
120
+ memory through the knowledge retrieval loss. To mimic sparse coding in the brain, we enforce activation sparsity along with
121
+ semantic dropout, whereby the model tracks the class-wise activations during training and utilizes them to enforce sparse code,
122
+ which encourages the model to activate similar units for semantically similar inputs. Schematic shows how the activations from
123
+ layer l are propagated to the next layer. Darker shades indicate higher values. Given a sample from class 4, semantic dropout
124
+ retains the units with higher activation counts for the class, and top-k remaining (here 2) units with higher activations are
125
+ propagated to the next layer. This enables the network to form semantically conditioned subnetworks and mitigate forgetting.
126
+ stability and plasticity. To this end, we propose a multi-memory experience replay mechanism
+ that employs sparse coding, SCoMMER. We enforce activation sparsity along with a complementary
+ dropout mechanism, which encourages the model to activate similar units for semantically similar
+ inputs while reducing the overlap with activation patterns of semantically dissimilar inputs.
+ The proposed semantic dropout provides us with an efficient mechanism to balance the reusability
+ and interference of features depending on the similarity of classes across tasks. Furthermore,
+ we maintain an additional long-term semantic memory that aggregates the information encoded in
+ the synaptic weights of the working memory. The long-term memory interacts with the episodic
+ memory to retrieve structural knowledge from previous tasks and facilitates information
+ consolidation by enforcing consistency in the functional space.
+ Our empirical evaluation on challenging CL settings and our characteristics analysis show that
+ equipping the model with these biologically inspired mechanisms can further mitigate forgetting
+ and effectively consolidate information across tasks. Furthermore, sparse activations in
+ conjunction with semantic dropout in SCoMMER lead to the emergence of subnetworks, enable
+ efficient utilization of the semantic memory, and reduce the bias towards recent tasks.
151
+ Related Work
152
+ The different approaches to address the problem of catas-
153
+ trophic forgetting in CL can be broadly divided into three
154
+ categories: Regularization-based methods regularize the up-
155
+ date of the model in the parameter space (Farajtabar et al.
156
+ 2020; Kirkpatrick et al. 2017; Ritter et al. 2018; Zenke et al.
157
+ 2017) or the functional space (Rannen et al. 2017; Li and
158
+ Hoiem 2017), Dynamic architecture expands the network
159
+ to dedicate a distinct set of parameters to each task, and
160
+ Rehearsal-based methods (Riemer et al. 2018; Aljundi et al.
161
+ 2019b) mitigate forgetting by maintaining an episodic mem-
162
+ ory buffer and continual rehearsal of samples from previous
163
+ tasks. Among these, our method focuses on rehearsal-based
164
+ methods, as it has proven to be an effective approach in
165
+ challenging continual learning scenarios (Farquhar and Gal
166
+ 2018). The base method, Experience Replay (ER) (Riemer
167
+ et al. 2018) interleaves the training of the current task with
168
+ the memory sample to train the model on the approximate
169
+ joint distribution of tasks. Several studies focus on the differ-
170
+ ent aspects of rehearsal: memory sample selection (Lopez-
171
+ Paz and Ranzato 2017; Isele and Cosgun 2018), sample re-
172
+ trieval from memory (Aljundi et al. 2019a) and what infor-
173
+ mation to extract and replay from the previous model (Li and
174
+ Hoiem 2017; Ebrahimi et al. 2020; Bhat et al. 2022).
175
+ Dark Experience Replay (DER++) samples the output
176
+ logits along with the samples in the memory buffer through-
177
+ out the training trajectory and applies a consistency loss on
178
+ the update of the model. Recently, CLS theory has inspired
179
+ a number of approaches that utilize multiple memory sys-
180
+ tems (Wang et al. 2022a,b; Pham et al. 2021) and show the
181
+ benefits of multiple systems in CL. CLS-ER (Arani et al.
182
+ 2022) mimics the interplay between fast and slow learning
183
+ systems by maintaining two additional semantic memories
184
+ that aggregate the weights of the working model at differ-
185
+ ent timescales using an exponential moving average. Our
186
+ method enforces sparse coding for efficient representation
187
+ and utilization of multiple memories.
188
+ 3 Methodology
+ We first provide an overview of the motivation from biological systems before formally
+ introducing the different components of the proposed approach.
+ 3.1 Continual Learning in the Biological System
+ Effective CL in the brain is facilitated by a complex set of mechanisms and multiple memory
+ systems. Information in the brain is represented by neural activation patterns, which form a
+ neural code (Foldiak and Endres 2008). Specifically, evidence suggests that the brain employs
+ sparse coding, in which sensory events are represented by strong activations of a relatively
+ small set of neurons, and a different subset of neurons is used for each stimulus (Foldiak 2003;
+ Barth and Poulet 2012). There is a correlation between these sparse codes (Lehky et al. 2021)
+ that could capture the similarity between different stimuli. Sparse codes provide several
+ advantages: they enable faster learning of new associations with controlled interference with
+ previous associations, and allow efficient maintenance of associative memory while retaining
+ sufficient representational capacity.
+ Another salient feature of the brain is the strong differentiation and specialization of the
+ nervous systems (Hadsell et al. 2020). There is evidence for modularity in biological systems,
+ which supports functional specialization of brain regions (Kelkar and Medaglia 2018) and reduces
+ interference between different tasks. Furthermore, the brain is believed to utilize multiple
+ memory systems (Atkinson and Shiffrin 1968; McClelland et al. 1995). Complementary learning
+ systems (CLS) theory states that efficient learning requires at least two complementary systems.
+ The instance-based hippocampal system rapidly encodes new episodic events into non-overlapping
+ representations, which are then gradually consolidated into the structured knowledge
+ representation in the parametric neocortical system. Consolidation of information is accompanied
+ by replay of the neural activities that accompanied the learning event.
+ The encoding of information into efficient sparse codes, the modular and dynamic processing of
+ information, and the interplay of multiple memory systems might play a crucial role in enabling
+ effective CL in the brain. Therefore, our method aims to incorporate these components in ANNs.
232
+ 3.2 Sparse coding in DNNs
+ The sparse neural codes in the brain are in stark contrast to the highly dense connections and
+ overlapping representations in standard DNNs, which are prone to interference. In particular,
+ for CL, sparse representations can reduce the interference between different tasks and therefore
+ result in less forgetting, as there will be fewer task-sensitive parameters or fewer effective
+ changes to the parameters (Abbasi et al. 2022; Iyer et al. 2021). Activation sparsity can also
+ lead to the natural emergence of modules without explicitly imposing architectural constraints
+ (Hadsell et al. 2020). Therefore, to mimic sparse coding in DNNs, we enforce activation sparsity
+ along with a complementary semantic dropout mechanism which encourages the model to activate
+ similar units for semantically similar samples.
248
+ Sparse Activations: To enforce sparsity in the activations, we employ the k-winner-take-all
+ (k-WTA) activation function (Maass 2000). k-WTA retains only the top-k largest values of an
+ N × 1 input vector and sets all the others to zero before propagating the vector to the next
+ layer of the network. Importantly, we deviate from the common implementation of k-WTA in
+ convolutional neural networks (CNNs), whereby the activation map of a layer (a C × H × W tensor,
+ where C is the number of channels and H and W are the spatial dimensions) is flattened into a
+ long CHW × 1 vector and the k-WTA activation is applied as in a fully connected network (Xiao
+ et al. 2019; Ahmad and Scheinkman 2019). We believe that this implementation does not take into
+ account the functional integrity of an individual convolution filter as an independent feature
+ extractor and does not lend itself to the formation of task-specific subnetworks with
+ specialized feature extractors. Instead, we assign an activation score to each filter in the
+ layer by taking the absolute sum of the corresponding activation map and select the top-k
+ filters to propagate to the next layer.
+ Given the activation map, we flatten the last two dimensions and assign a score to each filter
+ by taking the absolute sum of its activations. Based on the sparsity ratio set for each layer,
+ the activation maps of the filters with the higher scores are propagated to the next layer, and
+ the others are set to zero. This enforces global sparsity, whereby each stimulus is processed by
+ only a selected set of convolution filters in each layer, which can be considered a subnetwork.
+ We also consider each layer's role when setting the sparsity ratio: the earlier layers have a
+ lower sparsity ratio, as they learn general features, which can enable higher reusability and
+ forward transfer to subsequent tasks, while we use a higher sparsity for later layers to reduce
+ the interference between task-specific features.
283
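As a concrete sketch of this filter-level k-WTA, the following NumPy snippet (our illustration, not the authors' implementation) scores each channel of a C × H × W activation map by the absolute sum of its activations and zeroes out all but the top-k channels:

```python
import numpy as np

def channel_kwta(act, k):
    """Filter-level k-WTA: keep the k channels of a (C, H, W) activation
    map with the largest absolute activation sums; zero out the rest."""
    C = act.shape[0]
    # Score each filter by the absolute sum of its activation map.
    scores = np.abs(act).reshape(C, -1).sum(axis=1)
    # Indices of the k highest-scoring filters.
    winners = np.argsort(scores)[-k:]
    mask = np.zeros(C, dtype=bool)
    mask[winners] = True
    # Propagate the winning filters unchanged; suppress the others.
    return act * mask[:, None, None]
```

In the method described above, k would be derived from a per-layer sparsity ratio, with earlier layers keeping more filters than later ones.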
Semantic Dropout: While the k-WTA activation function enforces sparsity of activation for each stimulus, it does not encourage semantically similar inputs to have similar activation patterns or reduce overlap with semantically dissimilar inputs. To this end, we employ a complementary Semantic Dropout mechanism, which controls the degree of overlap between the neural activations of samples belonging to different tasks while also encouraging samples belonging to the same class to utilize a similar set of units. We utilize two sets of activation trackers: a global activity counter, Ag ∈ R^N, counts the number of times each unit has been activated throughout training, whereas a class-wise activity counter, As ∈ R^{C×N}, tracks the number of times each unit has been active for samples belonging to a particular class. N and C denote the total number of units and classes, respectively. For each subsequent task, we first employ Heterogeneous Dropout (Abbasi et al. 2022) to encourage the model to learn the new classes using neurons that have been less active for previously seen classes, by setting the probability of a neuron being dropped to be inversely proportional to its activation counts. Concretely, let [A^l_g]_j denote the number of times that unit j in layer l has been activated after learning t sequential tasks. For learning the new classes in task t+1, the probability of retaining this unit is given by:

[P^l_h]_j = exp(−([A^l_g]_j / max_i [A^l_g]_i) · π_h)    (1)
Algorithm 1: SCoMMER Algorithm for Sparse Coding in Multiple-Memory Experience Replay System
Input: data stream D; learning rate η; consistency weight γ; update rate r; decay parameter α; dropout rates πh and πs
Initialize: θs = θw; M ← {}
1:  for Dt ∈ D do
2:    while Training do
3:      Sample training data: (xt, yt) ∼ Dt and (xm, ym) ∼ M, and interleave x ← (xt, xm)
4:      Retrieve structural knowledge: Zs ← f(xm; θs)
5:      Evaluate overall loss: L = Lce(f(x; θw), y) + γ Lkr(f(xm; θw), Zs) (Eq. 4)
6:      Update working memory: θw ← θw − η ∇θw L
7:      Aggregate knowledge: θs ← α θs + (1 − α) θw, if r > a ∼ U(0, 1) (Eq. 3)
8:      Update episodic memory: M ← Reservoir(M, (xt, yt))
9:      After Eh epochs, update semantic dropout probabilities at the end of each epoch: Ps (Eq. 2)
10:   Update heterogeneous dropout probabilities: Ph (Eq. 1)
return θs
where π_h controls the strength of dropout, with larger values leading to less overlap between representations. We then allow the network to learn the new task with heterogeneous dropout in place for a fixed number of epochs, Eh. During this period, we let the class-wise activations emerge and then employ Semantic Dropout. It encourages the model to utilize the same set of units for each class by setting the probability of retention of a unit for class c proportional to the number of times it has been activated for that class so far:

[P^l_s]_{c,j} = 1 − exp(−([A^l_s]_{c,j} / max_i [A^l_s]_{c,i}) · π_s)    (2)

where π_s controls the strength of dropout. The probabilities for semantic dropout are updated at the end of each epoch to enforce the emerging pattern. This provides us with an efficient mechanism for controlling the degree of overlap in representations as well as enabling context-specific processing of information, which facilitates the formation of semantically conditioned subnetworks. Activation sparsity, together with semantic dropout, also provides us with an efficient mechanism for balancing the reusability and interference of features depending on the similarity of classes across the tasks.
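A minimal sketch of Eqs. 1 and 2 (our illustration, not the authors' code): given the global and class-wise activation counters, the two retention-probability vectors can be computed as:

```python
import numpy as np

def heterogeneous_retention(global_counts, pi_h):
    """Eq. 1: a unit's retention probability decays with how often it
    has been activated for previously seen classes."""
    norm = global_counts / max(global_counts.max(), 1)
    return np.exp(-pi_h * norm)

def semantic_retention(class_counts, pi_s):
    """Eq. 2: per-class retention probability grows with how often a
    unit has fired for that class (rows: classes, columns: units)."""
    row_max = np.maximum(class_counts.max(axis=1, keepdims=True), 1)
    return 1.0 - np.exp(-pi_s * class_counts / row_max)
```

Note that a unit that never fired gets retention 1 under Eq. 1 (it stays available for new classes), while under Eq. 2 a unit that never fired for a class gets retention 0 for that class.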
3.3 Multiple Memory Systems

Inspired by the interaction of multiple memory systems in the brain, in addition to a fixed-size instance-based episodic memory, our method builds a long-term memory that aggregates the information learned in the working memory.
Episodic Memory: Information consolidation in the brain is facilitated by replaying the neural activation patterns that accompanied the learning event. To mimic this mechanism, we employ a fixed-size episodic memory buffer, which can be thought of as a very primitive hippocampus. The memory buffer is maintained with Reservoir Sampling (Vitter 1985), which aims to match the distribution of the data stream by assigning an equal probability to each incoming sample.
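Reservoir sampling can be sketched as follows (our illustration of Vitter's algorithm, not the authors' code): once the buffer is full, each incoming sample replaces a random slot with probability size / n_seen, which keeps every stream sample equally likely to be in the buffer:

```python
import random

class ReservoirBuffer:
    """Fixed-size episodic memory maintained with reservoir sampling
    (Vitter 1985): after n samples, each one has probability size/n of
    being in the buffer, matching the stream distribution."""

    def __init__(self, size):
        self.size = size
        self.data = []
        self.n_seen = 0

    def add(self, item):
        self.n_seen += 1
        if len(self.data) < self.size:
            # Fill the buffer until it reaches capacity.
            self.data.append(item)
        else:
            # Replace a random slot with probability size / n_seen.
            j = random.randrange(self.n_seen)
            if j < self.size:
                self.data[j] = item
```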
Long-Term Memory: We aim to build a long-term semantic memory that can consolidate and accumulate the structural knowledge learned in the working memory throughout the training trajectory. The knowledge acquired in DNNs resides in the learned synaptic weights (Krishnan et al. 2019). Hence, progressively aggregating the weights of the working memory (θw) as it sequentially learns tasks allows us to consolidate the information efficiently. To this end, we build the long-term memory (θs) by taking an exponential moving average of the working memory weights in a stochastic manner (which is more biologically plausible (Arani et al. 2021)), similar to (Arani et al. 2022):

θs ← α θs + (1 − α) θw,    if r > a ∼ U(0, 1)    (3)

where α is the decay parameter and r is the update rate. Long-term memory builds structural representations for generalization and mimics the slow acquisition of structured knowledge in the neocortex, which can generalize well across tasks. The long-term memory then interacts with the instance-level episodic memory to retrieve structural relational knowledge (Sarfraz et al. 2021) for the previous tasks, encoded in the output logits. The consolidated logits are then utilized to enforce consistency in the functional space of the working model. This facilitates the consolidation of information by encouraging the acquisition of new knowledge while maintaining the functional relation of previous knowledge and aligning the decision boundary of the working memory with the long-term memory.
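The stochastic EMA update of Eq. 3 can be sketched as below (our illustration; the default α and update rate are placeholders, and the weights are represented as a plain dict):

```python
import random

def update_long_term_memory(theta_s, theta_w, alpha=0.999, update_rate=0.9):
    """Eq. 3: with probability `update_rate`, take an exponential moving
    average of the working-memory weights into the long-term memory."""
    if random.random() < update_rate:  # r > a ~ U(0, 1)
        for name in theta_s:
            theta_s[name] = alpha * theta_s[name] + (1.0 - alpha) * theta_w[name]
    return theta_s
```

A large α makes the long-term memory change slowly, mirroring the slow acquisition of structured knowledge described above.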
3.4 Overall Formulation

Given a continuous data stream D containing a sequence of tasks (D1, D2, .., DT), the CL task is to learn the joint distribution of all the observed tasks without the availability of task labels at test time. Our proposed method, SCoMMER, involves training a working memory θw, and maintains an additional long-term memory θs and an episodic memory M. The long-term memory is initialized with the same parameters as the working memory and has the same sparsity constraints, so that it can aggregate the weights of the working memory. We initialize the heterogeneous dropout probabilities πh randomly to set the probability of retention of a fraction of units to 1 and of the others to 0, so that the first task is learned using a few, but sufficient, units and the remaining units can be utilized to learn subsequent tasks.
Table 1: Comparison on different CL settings. The baseline results for S-CIFAR100 and GCIL are from (Arani et al. 2022).

Buffer | Method   | S-CIFAR10 Class-IL | S-CIFAR10 Task-IL | S-CIFAR100 Class-IL | S-CIFAR100 Task-IL | GCIL Unif  | GCIL Longtail
–      | JOINT    | 92.20±0.15         | 98.31±0.12        | 70.62±0.64          | 86.19±0.43         | 58.36±1.02 | 56.94±1.56
–      | SGD      | 19.62±0.05         | 61.02±3.33        | 17.58±0.04          | 40.46±0.99         | 12.67±0.24 | 22.88±0.53
200    | ER       | 44.79±1.86         | 91.19±0.94        | 21.40±0.22          | 61.36±0.39         | 16.40±0.37 | 19.27±0.77
200    | DER++    | 64.88±1.17         | 91.92±0.60        | 29.60±1.14          | 62.49±0.78         | 18.84±0.60 | 26.94±1.27
200    | CLS-ER   | 66.19±0.75         | 93.90±0.60        | 35.23±0.86          | 67.34±0.79         | 25.06±0.81 | 28.54±0.87
200    | SCoMMER  | 69.19±0.61         | 93.20±0.10        | 40.25±0.05          | 69.39±0.43         | 30.84±0.80 | 29.08±0.31
500    | ER       | 57.74±0.27         | 93.61±0.27        | 28.02±0.31          | 68.23±0.16         | 28.21±0.69 | 20.30±0.63
500    | DER++    | 72.70±1.36         | 93.88±0.50        | 41.40±0.96          | 70.61±0.11         | 32.92±0.74 | 25.82±0.83
500    | CLS-ER   | 75.22±0.71         | 94.94±0.53        | 47.63±0.61          | 73.78±0.86         | 36.34±0.59 | 28.63±0.68
500    | SCoMMER  | 74.97±1.05         | 94.36±0.06        | 49.63±1.43          | 75.49±0.43         | 36.87±0.36 | 35.20±0.21
[Figure 2 shows two task-accuracy heatmaps (tasks T1–T5, evaluated after each sequential task), one for the working memory and one for the long-term memory.]
Figure 2: Task-wise performance of working memory and the long-term memory. The long-term memory effectively aggregates knowledge encoded in the working memory and generalizes well across the tasks.
During each training step, we interleave the batch of samples from the current task, xt ∼ Dt, with a random batch of exemplars from the episodic memory, xm ∼ M. The working memory is trained with a combination of a cross-entropy loss on the interleaved batch x ← (xt, xm), and a knowledge retrieval loss on the exemplars. Thus, the overall loss is given by:

L = Lce(f(x; θw), y) + γ Lkr(f(xm; θw), f(xm; θs))    (4)

where γ controls the strength of the consistency enforcement, and the mean-squared error loss is used for Lkr. The training step is followed by stochastically updating the long-term memory (Eq. 3). The semantic dropout and heterogeneous dropout probabilities are updated at the end of each epoch and task, respectively (using Eqs. 2 and 1). We use the long-term memory for inference, as it aggregates knowledge and generalizes well across tasks (cf. Figure 2). Algorithm 1 provides further training details.
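The overall objective of Eq. 4 combines cross-entropy on the interleaved batch with a mean-squared consistency term on the buffer logits. A NumPy sketch (ours, for illustration; γ = 0.15 is an arbitrary placeholder, not a value from the paper):

```python
import numpy as np

def scommer_loss(logits, labels, mem_logits_w, mem_logits_s, gamma=0.15):
    """Eq. 4: cross-entropy on the interleaved batch plus a consistency
    (knowledge-retrieval) term pulling the working memory's logits on
    buffer samples toward the long-term memory's logits."""
    # Numerically stable log-softmax for the cross-entropy term.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # Mean-squared error between working- and long-term-memory logits.
    kr = ((mem_logits_w - mem_logits_s) ** 2).mean()
    return ce + gamma * kr
```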
4 Evaluation Protocol

To gauge the effectiveness of SCoMMER in tackling the different challenges faced by a lifelong learning agent, we consider multiple CL settings that test different aspects of the model.

Class-IL presents a challenging CL scenario where each task presents a new set of disjoint classes, and the model must learn to distinguish between all the classes seen so far without the availability of task labels at test time. It requires the model to effectively consolidate information across tasks and learn generalizable features that can be reused to acquire new knowledge. Generalized Class-IL (GCIL) (Mi et al. 2020) extends the Class-IL setting to more realistic scenarios where the agent has to learn an object over multiple recurrences spread across tasks and tackle the challenges of class imbalance and a varying number of classes in each task. GCIL utilizes probabilistic modeling to sample the number of classes, the appearing classes, and their sample sizes. Details of the datasets used in each setting are provided in the Appendix. Though our method does not utilize separate classification heads or subnets, for completeness, we also evaluate performance under the Task-IL setting, where the model has access to the task labels at inference. In this setting, we use the task label to select the subset of output logits to predict from.
5 Empirical Evaluation

We compare SCoMMER with state-of-the-art rehearsal-based methods across different CL settings under uniform experimental settings (details provided in the Appendix). SGD provides the lower bound with standard training on sequential tasks, and JOINT gives the upper bound on performance when the model is trained on the joint distribution.

Table 1 shows that SCoMMER provides performance gains in the majority of the cases and demonstrates the effectiveness of our approach under varying challenging CL settings. In particular, it provides considerable improvement under low buffer size settings, which suggests that our method is able to mitigate forgetting with fewer samples from previous tasks. The performance gains over CLS-ER, which employs two semantic memories, show that sparse coding in our method enables the effective utilization of a single semantic memory. In particular, the gains in the GCIL setting, where the agent has to face the challenges of class imbalance and learn over multiple occurrences of objects, allude to several advantages of our method. Our proposed semantic dropout in conjunction with sparse activations enables the model to reuse the sparse code associated with the
[Figure 3 shows task-accuracy heatmaps (tasks T1–T5, evaluated after each sequential task) for ER, DER++, CLS-ER, and SCoMMER.]
Figure 3: Task-wise performance of different methods. The heatmaps show the accuracy on the test set of each task (x-axis) evaluated at the end of each sequential learning task (y-axis). SCoMMER retains the performance of earlier tasks better without compromising on the current task.
Table 2: Ablation study: effect of systematically removing different components of SCoMMER on the performance of the model on S-CIFAR10. All components contribute to the performance gain.

Sparse Activations | Long-Term Memory | Semantic Dropout | Accuracy
✓ | ✓ | ✓ | 69.19±0.61
✓ | ✓ | ✗ | 67.38±1.51
✓ | ✗ | ✗ | 61.88±2.43
✗ | ✓ | ✗ | 49.44±5.43
✗ | ✗ | ✗ | 44.79±1.86
recurring object and learn better representations with the additional samples by adapting the corresponding subset of filters. Furthermore, compared to the dense activations in CLS-ER, the sparse coding in SCoMMER leads to the emergence of subnetworks that provide modularity and protection to other parts of the network, since the entire network is not updated for each input image. This increases the robustness of the model to class imbalance.

Overall, our method provides an effective approach to employ sparse coding in DNNs and enables better utilization of long-term memory, which can effectively consolidate information across tasks and further mitigate forgetting.
6 Ablation Study

To gain further insight into the contribution of each component of our method, we systematically remove them and evaluate the performance of the model in Table 2. The results show that all components of SCoMMER contribute to the performance gains. The drop in performance from removing semantic dropout suggests that it is effective in enforcing sparse coding on the representations of the model, which reduces the interference between tasks and allows semantically similar classes to share information. We also observe the benefits of multiple memory systems in CL. The additional long-term memory provides a considerable performance improvement, suggesting that the EMA of the learned synaptic weights can effectively consolidate knowledge across tasks. Furthermore, we observe that sparsity is a critical component for enabling CL in DNNs. Sparse activations alone significantly improve ER performance and also enable efficient utilization of the semantic memory. We highlight that these individual components complement each other and that their combined effect leads to the observed performance improvement in our method.
7 Characteristics Analysis

We look at different characteristics of the model to understand what enables the performance gains in our method. We analyze the models trained on S-CIFAR100 with a buffer size of 200.
7.1 Stability-Plasticity Dilemma

To better understand how well different methods maintain a balance between stability and plasticity, we look at how task-wise performance evolves as the model learns tasks sequentially. The diagonal of the heatmap shows the plasticity of the model as it learns the new task, whereas the difference between the accuracy of a task when it was first learned and at the end of training indicates the stability of the model. Figure 3 shows that SCoMMER is able to maintain a better balance and provides a more uniform performance across tasks compared to the baselines. While CLS-ER provides better stability than DER++, it comes at the cost of the model's performance on the last task, which could be due to the lower update rate of the stable model. SCoMMER, on the other hand, retains performance on the earlier tasks (T1 and T2) and provides good performance on the recent task. We also compare the performance of the long-term semantic and working memory in Figure 2. The long-term memory effectively aggregates the knowledge learned in the synaptic weights of the working memory and generalizes well across tasks.
7.2 Emergence of Subnetworks

To evaluate the effectiveness of activation sparsity and semantic dropout in enforcing sparse coding in the model, we look at the average activity of the units in the penultimate layer. The emerging sparse code for each class is tracked during training using the class-wise activity counter and enforced using the semantic dropout probabilities (Equation 2).

Figure 4: Class-wise activation counts of the filters in the penultimate layer of the model trained on S-CIFAR10 with 200 buffer size. Comparison of the activation counts on the test set with the learned class-wise probabilities, Ps, during training shows the effectiveness of semantic dropout in enforcing sparse coding. The right plot shows the cosine similarities between the activation counts of different classes. Semantically similar classes have a higher correlation in activations. Darker colors show higher values.
Given a test sample from class c, ideally, we would want the model to use the subset of neurons that had higher activity for the training samples from class c, without providing any task information. Concretely, we track the class-wise activity on the test set and plot the normalized activation counts for a set of neurons next to their class-wise probabilities at the end of training. Figure 4 shows a high correlation between the test set activation counts and the semantic dropout probabilities at the end of training, particularly for recent classes. The activation counts also hint at the natural emergence of semantically conditioned subnetworks, as the model utilizes a different set of units for different classes. Furthermore, we observe that semantically similar classes have a higher degree of correlation between their activation patterns. For instance, cat and dog share the most active neurons; a similar pattern is observed between horse and deer, and between car and truck. The cosine similarities between the activation counts of the different classes further support this observation. This is even more remarkable given that these classes are observed in different tasks, particularly cars and trucks, which are observed in the first and last tasks.

7.3 Task Recency Bias

A major challenge in CL is the recency bias, whereby the update of the model on new task samples biases its predictions toward the current task (Wu et al. 2019). This leads to considerable forgetting of earlier tasks. To compare the degree to which SCoMMER tackles this issue, we evaluate the probability of predicting each task by aggregating the softmax outputs of samples from the test sets of all seen tasks and averaging the probabilities of the classes in each task. Figure 5 shows that SCoMMER provides more uniform probabilities of predicting each task. CLS-ER is able to mitigate the bias towards the last task, which can be attributed to the aggregation of knowledge in the semantic memories; however, CLS-ER reduces the probability of predicting the last task, which explains its low performance there. SCoMMER effectively mitigates recency bias and provides uniform prediction probabilities across tasks without any explicit regularization.

[Figure 5 shows a bar chart of the average probability of predicting each task (Task 1–Task 5) for ER, DER++, CLS-ER, and SCoMMER.]
Figure 5: Average probabilities of predicting classes from each task at the end of training. SCoMMER provides more uniform probabilities across the tasks.
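The recency-bias probe described above can be sketched as follows (our illustration, assuming an equal number of classes per task): sum the softmax mass over each task's classes and average over the test samples:

```python
import numpy as np

def task_prediction_probabilities(softmax_outputs, classes_per_task):
    """Average probability mass assigned to each task's classes over a
    batch of test samples; a uniform result indicates low recency bias."""
    n_samples, n_classes = softmax_outputs.shape
    n_tasks = n_classes // classes_per_task
    # Sum the softmax mass over the classes belonging to each task.
    per_task = softmax_outputs.reshape(n_samples, n_tasks, classes_per_task).sum(axis=2)
    # Average over all test samples.
    return per_task.mean(axis=0)
```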
8 Conclusion

Motivated by the mechanisms for information representation and the utilization of multiple memory systems in the brain, we proposed a novel approach to employ sparse coding in multiple memory systems. SCoMMER enforces activation sparsity along with a complementary semantic dropout mechanism, which encourages the model to activate similar units for semantically similar inputs and to reduce the overlap with dissimilar inputs. Additionally, it maintains a long-term memory, which consolidates the knowledge learned in the working memory. Our empirical evaluation shows the effectiveness of the approach in mitigating forgetting in challenging CL scenarios. Furthermore, sparse coding enables efficient consolidation of knowledge in the long-term memory, reduces the bias towards recent tasks, and leads to the emergence of semantically conditioned subnetworks. We hope that our study inspires further research in this promising direction.
References

Abbasi, A.; Nooralinejad, P.; Braverman, V.; Pirsiavash, H.; and Kolouri, S. 2022. Sparsity and Heterogeneous Dropout for Continual Learning in the Null Space of Neural Activations. arXiv preprint arXiv:2203.06514.
Ahmad, S.; and Scheinkman, L. 2019. How can we be so dense? The benefits of using highly sparse representations. arXiv preprint arXiv:1903.11257.
Aljundi, R.; Belilovsky, E.; Tuytelaars, T.; Charlin, L.; Caccia, M.; Lin, M.; and Page-Caccia, L. 2019a. Online continual learning with maximal interfered retrieval. In Advances in Neural Information Processing Systems, 11849–11860.
Aljundi, R.; Lin, M.; Goujaud, B.; and Bengio, Y. 2019b. Gradient based sample selection for online continual learning. In Advances in Neural Information Processing Systems, 11816–11825.
Arani, E.; Sarfraz, F.; and Zonooz, B. 2021. Noise as a resource for learning in knowledge distillation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 3129–3138.
Arani, E.; Sarfraz, F.; and Zonooz, B. 2022. Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System. In International Conference on Learning Representations.
Atkinson, R. C.; and Shiffrin, R. M. 1968. Human memory: A proposed system and its control processes. In Psychology of learning and motivation, volume 2, 89–195. Elsevier.
Barth, A. L.; and Poulet, J. F. 2012. Experimental evidence for sparse firing in the neocortex. Trends in neurosciences, 35(6): 345–355.
Bhat, P.; Zonooz, B.; and Arani, E. 2022. Consistency is the key to further mitigating catastrophic forgetting in continual learning. arXiv preprint arXiv:2207.04998.
Buzzega, P.; Boschini, M.; Porrello, A.; Abati, D.; and Calderara, S. 2020. Dark Experience for General Continual Learning: a Strong, Simple Baseline. arXiv preprint arXiv:2004.07211.
Ebrahimi, S.; Petryk, S.; Gokul, A.; Gan, W.; Gonzalez, J. E.; Rohrbach, M.; et al. 2020. Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting. In International Conference on Learning Representations.
Farajtabar, M.; Azizan, N.; Mott, A.; and Li, A. 2020. Orthogonal gradient descent for continual learning. In International Conference on Artificial Intelligence and Statistics, 3762–3773. PMLR.
Farquhar, S.; and Gal, Y. 2018. Towards robust evaluations of continual learning. arXiv preprint arXiv:1805.09733.
Foldiak, P. 2003. Sparse coding in the primate cortex. The handbook of brain theory and neural networks.
Foldiak, P.; and Endres, D. 2008. Sparse coding.
Hadsell, R.; Rao, D.; Rusu, A. A.; and Pascanu, R. 2020. Embracing change: Continual learning in deep neural networks. Trends in cognitive sciences, 24(12): 1028–1040.
Hassabis, D.; Kumaran, D.; Summerfield, C.; and Botvinick, M. 2017. Neuroscience-inspired artificial intelligence. Neuron, 95(2): 245–258.
Isele, D.; and Cosgun, A. 2018. Selective experience replay for lifelong learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
Iyer, A.; Grewal, K.; Velu, A.; Souza, L. O.; Forest, J.; and Ahmad, S. 2021. Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments. arXiv preprint arXiv:2201.00042.
Kelkar, A.; and Medaglia, J. 2018. Evidence of brain modularity. Encyclopedia of Evolutionary Psychological Science. Springer, Cham. https://doi.org/10.1007/978-3-319-16999-6_2422-1.
Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A. A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13): 3521–3526.
Krishnan, G. P.; Tadros, T.; Ramyaa, R.; and Bazhenov, M. 2019. Biologically inspired sleep algorithm for artificial neural networks. arXiv preprint arXiv:1908.02240.
Lehky, S. R.; Tanaka, K.; and Sereno, A. B. 2021. Pseudosparse neural coding in the visual system of primates. Communications biology, 4(1): 1–12.
Li, Z.; and Hoiem, D. 2017. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12): 2935–2947.
Lopez-Paz, D.; and Ranzato, M. 2017. Gradient episodic memory for continual learning. In Advances in neural information processing systems, 6467–6476.
Maass, W. 2000. On the computational power of winner-take-all. Neural computation, 12(11): 2519–2535.
McClelland, J. L.; McNaughton, B. L.; and O'Reilly, R. C. 1995. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological review, 102(3): 419.
McCloskey, M.; and Cohen, N. J. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation, volume 24, 109–165. Elsevier.
Mi, F.; Kong, L.; Lin, T.; Yu, K.; and Faltings, B. 2020. Generalized Class Incremental Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 240–241.
Mirzadeh, S. I.; Farajtabar, M.; Pascanu, R.; and Ghasemzadeh, H. 2020. Understanding the role of training regimes in continual learning. Advances in Neural Information Processing Systems, 33: 7308–7320.
Parisi, G. I.; Kemker, R.; Part, J. L.; Kanan, C.; and Wermter, S. 2019. Continual lifelong learning with neural networks: A review. Neural Networks, 113: 54–71.
Pham, Q.; Liu, C.; and Hoi, S. 2021. Dualnet: Continual learning, fast and slow. Advances in Neural Information Processing Systems, 34: 16131–16144.
Rannen, A.; Aljundi, R.; Blaschko, M. B.; and Tuytelaars, T. 2017. Encoder based lifelong learning. In Proceedings of the IEEE International Conference on Computer Vision, 1320–1328.
Riemer, M.; Cases, I.; Ajemian, R.; Liu, M.; Rish, I.; Tu, Y.; and Tesauro, G. 2018. Learning to learn without forgetting by maximizing transfer and minimizing interference. arXiv preprint arXiv:1810.11910.
Ritter, H.; Botev, A.; and Barber, D. 2018. Online structured laplace approximations for overcoming catastrophic forgetting. In Advances in Neural Information Processing Systems, 3738–3748.
Sarfraz, F.; Arani, E.; and Zonooz, B. 2021. Knowledge distillation beyond model compression. In 2020 25th International Conference on Pattern Recognition (ICPR), 6136–6143. IEEE.
van de Ven, G. M.; and Tolias, A. S. 2019. Three scenarios for continual learning. arXiv preprint arXiv:1904.07734.
Vitter, J. S. 1985. Random sampling with a reservoir. ACM Transactions on Mathematical Software (TOMS), 11(1): 37–57.
Wang, Z.; Zhang, Z.; Ebrahimi, S.; Sun, R.; Zhang, H.; Lee, C.-Y.; Ren, X.; Su, G.; Perot, V.; Dy, J.; et al. 2022a. DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning. arXiv preprint arXiv:2204.04799.
Wang, Z.; Zhang, Z.; Lee, C.-Y.; Zhang, H.; Sun, R.; Ren, X.; Su, G.; Perot, V.; Dy, J.; and Pfister, T. 2022b. Learning to prompt for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 139–149.
Wu, Y.; Chen, Y.; Wang, L.; Ye, Y.; Liu, Z.; Guo, Y.; and Fu, Y. 2019. Large scale incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 374–382.
Xiao, C.; Zhong, P.; and Zheng, C. 2019. Enhancing adversarial defense by k-winners-take-all. arXiv preprint arXiv:1905.10510.
Zenke, F.; Poole, B.; and Ganguli, S. 2017. Continual learning through synaptic intelligence. Proceedings of machine
1077
+ learning research, 70: 3987.
A Appendix

B Experimental Setting
For a fair comparison with different CL methods in uniform experimental settings, we extended the Mammoth framework (Buzzega et al. 2020). To disentangle the performance improvement of the algorithm from the training regime (Mirzadeh et al. 2020), we use the same network (ResNet-18), optimizer (SGD), batch size for task data and memory buffer (32), data augmentations (random crop and random horizontal flip), and number of epochs (50) for all our experiments.

For hyperparameter tuning, we use a small held-out validation set and perform a grid search on the activation sparsity, γ, the dropout strengths πh and πs, and the update frequency for long-term memory, r. Table S1 provides the selected hyperparameters for each setting. Note that our method does not require an extensive hyperparameter search for different buffer sizes, and the sensitivity-to-hyperparameters section shows that the different parameters are complementary in nature and the model performs well for a number of different combinations. Therefore, the majority of parameters can be fixed, which significantly reduces the hyperparameter search space. We report the average and one standard deviation over three different seeds.
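The grid search mentioned above can be sketched as follows. This is a generic illustration, not the Mammoth framework's tuning code; the candidate values and the scoring function are placeholders of our own choosing.

```python
from itertools import product

def grid_search(evaluate, grid):
    """Exhaustively try every combination of candidate hyperparameter
    values and return the best-scoring one."""
    best_score, best_params = float("-inf"), None
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)  # e.g. accuracy on the held-out validation set
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

Because most parameters can be fixed, `grid` would in practice contain only the few parameters being fine-tuned, keeping the number of combinations small.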
C Continual Learning Datasets
We consider the Class-IL and Generalized Class-IL settings for our empirical evaluation to extensively assess the versatility of our approach. Here, we provide details of the datasets used in each of the settings.

C.1 Class Incremental Learning (Class-IL)
Class-IL (van de Ven and Tolias 2019) requires the agent to learn a new disjoint set of classes with each task, and the agent has to distinguish between all the classes seen so far without the availability of task labels at test time. We consider the split variants of the benchmark datasets, S-CIFAR10 and S-CIFAR100, where the classes are split into 5 tasks with 2 and 20 classes each, respectively. The order of the classes in the experiments remains fixed, whereby for CIFAR10 the first task includes the first two classes, and so forth.

C.2 Generalized Class Incremental Learning (GCIL)
GCIL (Mi et al. 2020) extends the Class-IL setting to more realistic scenarios. In addition to avoiding forgetting, the model has to tackle the challenges of class imbalance and of learning an object over multiple recurrences. GCIL utilizes probabilistic modeling to sample three characteristics of a task: the number of classes, the classes that appear, and their sample sizes. Similarly to (Arani et al. 2022), we consider GCIL on the CIFAR100 dataset with 20 tasks, each with 1000 samples, and the maximum number of classes in a single task set to 50. To disentangle the effect of class imbalance from the ability of the model to learn from recurring classes under non-uniform task lengths, we evaluate the model on uniform (Unif) and longtail data distributions. We set the GCIL dataset seed to 1993 for all the experiments.
D Implementation Details
Here, we provide more details on the implementation of the k-WTA activation for CNNs and the proposed semantic dropout mechanism.

E k-WTA for Convolutional Neural Networks
The common implementation of k-WTA in convolutional neural networks involves flattening the activation map into a long CHW × 1 vector and applying the k-WTA activation in a way similar to that of a fully connected network (Xiao et al. 2019; Ahmad and Scheinkman 2019). This translates to setting some spatial dimensions of a filter to zero while propagating others. However, this implementation does not take into account the functional integrity of an individual convolution filter as an independent feature extractor and does not enable the formation of task-specific subnetworks with specialized feature extractors: different tasks cannot utilize a different subset of filters, and we cannot track the activity of an individual filter.

Our implementation, on the other hand, assigns an activation score to each filter in the layer by taking the absolute sum of the corresponding activation map. Given the activation map of layer l, A^l (C × W × H), where C is the number of filters and W and H are the width and height, we flatten the spatial dimensions to C × WH, and the activation score for each filter j is given by the absolute sum of its activations, [Cscore]_j = Σ_{i=1}^{HW} |[A^l]_{j,i}|. We then find the value k for the layer using the activation sparsity (the percentage of active filters in the layer), k ← %k × N^l_filters, where N^l_filters is the number of filters in layer l. The kth highest value of the filter activation score vector Cscore ∈ R^{C×1} gives the threshold used to mask the input activation map, which only propagates the activations of filters with a score above the threshold by setting the others to zero. Finally, the ReLU activation function is applied to the masked activations. Algorithm 2 provides more details.

For the ResNet-18 network in our method, we set the activation sparsity for each ResNet block; for example, %k = [0.9, 0.9, 0.9, 0.8] enforces an activation sparsity of 0.9 in the first three ResNet blocks, that is, 90% of the filters in each convolutional layer are active for a given stimulus, and 80% in the convolutional layers of the last ResNet block.
Table S1: Selected parameters for SCoMMER. For each of our experiments, we apply Heterogeneous and Semantic dropout only on the output of the last residual block in ResNet-18, the decay parameter for long-term memory is set to 0.999, a batch size of 32 is used for both the current task and the memory buffer, and the models are trained for 50 epochs. For the first three ResNet blocks, we use an activation sparsity of 0.9 and vary the sparsity ratio for the last block.

Dataset          Buffer size   Activation Sparsity   η      πh    πs    γ      r
S-CIFAR10        200           0.8                   0.1    0.5   2.0   0.15   0.5
                 500           0.8                   0.1    0.5   2.0   0.15   0.7
S-CIFAR100       200           0.9                   0.1    0.5   3.0   0.15   0.1
                 500           0.9                   0.1    0.5   3.0   0.15   0.1
GCIL - Unif      200           0.9                   0.05   0.5   3.0   0.2    0.6
                 500           0.9                   0.05   0.5   3.0   0.2    0.6
GCIL - Longtail  200           0.9                   0.05   0.5   2.0   0.2    0.5
                 500           0.9                   0.05   0.5   3.0   0.2    0.6
Algorithm 2: Global k-WTA for CNNs
Input: Activation map A; activation ratio %k; number of filters Nfilters
Evaluate activation scores:
1: Flatten the spatial dimensions:
2: Aflat ← Reshape(A, C × HW)
3: Assign a score to each filter:
4: Cscore ← abs_sum(Aflat, dim=1)
Calculate the threshold:
5: Get the k value for the layer:
6: k ← %k × Nfilters
7: Take the kth largest value as the threshold:
8: Cthresh ← kth_value(Cscore, k)
Mask the activation map:
9: Initialize the mask with zeros:
10: M ← Zeros(C × H × W)
11: Set the mask of filters with a score at or above the threshold to 1:
12: M[Cscore ≥ Cthresh] = 1
13: Apply the mask:
14: A ← M · A
Apply the ReLU activation function:
15: A ← ReLU(A)
return A
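The steps of Algorithm 2 can be sketched in NumPy as follows. This is a minimal illustration of the filter-wise global k-WTA described above, not the authors' code; the function and variable names are ours.

```python
import numpy as np

def global_kwta(a, k_ratio):
    """Filter-wise k-WTA on an activation map a of shape (C, H, W):
    keep only the k most active filters, zero the rest, then apply ReLU."""
    c = a.shape[0]
    # One score per filter: absolute sum over its spatial activations.
    scores = np.abs(a).reshape(c, -1).sum(axis=1)
    # Number of filters to keep for this layer.
    k = max(1, int(k_ratio * c))
    # k-th largest score acts as the threshold (ties may keep a few extra filters).
    thresh = np.sort(scores)[-k]
    mask = (scores >= thresh).astype(a.dtype)
    # Zero out losing filters, then ReLU on the masked map.
    return np.maximum(a * mask[:, None, None], 0.0)
```

Note that, unlike flattening the whole CHW vector, an entire filter is either propagated or suppressed, so each filter remains an independent feature extractor whose activity can be tracked.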
E.1 Semantic Dropout
At the beginning of training, we initialize the heterogeneous dropout probabilities Ph so that for each layer l the probability of (1.1 × %k_l × N^l_filters) filters is set to 1 and that of the remaining ones to 0. This is done to ensure that the learning of the first task does not utilize all filters, while retaining the flexibility to learn a different subset of units for the classes in the first task. The semantic dropout probabilities Ps are updated at the end of each epoch, once the epoch number e for the task is higher than the heterogeneous dropout warm-up period Eh, to allow the emergence of class-wise activity patterns before they are explicitly enforced with semantic dropout. Note that, to ensure that we have enough active filters before applying the k-WTA activation, when applying heterogeneous dropout we use the probabilities Ph to sample the (1.1 × %k_l × N^l_filters) filters for the given layer before applying the k-WTA activation. The 1.1 factor is chosen arbitrarily and works well in practice; however, a different value can be selected. Further details of the method are provided in Algorithm 3.

Importantly, we disable the dropout activation counter update for the buffer samples so that the sparse code is learned during task training. Also, dropout is applied only to the working memory, as it is learned with gradient descent, whereas the long-term memory aggregates the weights of the working memory. Our analysis shows that the learned sparse coding is effectively transferred to long-term memory through the EMA.

For the ResNet-18 model used in our experiments, we apply dropout at the output of each ResNet block. Although our method provides the flexibility to apply different dropout strengths for each block, we observe empirically that it works better if applied only at the output of the last ResNet block. This allows the model to learn features in the earlier layers that generalize well across tasks, and to learn specialized features for the classes in the later layers.
F Performance of working memory
To gain a better understanding of the performance of the different memories, Table S2 provides the performance of both the working memory and the long-term memory in the settings considered. Long-term memory consistently provides better generalization across tasks, especially in the Class-IL setting. This shows the benefits of using multiple memory systems in CL. Furthermore, it demonstrates the effectiveness of the exponential moving average of the working memory weights as an efficient approach for aggregating the learned knowledge.
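The aggregation mentioned above is an exponential moving average of the working-memory weights (with decay 0.999 in our experiments, and applied stochastically at a frequency governed by r in the full method). A minimal sketch of the EMA step, with names of our own choosing, might look like this:

```python
def ema_update(long_term, working, decay=0.999):
    """Aggregate working-memory weights into long-term memory:
    long_term <- decay * long_term + (1 - decay) * working."""
    for name, w in working.items():
        long_term[name] = decay * long_term[name] + (1.0 - decay) * w
    return long_term
```

With a decay this close to 1, the long-term memory changes slowly, which is what lets it accumulate knowledge across tasks while the working memory adapts quickly to the current task.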
Algorithm 3: Semantic Dropout
Input: Activation map A; class labels y; activation ratio %k; number of filters Nfilters; dropout probabilities Ph and Ps
Get the heterogeneous dropout mask:
1: Initialize the heterogeneous dropout mask with zeros
2: Hmask ← Zeros(C × H × W)
3: Calculate the sampling probabilities so that they sum to one
   Psample = Ph / sum(Ph)
4: Get the indices of the retained filters
   Nretain = 1.1 × %k × Nfilters
   idx = Sample(range=Nfilters, #samples=Nretain, prob=Psample, replace=False)
5: Set the mask at the retained indices to 1
   Hmask[idx] = 1
Get the semantic dropout mask:
6: Initialize the semantic dropout mask with zeros
7: Smask ← Zeros(C × H × W)
8: Use the semantic dropout probabilities to select units
   retain = N ∼ U(0, 1) ≤ Ps
9: Set the mask at the retained indices to 1
   Smask[retain] = 1
Select the mask for each input sample:
10: For each sample, select the semantic dropout mask if available for the class label; otherwise use the heterogeneous dropout mask:
11: M = Smask if Ps[y] > 0, otherwise Hmask
Mask the activation map:
12: A ← M · A
return A
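The mask selection of Algorithm 3 can be sketched as follows for a single sample, using per-filter masks only. Names, shapes, and the module-level random generator are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def semantic_dropout_mask(y, p_h, p_s, k_ratio):
    """Per-filter keep mask for one sample of class y.
    p_h: (C,) heterogeneous dropout probabilities.
    p_s: (num_classes, C) semantic dropout probabilities."""
    n_filters = p_h.shape[0]
    if p_s[y].sum() > 0:
        # Class seen before: sample the semantic mask from Ps.
        return (rng.uniform(size=n_filters) <= p_s[y]).astype(float)
    # Otherwise, heterogeneous dropout: retain 1.1 * %k * Nfilters filters,
    # sampled without replacement according to normalized Ph.
    n_retain = int(1.1 * k_ratio * n_filters)
    idx = rng.choice(n_filters, size=n_retain, replace=False, p=p_h / p_h.sum())
    mask = np.zeros(n_filters)
    mask[idx] = 1.0
    return mask
```

The resulting mask would then be broadcast over the spatial dimensions and multiplied into the activation map before the k-WTA step.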
G Sensitivity to Hyperparameters
SCoMMER employs sparse coding in a multiple-memory replay mechanism. Therefore, two sets of parameters need to be set: those for sparse coding (activation sparsity %k and dropout strengths πs and πh) and those for the aggregation of information in long-term memory (r, α). We show the effect of different sets of hyperparameters in Table S3. We can see that the different components are complementary in nature, and therefore different combinations of parameters can provide similar performance. Interestingly, we observe that increasing the semantic dropout strength considerably increases the performance of the working model, but the long-term memory performance remains quite stable. The method is not highly sensitive to a particular set of parameters, and often we can fix the majority of parameters and fine-tune only a few, which significantly reduces the search space.
Table S2: Performance of working memory and long-term memory in different settings. Long-term memory consistently provides better performance.

Buffer  Memory      S-CIFAR10                 S-CIFAR100                GCIL
                    Class-IL     Task-IL      Class-IL     Task-IL      Unif         Longtail
200     Working     58.03±5.17   92.58±0.56   30.07±0.71   67.18±0.16   27.64±0.30   27.06±0.97
        Long-Term   69.19±0.61   93.20±0.10   40.25±0.05   69.39±0.43   30.84±0.80   29.08±0.31
500     Working     66.10±3.60   93.59±0.09   41.36±1.07   73.52±0.37   34.34±0.88   33.39±0.74
        Long-Term   74.97±1.05   94.36±0.06   49.63±1.43   75.49±0.43   36.87±0.36   35.20±0.21
Table S3: Sensitivity to different hyperparameters. We provide the performance of the working memory and the long-term memory for models trained on S-CIFAR-10 with a buffer size of 200. For all experiments, γ = 0.15, lr = 0.1, the decay parameter = 0.999, πh = 0.5, and the model is trained for 50 epochs. For the first three ResNet blocks, we use an activation sparsity of 0.9 and vary the sparsity ratio for the last block (%k).

r     %k    πs    Working   Long-Term
0.4   0.7   1.0   56.65     69.58
            2.0   59.46     68.30
            3.0   59.89     68.93
      0.8   1.0   50.25     67.19
            2.0   58.01     69.89
            3.0   56.91     68.72
      0.9   1.0   51.26     67.49
            2.0   56.58     68.32
            3.0   56.87     66.89
0.5   0.7   1.0   57.01     66.80
            2.0   59.61     69.26
            3.0   60.51     69.00
      0.8   1.0   49.09     67.36
            2.0   58.03     69.19
            3.0   60.37     67.99
      0.9   1.0   49.38     66.27
            2.0   60.47     68.16
            3.0   57.64     67.88
0.6   0.7   1.0   56.91     67.85
            2.0   61.20     67.64
            3.0   62.44     67.94
      0.8   1.0   51.11     65.97
            2.0   58.61     66.55
            3.0   61.01     69.36
      0.9   1.0   49.26     66.93
            2.0   58.35     67.44
            3.0   60.18     67.90
+
0tE4T4oBgHgl3EQfZwwm/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
1NE0T4oBgHgl3EQf_gL8/content/2301.02829v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:78bf96f832f0c414ba82262627609ed0f27945fb12505f81e5b2da7aec4d7b59
size 220996
1NE0T4oBgHgl3EQf_gL8/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:697e439386adfc86b0b4e7c841c1ff1f1b9ebfcd96e63d683fd30d3d488f5552
size 589869
1NE0T4oBgHgl3EQf_gL8/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:70767726e0c4dda6c3190ca9129f1f93a83097c6027490d92efe817efc51bcd1
size 31000
1tAyT4oBgHgl3EQfPfba/content/2301.00027v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a431e23041c4cd361cb7bde77c4b704b18be2e2d76b685b3fb15cb2511bf4bfe
size 4626897
2tAyT4oBgHgl3EQfPvai/content/2301.00031v1.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d185fbd7603676339372325e62040d0ae190de4d38150ba0d3a86109b44d44c6
size 1342462
2tAyT4oBgHgl3EQfPvai/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:72bd70a17ad076e99c6ade273f5e9ab77bd7513723bfc1cc6629c1defcbdf60e
size 1638445
2tAyT4oBgHgl3EQfPvai/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:368d6b54c131989040e5459aafce028669291d6712e840da9cf0033833868b92
size 63194
3NAzT4oBgHgl3EQf9P6r/content/tmp_files/2301.01917v1.pdf.txt ADDED
@@ -0,0 +1,1398 @@
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. XX, NO. XX, JANUARY 2023

Small Moving Object Detection Algorithm Based on Motion Information
Ziwei Sun, Zexi Hua, and Hengcao Li, Fellow, IEEE
Abstract—A Small Moving Object Detection algorithm Based on Motion Information (SMOD-BMI) is proposed to detect small moving objects with a low Signal-to-Noise Ratio (SNR). First, to capture suspicious moving objects, a ConvLSTM-SCM-PAN model structure is designed, in which a Convolutional Long Short-Term Memory (ConvLSTM) network fuses temporal and spatial information, a Selective Concatenate Module (SCM) solves the problem of channel imbalance during feature fusion, and a Path Aggregation Network (PAN) locates the suspicious moving objects. Then, an object tracking algorithm is used to track the suspicious moving objects and calculate their Motion Range (MR). At the same time, according to the moving speed of the suspicious moving objects, the size of their MR is adjusted adaptively (to be specific, if the objects move slowly, we expand their MR according to their speed to preserve contextual environment information) to obtain their Adaptive Candidate Motion Range (ACMR), so as to ensure that the SNR of the moving object is improved while the necessary context information is retained adaptively. Finally, a LightWeight SCM U-Shape Net (LW-SCM-USN) based on the ACMR, with an SCM module, is designed to classify and locate small moving objects accurately and quickly. In this paper, moving birds in surveillance video are used as the experimental dataset to verify the performance of the algorithm. The experimental results show that the proposed small moving object detection method based on motion information can effectively reduce the missing rate and false detection rate, and that its performance is better than that of existing SOTA methods for small moving object detection.

Index Terms—Object Detection; Small Moving Objects; Motion Information; Motion Range; Low Signal-to-Noise Ratio
I. INTRODUCTION
THE intelligent video analysis technology can reduce the work intensity of the monitoring center staff and reduce the false positives and missed detections caused by manual monitoring. Moving object detection is one of the basic tasks of intelligent video analysis technology [1], [2]. Through moving object detection technology, information such as the category, location, size and motion speed of moving objects can be obtained, which can provide basic data support for subsequent intelligent video analysis tasks such as behavior prediction and trajectory tracking of moving objects.

For the detection of small moving objects, there are two main challenges.
• The object has a low SNR. For a general unattended monitoring scene, the monitoring area is usually a room or an outdoor area. If a mouse or bird intrudes into the monitoring area, the number of pixels it occupies is usually small, as shown by Bird A in Fig. 1.
• The moving object may be blurred. Since most low-cost surveillance cameras do not have low-delay photography capability, a captured moving object exhibits a certain trailing phenomenon, which may blur the moving object, as shown by Bird B in Fig. 1.

Manuscript received January 4, 2023.
Ziwei Sun, Zexi Hua and Hengcao Li are with the School of Information Science and Technology, Southwest JiaoTong University, Chengdu 611756, China.

Fig. 1: Small and blurred moving birds in the surveillance area. Bird A is small but clear; Bird B is small and blurred.
72
+ based on data-driven.
73
+ At present, the knowledge-driven moving object detection
74
+ algorithms mainly include frame difference method [3], back-
75
+ ground difference method [4], robust principal component
76
+ Analysis method [5] and optical flow method [6]. In the early
77
+ stage, the frame difference method, background difference
78
+ method, and robust principal component analysis method
79
+ were only suitable for the situation that the background was
80
+ static and there was no more complex interference (such as
81
+ illumination change, branches and leaves swaggling, water
82
+ waves and so on). The optical flow method was suitable for the
83
+ situation of moving background, but it still could not overcome
84
+ some interference such as illumination change, the object stop
85
+ or slow motion. However, through the continuous efforts of
86
+ researchers, the traditional methods can accurately extract the
87
+ moving object to a certain extent [7], [8], [9]. However, the
88
+ traditional methods can only extract the pixels of the moving
89
+ object at most, can not obtain other attributes of the moving
90
+ object, and can not distinguish the interesting and uninteresting
91
+ moving objects.
92
+ arXiv:2301.01917v1 [cs.CV] 5 Jan 2023
93
+
94
+ Bird B
95
+ oBirdAIEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. XX, NO. XX, JANUARY 2023
96
+ 2
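The simplest of the knowledge-driven baselines above, the frame difference method, can be illustrated with a minimal NumPy sketch. The threshold value and function names here are our own choices for illustration, not from the paper.

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, thresh=25):
    """Classic frame-difference method: mark pixels whose grayscale
    intensity changed by more than `thresh` between two frames."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return (diff > thresh).astype(np.uint8)  # 1 = moving pixel, 0 = background
```

As the text notes, such a method yields only a per-pixel motion mask: it gives no object category or identity, which is why it is typically paired with, or replaced by, a learned detector.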
97
+ In the early stage, methods based on deep learning were
98
+ mainly combined with traditional methods and object detection
99
+ methods: traditional methods such as frame difference method,
100
+ background difference method, principal component analysis
101
+ method, and optical flow method were combined with object
102
+ detection methods, in which the traditional method provided
103
+ time-related motion information, and the object detection
104
+ provided space-related positioning information [10], [11], [12],
105
+ [13]. These traditional methods with object detection have
106
+ made considerable progress in detecting moving objects. How-
107
+ ever, the detection performance of these methods is affected
108
+ by the motion information provided by the traditional methods
109
+ to a certain extent. At present, some researchers gradually pay
110
+ attention to the full deep learning to obtain the temporal infor-
111
+ mation and spatial information of moving objects at the same
112
+ time, such as using ConvLSTM(Convolution Long Short Term
113
+ Memory) for moving object detection [14], [15]. Or moving
114
+ object detection with input of consecutive multiple frames
115
+ merged [16], [17]. The method based on deep learning has
116
+ a certain improvement in effect and function compared with
117
+ the traditional method. It can distinguish between interested
118
+ and uninterested moving objects, and can obtain the category
119
+ and location of moving objects. However, there are still many
120
+ false detections and missed detections when detect the small
121
+ moving objects. We further analyze and find that most of the
122
+ missed detections occur because the object is small or similar
123
+ to the environment, and most of the false detections that occur
124
+ are caused by various tiny moving things or things whose
125
+ appearance is similar to the object of interest. Therefore, the
126
+ main reason for this problem is that most moving objects
127
+ account for a small proportion of pixels in the whole video
128
+ frame, and the problem of low SNR (unbalanced positive and
129
+ negative samples) is not easy to be eliminated in the training
130
+ process.
131
To address these problems, this paper draws on how humans recognize small moving objects in complex environments. Human recognition of small moving objects proceeds in two stages. In the first stage, we determine where an object may exist according to its motion information. In the second stage, we focus on that region and observe it carefully, filtering out most of the interference. Accordingly, we propose a Small Moving Object Detection algorithm Based on Motion Information (SMOD-BMI). First, a moving object detection model, ConvLSTM-SCM-PAN (the coarse-detection model), is designed to fuse spatio-temporal information and capture suspicious moving objects from their motion cues. Then, the Motion Range (MR) of each suspicious moving object is extracted with an object tracking algorithm. At the same time, the size of the MR is adjusted adaptively according to the object's moving speed (specifically, if an object moves slowly, its MR is expanded according to its speed to preserve contextual environment information), yielding its Adaptive Candidate Motion Range (ACMR); this improves the SNR of the moving object while adaptively retaining the necessary context information. Finally, a lightweight moving object detection model, LW-SCM-USN (the fine-detection model), operates on the ACMR of the moving object to classify and locate it accurately while maintaining real-time performance. The main contributions of this paper are as follows.
• The ConvLSTM-SCM-PAN model structure is designed to capture suspicious moving objects. A Convolutional Long Short-Term Memory network (ConvLSTM) fuses spatio-temporal information, a Selective Concatenation Module (SCM) resolves channel imbalance during feature fusion, and a PAN locates the suspicious moving objects.
• An adaptive method for extracting the ACMR based on the amount of motion of the moving object is proposed. Using object tracking and the object's amount of motion, the ACMR of each suspected moving object is extracted adaptively, which improves the object's SNR while retaining its necessary context information.
• A LightWeight U-Shaped Network with an SCM module (LW-SCM-USN) is designed, which accurately classifies and locates moving objects using the ACMRs of the suspected objects.
The remainder of this paper is structured as follows. Section II surveys related work on moving object detection. Section III describes the proposed SMOD-BMI in detail. Section IV presents ablation and comparison experiments for the proposed algorithm. Section V concludes our work.
II. RELATED WORK

According to the characteristics of the object they exploit, moving object detection methods can be divided into three main categories: methods based on appearance information, methods based on motion information, and methods based on deep learning. This section reviews these three categories.
A. Appearance-based Object Detection

From traditional methods [18], [19], [20] to deep-learning-based methods [21], [22], [23], [24], [25], [26], [27], [28], object detection technology has made great progress and can now accurately determine the class of each object and output its bounding box. However, because these detectors rely only on the appearance features of the object, they perform poorly on small moving objects with complex backgrounds and inconspicuous appearance features [11], [29].
B. Moving Object Detection based on Motion Information

Since appearance-based object detection cannot reliably detect small moving objects against complex backgrounds, researchers have proposed various motion-based moving object detection algorithms. The main approaches are frame difference, background subtraction, optical flow, and robust principal component analysis.
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. XX, NO. XX, JANUARY 2023
1) Frame Difference Method: Because the object is moving, its position in a historical frame is displaced relative to its position in the current frame, so the changed pixels, which belong to the moving object, can be extracted by subtracting a historical frame from the current frame. However, the simple frame difference method is prone to hole and ghosting artifacts [30]. Researchers have therefore proposed various more elaborate frame difference methods [31], [32], which bring some improvement but do not completely solve the problem.
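As a minimal illustration (not any of the cited methods' implementations), thresholded frame differencing fits in a few lines of NumPy; the toy example below also exhibits the ghosting problem noted above, since both the vacated and the newly occupied pixels are flagged as motion:

```python
import numpy as np

def frame_difference(prev_frame, curr_frame, threshold=25):
    """Binary motion mask from two grayscale frames via thresholded subtraction."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy example: one bright "object" pixel moves one column to the right.
prev = np.zeros((5, 5), dtype=np.uint8)
curr = np.zeros((5, 5), dtype=np.uint8)
prev[2, 1] = 200
curr[2, 2] = 200
mask = frame_difference(prev, curr)
# Both the vacated and the newly occupied pixel are flagged: the ghost effect.
```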
2) Background Subtraction Method: The environment and the moving objects are treated as background and foreground: the background remains static while the moving objects move in front of it as the foreground. The key step of this method is background modeling. Many background modeling methods are in wide use, such as multi-frame average background modeling, single Gaussian modeling [33], Gaussian mixture modeling [34], and the ViBe algorithm [35]. Although these methods keep improving, they still cannot completely overcome disturbances such as wind and water waves, so the extracted foreground contains considerable interference.
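As a hedged sketch (the simplest possible background model, not any of the cited methods), a per-pixel exponential running average with thresholded differencing looks like this in NumPy:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Exponential running average: a simple multi-frame background model."""
    return (1 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=30):
    """Pixels that deviate strongly from the background model are foreground."""
    return (np.abs(frame - background) > threshold).astype(np.uint8)

bg = np.full((4, 4), 100.0)        # learned background intensity
frame = bg.copy()
frame[1, 1] = 250.0                # a bright moving object enters one pixel
mask = foreground_mask(bg, frame)
bg = update_background(bg, frame)  # slowly absorb gradual scene changes
```

A disturbance such as a waving branch would also exceed the threshold, which is exactly the interference problem described above.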
3) Optical Flow Method: Optical-flow-based moving object detection distinguishes the background from the moving objects using the optical flow field, exploiting the fact that the brightness of adjacent points in an image is similar [6]. The key problem is estimating the optical flow; the main estimation algorithms include correlation-based, energy-based, discrete optimization, and phase-based methods [6]. The optical flow method requires no prior information and can be used with dynamic backgrounds, but computing the optical flow field is difficult under light source changes, shadows, and occlusion.
4) Robust Principal Component Analysis (RPCA): The background is modeled as a low-rank matrix and the moving objects as sparse. Moving object detection is thus converted into a low-rank plus sparse decomposition of the matrix formed by stacking multiple frames, from which the sparse moving objects are recovered [36]. Since the original RPCA method is time-consuming, later work proposed improved schemes such as Faster RPCA [37], which greatly increases the decomposition speed. However, when the background moves or changes in a complex way, the background matrix loses its low-rank property and the moving objects become difficult to separate; RPCA is therefore mainly suitable for static or simply changing backgrounds.
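As a toy stand-in for the low-rank-plus-sparse idea (not a real RPCA solver): with a perfectly static background the data matrix is rank one, so a per-pixel median over frames recovers the background exactly, and the residual isolates the sparse moving object:

```python
import numpy as np

# 5 frames of a 6-pixel scene; the background is static (a rank-one matrix).
frames = [np.full(6, 50.0) for _ in range(5)]
frames[2][3] = 255.0                     # moving object visible in frame 2
D = np.stack(frames, axis=1)             # pixels x frames data matrix

# Low-rank part: per-pixel median over frames (exact here only because the
# background never changes); sparse part: the residual.
L = np.median(D, axis=1, keepdims=True) * np.ones_like(D)
S = D - L                                # nonzero only at the moving object
```

A real RPCA solver replaces the median with an iterative low-rank/sparse optimization, which is why it degrades once the background itself moves and the low-rank assumption breaks.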
C. Moving Object Detection with Deep Learning

In recent years, motivated by the great progress of deep learning in vision tasks, researchers have begun to apply deep learning to moving object detection. Deep learning has been used in two different ways, but all related studies follow the same basic rule: both time-based motion information and space-based position information must be considered. The two ways differ in how the time-based motion information is obtained. One obtains the motion information with traditional moving object detection methods, called the traditional plus deep learning approach; the other obtains the motion information directly with deep learning, called the fully deep learning approach.
1) Traditional plus Deep Learning Methods: These methods fall into two categories. 1) Motion information is first used to extract the foreground, and the foreground is then used for moving object detection [10], [11], [12], [13]. For example, [10] introduces the Fast RPCA algorithm to separate the foreground and then runs Faster R-CNN object detection on the foreground map to effectively detect small moving objects in panoramic video. In [11], frame differencing extracts the moving foreground, a CNN classification network screens the regions of interest, and a CNN regression network then performs coordinate regression on the regions of interest to obtain the moving objects. [12] uses the ViBe background modeling method to extract the foreground and uses it to set the anchors of the candidate moving object regions for Fast R-CNN, thereby detecting moving objects. In [10], the motion region is obtained by frame differencing, then connected and dilated, and a deep CNN finally classifies and regresses the positions of the objects in the motion region. 2) Traditional methods are fused directly with an object detector [38], [39]. For example, [38] feeds the original image and the difference of two frames into VGG16 for fusion and then passes the fused feature layers to Faster R-CNN for object detection. [39] proposes a deep-learning method combining RGB and optical flow to segment moving objects.
2) Fully Deep Learning Methods: These methods fall into two main categories. 1) A ConvLSTM fuses temporal and spatial information to segment or detect moving objects [14], [15]. For example, [14] introduces an attention ConvLSTM model to model the change of pixels over time and then uses a spatial Transformer and a conditional random field (CRF) to segment moving objects. In [15], a Pyramid Dilated Convolution (PDC) module extracts multi-scale spatial features, which are concatenated and fed into an extended deep bidirectional ConvLSTM (DB-ConvLSTM) to obtain spatio-temporal information, which is finally used to detect the moving objects in the video. 2) Moving objects are detected by merging and fusing the temporal and spatial information of consecutive frames [16], [17]. For example, [16] proposes regions of objects of interest (ROOBI) with a region proposal network that combines spatio-temporal information by merging consecutive input frames; after the proposal regions are obtained, the exact positions of the objects are localized again from the merged multi-frame input. In [17], consecutive frames are merged and fed into a CNN for background estimation, and a compact encoder-decoder network then segments the moving objects.
III. THE PROPOSED SMOD-BMI

Fig. 2 shows an overview of the proposed SMOD-BMI, which contains three parts. First, the ConvLSTM-SCM-PAN model structure is designed to capture the suspicious moving objects. Second, an object tracking algorithm tracks the suspicious moving objects and computes their MRs; at the same time, the size of each MR is adjusted adaptively according to the object's moving speed to obtain its ACMR, which improves the SNR of the moving object while adaptively retaining the necessary context information. Finally, LW-SCM-USN, which operates on the ACMR and includes an SCM module, classifies and locates small moving objects accurately and quickly. Section III-A describes the ConvLSTM-based suspicious moving object detection method, Section III-B the ACMR extraction method based on object tracking and the amount of motion, and Section III-C the moving object detection method based on the ACMR.
A. Capturing the Suspicious Moving Objects

We capture suspicious moving objects in consecutive video frames (the coarse detection of moving objects) in two steps. First, the spatio-temporal information of the moving objects is fused. Second, this spatio-temporal information is used to locate the suspicious moving objects through object detection. This subsection introduces the acquisition of the spatio-temporal information of moving objects (Section III-A1) and the localization of suspicious moving objects (Section III-A2).
1) Fusion of Spatio-temporal Information for Small Moving Objects: Motion manifests in both time and space: an object reveals its motion through its spatial locations at different times. Therefore, to capture a small moving object, its temporal and spatial information must be fused.

As introduced in Section II-C2, there are two main deep-learning approaches to fusing the spatio-temporal information of an object: one is based on the recurrent ConvLSTM, and the other on merging consecutive frames at the input. A ConvLSTM (structure shown in Fig. 3) contains three gates, namely the input gate, output gate, and forget gate, which control the input, the output, and which information is forgotten and discarded; the input and output gates can also be understood as controlling writing to and reading from the memory cell. Merged multi-frame input simply concatenates consecutive video frames and feeds them into the neural network.

The coarse-detection stage captures the suspicious moving objects, and its input is the whole video, which contains much background interference and redundant information (different frames share largely identical backgrounds). By the nature of its structure, a ConvLSTM can discard unimportant or redundant information while fusing spatio-temporal information, so in the first stage we use a ConvLSTM to extract and fuse the spatio-temporal information of moving objects. Specifically, given n consecutive input frames Xt ∈ R^(H×W×3), t = 1, 2, · · · , n (where H and W are the height and width of the input image and n is odd), the ConvLSTM network FConvLSTM fuses and extracts the spatio-temporal features Hn ∈ R^(H×W×C) (where C is the number of channels) of the n consecutive frames,

Ht = FConvLSTM ([Xt, Ht−1]; ΘConvLSTM), (1)

where H0 = 0 when t = 1 and ΘConvLSTM are the learnable parameters of the ConvLSTM network. The spatio-temporal features Hn of the n consecutive frames are input into the subsequent classification and localization module to determine the category and spatial location of the suspicious moving objects.
+ suspicious moving object.
405
+ 2) Localization of Suspicious Moving Objects: In convolu-
406
+ tional neural networks, deeper layers, which generally have
407
+ smaller size, have better global semantic information, and
408
+ can predict larger objects. The layers with shallower depth,
409
+ which generally have larger size, have more delicate spatial
410
+ information and can predict smaller objects. However, the
411
+ large feature layer often does not have a relatively high
412
+ degree of semantic information, and the small feature layer
413
+ does not have fine spatial positioning information. Therefore,
414
+ relevant researchers have proposed the structure of FPN [40] to
415
+ combine the strong semantic information of the small feature
416
+ layer and the strong spatial positioning information of the
417
+ large feature layer. However, the researchers of PANet(Path
418
+ Aggregation Network) [41] found that when FPN transmitted
419
+ information, there was information loss due to the transfer
420
+ distance when the information was transmitted to the low-level
421
+ feature layer. Therefore, path-enhanced FPN, namely PANet
422
+ structure, was proposed. The PANet structure opens up a green
423
+ channel for low-level information transmission and avoids low-
424
+ level information loss to a certain extent. At the same time,
425
+ we find that the detection performance will be improved when
426
+ Selective Concatenation Module (SCM) [42] is added to the
427
+ model (reference [42] introduces that SCM can help to better
428
+ fuse high and low layer information (refer to reference [42] for
429
+ details)). We believe that SCM can not only balance the fusion
430
+ of channel information in different layers, but also suppress
431
+ unimportant information and highlight the information that the
432
+ model needs to focus on. So, we introduce the SCM and design
433
+ the feature extraction structure of SCM-PANet (see Fig. 4).
434
+ The spatio-temporal features Hn of n consecutive frames
435
+ are input into the SCM-PANet structure to extract the features
436
+ of the suspicious moving object FMOn,
437
+ FMOn = FSCM-PAN (Hn; ΘSCM-PAN) ,
438
+ (2)
439
Fig. 2: Overview of the proposed SMOD-BMI. (a) Capture of the suspicious moving objects; the blue boxes denote the detected suspicious moving objects. (b) Extraction of the ACMR of a suspicious moving object; the green box is the original MR of the moving object produced by the tracking algorithm, and the red box is the MR adaptively adjusted according to the object's amount of motion. (c) Classification and localization of the moving objects.

Fig. 3: Structure diagram of ConvLSTM
where ΘSCM-PAN are the learnable parameters of the SCM-PANet network.

Because the size of a moving object varies with its distance from the surveillance camera, the objects to be detected are multi-scale. Accordingly, this paper uses a Multi-Scale Detection Head (MS-D Head) to detect the suspicious moving objects. The objects in the middle frame of the n consecutive frames have symmetric contextual information, which yields more accurate predictions; this paper therefore predicts the suspicious objects in the middle frame of the n consecutive frames as the detection result of the coarse-detection stage. Specifically, the moving object features FMOn are input into the MS-D Head to obtain the model output,

On = FMS-D (FMOn; ΘMS-D), (3)
Fig. 4: Structure diagram of the SCM-PANet model
where ΘMS-D are the learnable parameters of the MS-D Head. Post-processing operations such as box decoding and non-maximum suppression are then applied to the model output to obtain the locations of the suspicious objects in the middle frame of the n consecutive frames,

{PID1, · · · , PIDk}frame((n+1)/2) = FP (On), (4)

where {·}frame((n+1)/2) denotes the locations of the moving objects in the ((n+1)/2)-th frame, PIDk denotes the predicted position of the object with ID number k (the object with ID number k is used as the running example unless otherwise specified), and FP (·) denotes the post-processing method.
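One of the post-processing steps in (4), non-maximum suppression, can be sketched as a generic greedy procedure (not necessarily the authors' exact variant):

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the two overlapping boxes collapse to one
```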
B. Obtaining the ACMR

In this paper, the MR of each suspicious moving object over the n consecutive frames is extracted to improve the SNR of the moving object. At the same time, to preserve the context information of the suspicious moving object, the size of the MR is adaptively adjusted according to the object's amount of motion, which makes the subsequent detection more accurate. Specifically, the ACMR of a suspicious moving object is obtained in two steps: the original MR of the suspicious moving object is extracted using object tracking (Section III-B1), and the MR is then adaptively adjusted using the object's amount of motion to obtain the ACMR (Section III-B2).
1) Acquisition of the Original MR of a Suspicious Moving Object: Detection results for the suspicious moving objects are available from the ((n+1)/2)-th frame, and we begin tracking each suspicious moving object from the ((n+1)/2 + 1)-th frame. Because the appearance features of small moving objects are sometimes inconspicuous, we track them using only their motion information, with the relatively simple SORT [43] object tracking algorithm,

{{PIDk}frame(i), {PIDk}frame(i+1), · · ·} = Ftrack (IDk), (5)

where {{PIDk}frame(i), {PIDk}frame(i+1), · · ·} denotes the positions of the suspicious moving object with ID number k on consecutive image frames and Ftrack (·) denotes the SORT object tracking method. After obtaining the positions of the suspicious moving object on consecutive image frames, we can compute its Motion Range (MR) over the n consecutive frames. Specifically, the minimum circumscribed rectangle RectIDk of the n positions is computed from the positions of the same object on the n consecutive frames,

RectIDk = FMinRect ({{PIDk}frame(i+1), · · · , {PIDk}frame(i+n)}), (6)

where FMinRect (·) denotes the function that computes the minimum circumscribed rectangle of n rectangular boxes. For example, the minimum circumscribed rectangle [(xmin, ymin), (xmax, ymax)] of {box1, · · · , boxn} (described by the horizontal and vertical coordinates of its top-left and bottom-right vertices) is computed as

xmin = min (x1box1, · · · , x1boxn),
ymin = min (y1box1, · · · , y1boxn),
xmax = max (x2box1, · · · , x2boxn),
ymax = max (y2box1, · · · , y2boxn), (7)

where ((x1boxn, y1boxn), (x2boxn, y2boxn)) are the coordinates of the upper-left and lower-right vertices of boxn in the image. The resulting minimum circumscribed rectangle RectIDk is the MR of the moving object over the n consecutive frames. Fig. 5 illustrates the MR of a moving object over five consecutive frames.
+ where,
630
+ ��
631
+ x1boxn , y1boxn
632
+
633
+ ,
634
+
635
+ x2boxn , y2boxn
636
+ ��
637
+ denotes the hori-
638
+ zontal and vertical coordinates of the upper left and lower
639
+ right vertices of boxn in the image. The obtained minimum
640
+ circumscribed rectangle RectIDk is the MR of the moving
641
+ object in n consecutive frames. Fig. 5 illustrates the MR of
642
+ the moving object on five consecutive frames of images.
643
+ 2) Adaptively Adjust the MR to Obtain ACMR Based on the
644
+ Amount of Motion: We crop the MR of suspicious moving
645
+ object in n consecutive frames to remove the interference of
646
+ other background and negative samples, which can improve
647
+ the SNR of the moving object. However, if the moving
648
+ object moves too slowly, the clipped MR will lack contextual
649
+ environmental information (see the Raw MR In Fig. 6),
650
+ which is not conducive to the detection of moving objects. In
651
+ order to balance the contradiction between SNR and context
652
+ information, this paper proposes an ACMR extraction method
653
+ based on the amount of motion of the moving object, which
654
+ adaptively adjusts the size of the MR of the moving object
655
+ according to the speed of the object motion. There are two
656
+ steps.
657
+ Firstly, the amount of motion of the moving object over n
658
+ consecutive frames is calculated. For an object of the same
659
+ size, if it moves fast on n consecutive frames, its MR is large;
660
+
661
+ Backbone
662
+ SCM
663
+ SCM
664
+ FPN
665
+ PAN
666
+ MS-D HeadIEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. XX, NO. XX, JANUARY 2023
667
+ 7
668
+ Fig. 5: MR of the moving bird over 5 consecutive frames. The blue box shows the position of the bird in each frame; The
669
+ green box represents the minimum bounding rectangle of the five blue boxes, which is the MR of the moving bird over five
670
+ consecutive frames.
671
+ Fig. 6: The left picture shows the original monitoring picture, the right picture shows the MR of the moving object on five
672
+ consecutive frames in the dashed frame, and the ACMR of the moving object in the solid frame. It is obvious that the object
673
+ in the original MR is difficult to be correctly recognized, and the object in the ACMR is easier to be recognized.
674
+ otherwise, its MR is small. Therefore, we use the ratio of the
675
+ area of the MR of the moving object on n consecutive frames
676
+ to the area of the single frame image occupied by the moving
677
+ object to define its motion amount on n consecutive frames,
678
+ σmov = S (RectIDk)
679
+ S (ObjIDk) ,
680
+ (8)
681
where σmov is the amount of motion, S (·) is the area function, and ObjIDk denotes the object with ID number k; the area of the MR is then the area of the minimum circumscribed rectangle RectIDk. Since the area a moving object occupies in a single frame may vary with its shape and is difficult to compute exactly, we approximate it in this paper by the rectangular area of the object's bounding box.
Then, according to the object's amount of motion, its MR is adaptively enlarged into the Adaptive Motion Range (AMR) ARectIDk. Specifically, a motion hyperparameter γ is introduced: when the object's amount of motion is less than γ, its MR is expanded until the amount of motion reaches γ. The AMR of the moving object can therefore be expressed as

ARectIDk = RectIDk, if σmov ≥ γ,
ARectIDk = γ × ObjIDk, otherwise, (9)

where γ × ObjIDk denotes the MR enlarged so that its area equals γ times the area occupied by the object. ARectIDk is used to crop the corresponding n consecutive video frames {frame(1), · · · , frame(n)}, and the resulting n cropped images form the Adaptive Candidate Motion Range (ACMR) ACMRIDk of the moving object,

f(i)ARectIDk = Fcut (frame(i), ARectIDk), (10)

ACMRIDk = {f(i)ARectIDk | i ∈ (1, · · · , n)}. (11)
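Eqs. (8) and (9) can be sketched as follows. The symmetric, center-preserving expansion is our own assumption; the paper only specifies that the MR is enlarged until the amount of motion reaches γ:

```python
def area(box):
    x1, y1, x2, y2 = box
    return (x2 - x1) * (y2 - y1)

def adaptive_motion_range(mr, obj_box, gamma=4.0):
    """Expand the MR so the motion amount sigma = S(MR)/S(obj) reaches gamma.

    If the object moved enough (sigma >= gamma), the MR is kept as is;
    otherwise it is scaled symmetrically about its center until its area
    equals gamma * S(obj_box).
    """
    sigma = area(mr) / area(obj_box)
    if sigma >= gamma:
        return mr
    scale = (gamma * area(obj_box) / area(mr)) ** 0.5
    cx, cy = (mr[0] + mr[2]) / 2, (mr[1] + mr[3]) / 2
    w, h = (mr[2] - mr[0]) * scale, (mr[3] - mr[1]) * scale
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

obj = (10, 10, 20, 20)   # object bounding box, area 100
mr = (10, 10, 22, 22)    # slow object: MR area 144, sigma = 1.44 < gamma
acmr = adaptive_motion_range(mr, obj, gamma=4.0)
```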
C. Moving Object Detection based on the ACMR

After the preceding processing, we have improved the SNR of the moving object, retained its contextual environment information, and obtained its ACMR. In the fine-detection stage, the ACMR of the moving object is used to classify and locate the moving object. Specifically, the fine-detection stage comprises the fusion of spatio-temporal information (Section III-C1) and the classification and localization of moving objects (Section III-C2).
1) Fusion of Spatio-temporal Information of Moving Objects: The input of the fine-detection model is the ACMR of the moving object extracted above. The coarse-detection model may detect several suspicious moving objects at once, producing several ACMRs, and the fine-detection model processes each ACMR separately; a single run of the coarse-detection model may therefore trigger many runs of the fine-detection model. To balance accuracy and speed, the fine-detection stage fuses the spatio-temporal information of the moving object by merging consecutive frames. Furthermore, to reduce data redundancy, all frames except the middle one are input as single-channel grayscale images. Specifically, all ACMR crops of the moving object except the middle one are first converted to grayscale,

f(i)'ARectIDk = f(i)ARectIDk, if i = int(n/2),
f(i)'ARectIDk = FGray (f(i)ARectIDk), otherwise, (12)
where FGray (·) converts a color image to grayscale. The processed ACMR crops are then concatenated along the channel dimension to form the input of the fine-detection stage,

XSIDK = Fconcat ({f(1)'ARectIDk, · · · , f(n)'ARectIDk}, 2), (13)

where the second argument of Fconcat indicates that the concatenation is performed along the third input dimension (height, width, channel). The height and width of XSIDK equal those of the rectangle ARectIDk, and the number of channels is n + 2 (the n − 1 single-channel grayscale crops plus the three channels of the middle color crop), so XSIDK contains both the motion information and the appearance information of the moving object. It is input into the fine-detection model to accurately classify and locate the moving object.
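Eqs. (12) and (13) can be sketched with NumPy as follows (the luminance weights and crop size are illustrative, not the paper's specification):

```python
import numpy as np

def to_gray(img):
    """Luminance grayscale of an (H, W, 3) uint8 image."""
    return (img @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def build_fine_input(crops, middle):
    """Keep the middle crop in color, grayscale the rest, then concatenate
    along the channel axis (Eqs. (12)-(13))."""
    parts = []
    for i, c in enumerate(crops):
        if i == middle:
            parts.append(c)                       # (H, W, 3) color
        else:
            parts.append(to_gray(c)[..., None])   # (H, W, 1) grayscale
    return np.concatenate(parts, axis=2)

n = 5
crops = [np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(n)]
x = build_fine_input(crops, middle=n // 2)
print(x.shape)  # (32, 32, 7) -- i.e. n + 2 channels for n = 5
```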
2) Classification and Localization of Moving Objects: To further speed up the whole moving object detection pipeline, this paper uses a lightweight U-Shaped Network (USN) as the feature extraction network of the fine-detection stage (in the experiments, MobileNetV2 [44] serves as the backbone of the USN). At the same time, to better fuse high- and low-level information, as in the coarse-detection model, we introduce the SCM module [42] and design the lightweight LW-SCM-USN feature extraction structure shown in Fig. 7.

XSIDK, which fuses the spatio-temporal information of the moving object, is input into the LW-SCM-USN feature extraction network to obtain the moving object feature FIDK,

FIDK = FLW-SCM-USN (XSIDK; ΘLW-SCM-USN), (14)

where ΘLW-SCM-USN are the learnable parameters of the LW-SCM-USN.
+ Fig. 7: Structure diagram of LW-SCM-USN
+ The ACMR of a moving object may contain more than one object. Moreover, due to the
+ interference of the background and negative samples, the detection accuracy of the
+ coarse-detection model is not satisfactory; there are false detections and missed
+ detections. So an ACMR may contain no object, one object, or multiple objects, and the
+ detection model in the fine-detection stage should therefore still be capable of
+ multi-object detection. However, since the ACMR of a moving object covers only a small
+ area (relative to the input image) and cannot contain a large number of moving objects,
+ the output of the fine-detection model does not need a complex structure. In summary,
+ this paper uses a relatively simple Single-Scale Detection Head (SS-D Head) as the
+ output structure of the fine-detection model (see Fig. 7). Specifically, FIDK is fed
+ into the SS-D Head to obtain the output of the fine-detection model,
+ OIDK = FSS-D (FIDK; ΘSS-D), (15)
+ where ΘSS-D is the learnable parameter of the SS-D Head. Then, post-processing
+ operations such as box decoding and non-maximum suppression are performed on the output
+ to obtain the final detection result for the moving object,
+ {ClassesIDK, BoxesIDK} = FP (OIDK), (16)
+ where ClassesIDK represents the categories of the objects in the ACMR ACMRIDk of the
+ moving object, and BoxesIDK are the bounding boxes of the corresponding objects in this
+ region (in this paper, the position of the object in the middle frame of the n
+ consecutive frames is taken as the detection result).
+ Finally, the bounding boxes of the moving objects in the ACMR are mapped back to the
+ original video image, which yields the final detection result for the moving objects.
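The non-maximum suppression step mentioned above can be illustrated with a minimal greedy sketch. The paper reuses YOLOV4's post-processing, so this is only a generic illustration, not the authors' implementation; boxes are (x1, y1, x2, y2) tuples.

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: visit boxes by descending score and keep a box only
    if it overlaps every already-kept box by at most `thresh` IoU."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)
```

The second box overlaps the first with IoU 0.81 and is suppressed; the disjoint third box survives.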
+ IV. EXPERIMENT
+ In this section, a series of experiments are conducted to quantitatively and
+ qualitatively evaluate the proposed SMOD-BMI. Next, we introduce the datasets (IV-A),
+ evaluation metrics (IV-B), the experimental platform (IV-C), implementation details
+ (IV-D), the parameter analysis experiments (IV-F), and the comparative analysis
+ experiments (IV-E).
+ IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. XX, NO. XX, JANUARY 2023
862
+ Fig. 8: Size distribution of the moving birds in the datasets
+ A. Datasets
+ We collected and annotated 20 videos containing moving bird objects (the video image
+ size is 1280 × 720) in an unattended traction substation. We ended up with 10,381
+ continuously annotated images containing 11,631 objects in total. From Fig. 8, we can
+ see that the sizes of the moving birds are mainly distributed between 0 × 0 and 80 × 80
+ pixels, and more than 50% of them are below 40 × 40 pixels, so these birds can be
+ regarded as small moving objects.
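Fig. 8 buckets objects by the square root of their bounding-box area in 20-pixel bins. A small sketch of that bucketing (the helper name is illustrative, not from the paper):

```python
import math

def size_bucket(w, h, bin_width=20):
    """Bucket an object by the square root of its bounding-box area,
    as in the Fig. 8 histogram: returns a '<lo>-<hi>' pixel-range label."""
    s = math.sqrt(w * h)
    lo = int(s // bin_width) * bin_width
    return "%d-%d" % (lo, lo + bin_width)

label = size_bucket(30, 40)  # sqrt(1200) ~ 34.6 px, so the 20-40 bin
```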
+ B. Evaluation Metrics
+ In this paper, the measures widely used in object detection, precision (Prec), recall
+ (Rec), and average precision (AP), are adopted to evaluate the proposed SMOD-BMI. More
+ specifically, we report Prec50, Rec50, and AP50 (the subscript 50 means that a
+ detection is regarded as a True Positive when the IOU between the detection and the
+ ground truth is greater than or equal to 50%, i.e., the IOU threshold is set to 50%),
+ Prec75, Rec75, and AP75 (the subscript 75 has the analogous meaning), and AP (average
+ precision averaged over multiple IOU thresholds, from 50% to 95% in intervals of 5%).
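The True-Positive convention behind Prec50/Rec50 and Prec75/Rec75 can be sketched as follows; `best_ious` is assumed to hold, for each detection, its best IoU against the ground truth (the detection-to-ground-truth matching bookkeeping is omitted for brevity, so this is an illustration rather than a full evaluator).

```python
def prec_rec(best_ious, n_gt, iou_thresh=0.5):
    """A detection counts as a True Positive when its best IoU against
    the ground truth reaches iou_thresh. precision = TP / #detections,
    recall = TP / #ground-truth objects."""
    tp = sum(1 for v in best_ious if v >= iou_thresh)
    prec = tp / float(len(best_ious)) if best_ious else 0.0
    rec = tp / float(n_gt) if n_gt else 0.0
    return prec, rec

# 4 detections evaluated against 5 ground-truth objects.
p50, r50 = prec_rec([0.9, 0.6, 0.55, 0.3], n_gt=5, iou_thresh=0.5)
p75, r75 = prec_rec([0.9, 0.6, 0.55, 0.3], n_gt=5, iou_thresh=0.75)
```

Raising the threshold from 0.5 to 0.75 turns two of the loose matches into false positives, which is why the AP75 columns in the tables are far below AP50.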
+ C. Experimental Platforms
+ All the experiments are implemented on a desktop computer with an Intel Core i7-9700
+ CPU, 32 GB of memory, and a single NVIDIA GeForce RTX 3090 GPU with 24 GB of memory.
+ D. Implementation Details
+ We implemented the proposed method based on YOLOV4 [28] with modifications.
+ Specifically, for the coarse-detection model, a ConvLSTM module is embedded between
+ the second and third layers of CSPDarkNet53, the backbone network of the YOLOV4 model,
+ and an SCM [42] is added to its PANet structure. The input size of the coarse-detection
+ model is set to 640 × 384 to preserve the ratio of effective input pixels as much as
+ possible while maintaining the running speed. During training, the input is n
+ consecutive frames of images, the label is the position of the object on the
+ intermediate frame, and the loss function of the YOLOV4 algorithm is reused.
+ For the fine-detection model, the lightweight MobilenetV2 is used as the backbone
+ network of the U-shaped network, and the SCM [42] is added to the upsampling structure
+ of the U-shaped network. The input size of the fine-detection model is set to
+ 160 × 160. For the training data, we used the coarse-detection model and the SORT
+ object tracking algorithm to collect the Motion Regions (MRs) containing moving
+ objects as positive samples and regions without objects as negative samples. During
+ training, the input is the screenshots of the MR over n consecutive frames, the label
+ is the position of the object on the intermediate screenshot, and the loss function of
+ YOLOV4 is reused.
+ In this paper, all experiments are implemented under the Pytorch framework. All
+ network models are trained on an NVIDIA GeForce RTX 3090 with 24 GB of video memory.
+ The batch size is set to 4 when training the coarse-detection model designed in this
+ paper and the other comparison models, and to 8 when training the fine-detection
+ model. All models were trained from scratch; no pre-trained models were used. The
+ trainable parameters of the networks were randomly initialized from a normal
+ distribution with mean 0 and variance 0.01. Adam was chosen as the optimizer. The
+ initial learning rate is set to 0.001; after each iteration, the learning rate is
+ multiplied by 0.95, and the model is trained for a total of 100 iterations. In the
+ training phase, we used simple data augmentation, including random horizontal flipping
+ and random Gaussian noise, to enhance the robustness of the model.
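The training schedule described above (initial learning rate 0.001, multiplied by 0.95 after each of 100 iterations) amounts to a simple exponential decay, sketched here:

```python
def lr_schedule(base_lr=0.001, decay=0.95, epochs=100):
    """Exponential decay as described: starting from base_lr, the rate
    is multiplied by `decay` after every iteration."""
    lrs = []
    lr = base_lr
    for _ in range(epochs):
        lrs.append(lr)
        lr *= decay
    return lrs

lrs = lr_schedule()
```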
+ E. Comparative Analysis Experiments
+ In order to verify the advancement of the proposed moving object detection algorithm,
+ we design a series of comparative experiments to compare the accuracy of different
+ methods in detecting moving objects. We designed and implemented several deep
+ learning-based methods following their main ideas. The methods compared in this paper
+ fall into the following categories.
+ • Object detection based on still images. We chose YOLOV4 as the representative
+ algorithm of this kind of method.
+ • Multi-frame input is used to fuse spatio-temporal features, and then object
+ detection is applied to detect or segment moving objects. We denote this class of
+ methods by Multi-Input + YOLOV4 (MI YOLOV4).
+ • ConvLSTM is used to fuse spatio-temporal features, and then object detection is
+ applied to detect or segment moving objects. We denote this type of method by
+ ConvLSTM + YOLOV4 (CL YOLOV4).
+ The parameters of the above models are set as follows. The inputs are all 640 × 384.
+ The number of consecutive input frames for MI YOLOV4, CL YOLOV4, and SMOD-BMI is set
+ to 5. For SMOD-BMI, the motion amount parameter σmov is set to 4.0.
+ In the qualitative comparison experiment, we choose YOLOV4 as the baseline. The
+ YOLOV4 algorithm only considers the appearance features of moving objects, while the
+ method proposed in this paper makes full use of their motion cues. From the
+ experimental results shown in Fig. 9, it can be seen that when the appearance
+ characteristics of the moving object are obvious, YOLOV4 can achieve a certain effect.
+ However, when the appearance characteristics are not obvious, YOLOV4 misses
+ detections, and it is also prone to false detections. In contrast, the proposed method
+ achieves good results regardless of whether the appearance characteristics of the
+ moving object are obvious. Therefore, for the detection of moving objects, motion cues
+ are particularly important.
+ Methods that consider the motion information of the moving object and the method
+ proposed in this paper show little difference in the qualitative comparison, so this
+ paper designs a quantitative comparison experiment to compare the proposed method with
+ the other algorithms.
+ The results of the quantitative comparison experiments are shown in TABLE I. For the
+ same detection method, the AP decreases sharply as the IOU threshold increases. The
+ reason is that the smaller the object, the harder it is for the detection to match the
+ ground truth exactly, because subtle deviations in the detection results are more
+ noticeable relative to the ground truth. Comparing the different detection methods,
+ the appearance-only detector YOLOV4 performs poorly on small moving objects, with an
+ AP50 of 64.34%. MI YOLOV4 fuses the spatio-temporal information of the moving object
+ by merging the input of multiple frames, which improves AP50 by 17.13%. Therefore, for
+ the dataset we collected, motion information is an important clue for detecting small
+ moving objects in complex environments. CL YOLOV4 uses ConvLSTM to merge the
+ spatio-temporal information of the moving object and obtains a further AP50 increase
+ of 1.76%, which shows that ConvLSTM is more suitable for fusing the spatio-temporal
+ information of the moving object than multi-frame merged input, because ConvLSTM has
+ special structures to remove the influence of redundant information.
+ On the basis of CL YOLOV4, the proposed SMOD-BMI uses object tracking and combines
+ the motion amount of the moving object to obtain its Adaptive Candidate Motion Range
+ (ACMR), and then finely detects the moving object within the ACMR. We reduce the
+ threshold for judging moving objects in the coarse-detection stage, which causes some
+ false detections but improves the detection rate. At the same time, we increase the
+ threshold for judging moving objects in the fine-detection stage to reject false
+ detections. The experimental results show that the proposed method improves AP50 by a
+ further 4.25%, reaching 87.46%.
+ Through the qualitative and quantitative analysis of the experimental results, it can
+ be concluded that the small moving object detection method proposed in this paper is
+ advanced and effective.
1033
+ effective.
1034
+ F. Parameter Analysis Experiments
1035
+ 1) Effect of Different Number of Consecutive Input Frames
1036
+ on the Performance of the Algorithm: We design test exper-
1037
+ iments with different numbers of consecutive frame inputs to
1038
+ evaluate the impact on the detection accuracy and efficiency
1039
+ of the proposed method. Specifically, there are 3 consecutive
1040
+ frames of input, 5 consecutive frames of input, 7 consecutive
1041
+ frames of input, etc. In theory, with the increase of the
1042
+ number of consecutive frames, the motion information of the
1043
+ moving object will be gradually enriched, and the detection
1044
+ accuracy of the algorithm will be gradually improved, but its
1045
+ running time will also increase accordingly. The results of
1046
+ the detection performance test of the algorithm are shown in
1047
+ Table II (the motion amount parameter σmov is set to 4.0).
1048
+ The experimental results show that the running speed of the
1049
+ algorithm is the fastest when 3 consecutive frames are input,
1050
+ and the detection accuracy is the highest when 7 consecutive
1051
+ frames are input. When the input is five consecutive frames,
1052
+ the speed and accuracy can have a good trade-off (the AP50
1053
+ reaches to 87.46%, and the running time is 0.12s).
1054
+ 2) Influence of Different Amount of Motion Parameter σmov
1055
+ on the Accuracy of the Algorithm: We obtain the Adaptive
1056
+ Candidate Motion Ranges (ACMRs) of different sizes of the
1057
+ moving object by setting different motion amount parameter
1058
+ σmov. If the MR is small, the context background information
1059
+ is less; if the MR is large, the SNR is large. Therefore, different
1060
+ sizes of MRs of the same moving object have different effects
1061
+ on the performance of the algorithm. Fig. 10 is the influence
1062
+ of different motion amount parameter σmov on the accuracy
1063
+ of the algorithm.
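One simple way to realize such an adaptive range, assuming (as a simplification of the paper's mechanism) that the candidate range should span at least σmov times the object's own size, is to expand the motion range about its center. The function and the exact rule below are illustrative only, not the authors' implementation.

```python
def adjust_acmr(mr, obj_w, obj_h, sigma_mov=4.0):
    """Expand a motion range (x1, y1, x2, y2) about its center so that
    each side is at least sigma_mov times the object's size, keeping
    context around slow-moving objects. Hypothetical simplification of
    the paper's Adaptive Candidate Motion Range."""
    cx, cy = (mr[0] + mr[2]) / 2.0, (mr[1] + mr[3]) / 2.0
    w = max(mr[2] - mr[0], sigma_mov * obj_w)
    h = max(mr[3] - mr[1], sigma_mov * obj_h)
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

# A nearly stationary 10x10 object: its tight MR is expanded to 40x40.
acmr = adjust_acmr((100, 100, 110, 110), obj_w=10, obj_h=10, sigma_mov=4.0)
```

With σmov = 1.0 the expansion never triggers, which matches the paper's observation that σmov = 1.0 effectively disables the adaptive adjustment.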
+ It can be seen from Fig. 10 that when the motion amount parameter σmov is 1.0, the
+ detection accuracy of the proposed method is lower than that of MI YOLOV4 and
+ CL YOLOV4. (When σmov is set to 1.0, the algorithm effectively does not use the
+ adaptive adjustment mechanism to adjust the MR of the moving object, because even a
+ stationary object already satisfies a motion amount parameter of 1.0, so there is no
+ need to adjust the MR according to the motion amount of the moving object.) In other
+ words, with the addition of the fine-detection stage, the detection accuracy is
+ reduced instead. This proves that when the MR of the moving object is too small, it
+ lacks sufficient context information, which leads to a decline in detection accuracy.
+ When we increase the motion amount parameter σmov, the detection accuracy improves
+ rapidly. However, when it is greater than 5.0, the detection accuracy starts to
+ decrease slowly again.
+ As previously analyzed, when the MR is too small, it lacks contextual information,
+ and when the MR is too large, it is
+ Fig. 9: Detection comparisons of YOLOV4 and SMOD-BMI on Scenarios 1–3 (green box:
+ ground truth bounding box; red box: YOLOV4 bounding box; blue box: proposed method
+ bounding box). (a) YOLOV4; (b) SMOD-BMI.
+ TABLE I: Comparison with other moving object detection methods
+ Method            Prec50   Rec50    AP50     Prec75   Rec75    AP75     AP
+ YOLOV4            0.3200   0.7074   0.6434   0.0790   0.1747   0.0553   0.2106
+ MI YOLOV4         0.8561   0.8478   0.8145   0.4098   0.3846   0.2109   0.3298
+ CL YOLOV4         0.8717   0.8592   0.8321   0.4165   0.3955   0.2123   0.3422
+ SMOD-BMI (ours)   0.9197   0.9118   0.8746   0.4827   0.4786   0.2482   0.3827
+ TABLE II: Effect of continuous image input with different numbers of frames on detection performance
+ Frame num   Prec50   Rec50    AP50     Prec75   Rec75    AP75     AP       Run Time
+ 3           0.9226   0.9162   0.8737   0.4817   0.4784   0.2349   0.3701   0.11 s
+ 5           0.9197   0.9118   0.8746   0.4827   0.4786   0.2482   0.3827   0.12 s
+ 7           0.9341   0.9109   0.8808   0.4745   0.4627   0.2412   0.3838   0.14 s
+ Fig. 10: Influence of different motion amount parameters σmov on the accuracy of the
+ algorithm
+ easy to introduce more noise. (In an extreme case, when the MR coincides with the
+ original input image, the previous processing becomes meaningless, because the input
+ of the fine-detection stage is then directly the original image; at the same time,
+ because the fine-detection model is relatively simple and its input size is small
+ (160 × 160), the detection effect is bound to be poor.) So whether the MR is too small
+ or too large, it affects the accuracy of the algorithm. Through experiments, we find
+ that when the motion amount parameter σmov is 5.0, the detection performance of the
+ algorithm is the
+ best, with its AP50 reaching 87.85%.
+ Through the parameter analysis experiments, we conclude that with 5 consecutive input
+ frames the algorithm achieves a good balance between accuracy and speed, and with a
+ motion amount parameter of 5.0 its accuracy is highest. We therefore suggest the
+ following parameter settings: the number of consecutive input frames is set to 5, and
+ the motion amount parameter σmov is set to 5.0.
+ V. CONCLUSION
+ Aiming at the problem that moving objects are difficult to detect against complex
+ backgrounds, this paper analyzes the reason: the proportion of pixels belonging to a
+ small moving object is low in a complex background, which leads to a low SNR. To solve
+ this problem, this paper proposes a Small Moving Object Detection algorithm Based on
+ Motion Information (SMOD-BMI). First, we use the ConvLSTM-SCM-PANet model to coarsely
+ detect whole frames of a continuous video and capture suspicious moving objects.
+ Then, we use object tracking to track each suspicious moving object and determine its
+ MR over n consecutive frames. At the same time, according to the moving speed of the
+ suspicious moving objects, the size of their MRs is adjusted adaptively (specifically,
+ if an object moves slowly, we expand its MR according to its speed to preserve
+ contextual environment information) to obtain their Adaptive Candidate Motion Ranges
+ (ACMRs), which improves the SNR of the moving object while adaptively retaining the
+ necessary context information. After that, we use the LW-SCM-USN model to accurately
+ classify and locate each suspicious moving object using its ACMR. Finally, qualitative
+ and quantitative experiments verify the effectiveness and advancement of the proposed
+ moving object detection algorithm based on motion information.
+ REFERENCES
+ [1] K. Sehairi, F. Chouireb, and J. Meunier, "Comparative study of motion detection methods for video surveillance systems," Journal of Electronic Imaging, vol. 26, no. 2, pp. 023025.1–023025.29, 2017.
+ [2] X. Zhang, H. Wu, M. Wu, and C. Wu, "Extended motion diffusion-based change detection for airport ground surveillance," IEEE Transactions on Image Processing, vol. 29, pp. 5677–5686, 2020.
+ [3] R. T. Collins, A. J. Lipton, T. Kanade, H. Fujiyoshi, and P. Burt, "A system for video surveillance and monitoring," VSAM Final Report, Carnegie Mellon University Technical Report, 2000.
+ [4] B. Azeez and F. Alizadeh, "Review and classification of trending background subtraction-based object detection techniques," in 2020 6th International Engineering Conference "Sustainable Technology and Development" (IEC), 2020.
+ [5] T. Bouwmans, S. Javed, H. Zhang, Z. Lin, and R. Otazo, "On the applications of robust PCA in image and video processing," IEEE, no. 8, 2018.
+ [6] A. Agarwal, S. Gupta, and D. K. Singh, "Review of optical flow technique for moving object detection," in 2016 2nd International Conference on Contemporary Computing and Informatics (IC3I), 2017.
+ [7] M. A. Hossain, M. I. Hossain, M. D. Hossain, N. T. Thu, and E.-N. Huh, "Fast-D: When non-smoothing color feature meets moving object detection in real-time," IEEE Access, vol. 8, pp. 186756–186772, 2020.
+ [8] J. Yuan, G. Zhang, F. Li, J. Liu, L. Xu, S. Wu, T. Jiang, D. Guo, and Y. Xie, "Independent moving object detection based on a vehicle mounted binocular camera," IEEE Sensors Journal, vol. 21, no. 10, pp. 11522–11531, 2021.
+ [9] A. Khalilian-Gourtani, S. Minaee, and Y. Wang, "Masked-RPCA: Moving object detection with an overlaying model," IEEE Open Journal of Signal Processing, vol. 1, pp. 274–286, 2020.
+ [10] D.-W. Wang, X. Yang, P.-F. Han, Y. Liu, Y.-J. Xie, and H.-J. Song, "Panoramic video motion small target detection algorithm in complex background," Control and Decision, vol. 36, no. 1, pp. 249–256, 2021.
+ [11] Y. Zhou and S. Maskell, "Detecting and tracking small moving objects in wide area motion imagery (WAMI) using convolutional neural networks (CNNs)," in 2019 22nd International Conference on Information Fusion (FUSION), 2019, pp. 1–8.
+ [12] C.-Y. Lin, H.-Y. Huang, W.-Y. Lin, C.-Y. Chang, W.-T. Chang, and Y.-K. Jan, "Limited-anchor deep neural network for moving object detection," in 2020 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), 2020, pp. 1–2.
+ [13] H. Zhu, X. Yan, H. Tang, Y. Chang, B. Li, and X. Yuan, "Moving object detection with deep CNNs," IEEE Access, vol. 8, pp. 29729–29741, 2020.
+ [14] Y. Chen, J. Wang, B. Zhu, M. Tang, and H. Lu, "Pixelwise deep sequence learning for moving object detection," IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 9, pp. 2567–2579, 2019.
+ [15] H. Song, W. Wang, S. Zhao, J. Shen, and K.-M. Lam, "Pyramid dilated deeper ConvLSTM for video salient object detection," in Computer Vision – ECCV 2018. Springer International Publishing, 2018, pp. 744–760.
+ [16] R. LaLonde, D. Zhang, and M. Shah, "ClusterNet: Detecting small objects in large scenes by exploiting spatio-temporal information," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 4003–4012.
+ [17] P. W. Patil and S. Murala, "MSFgNet: A novel compact end-to-end deep network for moving object detection," IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 11, pp. 4066–4077, 2019.
+ [18] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 1, 2001, pp. I–I.
+ [19] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1, 2005, pp. 886–893.
+ [20] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, "Object detection with discriminatively trained part-based models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1627–1645, 2010.
+ [21] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580–587.
+ [22] R. Girshick, "Fast R-CNN," in 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440–1448.
1298
+ Computer Vision (ICCV), 2015, pp. 1440–1448.
1299
+
1300
+ InferenceofDifferentomov ontheAccuracy
1301
+ 0.88
1302
+ 0.87
1303
+ 0.86
1304
+ 0.85
1305
+ SMOD-BMI
1306
+ AP
1307
+ 0.84
1308
+ MI YOLOV4
1309
+ CL YOLOV4
1310
+ 0.83
1311
+ 0.82
1312
+ 0.81
1313
+ 1
1314
+ 2
1315
+ 3
1316
+ 4
1317
+ 5
1318
+ 6
1319
+ 7
1320
+ 8
1321
+ OmovIEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. XX, NO. XX, JANUARY 2023
1322
+ 13
+ [23] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, 2017.
+ [24] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779–788.
+ [25] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "SSD: Single shot multibox detector," in 2016 European Conference on Computer Vision (ECCV), 2016.
+ [26] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 6517–6525.
+ [27] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv e-prints, 2018.
+ [28] A. Bochkovskiy, C.-Y. Wang, and H. Liao, "YOLOv4: Optimal speed and accuracy of object detection," 2020.
+ [29] L. W. Sommer, M. Teutsch, T. Schuchert, and J. Beyerer, "A survey on moving object detection for wide area motion imagery," in 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), 2016, pp. 1–9.
+ [30] I. Saleemi and M. Shah, "Multiframe many-many point correspondence for vehicle tracking in high density wide area aerial videos," International Journal of Computer Vision, vol. 104, no. 2, pp. 198–219, 2013.
+ [31] J. Ju and J. Xing, "Moving object detection based on smoothing three frame difference method fused with RPCA," Multimedia Tools and Applications, vol. 78, pp. 29937–29951, 2019.
+ [32] V. Joshi and S. Jain, "Tampering detection and localization in digital video using temporal difference between adjacent frames of actual and reconstructed video clip," International Journal of Information Technology, vol. 12, pp. 273–282, 2020.
+ [33] Y. Benezeth, P. Jodoin, B. Emile, H. Laurent, and C. Rosenberger, "Review and evaluation of commonly-implemented background subtraction algorithms," in 2008 19th International Conference on Pattern Recognition, 2008, pp. 1–4.
+ [34] R. Meghana, Y. Chitkara, A. S., and Mohana, "Background-modelling techniques for foreground detection and tracking using Gaussian mixture model," in 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC), 2019, pp. 1129–1134.
+ [35] O. Barnich and M. Van Droogenbroeck, "ViBe: A universal background subtraction algorithm for video sequences," IEEE Transactions on Image Processing, vol. 20, no. 6, pp. 1709–1724, 2011.
+ [36] R. He, B.-G. Hu, W.-S. Zheng, and X.-W. Kong, "Robust principal component analysis based on maximum correntropy criterion," IEEE Transactions on Image Processing, vol. 20, no. 6, pp. 1485–1494, 2011.
+ [37] P. Rodríguez and B. Wohlberg, "Fast principal component pursuit via alternating minimization," in 2013 IEEE International Conference on Image Processing, 2013, pp. 69–73.
+ [38] Y. Li, L. Jiao, X. Tang, X. Zhang, W. Zhang, and L. Gao, "Weak moving object detection in optical remote sensing video with motion-drive fusion network," in IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, 2019, pp. 5476–5479.
+ [39] M. Siam, H. Mahgoub, M. Zahran, S. Yogamani, M. Jagersand, and A. El-Sallab, "MODNet: Motion and appearance based moving object detection network for autonomous driving," in 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 2018, pp. 2859–2864.
+ [40] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, "Feature pyramid networks for object detection," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 936–944.
+ [41] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, "Path aggregation network for instance segmentation," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 8759–8768.
+ [42] X. Zhang, G. Wang, P. Zhu, T. Zhang, C. Li, and L. Jiao, "GRS-Det: An anchor-free rotation ship detector based on Gaussian-mask in remote sensing images," IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 4, pp. 3518–3531, 2021.
+ [43] A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, "Simple online and realtime tracking," in 2016 IEEE International Conference on Image Processing (ICIP), 2016, pp. 3464–3468.
+ [44] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510–4520.
3NAzT4oBgHgl3EQf9P6r/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
3dE3T4oBgHgl3EQfPwmd/content/2301.04406v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6aff2014ba79dc0b70918581fec522b22e7ca4e91f038e78585a0b8fc1f53fca
3
+ size 214289
49AyT4oBgHgl3EQfcPf_/content/2301.00281v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2a3f3a695623bfd21465b0868aff80bde08fe8b8937e69d8057dbb8575209dfe
3
+ size 100126
49AyT4oBgHgl3EQfcPf_/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8d1116362133526f8610d31d7d22d23b0b79b0d3d0d17564376cced8d14d22ef
3
+ size 589869
49AyT4oBgHgl3EQfcPf_/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9df9039e351b546ab2dd3a7c26550c9f71d3f40ccff03fa9c4f9af3ec811e1b6
3
+ size 27952
4NFAT4oBgHgl3EQfExwU/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ff532ce1581b58e65cd59719fea6a3f0106e5fc971ccc3d90b65f48c2e65b017
3
+ size 14680109
4NFAT4oBgHgl3EQfExwU/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b76de65a81791e8e1e74648c8c58aab4af356a9523a7010d680f7b65a771b57d
3
+ size 507791
4dE2T4oBgHgl3EQfjwe5/content/tmp_files/2301.03972v1.pdf.txt ADDED
@@ -0,0 +1,1160 @@
1
+ Maintaining Triconnected Components under
2
+ Node Expansion
3
+ Simon D. Fink � �
4
+ Faculty of Informatics and Mathematics, University of Passau, Germany
5
+ Ignaz Rutter � �
6
+ Faculty of Informatics and Mathematics, University of Passau, Germany
7
+ Abstract
8
+ SPQR-trees are a central component of graph drawing and are also important in many further
9
+ areas of computer science. From their inception onwards, they have always had a strong relation
10
+ to dynamic algorithms maintaining information, e.g., on planarity and triconnectivity, under edge
11
+ insertion and, later on, also deletion. In this paper, we focus on a special kind of dynamic update,
12
+ the expansion of vertices into arbitrary biconnected graphs, while maintaining the SPQR-tree and
13
+ further information. This will also allow us to efficiently merge two SPQR-trees by identifying the
14
+ edges incident to two vertices with each other. We do this working along an axiomatic definition
15
+ lifting the SPQR-tree to a stand-alone data structure that can be modified independently from the
16
+ graph it might have been derived from. Making changes to this structure, we can now observe how
17
+ the graph represented by the SPQR-tree changes, instead of having to reason which updates to the
18
+ SPQR-tree are necessary after a change to the represented graph.
19
+ Using efficient expansions and merges allows us to improve the runtime of the Synchronized
20
+ Planarity algorithm by Bläsius et al. [8] from O(m^2) to O(m · ∆), where ∆ is the maximum
21
+ pipe degree. This also reduces the time for solving several constrained planarity problems, e.g. for
22
+ Clustered Planarity from O((n + d)^2) to O(n + d · ∆), where d is the total number of crossings
23
+ between cluster borders and edges and ∆ is the maximum number of edge crossings on a single
24
+ cluster border.
25
+ 2012 ACM Subject Classification Mathematics of computing → Graph algorithms
26
+ Keywords and phrases SPQR-Tree, Dynamic Algorithm, Cluster Planarity
27
+ Funding Funded by DFG-grant RU-1903/3-1.
28
+ arXiv:2301.03972v1 [cs.DS] 10 Jan 2023
29
+
30
+ S. D. Fink and I. Rutter
31
+ 1
32
+ 1 Introduction
34
+ The SPQR-tree is a data structure that represents the decomposition of a graph at its
35
+ separation pairs, that is the pairs of vertices whose removal disconnects the graph. The
36
+ components obtained by this decomposition are called skeletons. SPQR-trees form a central
37
+ component of many graph visualization techniques and are used for, e.g., planarity testing
38
+ and variations thereof [13, 19, 29, 31, 39] and for computing embeddings and layouts [3, 7, 11,
39
+ 20, 28, 42]; see [37] for a survey of graph drawing applications. Outside of graph visualization
40
+ they are used in the context of, e.g., minimum spanning trees [6, 17], triangulations [5], and
41
+ crossing optimization [28, 42]. They also have multiple applications outside of graph theory
42
+ and even computer science, e.g. for creating integrated circuits [14, 44], business processes
43
+ modelling [40], electrical engineering [24], theoretical physics [41] and genomics [22].
44
+ Initially, SPQR-trees were devised by Di Battista and Tamassia for incremental planarity
45
+ testing [16, 19]. As such, even in their initial form, SPQR-trees already allowed dynamic
46
+ updates in the form of edge addition. Their use was quickly expanded to other on-line
47
+ problems [18, 17]. In addition to the applications mentioned above, this also sparked a series
48
+ of further papers improving the runtime of the incremental data structure [38, 39, 43] and
49
+ also extending it to be fully-dynamic, i.e., allowing insertion and deletion of vertices and
50
+ edges, in O(√n) time [21, 27], where n is the number of vertices in the graph. Recently,
51
+ Holm and Rotenberg described a fully-dynamic algorithm for maintaining planarity and
52
+ triconnectivity information in O(log^3 n) time per operation [31, 32] (see also there for a short
53
+ history on dynamic SPQR-tree algorithms).
54
+ In this paper, we consider an incremental setting where we allow a single operation that
55
+ expands a vertex v into an arbitrary biconnected graph Gν. Using the approach of Holm
56
+ and Rotenberg [31], this takes O((deg(v) + |Gν|) · log^3 n) time by first removing v and its
57
+ incident edges and then incrementally inserting Gν. We improve this to O(deg(v) + |Gν|)
58
+ using an algorithm that is much simpler and thus also more likely to improve performance in
59
+ practice. In addition, our approach also allows to efficiently merge two SPQR-trees as follows.
60
+ Given two biconnected graphs G1, G2 containing vertices v1, v2, respectively, together with
61
+ a bijection between their incident edges, we construct a new graph G by replacing v1 with
62
+ G2 − v2 in G1, identifying edges using the given bijection. Given the SPQR-trees of G1 and
63
+ G2, we show that the SPQR-tree of G can be found in O(deg(v1)) time. More specifically, we
64
+ present a data structure that supports the following operations: InsertGraphSPQR expands
65
+ a single vertex in time linear in the size of the expanded subgraph, MergeSPQR merges two
66
+ SPQR-trees in time linear in the degree of the replaced vertices, IsPlanar indicates whether
67
+ the currently represented graph is planar in constant time, and Rotation yields one of
68
+ the two possible planar rotations of a vertex in a triconnected skeleton in constant time.
69
+ Furthermore, our data structure can be adapted to yield consistent planar embeddings for
70
+ all triconnected skeletons and to test for the existence of three distinct paths between two
71
+ arbitrary vertices with an additional factor of α(n) for all operations, where α is the inverse
72
+ Ackermann function.
73
+ The main idea of our approach is that the subtree of the SPQR-tree affected by expanding
74
+ a vertex v has size linear in the degree of v, but may contain arbitrarily large skeletons. In a
75
+ “non-normalized” version of an SPQR-tree, the affected cycle (‘S’) skeletons can easily be
76
+ split to have a constant size, while we develop a custom splitting operation to limit the size
77
+ of triconnected ‘R’ skeletons. This limits the size of the affected structure to be linear in the
78
+ degree of v and allows us to perform the expansion efficiently.
79
+ In addition to the description of this data structure, the technical contribution of this
80
+
83
+ Problem                                    | before [8]           | using [8]    | with this paper
+ -------------------------------------------|----------------------|--------------|----------------
+ Atomic Embeddability /                     | O(m^8) [26]          | O(m^2)       | O(m · ∆)
+ Synchronized Planarity                     |                      |              |
+ Clustered Planarity                        | O((n + d)^8) [26]    | O((n + d)^2) | O(n + d · ∆)
+ Connected SEFE                             | O(n^16) [26],        | O(n^2)       | O(n · ∆)
+                                            | bicon: O(n^2) [10]   |              |
+ Partially PQ-Constrained Planarity         | bicon: O(m) [10]     | O(m^2)       | O(m · ∆)
+ Row-Column Independent NodeTrix Planarity  | bicon: O(n^2) [35]   | O(n^2)       | O(n · ∆)
+ Strip Planarity                            | O(n^8) [4, 26],      | O(n^2)       | O(n · ∆)
+                                            | fixed emb: poly [4]  |              |
121
+ Table 1 The best known running times for various constrained planarity problems before Syn-
122
+ chronized Planarity [8] was published; using it as described in [8]; and using it together with
123
+ the speed-up from this paper. Running times prefixed with “bicon” only apply for certain problem
124
+ instances which expose some form of biconnectivity. The variables n and m refer to the number
125
+ of vertices and edges of the problem instance, respectively. The variable d refers to the number of
126
+ edge-cluster boundary crossings in Clustered Planarity instances, while ∆ refers to the maximum
127
+ pipe degree in the corresponding Synchronized Planarity instances. This is bounded by the
128
+ maximum number of edges crossing a single cluster border or the maximum vertex degree in the
129
+ input instance, depending on the problem.
130
+ paper is twofold: First, we develop an axiomatic definition of the decomposition at separation
131
+ pairs, putting the SPQR-tree as “mechanical” data structure into focus instead of relying on
132
+ and working along a given graph structure. As a result, we can deduce the represented graph
133
+ from the data structure instead of computing the data structure from the graph. This allows
134
+ us to make more or less arbitrary changes to the data structure (respecting its consistency
135
+ criteria) and observe how the graph changes, instead of having to reason which changes to
136
+ the graph require which updates to the data structure.
137
+ Second, we explain how our data structure can be used to improve the runtime of
138
+ the algorithm by Bläsius et al. [8] for solving Synchronized Planarity from O(m^2) to
139
+ O(m · ∆), where ∆ is the maximum pipe degree (i.e. the maximum degree of a vertex with
140
+ synchronization constraints that enforce its rotation to be the same as that of another vertex).
141
+ Synchronized Planarity can be used to model and solve a vast class of different kinds of
142
+ constrained planarity, see Table 1 for an overview of problems benefiting from this speedup.
143
+ Among them is the notorious Clustered Planarity, whose complexity was open for 30
144
+ years before Fulek and Tóth gave an algorithm with runtime O((n + d)^8) in 2019 [26], where
145
+ d is the total number of crossings between cluster borders and edges. Shortly thereafter,
146
+ Bläsius et al. [8] gave a solution in O((n + d)^2) time. We improve this to O(n + d · ∆), where
147
+ ∆ is the maximum number of edge crossings on a single cluster border.
148
+ This work is structured as follows. Section 2 contains an overview of the definitions
149
+ used in this work. In Section 3, we describe the skeleton decomposition and show how it
150
+ relates to the SPQR-tree. Section 4 extends this data structure by the capability of splitting
+ triconnected components. In Section 5, we exploit this feature to ensure the affected part of
155
+ the SPQR-tree is small when we replace a vertex with a new graph. Section 6 contains more
156
+ details on the background of Synchronized and Clustered Planarity and shows how
157
+ our results can be used to reduce the time required for solving them.
158
+ 2 Preliminaries
160
+ In the context of this work, G = (V, E) is a (usually biconnected and loop-free) multi-graph
161
+ with n vertices V and m (possibly parallel) edges E. For a vertex v, we denote its open
162
+ neighborhood (excluding v itself) by N(v). For a bijection or matching ϕ we call ϕ(x) the
163
+ partner of an element x. We use A ·∪ B to denote the union of two disjoint sets A, B.
164
+ A separating k-set is a set of k vertices whose removal increases the number of connected
165
+ components. Separating 1-sets are called cutvertices, while separating 2-sets are called
166
+ separation pairs. A connected graph is biconnected if it does not have a cutvertex. A
167
+ biconnected graph is triconnected if it does not have a separation pair. Maximal biconnected
168
+ subgraphs are called blocks. Each separation pair divides the graph into bridges, the maximal
169
+ subgraphs which cannot be disconnected by removing or splitting the vertices of the separation
170
+ pair. A bond is a graph that consists solely of two pole vertices connected by multiple parallel
171
+ edges, a polygon is a simple cycle, while a rigid is any simple triconnected graph. A wheel is
172
+ a cycle with an additional central vertex connected to all other vertices.
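The connectivity notions above can be read off directly from their definitions. The following brute-force sketch is our own illustration (function names and the vertex-to-neighbor-set graph encoding are assumptions, and real implementations use linear-time algorithms instead); it tests a simple graph for triconnectivity by trying every potential cutvertex and separation pair.

```python
def _connected(adj, removed=()):
    # DFS reachability on adj (vertex -> set of neighbors), skipping `removed`.
    verts = [v for v in adj if v not in removed]
    if not verts:
        return True
    seen, stack = {verts[0]}, [verts[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(verts)

def is_triconnected(adj):
    # A biconnected graph is triconnected if it has no separation pair.
    # Brute force over all vertex pairs; for illustration only.
    vs = list(adj)
    if len(vs) < 4 or not _connected(adj):
        return False  # follow the common convention requiring at least 4 vertices
    if any(not _connected(adj, {v}) for v in vs):
        return False  # a cutvertex: the graph is not even biconnected
    return all(_connected(adj, {u, w})
               for i, u in enumerate(vs) for w in vs[i + 1:])
```

For example, a wheel (a cycle plus a central vertex joined to every cycle vertex) passes this test, while a plain 4-cycle does not, since any two opposite cycle vertices form a separation pair.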
173
+ Finally, the expansion that is central to this work is formally defined as follows. Let
174
+ Gα, Gβ be two graphs where Gα contains a vertex u and Gβ contains |N(u)| marked vertices,
175
+ together with a bijection ϕ between the neighbors of u and the marked vertices in Gβ. With
176
+ Gα[u →ϕ Gβ] we denote the graph that is obtained from the disjoint union of Gα, Gβ by
177
+ identifying each neighbor x of u with its respective marked vertex ϕ(x) in Gβ and removing
178
+ u, i.e. the graph Gα where the vertex u was expanded into Gβ.
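To make the expansion Gα[u →ϕ Gβ] concrete, here is a small sketch on simple graphs stored as vertex-to-neighbor-set dicts. It is our own illustrative reading of the definition (all names are assumptions, and parallel edges of multi-graphs are not modeled): build the disjoint union, identify each neighbor x of u with ϕ(x), and remove u.

```python
def expand_vertex(g_alpha, u, g_beta, phi):
    """Return G_alpha[u ->phi G_beta]: replace vertex u of g_alpha by g_beta,
    identifying each neighbor x of u with the marked vertex phi[x] of g_beta."""
    # Disjoint union: tag vertices by side so names cannot collide.
    g = {("a", v): {("a", w) for w in ws} for v, ws in g_alpha.items()}
    g.update({("b", v): {("b", w) for w in ws} for v, ws in g_beta.items()})
    # Identify each neighbor x of u with its marked partner phi(x) ...
    for x in g_alpha[u]:
        xa, xb = ("a", x), ("b", phi[x])
        g[xa] |= g.pop(xb)          # merge the adjacency of the two copies
        for v in g:                 # redirect remaining references to xb
            if xb in g[v]:
                g[v].discard(xb)
                g[v].add(xa)
    # ... and finally remove u itself together with its incident edges.
    ua = ("a", u)
    for v in g.pop(ua):
        g[v].discard(ua)
    return g
```

Expanding the degree-2 vertex u of a triangle into a path between two marked vertices, for instance, again yields a triangle on the merged vertex set.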
179
+ 3 Skeleton Decompositions
181
+ A skeleton structure S = (G, origV, origE, twinE) that represents a graph GS = (V, E)
182
+ consists of a set G of disjoint skeleton graphs together with three total, surjective mappings
183
+ twinE, origE, and origV that satisfy the following conditions:
184
+ Each skeleton Gµ = (Vµ, E^real_µ ·∪ E^virt_µ) in G is a multi-graph where each edge is either in
+ E^real_µ and thus called real, or in E^virt_µ and thus called virtual.
+ Bijection twinE : E^virt → E^virt matches all virtual edges E^virt = ⋃_µ E^virt_µ such that
+ twinE(e) ̸= e and twinE^2 = id.
+ Surjection origV : ⋃_µ Vµ → V maps all skeleton vertices to graph vertices.
+ Bijection origE : ⋃_µ E^real_µ → E maps all real edges to the graph edge set E.
205
+ Note that each vertex and each edge of each skeleton is in the domain of exactly one of the
206
+ three mappings. As the mappings are surjective, V and E are exactly the images of origV
207
+ and origE. For each vertex v ∈ GS, the skeletons that contain an allocation vertex v′ with
208
+ origV(v′) = v are called the allocation skeletons of v. Furthermore, let TS be the graph
209
+ where each node µ corresponds to a skeleton Gµ of G. Two nodes of TS are adjacent if their
210
+ skeletons contain a pair of virtual edges matched with each other.
211
+ We call a skeleton structure a skeleton decomposition if it satisfies the following conditions:
212
+ 1 (bicon) Each skeleton is biconnected.
213
+ 2 (tree) Graph TS is simple, loop-free, connected and acyclic, i.e., a tree.
214
+ 3 (orig-inj) For each skeleton Gµ, the restriction origV |Vµ is injective.
215
+
+ 4 (orig-real) For each real edge uv, the endpoints of origE(uv) are origV(u) and origV(v).
+ 5 (orig-virt) Let uv and u′v′ be two virtual edges with uv = twinE(u′v′). For their respective
+ skeletons Gµ and Gµ′ (where µ and µ′ are adjacent in TS), it is origV(Vµ) ∩ origV(Vµ′) =
+ origV({u, v}) = origV({u′, v′}).
+ 6 (subgraph) The allocation skeletons of any vertex of GS form a connected subgraph of TS.
+ Figure 1 Different views on the skeleton decomposition S. (a) The graph GS with a vertex u
+ marked in blue. (b) The skeletons of G. Virtual edges are drawn in gray with their matching twinE
+ being shown in orange. The allocation vertices of u are marked in blue. (c) The tree TS. The
+ allocation skeletons of u are marked in blue. (d) The embedding tree of vertex u as described in
+ Section 6.2. P-nodes are shown as white disks, Q-nodes are shown as large rectangles. The leaves of
+ the embedding tree correspond to the edges incident to u.
235
+ Figure 1 shows an example of S, GS, and TS. We call a skeleton decomposition with only
236
+ one skeleton Gµ trivial. Note that in this case, Gµ is isomorphic to GS, and origE and origV
237
+ are actually bijections between the edges and vertices of both graphs.
238
+ To model the decomposition into triconnected components, we define the operations
239
+ SplitSeparationPair and its converse, JoinSeparationPair, on a skeleton decomposition
240
+ S = (G, origV, origE, twinE). For SplitSeparationPair, let u, v be a separation pair of
241
+ skeleton Gµ and let (A, B) be a non-trivial bipartition of the bridges between u and v.1
242
+ Applying SplitSeparationPair(S, (u, v), (A, B)) yields a skeleton decomposition S′ = (G′,
243
+ origV′, origE′, twinE′) as follows. In G′, we replace Gµ by two skeletons Gα, Gβ, where Gα is
244
+ obtained from Gµ[A] by adding a new virtual edge eα between u and v. The same respectively
245
+ applies to Gβ with Gµ[B] and eβ. We set twinE′(eα) = eβ and twinE′(eβ) = eα. Note that
246
+ origV maps the endpoints of eα and eβ to the same vertices. All other skeletons and the
247
+ mappings defined on them remain unchanged.
248
+ For JoinSeparationPair, consider virtual edges eα, eβ with twinE(eα) = eβ and let
249
+ Gβ ̸= Gα be their respective skeletons.
250
+ Applying JoinSeparationPair(S, eα) yields a
251
+ skeleton decomposition S′ = (G′, origV′, origE′, twinE′) as follows. In G′, we merge Gα with
252
+ Gβ to form a new skeleton Gµ by identifying the endpoints of eα and eβ that map to the
253
+ same vertex of GS. Additionally, we remove eα and eβ. All other skeletons and the mappings
254
+ defined on them remain unchanged.
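The bookkeeping of JoinSeparationPair can be sketched as follows. Skeletons are plain edge lists of (u, v, kind, edge-id) tuples, and, as condition 3 (orig-inj) permits, skeleton vertices are simply named after the graph vertices they map to, so identifying endpoints that map to the same vertex of GS reduces to sharing a name. This is our own simplified illustration, not the paper's implementation.

```python
def join_separation_pair(skeletons, twinE, e_alpha):
    """Merge the two skeletons joined by the matched virtual edges
    e_alpha/e_beta into one skeleton, dropping both virtual edges.
    skeletons: dict skeleton-id -> list of (u, v, kind, eid) edges."""
    e_beta = twinE[e_alpha]
    alpha = next(s for s, es in skeletons.items()
                 if any(e[3] == e_alpha for e in es))
    beta = next(s for s, es in skeletons.items()
                if any(e[3] == e_beta for e in es))
    # Keep every edge of both skeletons except the two matched virtual edges;
    # shared endpoint names realize the identification of the pole vertices.
    merged = [e for s in (alpha, beta) for e in skeletons[s]
              if e[3] not in (e_alpha, e_beta)]
    del skeletons[beta]
    skeletons[alpha] = merged
    del twinE[e_alpha], twinE[e_beta]
    return alpha
```

Joining, e.g., a polygon and a bond along their matched virtual edges leaves exactly the real edges of both, which is why exhaustive joining recovers the represented graph (Lemma 2).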
255
+ The main feature of both operations is that they leave the graph represented by the
256
+ skeleton decomposition unaffected while splitting a node or contracting an edge in TS,
257
+ which can be verified by checking the individual conditions.
258
+ ▶ Lemma 1. Applying SplitSeparationPair or JoinSeparationPair on a skeleton de-
259
+ composition S = (G, origV, origE, twinE) yields a skeleton decomposition S′ = (G′, origV′,
+ origE′, twinE′) with an unchanged represented graph GS′ = GS.
+ 1 Note that a bridge might consist of a single edge between u and v and that each bridge includes the
+ vertices u and v.
266
+ Proof. We first check that all conditions still hold in the skeleton decomposition S′ returned
267
+ by SplitSeparationPair. As (A, B) is a non-trivial bipartition, each set contains at least one
268
+ bridge. Together with eα (and eβ), this bridge ensures that Gα (and Gβ) remain biconnected,
269
+ satisfying condition 1 (bicon). The operation splits a node µ of TS into two adjacent nodes
270
+ α, β, whose neighbors are defined exactly by the virtual edges in A, B, respectively. Thus,
271
+ condition 2 (tree) remains satisfied. The mappings origV′, origE′ and twinE′ obviously still
272
+ satisfy conditions 3 (orig-inj) and 4 (orig-real). We duplicated exactly two nodes, u and v of
273
+ adjacent skeletons Gα and Gβ. Because 3 (orig-inj) holds for Gµ, Gα and Gβ share no other
274
+ vertices that map to the same vertex of GS′. Thus, condition 5 (orig-virt) remains satisfied.
275
+ Condition 6 (subgraph) could only be violated if the subgraph of TS′ formed by the
276
+ allocation skeletons of some vertex z ∈ GS′ was no longer connected. This could only happen
277
+ if only one of Gα and Gβ were an allocation skeleton of z, while the other has a further
278
+ neighbor that is also an allocation skeleton of z. Assume without loss of generality that Gα
279
+ and the neighbor Gν of Gβ, but not Gβ itself, were allocation skeletons of z. Because Gν and
280
+ Gβ are adjacent in TS′ there are virtual edges xy = twinE′(x′y′) with xy ∈ Gβ and x′y′ ∈ Gν.
281
+ The same virtual edges are also present in the input instance, only with the difference that
282
+ xy ∈ Gµ and µ (instead of β) and ν are adjacent in TS. As the input instance satisfies
283
+ condition 5 (orig-virt), it is z ∈ origV(Vν) ∩ origV(Vµ) = origV({x, y}) = origV({x′, y′}). As
284
+ origV({x, y}) = origV′({x, y}), this is a contradiction to Gβ not being an allocation skeleton
285
+ of z.
286
+ Finally, the mapping origE remains unchanged and the only change to origV is to include
287
+ two new vertices mapping to already existing vertices. Due to condition 4 (orig-real) holding
288
+ for both the input and the output instance, this cannot affect the represented graph GS′.
289
+ Now consider the skeleton decomposition S′ returned by JoinSeparationPair. Identify-
290
+ ing distinct vertices of distinct connected components does not affect their biconnectivity,
291
+ thus condition 1 (bicon) remains satisfied. The operation effectively contracts and removes
292
+ an edge in TS, which does not affect TS′ being a tree satisfying condition 2 (tree). Note
293
+ that condition 2 (tree) holding for the input instance also ensures that Gα and Gβ are two
294
+ distinct skeletons. As the input instance also satisfies condition 5 (orig-virt), there are exactly
295
+ two vertices in each of the two adjacent skeletons Gα and Gβ, where origV maps to the
296
+ same vertex of GS. These two vertices must be part of the twinE pair making the two
297
+ skeletons adjacent, thus they are exactly the two pairs of vertices we identify with each other.
298
+ Thus, origV |Vµ is still injective, satisfying condition 3 (orig-inj). As we modify no real edges
299
+ and no other virtual edges, the mappings origV′ and origE′ obviously still satisfy condition
300
+ 4 (orig-real). As the allocation skeletons of each graph vertex form a connected subgraph,
301
+ joining two skeletons cannot change the intersection with any of their neighbors, leaving
302
+ 5 (orig-virt) satisfied. Finally, contracting a tree edge cannot lead to any of the subgraphs of
303
+ 6 (subgraph) becoming disconnected, thus the condition also remains satisfied. Again, no
304
+ changes were made to origE, while condition 5 (orig-virt) makes sure that origV mapped the
305
+ two pairs of merged vertices to the same vertex of GS. Thus, the represented graph GS′
306
+ remains unchanged.
307
+
308
+ This gives us a second way of finding the represented graph by exhaustively joining all
309
+ skeletons until there is only one left, obtaining the unique trivial skeleton decomposition:
310
+ ▶ Lemma 2. Exhaustively applying JoinSeparationPair to a skeleton decomposition S =
311
+ (G, origV, origE, twinE) yields a trivial skeleton decomposition S′ = (G′, origV′, origE′, twinE′)
312
+ where origE′ and origV′ define an isomorphism between G′µ and GS′.
+
317
+ Proof. As all virtual edges are matched, and the matched virtual edge always belongs to
318
+ a different skeleton (condition 2 (tree) ensures that TS is loop-free), we can always apply
319
+ JoinSeparationPair on a virtual edge until there are none left. As TS is connected, this
320
+ means that we always obtain a tree with a single node, that is, an instance with only a
321
+ single skeleton. As a single application of JoinSeparationPair preserves the represented
322
+ graph, any chain of multiple applications also does. Note that origE′ is a bijection and the
323
+ surjective origV′ is also injective on the single remaining skeleton due to condition 3 (orig-inj),
324
+ thus it also globally is a bijection. Together with condition 4 (orig-real), this ensures that any
325
+ two vertices u and v of G′µ are adjacent if and only if origV′(u) and origV′(v) are adjacent
327
+ in GS′. Thus origV′ is an edge-preserving bijection, that is an isomorphism.
328
+
329
+ A key point about the skeleton decomposition and especially the operation SplitSepa-
330
+ rationPair now is that they model the decomposition of a graph at separation pairs. This
331
+ decomposition was formalized as SPQR-tree by Di Battista and Tamassia [16] and is unique
332
+ for a given graph [33, 36]; see also [28, 30]. Angelini et al. [1] describe a decomposition
333
+ tree that is conceptually equivalent to our skeleton decomposition. They also present an
334
+ alternative definition for the SPQR-tree as a decomposition tree satisfying further properties.
335
+ We adopt this definition for our skeleton decompositions as follows, not requiring planarity
336
+ of triconnected components and allowing virtual edges and real edges to appear within one
337
+ skeleton (i.e., having leaf Q-nodes merged into their parents).
338
+ ▶ Definition 3. A skeleton decomposition S = (G, origV, origE, twinE) where any skeleton
339
+ in G is either a polygon, a bond, or triconnected (“rigid”), and two skeletons adjacent in TS
340
+ are never both polygons or both bonds, is the unique SPQR-tree of GS.
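Definition 3's case distinction can be sketched as a classifier over a skeleton's edge list. This is only an illustrative sketch with names of our choosing: it distinguishes bonds and polygons by degree counts and optimistically labels everything else 'R', without verifying triconnectivity.

```python
from collections import Counter

def classify_skeleton(edges):
    """Classify a skeleton as 'P' (bond), 'S' (polygon), or 'R' (assumed
    triconnected otherwise). edges is a list of (u, v) pairs, possibly
    parallel; triconnectivity of 'R' skeletons is not checked here."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    if len(deg) == 2 and len(edges) >= 3:
        return "P"  # bond: two poles joined by at least three parallel edges
    if all(d == 2 for d in deg.values()) and len(edges) == len(deg):
        return "S"  # polygon: a simple cycle
    return "R"
```

Together with a check that no two adjacent tree nodes get the same 'S' or 'P' label, this mirrors the uniqueness conditions of the definition.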
341
+ The main difference between the well-known ideas behind decomposition trees and our
342
+ skeleton decomposition is that the latter allows axiomatic access to the decomposition at
343
+ separation pairs. For the skeleton decomposition, we employ a purely functional, “mechanical”
344
+ data structure instead of relying on and working along a given graph structure. In our
345
+ case, the represented graph is deduced from the data structure (i.e. SPQR-tree) instead of
346
+ computing the data structure from the graph.
347
+ 4 Extended Skeleton Decompositions
349
+ Note that most skeletons, especially polygons and bonds, can easily be decomposed into
350
+ smaller parts. The only exception to this are triconnected skeletons which cannot be split
351
+ further using the operations we defined up to now. This is a problem when modifying
352
+ a vertex that occurs in triconnected skeletons that may be much bigger than the direct
353
+ neighborhood of the vertex. To fix this, we define a further set of operations which allow
354
+ us to isolate vertices out of arbitrary triconnected components by replacing them with a
355
+ (“virtual”) placeholder vertex. This placeholder then points to a smaller component that
356
+ contains the actual vertex, see Figure 2. Modification of the edges incident to the placeholder
357
+ is disallowed, which is why we call them “occupied”.
358
+ Formally, the structures needed to keep track of the components split in this way
359
+ in an extended skeleton decomposition S = (G, origV, origE, twinE, twinV) are defined as
360
+ follows. Skeletons now have the form Gµ = (Vµ ·∪ V virt
361
+ µ
362
+ , Ereal
363
+ µ
364
+ ·∪ Evirt
365
+ µ
366
+ ·∪ Eocc
367
+ µ ). Bijection
368
+ twinV : V virt → V virt matches all virtual vertices V virt = �
369
+ µ V virt
370
+ µ
371
+ , such that twinV(v) ̸= v,
372
+ twinV2 = id. The edges incident to virtual vertices are contained in Eocc
373
+ µ
374
+ and thus considered
375
+ occupied; see Figure 2b. Similar to the virtual edges matched by twinE, any two virtual
376
+ vertices matched by twinV induce an edge between their skeletons in TS. Condition 2 (tree)
392
+ Figure 2 (a) A triconnected skeleton Gµ with a highlighted vertex v incident to two gray virtual
393
+ edges. (b) The result of applying IsolateVertex to isolate v out of the skeleton. The red occupied
394
+ edges in the old skeleton Gα form a star with center vα, while the red occupied edges in Gβ connect
395
+ all neighbors of v to form a star with center vβ ̸= v. The centers vα and vβ are virtual and matched
396
+ with each other. Neighbor u of v was split into vertices uα and uβ.
397
+ also equally applies to those edges induced by twinV, which in particular ensures that there
398
+ are no parallel twinE and twinV tree edges in TS. Similarly, the connected subgraphs of
399
+ condition 6 (subgraph) can also contain tree edges induced by twinV. All other conditions
400
+ remain unchanged, but we add two further conditions to ensure that twinV is consistent:
401
+ 7 (stars) For each vα, vβ with twinV(vα) = vβ, it is deg(vα) = deg(vβ). All edges incident
402
+ to vα and vβ are occupied and have distinct endpoints (except for vα and vβ). Conversely,
403
+ each occupied edge is adjacent to exactly one virtual vertex.
404
+ 8 (orig-stars) Let vα and vβ again be two virtual vertices matched with each other by twinV.
405
+ For their respective skeletons Gα and Gβ (where α and β are adjacent in TS), it is
406
+ origV(Vα) ∩ origV(Vβ) = origV(N(vα)) = origV(N(vβ)).
407
+ Note that both conditions together yield a bijection γvαvβ between the neighbors of
408
+ vα and vβ, as origV is injective when restricted to a single skeleton (condition 3 (orig-
409
+ inj)) and deg(vα) = deg(vβ). Operations SplitSeparationPair and JoinSeparationPair
410
+ can also be applied to an extended skeleton decomposition, yielding an extended skeleton
411
+ decomposition without modifying twinV. To ensure that conditions 7 (stars) and 8 (orig-stars)
412
+ remain unaffected by both operations, SplitSeparationPair cannot be applied if a vertex
413
+ of the separation pair is virtual.
414
+ The operations IsolateVertex and Integrate now allow us to isolate vertices out of
415
+ triconnected components and integrate them back in, respectively. For IsolateVertex, let v
416
+ be a non-virtual vertex of skeleton Gµ, such that v has no incident occupied edges. Applying
417
+ IsolateVertex(S, v) on an extended skeleton decomposition S yields an extended skeleton
418
+ decomposition S′ = (G′, origV′, origE′, twinE′, twinV′) as follows. Each neighbor u of v is
419
+ split into two non-adjacent vertices uα and uβ, where uβ is incident to all edges connecting u
420
+ with v, while uα keeps all other edges of u. We set origV′(uα) = origV′(uβ) = origV(u). This
421
+ creates an independent, star-shaped component with center v, which we move to skeleton
422
+ Gβ, while we rename skeleton Gµ to Gα. We connect all uα to a single new virtual vertex
423
+ vα ∈ V^virt_α using occupied edges, and all uβ to a single new virtual vertex vβ ∈ V^virt_β using
428
+ occupied edges; see Figure 2. Finally, we set twinV′(vα) = vβ, twinV′(vβ) = vα, and add Gβ
429
+ to G′. All other mappings and skeletons remain unchanged.
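The vertex-splitting step of IsolateVertex can be sketched on a simple adjacency dict. Here "va" and "vb" stand for the fresh virtual star centers vα and vβ matched by twinV; these names, and the restriction to simple graphs without occupied edges, are our own simplifications for illustration.

```python
def isolate_vertex(adj, v):
    """Split the skeleton `adj` (vertex -> neighbor set) at vertex v.
    Returns (g_alpha, g_beta): g_alpha is the old skeleton with v replaced
    by the virtual center "va"; g_beta contains v, the split copies of its
    neighbors, and the virtual center "vb" joined to them by occupied edges."""
    nbrs = adj[v]
    # G_alpha: every edge to v is redirected to the virtual center v_alpha.
    g_alpha = {u: {("va" if w == v else w) for w in ws}
               for u, ws in adj.items() if u != v}
    g_alpha["va"] = set(nbrs)
    # G_beta: v keeps its original edges to the split neighbor copies u_beta,
    # and v_beta is joined to every u_beta by an occupied star edge.
    g_beta = {v: set(nbrs)}
    for u in nbrs:
        g_beta[u] = {v, "vb"}
    g_beta["vb"] = set(nbrs)
    return g_alpha, g_beta
```

Both stars have the same neighbor names, which is exactly the bijection γvαvβ that Integrate later uses to glue the skeletons back together.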
430
+ For Integrate, consider two virtual vertices vα, vβ with twinV(vα) = vβ and the bijec-
431
+ tion γvαvβ between the neighbors of vα and vβ. An application of Integrate(S, (vα, vβ))
432
+ yields an extended skeleton decomposition S′ = (G′, origV′, origE′, twinE′, twinV′) as follows.
433
+ We merge both skeletons into a skeleton Gµ (also replacing both in G′) by identifying the
434
+ neighbors of vα and vβ according to γvαvβ. Furthermore, we remove vα and vβ together with
435
+ their incident occupied edges. All other mappings and skeletons remain unchanged.
436
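To make the two operations concrete, the following minimal sketch splits a vertex out of a skeleton and merges it back. It assumes a plain adjacency-map representation; the names isolate_vertex and integrate and the hub labels 'v_alpha'/'v_beta' are ours, and all origV/origE bookkeeping as well as occupied-edge flags are omitted.

```python
# Skeletons are dicts mapping a vertex name to the set of its neighbors.

def isolate_vertex(skel, v):
    """Split v out of `skel`: returns (G_alpha, G_beta), where each
    neighbor u appears as (u, 'a') in G_alpha and (u, 'b') in G_beta,
    and both parts carry a matched virtual hub vertex."""
    nbrs = sorted(skel[v])
    # G_alpha: the old skeleton without v; every neighbor copy (u, 'a')
    # is attached to the virtual hub 'v_alpha' (modeling occupied edges).
    g_alpha = {}
    for u, adj in skel.items():
        if u == v:
            continue
        g_alpha[(u, 'a')] = {(w, 'a') for w in adj if w != v}
    g_alpha['v_alpha'] = set()
    for u in nbrs:
        g_alpha[(u, 'a')].add('v_alpha')
        g_alpha['v_alpha'].add((u, 'a'))
    # G_beta: the star with center v plus the virtual hub 'v_beta'.
    g_beta = {v: set(), 'v_beta': set()}
    for u in nbrs:
        g_beta[(u, 'b')] = {v, 'v_beta'}
        g_beta[v].add((u, 'b'))
        g_beta['v_beta'].add((u, 'b'))
    return g_alpha, g_beta

def integrate(g_alpha, g_beta):
    """Undo isolate_vertex: identify (u, 'a') with (u, 'b') and drop the
    two hubs together with their incident (occupied) edges."""
    merged = {}
    def canon(x):
        return x[0] if isinstance(x, tuple) else x
    for g, hub in ((g_alpha, 'v_alpha'), (g_beta, 'v_beta')):
        for x, adj in g.items():
            if x == hub:
                continue
            merged.setdefault(canon(x), set()).update(
                canon(y) for y in adj if y != hub)
    return merged
```

Isolating a vertex of K4 and integrating it back yields the original skeleton again, mirroring the converse relationship of the two operations.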
Maintaining Triconnected Components under Node Expansion

▶ Lemma 4. Applying IsolateVertex or Integrate on an extended skeleton decomposition S = (G, origV, origE, twinE, twinV) yields an extended skeleton decomposition S′ = (G′, origV′, origE′, twinE′, twinV′) with GS′ = GS.

Proof. We first check that all conditions still hold in the extended skeleton decomposition S′ returned by IsolateVertex. Condition 1 (bicon) remains satisfied, as the structure of Gα remains unchanged compared to Gµ and the skeleton Gβ is a bond. As we are again splitting a node of TS, condition 2 (tree) also remains satisfied. As the neighbors of vβ and vα map to the same vertices of GS′, conditions 3 (orig-inj), 4 (orig-real), and 5 (orig-virt) remain satisfied. Conditions 7 (stars) and 8 (orig-stars) are satisfied by construction.

Lastly, condition 6 (subgraph) could only be violated if the subgraph of TS′ formed by the allocation skeletons of some vertex z ∈ GS′ were no longer connected. This could only happen if only one of Gα and Gβ were an allocation skeleton of z, while the other has a further neighbor Gν that is also an allocation skeleton of z. Note that in any case, ν is adjacent to µ in TS and µ must be an allocation skeleton of z, thus z ∈ origV(Gν) ∩ origV(Gµ). Depending on the adjacency of ν, it is either origV(Gν) ∩ origV(Gµ) = origV′(Gν) ∩ origV′(Gα) or origV(Gν) ∩ origV(Gµ) = origV′(Gν) ∩ origV′(Gβ), as ν is not modified by the operation and both S and S′ satisfy conditions 5 (orig-virt) and 8 (orig-stars). This immediately contradicts the skeleton among {α, β} that is adjacent to ν not being an allocation skeleton of z.

Finally, the mapping origE remains unchanged and the only change to origV is to include some duplicated vertices mapping to already-existing vertices. As condition 4 (orig-real) holds for both the input and the output instance, this cannot affect the represented graph GS′.

Now consider the extended skeleton decomposition S′ returned by Integrate. The merged skeleton is biconnected, as we are effectively replacing a single vertex by a connected subgraph, satisfying condition 1 (bicon). The operation effectively contracts and removes an edge of TS, which does not affect TS′ being a tree, satisfying condition 2 (tree). Note that condition 2 (tree) holding for the input instance also ensures that vα and vβ belong to two distinct skeletons. As the input instance satisfies condition 5 (orig-virt), the vertices in each of the two adjacent skeletons where origV maps to the same vertex of GS are exactly the neighbors of the matched vα and vβ. Thus, origV restricted to the merged skeleton is still injective, satisfying condition 3 (orig-inj). As we modify no real or virtual edges, the mappings origV′, origE′ and twinE′ obviously still satisfy conditions 4 (orig-real) and 5 (orig-virt). Furthermore, contracting a tree edge cannot lead to any of the subgraphs of condition 6 (subgraph) becoming disconnected, thus this condition also remains satisfied. Conditions 7 (stars) and 8 (orig-stars) also remain unaffected, as we simply remove an entry from twinV.

Again, no changes were made to origE, while condition 8 (orig-stars) ensures that origV mapped each pair of merged vertices to the same vertex of GS. Thus, the represented graph GS′ remains unchanged. ◀
Furthermore, as Integrate is the converse of IsolateVertex and has no preconditions, any changes made by IsolateVertex can be undone at any time to obtain a (non-extended) skeleton decomposition, and thus possibly the SPQR-tree of the represented graph.

▶ Remark 5. Exhaustively applying Integrate to an extended skeleton decomposition S = (G, origV, origE, twinE, twinV) yields an extended skeleton decomposition S′ = (G′, origV′, origE′, twinE′, twinV′) where twinV′ = ∅. Thus, S′ is equivalent to a (non-extended) skeleton decomposition S′ = (G′, origV′, origE′, twinE′).
S. D. Fink and I. Rutter

Figure 3 Expanding a skeleton vertex v into a graph Gν in the SPQR-tree of Figure 4b. (a) The single allocation skeleton Gµ of u with the single allocation vertex v of u from Figure 4b. The neighbors of v are marked in orange. (b) The inserted graph Gν with orange marked vertices. Note that the graph is biconnected when all marked vertices are collapsed into a single vertex. (c) The result of applying InsertGraph(S, u, Gν, ϕ) followed by an application of Integrate on the generated virtual vertices v and v′.
5 Node Expansion in Extended Skeleton Decompositions

We now introduce our first dynamic operation, which allows us to actually change the represented graph by expanding a single vertex u into an arbitrary connected graph Gν. This is done by identifying |N(u)| marked vertices in Gν with the neighbors of u via a bijection ϕ and then removing u and its incident edges. We use the "occupied stars" from the previous section to model the identification of these vertices, allowing us to defer the actual insertion to an application of Integrate. We need to ensure that the inserted graph makes the same "guarantees" to the surrounding graph in terms of connectivity as the vertex it replaces, that is, all neighbors of u (i.e., all marked vertices in Gν) need to be pairwise connected via paths in Gν that use no other neighbor of u (i.e., no other marked vertex). Without this requirement, a single vertex could, e.g., be split into two non-adjacent halves, which could easily break a triconnected component apart. Thus, we require Gν to be biconnected when all marked vertices are collapsed into a single vertex. Note that this also ensures that the old graph can be restored by contracting the vertices of the inserted graph. For the sake of simplicity, we require vertex u of the represented graph to have a single allocation vertex v ∈ Gµ with origV−1(u) = {v}, so that we only need to change a single allocation skeleton Gµ in the skeleton decomposition. As we will make clear later on, this condition can be satisfied easily.
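This precondition can be checked directly: collapse all marked vertices into one vertex and test the result for biconnectivity. A self-contained sketch under our assumptions (adjacency-map graphs; the collapse vertex '*' is an assumed name; parallel edges arising from the collapse are ignored, since sets keep the graph simple):

```python
def collapse(graph, marked):
    """Identify all vertices in `marked` into a fresh vertex '*'."""
    merged = {'*': set()}
    for u, adj in graph.items():
        if u in marked:
            merged['*'].update(v for v in adj if v not in marked)
        else:
            merged[u] = {('*' if v in marked else v) for v in adj}
    merged['*'] -= {'*'}        # drop self-loops created by the collapse
    for v in merged['*']:
        merged[v].add('*')      # ensure symmetric adjacency
    return merged

def is_biconnected(graph):
    """True iff graph is connected, has >= 3 vertices, and has no cut
    vertex (iterative DFS computing low-points)."""
    n = len(graph)
    if n < 3:
        return False
    root = next(iter(graph))
    depth, low, parent = {root: 0}, {root: 0}, {root: None}
    stack = [(root, iter(graph[root]))]
    root_children, ok = 0, True
    while stack:
        v, it = stack[-1]
        advanced = False
        for w in it:
            if w not in depth:                    # tree edge: descend
                depth[w] = low[w] = depth[v] + 1
                parent[w] = v
                if v == root:
                    root_children += 1
                stack.append((w, iter(graph[w])))
                advanced = True
                break
            elif w != parent[v]:                  # back edge: update low
                low[v] = min(low[v], depth[w])
        if not advanced:
            stack.pop()
            p = parent[v]
            if p is not None:
                low[p] = min(low[p], low[v])
                if p != root and low[v] >= depth[p]:
                    ok = False                    # p is a cut vertex
    return ok and root_children <= 1 and len(depth) == n
```

For example, two marked vertices joined through a 2-connected interior pass the check, while a graph with a dangling interior vertex fails it.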
Formally, let u ∈ GS be a vertex that only has a single allocation vertex v ∈ Gµ (and thus only a single allocation skeleton Gµ). Let Gν be an arbitrary, new graph containing |N(u)| marked vertices, together with a bijection ϕ between the marked vertices in Gν and the neighbors of v in Gµ. We require Gν to be biconnected when all marked vertices are collapsed into a single node. Operation InsertGraph(S, u, Gν, ϕ) yields an extended skeleton decomposition S′ = (G′, origV′, origE′, twinE′, twinV′) as follows; see also Figure 3. We interpret Gν as a skeleton and add it to G′. For each marked vertex x in Gν, we set origV′(x) = origV(ϕ(x)). For all other vertices and edges in Gν, we set origV′ and origE′ to point to new vertices and edges forming a copy of Gν in GS′. We connect every marked vertex in Gν to a new virtual vertex v′ ∈ Gν using occupied edges. We also convert v to a virtual vertex, converting its incident edges to occupied edges while removing parallel edges. Finally, we set twinV′(v) = v′ and twinV′(v′) = v.
▶ Lemma 6. Applying InsertGraph(S, u, Gν, ϕ) on an extended skeleton decomposition S = (G, origV, origE, twinE, twinV) yields an extended skeleton decomposition S′ = (G′, origV′, origE′, twinE′, twinV′) with GS′ isomorphic to GS[u →ϕ Gν].

Proof. Condition 1 (bicon) remains satisfied, as the structure of Gµ remains unchanged and the resulting Gν is biconnected by precondition. Regarding TS, we are attaching a degree-1 node ν to an existing node µ, thus condition 2 (tree) also remains satisfied. As all vertices of Gν except for the vertices in N(v′) are assigned their own new, unique copy by origV′ and origV′(N(v′)) = origV(N(v)), condition 3 (orig-inj) is also satisfied for the new Gν. As we updated origE alongside origV and Gν contains no virtual edges, conditions 4 (orig-real) and 5 (orig-virt) remain satisfied. As ν is a leaf of TS with µ being its only neighbor, origV′(N(v′)) ⊂ origV(Vµ), and Gν is the only allocation skeleton for all vertices in Gν \ N(v′), condition 6 (subgraph) remains satisfied. Conditions 7 (stars) and 8 (orig-stars) are satisfied by construction. Finally, the mappings origE′ and origV′ are by construction updated to correctly reproduce the structure of Gν in GS′. ◀
On its own, this operation is not of much use though, as graph vertices only rarely have a single allocation skeleton. Furthermore, our goal is to dynamically maintain SPQR-trees, while this operation on its own will in most cases not yield an SPQR-tree. To fix this, we introduce the full procedure InsertGraphSPQR(S, u, Gν, ϕ) that can be applied to any graph vertex u and that, given an SPQR-tree S, yields the SPQR-tree of GS[u →ϕ Gν]. It consists of three preparation steps, the insertion of Gν, and two further clean-up steps:
1. We apply SplitSeparationPair to each polygon allocation skeleton of u with more than three vertices, using the neighbors of the allocation vertex of u as separation pair.
2. For each rigid allocation skeleton of u, we move the contained allocation vertex v of u to its own skeleton by applying IsolateVertex(S, v).
3. We exhaustively apply JoinSeparationPair to any pair of allocation skeletons of u that are adjacent in TS. Due to condition 6 (subgraph), this yields a single component Gµ that is the sole allocation skeleton of u with the single allocation vertex v of u. Furthermore, the size of Gµ is linear in deg(u).
4. We apply InsertGraph to insert Gν as a skeleton, followed by an application of Integrate to the virtual vertices {v, v′} introduced by the insertion, thus integrating Gν into Gµ.
5. We apply SplitSeparationPair to all separation pairs in Gµ that do not involve a virtual vertex. These pairs can be found in linear time, e.g., by temporarily duplicating all virtual vertices and their incident edges and then computing the SPQR-tree.²
6. Finally, we exhaustively apply Integrate and also apply JoinSeparationPair to any two adjacent polygons and to any two adjacent bonds to obtain the SPQR-tree of the updated graph.
The basic idea behind the correctness of this procedure is that splitting the newly inserted component according to its SPQR-tree in step 5 yields biconnected components that are each either a polygon, a bond, or "almost" triconnected. The latter (and only those) might still contain virtual vertices, and all their remaining separation pairs, which were not split in step 5, contain one of these virtual vertices. This, together with the fact that there may still be pairs of adjacent skeletons where both are polygons or both are bonds, prevents the instance from being an SPQR-tree. Both issues are resolved in step 6: the adjacent skeletons are obviously fixed by the JoinSeparationPair applications. To show that the virtual vertices are removed by the Integrate applications, making the remaining components triconnected, we need the following lemma.

² Note that the wheels replacing virtual vertices in the proof of Theorem 10 also ensure this.
Figure 4 The preprocessing steps of InsertGraphSPQR being applied to the SPQR-tree of Figure 1b. (a) The state after step 2, after all allocation skeletons of u have been split. (b) The state after step 3, after all allocation skeletons of u have been merged into a single one.
▶ Lemma 7. Let Gα be a triconnected skeleton containing a virtual vertex vα matched with a virtual vertex vβ of a biconnected skeleton Gβ. Furthermore, let P be the set of all separation pairs {u, v} ⊆ V(Gβ) of Gβ. An application of Integrate(S, (vα, vβ)) yields a biconnected skeleton Gµ with separation pairs P′ = {{u, v} ∈ P | vβ ∉ {u, v}}.

Proof. We partition the vertices of Gµ into the sets A, B, and N depending on whether the vertex stems from Gα, Gβ, or both, respectively. The set N thus contains the neighbors of vα, which were identified with the neighbors of vβ. We now show by contradiction that Gµ contains no separation pairs except for those in P′. Thus, consider a separation pair u, v ∈ Gµ not in P′. First, consider the case u, v ∈ A ∪ N. Observe that removing u, v in this case leaves B connected. Thus, we can contract all vertices of B into a single vertex, re-obtain Gα, and see that u, v is a separation pair in Gα. This contradicts the precondition that Gα is triconnected. Now consider the case u, v ∈ B ∪ N. Analogously to above, we find that u, v is a separation pair in Gβ that does not contain vβ, a contradiction to {u, v} ∉ P′. Finally, consider the remaining case where, without loss of generality, u ∈ A and v ∈ B. Since {u, v} is a separation pair, u has two neighbors x, y that lie in different connected components of Gµ − {u, v} and therefore also in different components of (Gµ − {u, v}) − B, which is isomorphic to Gα − {u, vα}. This again contradicts the precondition that Gα is triconnected. ◀
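On small instances, the separation-pair sets P and P′ from Lemma 7 can be verified by brute force: delete every vertex pair and test whether the remainder stays connected. A minimal sketch under the same adjacency-map assumption as before:

```python
from itertools import combinations

def connected(graph, removed=()):
    """True iff graph minus `removed` is connected (DFS)."""
    verts = [v for v in graph if v not in removed]
    if not verts:
        return True
    seen, stack = {verts[0]}, [verts[0]]
    while stack:
        for w in graph[stack.pop()]:
            if w not in removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(verts)

def separation_pairs(graph):
    """All vertex pairs whose removal disconnects the graph."""
    return {frozenset(p) for p in combinations(graph, 2)
            if not connected(graph, removed=p)}
```

A 4-cycle has exactly its two diagonals as separation pairs, while K4 has none, matching the intuition that triconnected skeletons admit no separation pairs.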
▶ Theorem 8. Applying InsertGraphSPQR(S, u, Gν, ϕ) to an SPQR-tree S yields an SPQR-tree S′ in O(|Gν|) time with GS′ isomorphic to GS[u →ϕ Gν].

Proof. As all applied operations leave the extended skeleton decomposition valid, the final extended skeleton decomposition S′ is also valid. Observe that the purpose of the preprocessing steps 1–3 is solely to ensure that the preconditions of InsertGraph are satisfied and the affected component is not too large. Note that all rigids split in step 2 remain structurally unmodified in the sense that edges only changed their type, but the graph and especially its triconnectedness remains unchanged. Step 4 performs the actual insertion and yields the desired represented graph according to Lemma 6. It thus remains to show that the clean-up steps turn the obtained extended skeleton decomposition into an SPQR-tree. Applying Integrate exhaustively in step 6 ensures that the extended skeleton decomposition is equivalent to a non-extended one (Remark 5). Recall that a non-extended skeleton decomposition is an SPQR-tree if all skeletons are either polygons, bonds or triconnected and two adjacent skeletons are never both polygons or both bonds (Definition 3). Step 6 ensures that the second half holds, as joining two polygons (or two bonds) with JoinSeparationPair yields a bigger polygon (or bond, respectively). Before step 6, all skeletons that are not an allocation skeleton of u are still unmodified and thus already have a suitable structure, i.e., they are either polygons, bonds or triconnected. Furthermore, the allocation skeletons of u not containing virtual vertices also have a suitable structure, as their splits were made according to the SPQR-tree in step 5. It remains to show that the remaining skeletons, that is, those resulting from the Integrate applications in step 6, are triconnected. Note that in these skeletons, step 5 ensures that every separation pair contains at least one virtual vertex, as otherwise the computed SPQR-tree would have split the skeleton further. Further note that, for each of these virtual vertices, the matched partner vertex is part of a structurally unmodified triconnected skeleton that was split in step 2. Lemma 7 shows that applying Integrate does not introduce new separation pairs while removing two virtual vertices if one of the two sides is triconnected. We can thus exhaustively apply Integrate and thereby remove all virtual vertices and thus also all separation pairs, obtaining triconnected components. This shows that the criteria for being an SPQR-tree are satisfied and, as InsertGraph expanded u to Gν in the represented graph, we now have the unique SPQR-tree of GS[u →ϕ Gν].
Note that all operations we used can be performed in time linear in the degree of the vertices they are applied on. For the bipartition of bridges input to SplitSeparationPair, it is sufficient to describe each bridge via its edges incident to the separation pair instead of explicitly enumerating all vertices in the bridge. Thus, the applications of SplitSeparationPair and IsolateVertex in steps 1 and 2 touch every edge incident to u at most once and thus take O(deg(u)) time. Furthermore, they yield skeletons that have a size linear in the degree of their respective allocation vertex of u. As the subtree of u's allocation skeletons has size at most deg(u), the JoinSeparationPair applications of step 3 also take at most O(deg(u)) time. It also follows that the resulting single allocation skeleton of u has size O(deg(u)). The applications of InsertGraph and Integrate in step 4 can be done in time linear in the number of identified neighbors, which is O(deg(u)). Generating the SPQR-tree of the inserted graph in step 5 (where all virtual vertices were replaced by wheels) can be done in time linear in the size of the inserted graph [30, 33], that is, O(|Gν|). Applying SplitSeparationPair according to all separation pairs identified by this SPQR-tree can also be done in O(|Gν|) time in total. Note that there are at most deg(u) edges between the skeletons that existed before step 4 and those that were created or modified in steps 4 and 5, and these are the only edges that might now connect two polygons or two bonds. As these tree edges have one endpoint in the single allocation skeleton of u, the applications of Integrate and JoinSeparationPair in step 6 run in O(deg(u)) time in total. Furthermore, they remove all pairs of adjacent polygons and all pairs of adjacent bonds. This shows that all steps take O(deg(u)) time, except for step 5, which takes O(|Gν|) time. As the inserted graph contains at least one vertex for each neighbor of u, the total runtime is in O(|Gν|). ◀
▶ Corollary 9. Let S1, S2 be two SPQR-trees together with vertices u1 ∈ GS1, u2 ∈ GS2, and let ϕ be a bijection between the edges incident to u1 and the edges incident to u2. Operation MergeSPQR(S1, S2, u1, u2, ϕ) yields the SPQR-tree of the graph GS1[u1 →ϕ GS2 − u2], i.e., the union of both graphs where the edges incident to u1, u2 are identified according to ϕ and u1, u2 removed, in time O(deg(u1)) = O(deg(u2)).

Proof. Operation MergeSPQR works similarly to the more general InsertGraphSPQR, although the running time is better because we already know the SPQR-tree for the graph being inserted. We apply the preprocessing steps 1–3 to ensure that both u1 and u2 have sole allocation vertices v1 and v2, respectively. To properly handle parallel edges, we subdivide all edges incident to u1, u2 (and thus also the corresponding real edges incident to v1, v2) and then identify the subdivision vertices of each pair of edges matched by ϕ. By deleting vertices v1 and v2 and suppressing the subdivision vertices (that is, removing them and identifying each pair of incident edges) we obtain a skeleton Gµ that has size O(deg(u1)) = O(deg(u2)). Finally, we apply the clean-up steps 5 and 6 to Gµ to obtain the final SPQR-tree. Again, as the partner vertex of every virtual vertex in the allocation skeletons of u is part of a triconnected skeleton, applying Integrate exhaustively in step 6 yields triconnected skeletons. As previously discussed, the preprocessing and clean-up steps run in time linear in the degree of the affected vertices, thus the overall runtime is O(deg(u1)) = O(deg(u2)) in this case. ◀
5.1 Maintaining Planarity and Vertex Rotations

Note that expanding a vertex of a planar graph with another planar graph using InsertGraphSPQR (or merging two SPQR-trees of planar graphs using Corollary 9) might actually yield a non-planar graph. This is, e.g., because the rigids of both graphs might require incompatible orders for the neighbors of the replaced vertex. The aim of this section is to efficiently detect this case, that is, a planar graph turning non-planar. To check a general graph for planarity, it suffices to check the rigids in its SPQR-tree for planarity, and each rigid allows exactly two planar embeddings, where one is the reverse of the other [19]. Thus, if a graph becomes non-planar through an application of InsertGraphSPQR, this will be noticeable from the triconnected allocation skeletons of the replaced vertex. To be able to immediately report when the instance becomes non-planar, we need to maintain a rotation, that is, a cyclic order of all incident edges, for each vertex in any triconnected skeleton. Note that we do not track the direction of the orders, that is, we only store each order up to reversal. As discussed later, the exact orders can also be maintained with a slight overhead.
▶ Theorem 10. SPQR-trees support the following operations:
InsertGraphSPQR(S, u, Gν, ϕ): expansion of a single vertex u in time O(|Gν|),
MergeSPQR(S1, S2, u1, u2, ϕ): merging of two SPQR-trees in time O(deg(u1)),
IsPlanar: querying whether the represented graph is planar in time O(1), and
Rotation(u): querying one of the two possible rotations of a vertex u in a planar triconnected skeleton in time O(1).

Proof. Note that the boolean flag IsPlanar together with the Rotation information can be computed in linear time when creating a new SPQR-tree, and that expanding a vertex or merging two SPQR-trees cannot turn a non-planar graph planar. We make the following changes to the operations InsertGraphSPQR and MergeSPQR described in Theorem 8 and Corollary 9 to maintain the new information. After a triconnected component is split in step 2, we now introduce further structure to ensure that the embedding is maintained on both sides. The occupied edges generated around the split-off vertex v (and those around its copy v′) are subdivided and connected cyclically according to Rotation(v). Instead of "stars", we thus now generate occupied "wheels" that encode the edge ordering in the embedding of the triconnected component. When generating the SPQR-tree of the modified subgraph in step 5, now containing occupied wheels instead of only stars, we also generate a planar embedding for all its triconnected skeletons. If no planar embedding can be found for at least one skeleton, we report that the resulting instance is non-planar by setting IsPlanar to false. Otherwise, after performing all splits indicated by the SPQR-tree, we assign Rotation by generating embeddings for all new rigids. Note that for all skeletons with virtual vertices, the generated embedding will be compatible with the one of the neighboring triconnected component, that is, the rotation of each virtual vertex will line up with that of its matched partner vertex, thanks to the inserted wheel. Finally, before applying Integrate in step 6, we contract each occupied wheel into a single vertex to re-obtain occupied stars. The creation and contraction of wheels adds an overhead that is at most linear in the degree of the expanded vertex, and the generation of embeddings for the rigids can be done in time linear in the size of the rigid. Thus, this does not affect the asymptotic runtime of both operations. ◀
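The star-to-wheel conversion used in the proof can be sketched as follows (hypothetical naming; occupied-edge flags are omitted): each spoke of the star is subdivided by a rim vertex, and consecutive rim vertices are connected in the cyclic order Rotation(v), so that the wheel pins down the rotation up to reversal.

```python
def star_to_wheel(center, rotation):
    """rotation: list of neighbors of `center` in cyclic embedding order.
    Returns the wheel as an edge list; the rim vertex subdividing the
    spoke to neighbor u is labeled ('rim', u)."""
    edges = []
    k = len(rotation)
    for i, u in enumerate(rotation):
        rim = ('rim', u)
        edges.append((center, rim))   # spoke half towards the hub
        edges.append((rim, u))        # spoke half towards the neighbor
        # connect consecutive rim vertices into the rim cycle
        edges.append((rim, ('rim', rotation[(i + 1) % k])))
    return edges
```

Contracting all rim vertices and the hub back into a single vertex recovers the original star, mirroring the contraction performed before step 6.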
▶ Corollary 11. The data structure from Theorem 10 can be adapted to also provide the exact rotations with matching direction for every vertex in a rigid. Furthermore, it can support queries whether two vertices v1, v2 are connected by at least 3 vertex-disjoint paths via 3Paths(v1, v2) in O((deg(v1) + deg(v2)) · α(n)) time. These adaptations change the runtime of InsertGraphSPQR to O(deg(u) · α(n) + |Gν|), that of MergeSPQR to O(deg(u1) · α(n)), and that of Rotation(u) to O(α(n)).

Proof. The exact rotation information for Rotation can be maintained by using union-find to keep track of the rigid a vertex belongs to and by synchronizing the reversal of all vertices within one rigid when two rigids are merged by Integrate, as follows. We create a union-find set for every vertex in a triconnected component and apply Union to all vertices in the same rigid. Next to the pointer indicating the representative in the union-find structure, we store a boolean flag indicating whether the rotation information for the current vertex is reversed with regard to the rotation of its direct representative. To find out whether a Rotation needs to be flipped, we accumulate all flags along the path to the actual representative of a vertex using an exclusive-or. As Rotation(u) thus relies on the Find operation, its amortized runtime is O(α(n)). When merging two rigids with Integrate, we also perform a Union on their respective representatives (which we need to Find first), making Integrate(S, (vα, vβ)) run in O(deg(vα) + α(n)). We also compare the Rotation of the replaced vertices and flip the flag stored with the vertex that does not end up as the representative if they do not match. In total, this makes InsertGraphSPQR run in O(deg(u) · α(n) + |Gν|) time, as there can be up to deg(u) split rigids. Furthermore, MergeSPQR now runs in O(deg(u1) · α(n)) time.

Maintaining the information about which rigid a skeleton vertex is contained in can then also be used to answer queries whether two arbitrary vertices are connected by three disjoint paths. This is exactly the case if they are part of the same rigid, appear as poles of the same bond, or are connected by a virtual edge in a polygon. This can be checked by enumerating all allocation skeletons of both vertices, which can be done in time linear in their degree. As finding each of the skeletons may require a Find call, the total runtime for this is in O((deg(v1) + deg(v2)) · α(n)). ◀
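The union-find with per-node reversal flags can be sketched as follows; this is a minimal version without union by rank, where find returns the representative together with the accumulated exclusive-or of the flags along the path (the class name is ours):

```python
class ParityUnionFind:
    def __init__(self):
        self.parent = {}   # node -> direct representative
        self.flip = {}     # node -> reversed w.r.t. direct representative?

    def add(self, x):
        self.parent.setdefault(x, x)
        self.flip.setdefault(x, False)

    def find(self, x):
        """Return (representative, parity of x relative to it)."""
        if self.parent[x] == x:
            return x, False
        root, p = self.find(self.parent[x])
        # path compression: point x at the root, folding in the parity
        self.flip[x] ^= p
        self.parent[x] = root
        return root, self.flip[x]

    def union(self, x, y, reversed_pair):
        """Merge the groups of x and y; reversed_pair states whether
        the rotations of x and y run in opposite directions."""
        rx, px = self.find(x)
        ry, py = self.find(y)
        if rx == ry:
            return
        self.parent[ry] = rx
        # choose the flag of ry so that parity(x) ^ parity(y) ends up
        # equal to reversed_pair
        self.flip[ry] = px ^ py ^ reversed_pair
```

With path compression (and union by rank, which is omitted here), each query runs in amortized O(α(n)) time, matching the bounds claimed above.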
6 Application to Synchronized Planarity

In this section, we give some background on the historical development of, and further details on, the problems Clustered Planarity and Synchronized Planarity, together with a summary of the algorithm of Bläsius et al. for solving both problems. Furthermore, we show how our work, and also previous work on dynamic SPQR-trees, can be used in the context of both problems.

6.1 Background and Discussion
Lengauer [34] first discussed Clustered Planarity under a different name in 1989, which is why it was later independently rediscovered by Feng et al. [23] in 1995. Both gave polynomial-time algorithms for the case where the subgraph induced by any cluster is connected. In contrast, the question whether the general problem with disconnected clusters allows an efficient solution remained open for 30 years. In that time, polynomial-time algorithms were found for many special cases [2, 15, 25, 29] before Fulek and Tóth [26] found an O((n + d)^8) solution in 2019. Shortly thereafter, Bläsius et al. [8] gave a solution with runtime in O((n + d)^2) that also exposes the main concepts needed to solve Clustered Planarity. The solution works via a linear-time reduction to the problem Synchronized Planarity, for which Bläsius et al. gave a quadratic algorithm. We improve the runtime of the latter algorithm. As Synchronized Planarity can be used as a modeling tool for several other constrained planarity problems next to Clustered Planarity [8], this also improves the time needed for solving any constrained planarity problem that can be solved via a linear-time reduction to Synchronized Planarity; see Table 1.

Figure 5 Schematic representation of the three operations used by Bläsius et al. [8] for solving Synchronized Planarity. Matched vertices are shown as bigger disks, the matching is indicated by the orange dotted lines. Top: two cut-vertices matched with each other (left), the result of encapsulating their incident blocks (middle), and the bipartite graph resulting from joining both cut-vertices (right). Middle: a matched non-cut-vertex with a non-trivial embedding tree (left) that is propagated to replace both the vertex and its partner (right). Bottom: three different cases of matched vertices with trivial embedding trees (blue) and how their pipes can be removed or replaced (red).

In Clustered Planarity, the embedding has to respect a laminar family of clusters [9, 34], that is, every vertex is part of some (hierarchically nested) cluster and an edge may only cross a cluster boundary if it connects a vertex from the inside with one from the outside. In Synchronized Planarity, we are given a matching on some of the vertices of the graph and seek an embedding such that the rotations of matched vertices line up under a given bijection [8]. The synchronization constraint imposed by matching two vertices is also called a pipe. The reduction from the former problem to the latter employs the CD-tree representation of Clustered Planarity [9], where each cluster is represented as an individual skeleton in which adjacent clusters are collapsed into single "virtual vertices". The order of the edges "leaving" one cluster via a virtual vertex now needs to line up with the order in which they "enter" an adjacent cluster via its corresponding virtual vertex (see also [8, Figure 6]).
+ The algorithm for solving Synchronized Planarity works by removing an arbitrary
820
+ pipe each step, using one of three operations depending on the graphs around the matched
821
+ vertices, see Figure 5.
822
+ EncapsulateAndJoin If both vertices of the pipe are cut-vertices, they are “encapsulated”
823
+ by taking a copy of their respective components and then collapsing each incident block
824
+ to a single vertex to obtain stars with matched centers that have multiple parallel edges
825
+ connecting them to their ray vertices. The original cut-vertices are split up so that each
826
+ incident block gets its own copy and these copies are synchronized with the respective
827
+ vertex representing a collapsed block. Now the cut-vertices can be removed by “joining”
828
+ both stars, that is, identifying their incident edges according to the bijection that is given
829
+ alongside the matching.
830
+ PropagatePQ If one of the vertices is not a cut-vertex and has an embedding tree that
831
+ does not consist of only a single P-node, two copies of this embedding tree are inserted
832
+ (“propagated”) in place of both matched vertices, respectively. The inner nodes of the
833
+ embedding trees are synchronized by matching corresponding vertices.
834
+ SimplifyMatching In the remaining case, one of the vertices is not a cut-vertex but has a
835
+ trivial embedding tree, i.e., only appears in a single parallel skeleton and no rigid skeleton
836
+ in the SPQR-tree. If the vertex (or, more precisely, the parallel that completely defines
837
+ its rotation) can respect arbitrary rotations, we can simply remove the pipe. The only
838
+ exception to this is when the other pole of the parallel is also matched, in which case we
839
+ can “short-circuit” the matching across the parallel.
840
+ To summarize, every operation removes a pipe from the matching, while potentially
841
+ introducing new pipes with vertices that have a smaller degree. Using a potential function,
842
+ it can be shown that the progress made by the removal always dominates the overhead of the
843
+ newly-introduced pipes, and that the number of operations needed to remove all pipes is bounded by the
844
+ total degree of all matched vertices. Furthermore, the resulting instance without pipes can
845
+ be solved in linear time. All of the three operations run in time linear in the degree of the
846
+ un-matched vertices if the embedding trees they depend on are available. The contribution of
847
+ this paper is to efficiently provide the embedding trees, which would require processing entire
848
+ connected components at each step when done naïvely. Using the fully-dynamic SPQR-tree
849
+ by Holm and Rotenberg [31, 32], this can be achieved with a poly-log cost of O(∆ · log³ n)
850
+ leading to an overall runtime of O(m · ∆ · log³ n). Using the node expansion from this paper,
851
+ we can improve the runtime from spending time linear in the size of the input instance (O(m))
852
+ for each of the linearly many operations, to only spending time linear in the maximum degree
853
+ (O(∆)) on each operation. The reduction from Clustered Planarity creates an instance
854
+ of size O(n+d) in which the total degree of matched vertices is in O(d), corresponding to the
855
+ total number of times an edge crosses a cluster boundary. Note that, while this means that
856
+ O(d) operations are sufficient to reach a reduced instance, the number of crossings between
857
+ edges and cluster boundaries can be quadratic in the number of vertices in a planar graph.
858
+ We also note that while the improvement over using the Holm and Rotenberg approach is
859
+ only poly-logarithmic, our data structure has the additional benefit of being conceptually
860
+ simpler and thus also more likely to improve performance in practice.
861
+ 6.2
862
+ Using Node Expansion for Solving Synchronized Planarity
863
+ We show how extended skeleton decompositions and their dynamic operation InsertGraphSPQR
864
+ can be used to improve the runtime of the algorithm for solving Synchronized Planarity
865
+ by Bläsius et al. [8] from O(m²) to O(m · ∆), where ∆ is the maximum pipe degree. As
866
+
867
+ S. D. Fink and I. Rutter
868
+ 17
869
+ already explained in the previous section, the algorithm spends a major part of its runtime on
870
+ computing so-called embedding trees, which describe all possible rotations of a single vertex
871
+ in a planar graph and are used to communicate embedding restrictions between vertices with
872
+ synchronized rotation. Once the embedding trees are available, the at most O(m) executed
873
+ operations run in time linear in the degree of the pipe/vertex they are applied on, that is,
874
+ in O(∆) [8]. Thus, being able to generate these embedding trees efficiently by maintaining
875
+ the SPQR-trees they are derived from is our main contribution towards the speedup of the
876
+ Synchronized Planarity algorithm.
877
+ An embedding tree Tv for a vertex v of a biconnected graph G describes the possible
878
+ cyclic orderings or rotations of the edges incident to v in all planar embeddings of G [12].
879
+ The leaves of Tv are the edges incident to v, while its inner nodes are partitioned into two
880
+ categories: Q-nodes fix the rotation of their incident tree edges up to reversal, while
881
+ P-nodes allow arbitrary rotation; see Figure 1d. To generate the embedding tree we use
882
+ the observation about the relationship of SPQR-trees and embedding trees described by
883
+ Bläsius and Rutter [10, Section 2.5]: there is a bijection between the P- and Q-nodes in the
884
+ embedding tree of v and the bond and triconnected allocation skeletons of v in the SPQR-tree
885
+ of G, respectively.
886
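The two node types of an embedding tree can be illustrated with a minimal sketch. The tree encoding below is hypothetical, not the paper's data structure; it reads one admissible rotation of the leaves off the tree and shows that reversing a Q-node yields another admissible rotation (P-node children could additionally be permuted freely).

```python
# Minimal embedding-tree sketch (encoding hypothetical).
# 'P' nodes permit any order of their children; 'Q' nodes fix the
# order up to reversal; leaves stand for the edges incident to v.
class Node:
    def __init__(self, kind, children):
        self.kind = kind          # 'P', 'Q', or 'leaf'
        self.children = children  # child nodes, or the edge label for a leaf

def rotation(node, reverse_q=False):
    """Read one admissible cyclic order of the leaves off the tree."""
    if node.kind == 'leaf':
        return [node.children]
    kids = node.children
    if node.kind == 'Q' and reverse_q:
        kids = list(reversed(kids))
    out = []
    for child in kids:
        out.extend(rotation(child, reverse_q))
    return out
```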
+ ▶ Lemma 12. Let S be an SPQR-tree with a planar represented graph GS. The embedding
887
+ tree for a vertex v ∈ GS can be found in time O(deg(v)).
888
+ Proof. We use the rotation information from Theorem 10 and furthermore maintain an
889
+ (arbitrary) allocation vertex for each vertex in GS. To compute the embedding tree of a
890
+ vertex v starting at the allocation vertex u of v, we will explore the SPQR-tree by using
891
+ twinE on one of the edges incident to u and then finding the next allocation vertex of v
892
+ as one endpoint of the obtained edge. If u has degree 2, it is part of a polygon skeleton
893
+ that does not induce a node in the embedding tree. We thus move on to its neighboring
894
+ allocation skeletons and will also similarly skip over any other polygon skeleton we encounter.
895
+ If u has degree 3 or greater, we inspect two arbitrary incident edges: if they lead to the
896
+ same vertex, u is the pole of a bond, and we generate a P-node. Otherwise it is part of a
897
+ triconnected component, and we generate a Q-node. We now iterate over the edges incident
898
+ to u, in the case of a triconnected component using the order given by the rotation of u. For
899
+ each real edge, we attach a corresponding leaf to the newly generated node. The graph edge
900
+ corresponding to the leaf can be obtained from origE. For each virtual edge, we recurse on
901
+ the respective neighboring skeleton and attach the recursively generated node to the current
902
+ node. As v can only be part of deg(v) many skeletons, which form a subtree of TS, and the
903
+ allocation vertices of v in total only have O(deg(v)) many virtual and real edges incident,
904
+ this procedure yields the embedding tree of v in time linear in its degree.
905
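The case analysis at the heart of this proof (skip polygon skeletons, bonds become P-nodes, triconnected skeletons become Q-nodes) can be sketched as follows; the skeleton encoding is hypothetical and stands in for the extended skeleton decomposition used in the paper.

```python
def classify_skeleton(skeleton, u):
    """Classify the role of vertex u in one of its allocation skeletons,
    following the case analysis in the proof of Lemma 12.
    `skeleton` maps each vertex to a list of incident edges, an edge
    being a pair (other_endpoint, kind) with kind 'real' or 'virtual'."""
    edges = skeleton[u]
    if len(edges) == 2:
        return 'skip'   # polygon skeleton: induces no embedding-tree node
    first, second = edges[0], edges[1]
    if first[0] == second[0]:
        return 'P'      # two parallel edges: u is the pole of a bond
    return 'Q'          # triconnected skeleton: rotation fixed up to reversal
```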
+
906
+ Our data structure can now be used to reduce the runtime of solving Synchronized
907
+ Planarity by generating an SPQR-tree upfront, maintaining it throughout all applied
908
+ operations, and deriving any needed embedding tree from the SPQR-tree.
909
+ ▶ Theorem 13. Synchronized Planarity can be solved in time O(m · ∆), where m is
910
+ the number of edges and ∆ is the maximum degree of a pipe.
911
+ Proof. The algorithm works by splitting the pipes representing synchronization constraints
912
+ until they are small enough to be trivial. It does so by exhaustively applying the three
913
+ operations EncapsulateAndJoin, PropagatePQ and SimplifyMatching depending on the
914
+ graph structure around the pairs of synchronized vertices. As mentioned by Bläsius et al.,
915
+ all operations run in time linear in the degree of the pipe they are applied on if the used
916
+
917
+ embedding trees are known, and O(m) operations are sufficient to solve a given instance [8].
920
+ Our modification is that we maintain an SPQR-tree for each biconnected component and
921
+ then generate the needed embedding trees on-demand in linear time using Lemma 12. See
922
+ Section 6.1 for more background on the Synchronized Planarity operations modified in
923
+ the following.
924
+ Operation SimplifyMatching can be applied if the graph around a synchronized vertex
925
+ v allows arbitrary rotations of v, that is, the embedding tree of v is trivial. In this case, the
926
+ pipe can be removed without modifying the graph structure. Thus, we can now easily check
927
+ the preconditions of this operation without making any changes to the SPQR-tree.
928
+ PropagatePQ takes the non-trivial embedding tree of one synchronized vertex v and inserts
929
+ copies of the tree in place of v and its partner, respectively. Synchronization constraints on
930
+ the inner vertices of the inserted trees are used to ensure that they are embedded in the
931
+ same way. We use InsertGraphSPQR to also insert the embedding tree into the respective
932
+ SPQR-trees, representing Q-nodes using wheels. When propagating into a cut-vertex, we also
933
+ need to check whether two or more incident blocks merge. We form equivalence classes on
934
+ the incident blocks, where two blocks are in the same class if 1) the two subtrees induced by
935
+ their respective edges share at least two nodes, or 2) both induced subtrees share a C-node that
936
+ has degree at least 2 in both subtrees. Blocks in the same equivalence class will end up in the
937
+ same biconnected component as follows: We construct the subtree induced by all edges in
938
+ the equivalence class and add a single further node for each block in the class, connecting all
939
+ leaves to the node of the block the edges they represent lead to. We calculate the SPQR-tree
940
+ for this biconnected graph and then merge the SPQR-trees of the individual blocks into it by
941
+ applying Corollary 9. As InsertGraphSPQR (and similarly all MergeSPQR applications) runs in
942
+ time linear in the size of the inserted PQ-tree, which is bounded by the degree of the vertex it
943
+ represents, this does not negatively impact the running time of the operation.
944
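The grouping of incident blocks into equivalence classes can be implemented with a standard union–find structure; in the sketch below the predicate `related` is a hypothetical stand-in for the two conditions from the text (induced subtrees sharing at least two nodes, or sharing a C-node of degree at least 2 in both).

```python
class UnionFind:
    def __init__(self, items):
        self.parent = {x: x for x in items}
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def equivalence_classes(blocks, related):
    """Group blocks; `related(a, b)` stands in for the paper's conditions."""
    uf = UnionFind(blocks)
    for i, a in enumerate(blocks):
        for b in blocks[i + 1:]:
            if related(a, b):
                uf.union(a, b)
    classes = {}
    for b in blocks:
        classes.setdefault(uf.find(b), []).append(b)
    return list(classes.values())
```

Blocks that end up in the same class are exactly those whose SPQR-trees are merged into one biconnected component afterwards.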
+ Operation EncapsulateAndJoin generates a new bipartite component representing how
945
+ the edges of the blocks incident to two synchronized cut-vertices are matched with each other.
946
+ The size of this component is linear in the degree of the synchronized vertices. Thus, we can
947
+ freshly compute the SPQR-tree for the generated component in linear time, which also does
948
+ not negatively impact the running time.
949
+ Furthermore, as we now no longer need to iterate over whole connected components to
950
+ generate the embedding trees, we are also no longer required to ensure those components do
951
+ not grow too big. We can thus also directly contract pipes between two distinct biconnected
952
+ components using Corollary 9 instead of having to insert PQ-trees using PropagatePQ. This
953
+ may improve the practical runtime, as PropagatePQ might require further operations to
954
+ clean up the generated pipes, while the direct contraction entirely removes a pipe without
955
+ generating new ones.
956
+
957
+ ▶ Corollary 14. Clustered Planarity can be solved in time O(n + d · ∆), where d
958
+ is the total number of crossings between cluster borders and edges and ∆ is the maximum
959
+ number of edge crossings on a single cluster border.
960
+ Proof. Note that for a graph not containing parallel edges to be planar, the number of
961
+ edges has to be linear in the number of vertices. We apply the reduction from Clustered
962
+ Planarity to Synchronized Planarity as described by Bläsius et al. [8]. Ignoring the
963
+ parallel edges generated by the CD-tree, we can generate an SPQR-tree for every component
964
+ of the resulting instance in O(n) time in total. The instance contains one pipe for every
965
+ cluster boundary, where the degree of a pipe corresponds to the number of edges crossing the
966
+ respective cluster boundary. Thus, the potential described by Bläsius et al. [8], which sums
967
+
968
+ up the degrees of all pipes with a constant factor depending on the endpoints of each pipe,
971
+ is in O(d). Each operation applied when solving the Synchronized Planarity instance
972
+ runs in time O(∆) (the maximum degree of a pipe) and reduces the potential by at least 1.
973
+ Thus, a reduced instance without pipes, which can be solved in linear time, can be reached
974
+ in O(d · ∆) time.
975
+
976
+ References
977
+ 1
978
+ P. Angelini, T. Bläsius, and I. Rutter. Testing mutual duality of planar graphs. International
979
+ Journal of Computational Geometry & Applications, 24(4):325–346, 2014. arXiv:1303.1640,
980
+ doi:10.1142/S0218195914600103.
981
+ 2
982
+ P. Angelini and G. Da Lozzo. Clustered planarity with pipes. Algorithmica, 81(6):2484–2526,
983
+ 2019. doi:10.1007/s00453-018-00541-w.
984
+ 3
985
+ P. Angelini, G. Di Battista, and M. Patrignani.
986
+ Finding a minimum-depth embedding
987
+ of a planar graph in O(n⁴) time.
988
+ Algorithmica, 60(4):890–937, 2009.
989
+ doi:10.1007/
990
+ s00453-009-9380-6.
991
+ 4
992
+ P. Angelini, G. D. Lozzo, G. Di Battista, and F. Frati. Strip planarity testing for embedded
993
+ planar graphs. Algorithmica, 77(4):1022–1059, 2016. doi:10.1007/s00453-016-0128-9.
994
+ 5
995
+ T. C. Biedl, G. Kant, and M. Kaufmann. On triangulating planar graphs under the four-
996
+ connectivity constraint. Algorithmica, 19(4):427–446, 1997. doi:10.1007/PL00009182.
997
+ 6
998
+ D. Bienstock and C. L. Monma. Optimal enclosing regions in planar graphs. Networks,
999
+ 19(1):79–94, 1989. doi:10.1002/net.3230190107.
1000
+ 7
1001
+ D. Bienstock and C. L. Monma. On the complexity of embedding planar graphs to minimize
1002
+ certain distance measures. Algorithmica, 5(1):93–109, 1990. doi:10.1007/bf01840379.
1003
+ 8
1004
+ T. Bläsius, S. D. Fink, and I. Rutter. Synchronized planarity with applications to constrained
1005
+ planarity problems. In Proceedings of the 29th Annual European Symposium on Algorithms
1006
+ (ESA’21), volume 204 of LIPIcs, pages 19:1–19:14, 2021. doi:10.4230/LIPIcs.ESA.2021.19.
1007
+ 9
1008
+ T. Bläsius and I. Rutter.
1009
+ A new perspective on clustered planarity as a combinatorial
1010
+ embedding problem. Theoretical Computer Science, 609:306–315, 2016. arXiv:1506.05673,
1011
+ doi:10.1016/j.tcs.2015.10.011.
1012
+ 10
1013
+ T. Bläsius and I. Rutter. Simultaneous PQ-ordering with applications to constrained embedding
1014
+ problems. ACM Transactions on Algorithms, 12(2):16:1–16:46, 2016. doi:10.1145/2738054.
1015
+ 11
1016
+ T. Bläsius, I. Rutter, and D. Wagner. Optimal orthogonal graph drawing with convex bend
1017
+ costs. ACM Transactions on Algorithms, 12(3):33:1–33:32, 2016. doi:10.1145/2838736.
1018
+ 12
1019
+ K. S. Booth and G. S. Lueker. Testing for the consecutive ones property, interval graphs,
1020
+ and graph planarity using PQ-tree algorithms. Journal of Computer and System Sciences,
1021
+ 13(3):335–379, 1976. doi:10.1016/s0022-0000(76)80045-1.
1022
+ 13
1023
+ G. Brückner, M. Himmel, and I. Rutter. An SPQR-tree-like embedding representation for
1024
+ upward planarity.
1025
+ In D. Archambault and C. D. Tóth, editors, Proceedings of the 27th
1026
+ International Symposium on Graph Drawing and Network Visualization (GD’19), volume
1027
+ 11904 of LNCS, pages 517–531. Springer, 2019. doi:10.1007/978-3-030-35802-0_39.
1028
+ 14
1029
+ Z.-Z. Chen, X. He, and C.-H. Huang. Finding double Euler trails of planar graphs in linear time
1030
+ [CMOS VLSI circuit design]. In Proceedings of the 40th Annual Symposium on Foundations of
1031
+ Computer Science (FOCS’99). IEEE, 1999. doi:10.1109/sffcs.1999.814603.
1032
+ 15
1033
+ P. F. Cortese, G. Di Battista, F. Frati, M. Patrignani, and M. Pizzonia. C-planarity of
1034
+ c-connected clustered graphs. Journal of Graph Algorithms and Applications, 12(2):225–262,
1035
+ 2008. doi:10.7155/jgaa.00165.
1036
+ 16
1037
+ G. Di Battista and R. Tamassia. Incremental planarity testing. In Proceedings of the 30th
1038
+ Annual Symposium on Foundations of Computer Science (FOCS’89), pages 436–441. IEEE,
1039
+ 1989. doi:10.1109/sfcs.1989.63515.
1040
+
1041
+ 17
1044
+ G. Di Battista and R. Tamassia. On-line graph algorithms with SPQR-trees. In Proceedings
1045
+ of the 17th International Colloquium on Automata, Languages, and Programming (ICALP’90),
1046
+ pages 598–611. Springer, 1990. doi:10.1007/bfb0032061.
1047
+ 18
1048
+ G. Di Battista and R. Tamassia. On-line maintenance of triconnected components with
1049
+ SPQR-trees. Algorithmica, 15(4):302–318, 1996. doi:10.1007/bf01961541.
1050
+ 19
1051
+ G. Di Battista and R. Tamassia. On-line planarity testing. SIAM Journal on Computing,
1052
+ 25(5):956–997, 1996. doi:10.1137/s0097539794280736.
1053
+ 20
1054
+ W. Didimo, G. Liotta, G. Ortali, and M. Patrignani. Optimal orthogonal drawings of planar
1055
+ 3-graphs in linear time. In Proceedings of the 31st Annual ACM-SIAM Symposium on Discrete
1056
+ Algorithms (SODA’20), pages 806–825. SIAM, 2020. doi:10.1137/1.9781611975994.49.
1057
+ 21
1058
+ D. Eppstein, Z. Galil, G. F. Italiano, and T. H. Spencer. Separator based sparsification.
1059
+ Journal of Computer and System Sciences, 52(1):3–27, 1996. doi:10.1006/jcss.1996.0002.
1060
+ 22
1061
+ M. Fedarko, J. Ghurye, T. Treangen, and M. Pop. MetagenomeScope: Web-based hierarchical
1062
+ visualization of metagenome assembly graphs. In F. Frati and K.-L. Ma, editors, Proceedings
1063
+ of the 25th International Symposium on Graph Drawing and Network Visualization (GD’17),
1064
+ pages 630–632. Springer, 2017. (Poster). URL: https://gd2017.ccis.northeastern.edu/
1065
+ files/posters/fedarko-metagenomescope.pdf, doi:10.1007/978-3-319-73915-1.
1066
+ 23
1067
+ Q.-W. Feng, R. F. Cohen, and P. Eades. Planarity for clustered graphs. In P. G. Spirakis,
1068
+ editor, Proceedings of the 3rd Annual European Symposium on Algorithms (ESA’95), volume
1069
+ 979 of LNCS, pages 213–226. Springer, 1995. doi:10.1007/3-540-60313-1_145.
1070
+ 24
1071
+ D. Franken, J. Ochs, and K. Ochs.
1072
+ Generation of wave digital structures for networks
1073
+ containing multiport elements. IEEE Transactions on Circuits and Systems I: Regular Papers,
1074
+ 52(3):586–596, 2005. doi:10.1109/tcsi.2004.843056.
1075
+ 25
1076
+ R. Fulek, J. Kynčl, I. Malinović, and D. Pálvölgyi. Clustered planarity testing revisited. The
1077
+ Electronic Journal of Combinatorics, 22(4), 2015. doi:10.37236/5002.
1078
+ 26
1079
+ R. Fulek and C. D. Tóth. Atomic embeddability, clustered planarity, and thickenability.
1080
+ Journal of the ACM, 69(2):13:1–13:34, 2022. arXiv:1907.13086v1, doi:10.1145/3502264.
1081
+ 27
1082
+ Z. Galil, G. F. Italiano, and N. Sarnak. Fully dynamic planarity testing with applications.
1083
+ Journal of the ACM, 46(1):28–91, 1999. doi:10.1145/300515.300517.
1084
+ 28
1085
+ C. Gutwenger. Application of SPQR-trees in the planarization approach for drawing graphs.
1086
+ PhD thesis, 2010.
1087
+ URL: https://eldorado.tu-dortmund.de/bitstream/2003/27430/1/
1088
+ diss_gutwenger.pdf.
1089
+ 29
1090
+ C. Gutwenger, M. Jünger, S. Leipert, P. Mutzel, M. Percan, and R. Weiskircher. Advances
1091
+ in c-planarity testing of clustered graphs. In S. G. Kobourov and M. T. Goodrich, editors,
1092
+ Proceedings of the 10th International Symposium on Graph Drawing (GD’02), volume 2528 of
1093
+ LNCS, pages 220–235. Springer, 2002. doi:10.1007/3-540-36151-0_21.
1094
+ 30
1095
+ C. Gutwenger and P. Mutzel. A linear time implementation of SPQR-trees. In Proceedings of
1096
+ the 8th International Symposium on Graph Drawing (GD’00), pages 77–90. Springer, 2001.
1097
+ doi:10.1007/3-540-44541-2_8.
1098
+ 31
1099
+ J. Holm and E. Rotenberg.
1100
+ Fully-dynamic planarity testing in polylogarithmic time.
1101
+ In K. Makarychev, Y. Makarychev, M. Tulsiani, G. Kamath, and J. Chuzhoy, edi-
1102
+ tors, Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Comput-
1103
+ ing (STOC’20), pages 167–180. ACM, 2020. arXiv:1911.03449,
1104
+ doi:10.1145/3357713.3384249.
1105
+ 32
1106
+ J. Holm and E. Rotenberg. Worst-case polylog incremental SPQR-trees: Embeddings, planarity,
1107
+ and triconnectivity. In Proceedings of the 31st Annual ACM-SIAM Symposium on Discrete
1108
+ Algorithms (SODA’20), pages 2378–2397. SIAM, 2020. doi:10.1137/1.9781611975994.146.
1109
+ 33
1110
+ J. E. Hopcroft and R. E. Tarjan. Dividing a graph into triconnected components. SIAM
1111
+ Journal on Computing, 2(3):135–158, 1973. doi:10.1137/0202012.
1112
+ 34
1113
+ T. Lengauer. Hierarchical planarity testing algorithms. Journal of the ACM, 36(3):474–509,
1114
+ 1989. doi:10.1145/65950.65952.
1115
+
1116
+ 35
1119
+ G. Liotta, I. Rutter, and A. Tappini.
1120
+ Simultaneous FPQ-ordering and hybrid planarity
1121
+ testing. In Proceedings of the 46th International Conference on Current Trends in Theory
1122
+ and Practice of Informatics (SOFSEM’20), pages 617–626. Springer, 2020. doi:10.1007/
1123
+ 978-3-030-38919-2_51.
1124
+ 36
1125
+ S. Mac Lane. A structural characterization of planar combinatorial graphs. Duke Mathematical
1126
+ Journal, 3(3):460–472, 1937. doi:10.1215/S0012-7094-37-00336-3.
1127
+ 37
1128
+ P. Mutzel. The SPQR-tree data structure in graph drawing. In J. C. M. Baeten, J. K. Lenstra,
1129
+ J. Parrow, and G. J. Woeginger, editors, Proceedings of the 30th International Colloquium
1130
+ on Automata, Languages and Programming (ICALP’03), volume 2719 of LNCS, pages 34–46.
1131
+ Springer, 2003. doi:10.1007/3-540-45061-0_4.
1132
+ 38
1133
+ J. A. L. Poutré. Maintenance of triconnected components of graphs. In Proceedings of the
1134
+ 19th International Colloquium on Automata, Languages and Programming (ICALP’92), pages
1135
+ 354–365. Springer, 1992. doi:10.1007/3-540-55719-9_87.
1136
+ 39
1137
+ J. A. L. Poutré. Alpha-algorithms for incremental planarity testing (preliminary version). In
1138
+ Proceedings of the 26th annual ACM symposium on Theory of computing (STOC’94). ACM
1139
+ Press, 1994. doi:10.1145/195058.195439.
1140
+ 40
1141
+ J. Vanhatalo, H. Völzer, and J. Koehler.
1142
+ The refined process structure tree.
1143
+ Data and
1144
+ Knowledge Engineering, 68(9):793–818, 2009. doi:10.1016/j.datak.2009.02.015.
1145
+ 41
1146
+ A. von Manteuffel and C. Studerus. Reduze 2 - distributed Feynman integral reduction. 2012.
1147
+ arXiv:1201.4330.
1148
+ 42
1149
+ R. Weiskircher. New applications of SPQR-trees in graph drawing. PhD thesis, Universität
1150
+ des Saarlandes, 2002. doi:10.22028/D291-25752.
1151
+ 43
1152
+ J. Westbrook. Fast incremental planarity testing. In Proceedings of the 19th International
1153
+ Colloquium on Automata, Languages and Programming (ICALP’92), pages 342–353. Springer,
1154
+ 1992. doi:10.1007/3-540-55719-9_86.
1155
+ 44
1156
+ Y. Zhang, W. Luk, H. Zhou, C. Yan, and X. Zeng. Layout decomposition with pairwise
1157
+ coloring for multiple patterning lithography. In J. Henkel, editor, Proceedings of the IEEE/ACM
1158
+ International Conference on Computer-Aided Design (ICCAD’13), pages 170–177. IEEE, 2013.
1159
+ doi:10.1109/ICCAD.2013.6691115.
1160
+
4dE2T4oBgHgl3EQfjwe5/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
59E3T4oBgHgl3EQfQwmQ/content/2301.04416v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:78320dedde3a73683ae8c915ca2555c1c0fb2b3fe8f6c73cf75a50cc917031e4
3
+ size 1501616
59E3T4oBgHgl3EQfQwmQ/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b900dacaf5d8f7cd291c374e7b37f7508cb4ef453e4f37d55e552f23d16248ce
3
+ size 917549
59E3T4oBgHgl3EQfQwmQ/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:efae3972e2a613c45d17ca1803db4a0374b8c2ca238d322a5b510097ebc04c44
3
+ size 35150
79E1T4oBgHgl3EQfTwOd/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0f30ddd65571574f0930cd23bbbc96143c226eb79b33000042b4e652b0e8d000
3
+ size 852013
79E1T4oBgHgl3EQfTwOd/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:52b4fa4e17472600823f19817e2ebeed62c8122b3267fc48d4eaca36fc2c5ea8
3
+ size 31741
7NE4T4oBgHgl3EQfcgzB/content/tmp_files/2301.05084v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
7NE4T4oBgHgl3EQfcgzB/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
8dFLT4oBgHgl3EQftC_m/content/2301.12150v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8653509ad5702def198923fef1cf4594bb5bc98144de636b755efac6e654c9d5
3
+ size 9654352
8dFLT4oBgHgl3EQftC_m/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:535465bc44c1508ed9583c6d3d09f4834b6e726773a5b77f8fd74705c25edcd8
3
+ size 2490413
8dFLT4oBgHgl3EQftC_m/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:29a6e464cce0fea6d2631426dc606a92dfddc70d4e557ea76035ac813f1ca0ae
3
+ size 91193
9dAzT4oBgHgl3EQfFPqV/content/tmp_files/2301.01008v1.pdf.txt ADDED
@@ -0,0 +1,1078 @@
 
 
 
 
1
+ Topological Two-Dimensional Gravity
2
+ on Surfaces with Boundary
3
+ Jan Troostb
4
+ b Laboratoire de Physique de l’École Normale Supérieure
5
+ CNRS, ENS, Université PSL, Sorbonne Université,
6
+ Université de Paris, F-75005 Paris, France
7
+ E-mail:
8
9
+ Abstract
10
+ We solve two-dimensional gravity on surfaces with boundary in terms of contact
11
+ interactions and surface degenerations. The known solution of the bulk theory in terms
12
+ of a contact algebra is generalized to include boundaries and an enlarged set of boundary
13
+ operators. The latter allow for a linearization of the Virasoro constraints in terms of an
14
+ extended integrable KdV hierarchy.
15
+ arXiv:2301.01008v1 [hep-th] 3 Jan 2023
16
+
17
+ Contents
18
+ 1
19
+ Introduction
20
+ 1
21
+ 2
22
+ Open Topological Gravity
23
+ 2
24
+ 3
25
+ The Virasoro Algebra Representations
26
+ 4
27
+ 3.1
28
+ The Bulk Representation of the Virasoro Algebra
29
+ . . . . . . . . . . . . . . . .
30
+ 4
31
+ 3.2
32
+ The Extended Virasoro Representation . . . . . . . . . . . . . . . . . . . . . .
33
+ 5
34
+ 3.3
35
+ The Recursion Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
36
+ 6
37
+ 3.4
38
+ The Generalized Vertex Operators . . . . . . . . . . . . . . . . . . . . . . . . .
39
+ 8
40
+ 3.5
41
+ Amplitudes
42
+ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
43
+ 9
44
+ 4
45
+ The Extended Partition Function
46
+ 11
47
+ 4.1
48
+ The Generating Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
49
+ 11
50
+ 4.2
51
+ A Few More Amplitudes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
52
+ 12
53
+ 5
54
+ Conclusions
55
+ 13
56
+ 1
57
+ Introduction
58
+ Two-dimensional gravity on closed Riemann surfaces was solved in terms of matrix models
59
+ [1–3], conformal field theory [4–6] and intersection theory [7,8]. While aspects of gravity on
60
+ Riemann surfaces with boundary were partially understood in terms of matrix models early
61
+ on [9, 10], a rigorous theory of topological gravity on Riemann surfaces with boundary was
62
+ only recently established [11]. Since then, various perspectives on these theories have been
63
+ developed [12–17]. The main approaches are through geometry and matrix models. The points
64
+ of view provided by these methods on the resulting integrable KdV hierarchy are qualitatively
65
+ distinct and usefully complementary.
66
+ Two-dimensional gravity on closed Riemann surfaces was also understood in a conformal
67
+ field theory approach closely related to string theory [18]. The theory was solved in terms of
68
+ Virasoro recursion relations. These relations were derived from a contact algebra for vertex
69
+ operators that carries all the topological information provided by the surface as well as the
70
+ bundles on the moduli space of surfaces [7].
71
+ Our goal in this paper is to extend the contact algebra approach [18] to topological gravity
72
+ on Riemann surfaces with boundary. To that end, we study the contact algebra for operators
73
+ in the presence of boundaries as well as how the bulk algebra is represented on an extended
74
+ set of boundary vertex operators. Through representation theory and consistency conditions,
75
+ we fix all constants in the extended open Virasoro algebra, and manage to derive the Virasoro
76
+ recursion relation for the open and closed partition functions. Given a few initial correlators,
77
+ this allows to solve the theory.
78
+ The paper is structured as follows. In section 2 we review salient features of topological
79
+ gravity on Riemann surfaces with boundary [11]. The extended representation of the bulk
80
+ vertex operator contact algebra on the boundary vertex operators is constructed in section
81
+ 3 using consistency arguments. In section 4 the constraints are translated into a differential
82
+ 1
83
+
84
+ Virasoro algebra that acts on the generating function of topological correlators. At that point,
85
+ we make contact with the extended open string partition function [12] which is sufficient to
86
+ prove that the solution to the Virasoro constraints indeed coincides with the known solution
87
+ of open topological gravity. We conclude in section 5 with a summary and suggestions for
88
+ future research.
89
2 Open Topological Gravity
In this section, we recall features of the solution of open and closed topological gravity,
respectively on Riemann surfaces with [11] or without boundary [7]. For open topological gravity,
we indicate a few features of the rigorous geometric solution [11]. For topological gravity on
Riemann surfaces (without boundary), we also briefly recall aspects of the solution in terms
of a conformal field theory [18] with contact interactions. We then start out on the path to
generalize that solution to Riemann surfaces with boundary.1
Riemann Surfaces and Carriers of Curvature
Topological gravity on Riemann surfaces (without boundary) [7] satisfies the ghost number
conservation equation – or the dimension constraint on the integral over the moduli space of
surfaces –:

3g - 3 + n^c = \sum_{i=1}^{n^c} n^c_i .   (2.1)

The genus of the Riemann surface is g. The number of bulk vertex operator insertions is
n^c, and the n^c_i are the labels of the bulk vertex operators, referring to the power of the tangent
line bundle at a point [7]. A central idea in [18] was to graft the curvature associated to the
Riemann surface itself onto the bulk vertex operators, such that all topological properties of
the theory are captured by local operators – this in turn allows for the solution of the theory
in terms of contact interactions. When we associate a curvature 2(n^c_i - 1)/3 to each bulk
vertex operator \tau_{n^c_i} of power n^c_i, then ghost number conservation implies that:

\text{Integrated Curvature} = 2g - 2 = \sum_{i=1}^{n^c} \frac{2}{3}(n^c_i - 1) ,   (2.2)

namely that the curvature of the surface is faithfully represented. The puncture operator \tau_0
has the smallest curvature contribution, equal to -2/3, while the dilaton operator \tau_1 carries
no curvature at all. All other operators carry positive curvature (in this convention).
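As a quick sanity check of this bookkeeping (our own illustration, not part of the original text): whenever the labels satisfy the dimension constraint (2.1), the curvature assignment (2.2) follows automatically, since the two equations differ by an overall rescaling and shift.

```python
from fractions import Fraction

def curvature_sum(labels):
    # Each bulk operator tau_n carries curvature 2(n - 1)/3, cf. (2.2).
    return sum(Fraction(2, 3) * (n - 1) for n in labels)

# Sample data (g, [n_1, ..., n_{n^c}]) satisfying the dimension constraint
# (2.1): 3g - 3 + n^c = sum_i n^c_i.
samples = [(0, [0, 0, 0]), (1, [1]), (1, [0, 2]), (2, [4]), (2, [0, 1, 2, 4])]
for g, labels in samples:
    assert 3 * g - 3 + len(labels) == sum(labels)   # (2.1)
    assert curvature_sum(labels) == 2 * g - 2       # (2.2)
```

The first sample is the three-punctured sphere, whose three puncture operators carry the full curvature -2 of the sphere.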
Riemann Surfaces with Boundary
The integration over the moduli space of Riemann surfaces with boundaries, and with boundary
and bulk insertions, leads to the dimensionality constraint valid for non-zero open correlation
functions [11]:

3g' - 3 + n^o + 2n^c = 2 \sum_{i=1}^{n^c} n^c_i .   (2.3)

1See also [19] for an interesting alternative.
The doubled genus g' is the genus of the Riemann surface that is obtained by gluing a given
Riemann surface with at least one boundary to its reflection. We therefore have the relation
g' = 2g + b - 1, where b is the number of boundaries of the original surface and g its genus.
The number of boundary operator insertions \sigma is n^o [11]. In terms of the ordinary genus g
and the number of boundaries b, we have:

6g - 6 + 3b + 2n^c + n^o = 2 \sum_{i=1}^{n^c} n^c_i ,   (2.4)

in which we recognize the constraint (2.1) as the special case without boundaries.
Our first step in generalizing the solution of the closed theory in terms of contact interactions
[18] is to appropriately distribute curvature in the presence of boundaries and boundary
insertions. We continue to assign curvature to the bulk insertions as before [18] – see equation
(2.2). For simplicity, we momentarily imagine a single boundary, with a non-zero number n^o
of boundary insertions \sigma. The ghost number conservation equation (2.4) then suggests that
we should assign curvature -1/3 to each basic boundary insertion \sigma, in such a manner that
we find, for the disk with boundary insertions only, the equation:

\text{Boundary Curvature} = -1 = -\frac{n^o}{3} ,   (2.5)

in accord with our assignment for bulk curvature as well as the ghost number conservation
equation (2.4). The relative factor of one half compared to the basic bulk (puncture) operator
\tau_0 is due to the fact that the boundary operator increases the dimension of the moduli space
by real dimension one (compared to a bulk operator, which increases the real dimension by
two). This reasoning can be generalized to the case of multiple boundaries with insertions. It
is sufficient to introduce an extra label corresponding to each boundary (with its associated
boundary insertions). We conclude that the boundary operator \sigma carries curvature -1/3.
Higher Powers
To prepare for reasonings to come, it may be useful to interject a thought experiment at this
point. Note that the closed string vertex operator \tau_n can be thought of as a power of the
vertex operator \tau_1 in an approximate sense. The curvature it carries is then interpreted as
the curvature n \times 2/3, from which we subtract 2/3. The curvature remains bounded from
below, such that the vertex operators do not cut out such a large part of the surface for it
to disappear entirely.2 Similarly, if we were to attempt to define an arbitrary power of the
boundary operator \sigma, to which we attached curvature -1/3, the operator would not have
well-defined correlation functions. A manner to remedy this obstruction is to add explicit powers
of the string coupling u to the operator: \rho_n = u^{n-1} \sigma^n. Now, the powers of the string coupling
are counted by the genus g and the number of boundaries b on the one hand, and the explicit
powers of u on the other hand. Suppose we study a correlation function of operators \rho_{n^o_j} and
\tau_{n^c_i}. It satisfies the equation:

2g - 2 + b + \sum_{j=1}^{n^o} (n^o_j - 1) = \sum_{i=1}^{n^c} \frac{2(n^c_i - 1)}{3} + \sum_{j=1}^{n^o} \left( \frac{2 n^o_j}{3} - 1 \right) .   (2.6)

2This is dictated by geometry, or can be interpreted as a Seiberg bound [20].
This is still the ghost conservation equation (2.4), but rewritten in such a way as to make the
explicit string coupling contributions visible on the left hand side. We made use of the fact
that the coupling u corresponds to the vacuum expectation value of the exponential of the
dilaton operator \tau_1, which couples to curvature. The operator \rho_n still carries ghost number n,
but it also carries curvature 2n/3 - 1, as we made manifest in our manner of writing equation
(2.6).3 While the boundary operators that we will soon encounter are more intricate still,
they share features with the operators \rho_n.
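The rewriting (2.6) is an algebraic rearrangement of (2.4), with n^o in (2.4) counting the total number of \sigma insertions, i.e. \sum_j n^o_j here. A small check (our own illustration, over arbitrary integer data) confirms that the two forms hold for exactly the same configurations:

```python
from fractions import Fraction
from itertools import product

def lhs(g, b, no_labels):
    # Left hand side of (2.6): the explicit string coupling (u) powers.
    return 2 * g - 2 + b + sum(nj - 1 for nj in no_labels)

def rhs(nc_labels, no_labels):
    # Right hand side of (2.6): total curvature carried by the insertions.
    return sum(Fraction(2 * (ni - 1), 3) for ni in nc_labels) + \
           sum(Fraction(2 * nj, 3) - 1 for nj in no_labels)

for g, b, nc_labels, no_labels in product(range(3), range(1, 3),
                                          [[0], [1, 2], []], [[1], [2, 3]]):
    total_sigma = sum(no_labels)
    eq24 = (6 * g - 6 + 3 * b + 2 * len(nc_labels) + total_sigma
            == 2 * sum(nc_labels))            # ghost equation (2.4)
    eq26 = lhs(g, b, no_labels) == rhs(nc_labels, no_labels)   # equation (2.6)
    assert eq24 == eq26
```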
3 The Virasoro Algebra Representations
In this section, we briefly remind the reader of an intuitive manner to solve topological
gravity on closed Riemann surfaces using contact terms [18]. We extend the approach to
include boundaries and boundary vertex operators, which can be viewed as representing the
contact algebra. This section heavily relies on background provided in [18], to which we
refer for more details.
3.1 The Bulk Representation of the Virasoro Algebra
The method of [18] to solve topological gravity on closed Riemann surfaces is to represent all
the topological data in terms of local operators in a conformal field theory. As an example, we
already saw that the curvature (which codes the genus) was assigned to local bulk operator
insertions. Intersection numbers are then represented as integrals over the moduli space of
the Riemann surface of conformal field theory correlators.4 We denote the curvature carrying
bulk local operator insertions \tau_n. Due to the topological nature of the theory, the contact
interactions between the local operators suffice to compute the intersection numbers on the
moduli space of Riemann surfaces.
The method of [18] to solve topological gravity uses the fact that the algebra of integrated
vertex operators is represented on localized bulk vertex operators (or states) in the form [18]:

\oint \tau_m |\tau_n\rangle = A^n_m |\tau_{n+m-1}\rangle ,   (3.1)

where the localized vertex operator \tau_n is assumed to lie in the disk D_\epsilon over which the vertex
operator \tau_m is integrated. The representation arises from the contact term between the
operators \tau_m and \tau_n. When we wish to compute the algebra of consecutive actions of the locally
integrated bulk vertex operators in the representation, we need to take into account that the
first integrated operator may enter into contact with the second integrated operator. To keep
track of this term, it is useful to define a measure of the non-commutativity of the operation
of localizing the vertex operator and integrating over it [18]:

\oint \tau_m |\tau_n\rangle - \oint \tau_n |\tau_m\rangle = C_{nm} |\tau_{n+m-1}\rangle .   (3.2)

3For n \geq 1 the boundary operator now has sufficient curvature to have well-defined correlation functions.
4This is heavily reminiscent of string theory (see e.g. [21]) and we allow string theory nomenclature to creep into our language.
Then, when we consider the action of two integrated vertex operators on a localized operator,
we find a consistency condition between the representation coefficients A and the measure of
non-commutativity C [18]:

A^{m+k-1}_n A^k_m - A^{n+k-1}_m A^k_n + C_{nm} A^k_{m+n-1} = 0 .   (3.3)

The coefficient A^n_m is calculated in [18], and it equals the curvature of the insertion plus one:

A^n_m = \frac{2}{3}(n - 1) + 1 ,   (3.4)

and we retain that we have the contact contribution

\oint \tau_m |\tau_n\rangle = \frac{2n + 1}{3} |\tau_{n+m-1}\rangle .   (3.5)

In turn this implies that the measure of non-commutativity C is proportional to the difference
in the curvature of the insertions:

C_{mn} = \frac{2}{3}(m - n) .   (3.6)

Note that when we identify the coefficients A^n of the representation on the bulk vertex operator
space with an operator L_{n-1}, then the commutation relation (3.3) shows that we have a
representation of the Virasoro algebra:

[L_n, L_m] = \frac{2}{3}(m - n) L_{m+n} .   (3.7)

Thus, the contact algebra is a Virasoro algebra, represented on the space of bulk operator
insertions. This is an essential tool in the solution of the bulk topological gravity theory [18],
and we wish to extend it to Riemann surfaces with boundary.
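With A^n_m = (2n + 1)/3 and C_{nm} = 2(n - m)/3, the consistency condition (3.3) holds identically; the snippet below (our own check at a range of integer values, with exact rational arithmetic) confirms this.

```python
from fractions import Fraction
from itertools import product

def A(n, m):
    # Contact coefficient A^n_m = 2(n - 1)/3 + 1 = (2n + 1)/3, cf. (3.4)-(3.5);
    # it is independent of the integrated operator label m.
    return Fraction(2 * n + 1, 3)

def C(n, m):
    # Measure of non-commutativity C_{nm} = 2(n - m)/3, cf. (3.6).
    return Fraction(2 * (n - m), 3)

# The consistency condition (3.3) for two integrated operators on |tau_k>:
for n, m, k in product(range(6), repeat=3):
    lhs = A(m + k - 1, n) * A(k, m) - A(n + k - 1, m) * A(k, n) \
          + C(n, m) * A(k, m + n - 1)
    assert lhs == 0
```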
3.2 The Extended Virasoro Representation
In the presence of a boundary, we first address the question of what happens when a bulk vertex
operator is integrated over a small ring R_\epsilon near an empty boundary. We propose that the
integrated vertex operator in that case generates an operator on the boundary:

\oint \tau_n | \rangle_b = u \, c(n) |\sigma^b_{n-1}\rangle .   (3.8)

We have introduced operators \sigma^b_n that live on a boundary of the Riemann surface. We have
stripped off one factor of the string coupling constant u on the right hand side – we think of the
bulk vertex operators as carrying one power of the coupling constant more than the boundary
operators.5 We have allowed for a representation coefficient c(n) that is undetermined for
now. The curvature of the operator \sigma^b_n equals the curvature of the bulk vertex operator minus
one, to compensate for the string coupling constant prefactor. Therefore, the curvature of the
5This is standard in string theory. Alternatively, it can be viewed as a consequence of the relative contribution of bulk and boundary vertex operators to the dimension of moduli space.
operator \sigma^b_{n-1} equals 2(n - 1)/3 - 1. We allow for operators with n \geq 2 and set other terms
to zero.
Thus, we have introduced a new space parameterized by the operators \sigma^b_n. Our next step
is to assume that the integrated bulk vertex operators also act on this space and provide a
new representation of the Virasoro algebra. We need to make sure that the resulting operator
carries the sum of the curvatures of the operators on the left hand side, and we propose that
the contact algebra coefficient is again fixed to equal the curvature of the operator plus one –
see equation (3.4). We thus find:

\oint \tau_m |\sigma^b_n\rangle = \frac{2n}{3} |\sigma^b_{m+n-1}\rangle .   (3.9)

This natural proposal partially fixes the normalization of the boundary vertex operators. We
still need to check whether the integrated vertex operators satisfy the Virasoro algebra. The
action (3.9) is indeed a representation of the Virasoro algebra, as before. For the action (3.8)
to also enter into a representation of the Virasoro algebra, the coefficient c(n) needs to be a
linear function of n. Finally, we use a choice of overall normalization of the boundary vertex
operators to set c(n) = (n + a)/3, where a is a constant to be determined. We will later argue that
consistency requires a = 0, and we therefore find the action on an empty boundary:

\oint \tau_n | \rangle_b = u \, \frac{n}{3} |\sigma^b_{n-1}\rangle .   (3.10)

In summary, we have extended the space of boundary operators considerably, and we have
represented the Virasoro contact algebra on that space.
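The linearity requirement on c(n) can be made concrete (our own illustration, under the convention for the commutator of integrated operators derived from (3.3)): acting with two integrated bulk operators on an empty boundary via (3.8) and then (3.9), closure onto the structure constants 2(m - n)/3 of the bulk contact algebra amounts to c(m)(m - 1) - c(n)(n - 1) = (m - n) c(m + n - 1), which holds for any linear c.

```python
from fractions import Fraction
from itertools import product

def c(x, a):
    # Boundary coefficient ansatz c(n) = (n + a)/3 of section 3.2.
    return Fraction(x + a, 3)

# The closure condition holds for EVERY value of the constant a, so the
# algebra alone only fixes c(n) up to the linear ambiguity parameterized by a:
for a2 in range(-6, 7):
    a = Fraction(a2, 2)
    for n, m in product(range(8), repeat=2):
        assert c(m, a) * (m - 1) - c(n, a) * (n - 1) == (m - n) * c(m + n - 1, a)
```

The value a = 0 is therefore not fixed at this stage; it follows from the amplitude consistency check performed at the end of section 3.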
3.3 The Recursion Relation
For topological gravity on closed Riemann surfaces, the representation of the contact algebra
was leveraged into a recursion relation for the topological correlators [18]. The integral over
bulk vertex operators was split into an integral over small disks where other operators reside,
neighbourhoods of nodes, and uneventful regions. The fact that integrals of bulk operators
over the whole Riemann surface should commute, combined with the contact algebra, gave
rise to consistency conditions on the contributions of nodes, which in turn provided a recursion
relation for correlators. Our claim is that the same reasoning applies to the integrated bulk
vertex operators on Riemann surfaces with boundary. We again need to take into account the
possible development of nodes on the Riemann surface, as well as possible generalized contact
terms with the boundary, which we described previously.
To ease into the generalized recursion relation, let us recall the closed recursion relation
first [18]:6

\langle \tau_{n+1} \prod_{i \in C} \tau_{n_i} \rangle^c = \sum_j \frac{2n_j + 1}{3} \langle \tau_{n+n_j} \prod_{i \neq j} \tau_{n_i} \rangle^c   (3.11)
+ \frac{u^2}{18} \sum_{k=0}^{n-1} \Big( \langle \tau_k \tau_{n-k-1} \prod_{i \in C} \tau_{n_i} \rangle^c + \sum_{C = C_1 \cup C_2} \langle \tau_k \prod_{i \in C_1} \tau_{n_i} \rangle^c \langle \tau_{n-k-1} \prod_{j \in C_2} \tau_{n_j} \rangle^c \Big) .

6We normalize the bulk correlators as \langle \tau_0 \tau_0 \tau_0 \rangle^c = 1 and \langle \tau_1 \rangle^c = 1/24. We often set the string coupling u to one.
Figure 1: Two degenerations of Riemann surfaces are depicted. The left figure represents a
surface splitting into two surfaces. The sum of the genera is conserved. The right figure shows
a genus two Riemann surface that turns into a genus one Riemann surface, lowering the genus
by one.
The set C is a set of bulk operator insertions. The first term on the right hand side arises
from the bulk contact algebra representation (3.5), while the second line has its origins in the
fact that a Riemann surface can develop nodes, which either give rise to a Riemann surface of
one genus less, or split the Riemann surface into two closed Riemann surfaces. See Figure
1 and reference [18].
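As a concrete worked example (our own, in the normalizations of footnote 6), the recursion (3.11) with \tau_{n+1} = \tau_2 acting on four punctures gives \langle \tau_2 \tau_0^4 \rangle. The contact term uses \langle \tau_1 \tau_0^3 \rangle = 3 \cdot (1/3) \langle \tau_0^3 \rangle = 1; we read the splitting sum as counting the six distributions of the four punctures over two three-punctured spheres, while the handle term would sit at negative genus at this order in u and drops out:

```python
from fractions import Fraction
from math import comb

tau0_cubed = Fraction(1)   # <tau_0^3>^c = 1, the normalization of footnote 6

# Contact of tau_1 with each of three punctures, coefficient (2*0 + 1)/3:
tau1_tau0_cubed = 3 * Fraction(1, 3) * tau0_cubed

# (3.11) for <tau_2 tau_0^4>: four contact terms, plus the genus-zero
# splitting into two three-punctured spheres, comb(4, 2) = 6 ways:
tau2_tau0_4 = 4 * Fraction(1, 3) * tau1_tau0_cubed \
    + Fraction(1, 18) * comb(4, 2) * tau0_cubed * tau0_cubed

print(tau2_tau0_4)  # 5/3
```

The value 5/3 cross-checks against the standard genus-zero intersection number \langle \tau_2 \tau_0^4 \rangle = 1 after the rescaling of variables recorded in footnote 15 below.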
The generalization to the case of extended open correlators is:

\langle \tau_{n+1} \prod_{i \in C} \tau_{n_i} \prod_{l \in O} \sigma^b_{n_l} \rangle^{o,ext}
= \sum_{j \in C} \frac{2n_j + 1}{3} \langle \tau_{n+n_j} \prod_{i \neq j} \tau_{n_i} \prod_l \sigma^b_{n_l} \rangle^{o,ext} + \sum_{j \in O} \frac{2n_j}{3} \langle \prod_i \tau_{n_i} \, \sigma^b_{n+n_j} \prod_{l \neq j} \sigma^b_{n_l} \rangle^{o,ext}
+ u \, \frac{n + 1}{3} \langle \sigma^b_n \prod_{i \in C} \tau_{n_i} \prod_{l \in O} \sigma^b_{n_l} \rangle^{o,ext}   (3.12)
+ \frac{u^2}{18} \sum_{k=0}^{n-1} \langle \tau_k \tau_{n-k-1} \prod_{i,j \in CO} \tau_{n_i} \sigma^b_{n_j} \rangle^{o,ext}
+ \frac{u^2}{18} \sum_{k=0}^{n-1} \sum_{(e,f)} \sum_{CO = CO_1 \cup CO_2} \langle \tau_k \prod_{i,j \in CO_1} \tau_{n_i} \sigma^b_{n_j} \rangle^e \langle \tau_{n-k-1} \prod_{l,m \in CO_2} \tau_{n_l} \sigma^b_{n_m} \rangle^f .

The first line corresponds to the fact that we are considering an integrated bulk operator \tau_{n+1}.
It gives rise to the contact terms in the second line, from the bulk contact term (3.5) and the
boundary contact term (3.9). The third line arises from the naked boundary term (3.10). The
fourth line arises from pinching off a handle. The fifth line requires explanation. We sum
over the sectors (e, f), which can be either (open, closed), (closed, open) or (open, open).7 The
first two arise when we split the surface into a closed Riemann surface and a Riemann surface
with boundary.8 In that case, the open string sector will necessarily contain all the boundary
insertions. The third value, (open, open), arises when a node splits the Riemann surface into
two Riemann surfaces with boundary. The set CO indicates the set of all bulk and boundary
insertions, and we sum over their possible distributions CO_1 and CO_2 on the two disjoint
surfaces.9
7We exclude the case with no boundaries from our definition of extended open correlators. See equation (3.11) for the purely closed correlators.
8We effectively obtain a factor of two from these first two sectors.
Note that the third line on the right hand side contains a correlator that is of one order
less in the string coupling constant, and the fourth and fifth lines contain correlators that are
down by two orders in the string coupling constant u.
3.4 The Generalized Vertex Operators
To make further progress, we must discuss the nature of the extended set of boundary vertex
operators \sigma^b_n in more detail. We recall that in the geometric open topological theory [11], we
found a single boundary vertex operator \sigma of curvature -1/3 in section 2. This matches the
curvature of \sigma^b_1, and we will indeed identify the two operators: \sigma = \sigma^b_1.10 The curvature of
the general operator \sigma^b_n is 2n/3 - 1. To make such operators on the boundary, we can use a
power of the operator \sigma as well as the string coupling constant (effectively of curvature one).
A natural guess is that there is a component \rho_n = u^{n-1} \sigma^n to the boundary vertex operator
\sigma^b_n (as previewed in section 2). However, we also need to allow for more drastic processes.
Up to now, a number of complications were implicit in our extended boundary vertex
operators. To start with, we concentrate on the simplest extended operator, namely \sigma^b_2. It
naively corresponds to an insertion of u\sigma\sigma. However, to understand further possibilities, we
need to study the boundary analogue of nodes. A strip (or open string propagator) can
be squeezed near the boundary of the moduli space of open Riemann surfaces in various
manners. Either the number of boundaries can decrease, as in an annulus to disk transition,
or the number of boundaries can increase, as in a disk to two disks transition.11 See Figure
2. When the integrated bulk vertex operator is close to these singular configurations, it can
either give rise to boundary vertex operators that sit on a single boundary, or it can give rise
to boundary vertex operators that sit on two different boundaries of disconnected surfaces.
The boundary vertex operator \sigma^b_2 must capture both these possibilities. Thus, we propose
the equation:

\langle \ldots \sigma^b_2 \ldots \rangle^{o,ext} = b_1 u \langle \ldots \sigma\sigma \rangle^{o,ext} + b_2 u \langle \ldots \sigma \rangle \langle \sigma \ldots \rangle^{o,ext} .   (3.13)

This equation shows that the generalized vertex operator \sigma^b_2 exhibits a non-local characteristic.
We recall that in the case of a node degeneration (see Figure 1), there was a universality
between losing a handle and splitting a surface – both terms have equal coefficient in the
second line of equation (3.11). We propose a similar universality here for the two terms, in
which the boundary operators remain on the same boundary, or split – compare Figures 1 and
2 – and set the two constants in the above equation equal, namely b_1 = b = b_2. To determine
the overall constant b, we calculate an amplitude.
9If one labels boundaries, and their associated boundary operators, a finer combinatorics and summation is necessary.
10There is a possible normalization factor between these two operators. Our previous choice of overall normalization of the boundary operators makes sure that this identification is spot on in standard conventions.
11There is a third degeneration process in which the genus drops by one. When one labels boundary components, it will play a role. See e.g. [22] for a discussion in open/closed string field theory.
Figure 2: Two degenerations of Riemann surfaces with boundary are drawn. The left figure
represents a disk splitting into two disks. The right figure shows an annulus that turns into a
disk.
3.5 Amplitudes
To understand the content of the recursion relation further, we need initial conditions, which
we take from the most basic geometric calculations [11]. The boundary three-point function
is the only non-zero disk amplitude with only boundary \sigma insertions, and we
normalize it to one:12

\langle \sigma\sigma\sigma \rangle^{o,ext} = 1 .   (3.14)

The other initial condition is that the bulk-boundary one-point function on the disk equals:

\langle \tau_0 \sigma \rangle^{o,ext} = 1 .   (3.15)

To save on indices, we will drop the upper index on the correlator from now on – it should be
clear from the context which correlator we have in mind.
To understand the structure of the vertex operators \sigma^b_{m \geq 2}, we can use the puncture
equation, namely the recursion relation (3.12) for n = -1:

\langle \tau_0 \prod_{i \in C} \tau_{n_i} \prod_{l \in O} \sigma^b_{n_l} \rangle = \sum_{j \in C} \frac{2n_j + 1}{3} \langle \tau_{n_j - 1} \prod_{i \neq j} \tau_{n_i} \prod_l \sigma^b_{n_l} \rangle + \sum_{j \in O} \frac{2n_j}{3} \langle \prod_i \tau_{n_i} \, \sigma^b_{n_j - 1} \prod_{l \neq j} \sigma^b_{n_l} \rangle .   (3.16)

Let us also be explicit about the dilaton equation:

\langle \tau_1 \prod_{i \in C} \tau_{n_i} \prod_{l \in O} \sigma^b_{n_l} \rangle = \sum_{j \in C} \frac{2n_j + 1}{3} \langle \prod_i \tau_{n_i} \prod_l \sigma^b_{n_l} \rangle + \sum_{j \in O} \frac{2n_j}{3} \langle \prod_i \tau_{n_i} \prod_l \sigma^b_{n_l} \rangle .   (3.17)

12This is a disk amplitude. We have set u = 1 once more.
We are ready to calculate a first amplitude in two manners, using either the puncture equation
or the factorization equation (3.13):

\langle \tau_0 \sigma^b_2 \sigma\sigma \rangle = \frac{4}{3} \langle \sigma\sigma\sigma \rangle = 2b \langle \tau_0 \sigma \rangle \langle \sigma\sigma\sigma \rangle .   (3.18)

In the first equality we used the puncture equation (3.16). In the second equality, we used the
ansatz (3.13) and allowed for the two possible ways in which the vertex operators can split over
two correlators to give a non-vanishing result.13 Note that in the second equality a factor of the
string coupling constant implicitly cancelled between the two disk amplitudes and the expression
for the operator \sigma^b_2. Using the normalization of the initial conditions, we find:

b = \frac{2}{3} .   (3.19)

This fixes our reading of the extended vertex operator \sigma^b_2 once and for all. For the next
extended operator \sigma^b_3, we propose a similar universal ansatz, consistent with curvature
conservation and splitting off a single vertex operator \sigma:

\langle \ldots \sigma^b_3 \ldots \rangle = c \left( u \langle \ldots \sigma \sigma^b_2 \rangle + u \langle \ldots \sigma^b_2 \rangle \langle \sigma \ldots \rangle \right) .   (3.20)

We can again determine the constant c, using either the puncture or the factorization equation
to determine one and the same amplitude consistently:

\langle \tau_0 \sigma^b_3 \sigma^4 \rangle = 2 \langle \sigma^b_2 \sigma^4 \rangle = 8 \langle \sigma^3 \rangle \langle \sigma^3 \rangle
= c \langle \tau_0 \sigma \rangle \langle \sigma^b_2 \sigma^4 \rangle + 6c \langle \tau_0 \sigma^b_2 \sigma^2 \rangle \langle \sigma^3 \rangle = 4c \langle \tau_0 \sigma \rangle \langle \sigma^3 \rangle \langle \sigma^3 \rangle + 8c \langle \sigma^3 \rangle \langle \sigma^3 \rangle ,   (3.21)

and find that again c = 2/3 – the constant is fixed once more in terms of the bulk-boundary
one-point function \langle \tau_0 \sigma \rangle. Continuing recursively in this manner, e.g. exploiting the correlation
functions \langle \tau_0 \sigma^b_n \sigma^{2(n-1)} \rangle, we find:

\langle \ldots \sigma^b_n \rangle = u \, \frac{2}{3} \left( \langle \ldots \sigma \sigma^b_{n-1} \rangle + \langle \ldots \sigma^b_{n-1} \rangle \langle \sigma \ldots \rangle \right) .   (3.22)

Thus, we have determined the intricate nature of the extended boundary vertex operators \sigma^b_n,
and how they recursively code the splitting of boundaries of open Riemann surfaces.
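The reduction (3.22) can be implemented for correlators of the form \langle \sigma^b_n \sigma^m \rangle at u = 1. The sketch below is our own illustration: it assumes that among pure \sigma insertions only the disk three-point function \langle \sigma^3 \rangle = 1 survives (as dictated by ghost number conservation at this order), and that the spectator \sigma's are distributed over the two factors of the split term in all possible ways.

```python
from fractions import Fraction
from functools import lru_cache
from math import comb

def pure(m):
    # Pure boundary correlators: only the disk three-point function survives.
    return Fraction(1) if m == 3 else Fraction(0)

@lru_cache(maxsize=None)
def ext(n, m):
    # <sigma^b_n sigma^m> at u = 1, via the reduction (3.22):
    # <.. sigma^b_n> = 2/3 (<.. sigma sigma^b_{n-1}> + <.. sigma^b_{n-1}><sigma ..>),
    # distributing the m spectator sigma's over the two factors of the split.
    if n == 1:
        return pure(m + 1)                  # identification sigma^b_1 = sigma
    split = sum(comb(m, k) * ext(n - 1, k) * pure(m - k + 1)
                for k in range(m + 1))
    return Fraction(2, 3) * (ext(n - 1, m + 1) + split)

print(ext(2, 1))   # 2/3 : <sigma^b_2 sigma>
print(ext(2, 4))   # 4   : <sigma^b_2 sigma^4>, used in (3.21) and (4.9)
print(ext(3, 0))   # 4/9 : <sigma^b_3>, used in (4.12)
```

The printed values reproduce \langle \sigma^b_2 \sigma \rangle = 2/3, \langle \sigma^b_2 \sigma^4 \rangle = 4 and \langle \sigma^b_3 \rangle = 4/9, which enter the amplitudes of section 4.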
Tying up a loose end: fixing the constant a
We tie up a loose end at the hand of another amplitude. The amplitude illustrates a splitting
of open Riemann surfaces involving two disk one-point functions. We calculate the amplitude
\langle \tau_3 \tau_0 \sigma\sigma \rangle in two manners. We can apply the recursion to the operator \tau_3, or to the operator
\tau_0 first. In this calculation, we restore the possible constant a that we introduced in subsection
3.2, and use an appropriately modified recursion relation. We demonstrate that the constant
can be determined by consistency. Using the a-modified recursion relation, we find:

\langle \tau_3 \tau_0 \sigma^2 \rangle = \frac{7}{3} \langle \tau_2 \sigma^2 \rangle = \frac{1}{3} \langle \tau_2 \sigma^2 \rangle + \frac{2}{9} \langle \tau_0 \sigma \rangle \langle \tau_1 \tau_0 \sigma \rangle + \frac{4}{3} \langle \tau_0 \sigma^b_3 \sigma \rangle + \frac{3 + a}{3} \langle \sigma^b_2 \tau_0 \sigma^2 \rangle .   (3.23)

This implies:

\langle \tau_2 \sigma^2 \rangle = \frac{1}{9} \langle \tau_0 \sigma \rangle \langle \tau_0 \sigma \rangle + \frac{4}{3} \langle \sigma^b_2 \sigma \rangle + \frac{3 + a}{3} \cdot \frac{2}{3} \langle \sigma^3 \rangle .   (3.24)

13Ghost number conservation applies to each factor separately.
We can compute the latter correlator in another manner, using the puncture equation and
the modified recursion relation:

\langle \tau_2 \sigma^2 \rangle = \frac{1}{9} \langle \tau_0 \sigma \rangle \langle \tau_0 \sigma \rangle + \frac{4}{3} \langle \sigma^b_2 \sigma \rangle + \frac{2 + a}{3} \langle \sigma^3 \rangle .   (3.25)

Using our previous results, we find full consistency if and only if a = 0. Thus, we have tied up
the loose end of subsection 3.2.
4 The Extended Partition Function
In this section we introduce the generating function of extended open string correlators and
prove that the recursion relations for the correlators imply Virasoro constraints on the
generating function. This allows us to make our results more rigorous by connecting to the
mathematics literature on the integrable structure of the intersection theory on moduli spaces
of Riemann surfaces with boundary [12]. We conclude the section with a few example
amplitudes.
4.1 The Generating Function
We recall the generating functions of closed as well as open topological gravity correlation
functions [11]:

F^c = \sum_{g \geq 0, n \geq 1, 2g-2+n > 0} \frac{u^{2g-2}}{n!} \sum_{k_i \geq 0} \langle \tau_{k_1} \ldots \tau_{k_n} \rangle^c_g \, t_{k_1} \ldots t_{k_n}
F^{o,geom} = \sum_{g', k, l \geq 0, 2g'-2+k+2l > 0} \sum_{a_i \geq 0} \frac{u^{g'-1}}{k! \, l!} \langle \tau_{a_1} \ldots \tau_{a_l} \sigma^k \rangle^o_{g'} \, s^k \prod_{i=1}^{l} t_{a_i} .   (4.1)

In view of our enlarged space of boundary vertex operators, we also introduce a generating
function for extended open topological gravity correlation functions:

F^{o,ext} = \sum_{g', k, l \geq 0, 2g'-2+k+2l > 0} \sum_{a_i, b_i \geq 0} \frac{u^{g'-1}}{k! \, l!} \langle \tau_{a_1} \ldots \tau_{a_l} \sigma^b_{b_1} \ldots \sigma^b_{b_k} \rangle^{o,ext}_{g'} \prod_i t_{a_i} \prod_j s_{b_j} .   (4.2)
The Extended Virasoro Constraints
We define the Virasoro generators

L_n = \sum_{i \geq 0} \frac{2i + 1}{2} t_i \partial_{t_{i+n}} - \frac{3}{2} \partial_{t_{n+1}} + \frac{u^2}{12} \sum_{i=0}^{n-1} \partial_{t_i} \partial_{t_{n-i-1}} + \frac{3}{4} \frac{t_0^2}{u^2} \delta_{n,-1} + \frac{1}{16} \delta_{n,0}   (4.3)
L^{ext}_n = L_n + \sum_{i \geq 0} (i + 1) s_{i+1} \partial_{s_{n+i+1}} + u \, \frac{n + 1}{2} \partial_{s_n} + \frac{3}{2} \frac{s_1}{u} \delta_{n,-1} + \frac{3}{4} \delta_{n,0}   (4.4)

for n \geq -1. These are defined such that the recursion relation (3.11) on the closed correlators,
as well as the recursion relation (3.12) on the extended open correlators, leads to the constraints:

L_n \exp F^c = 0
L^{ext}_n \exp(F^c + F^{o,ext}) = 0 .   (4.5)

The extra constant terms in the closed Virasoro algebra (4.3) are due to the initialization
cases \langle \tau_0^3 \rangle^c = 1 and \langle \tau_1 \rangle^c = 1/24, at genus zero and one respectively, while the initial
conditions \langle \sigma^3 \rangle = 1 = \langle \tau_0 \sigma \rangle on the disk lead to the extra constants in the extended Virasoro
algebra (4.4), which satisfies14

[L_m, L_n] = (m - n) L_{m+n} .   (4.6)

At this stage, we are able to make contact with rigorous results – these constraints on an
extended partition function of open topological correlators, defined through an extended (or
unconstrained) integrable KdV hierarchy, were found to hold in [12].15 The relation between
the operators \sigma^b_n and \sigma, as well as the string coupling constant, is neatly captured by a relation
between derivatives of the extended partition function:

\partial_{s_n} = \left( \frac{2u}{3} \right)^{n-1} \partial^n_{s_1} .   (4.7)

This equation was proven from the KdV integrable hierarchy perspective in [12]. Using this
equation, and setting the extended open times s_{n \geq 2} to zero, the relation between derivatives
implies the higher order Virasoro constraints on the geometric open topological partition
function, where the open Virasoro generators are [11]:

L^o_n = L_n + \left( \frac{2u}{3} \right)^n \partial^{n+1}_{s_1} + \frac{n + 1}{2} u \left( \frac{2u}{3} \right)^{n-1} \partial^n_{s_1} + \delta_{n,-1} \frac{3}{2} \frac{s_1}{u} + \delta_{n,0} \frac{3}{4} .   (4.8)

The Virasoro constraints and the initialization condition are sufficient to determine the full
partition function [11, 12]. Through the generating function of extended correlators, we have
connected our arguments with rigorous results on intersection theory on moduli spaces of
Riemann surfaces with boundary [11, 12].
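The first-order (scaling) part of the generators (4.3), L_n \supset \sum_{i \geq 0} ((2i+1)/2) t_i \partial_{t_{i+n}}, already closes for n, m \geq 0 with the structure constants of (4.6); the second-derivative and constant terms complete this to a symmetry at the boundary of the index range. A small stdlib check of the first-order part (our own illustration, representing polynomials in the t_i as dictionaries of exponent tuples):

```python
from fractions import Fraction

N = 12  # variables t_0 .. t_{N-1}; large enough to avoid edge effects here

def apply_first_order(n, poly):
    # First-order part of L_n in (4.3): sum_{i>=0} (2i+1)/2 * t_i * d/dt_{i+n}.
    # Polynomials are dicts {exponent tuple: Fraction coefficient}.
    out = {}
    for mono, coeff in poly.items():
        for i in range(N):
            j = i + n
            if 0 <= j < N and mono[j] > 0:
                new = list(mono)
                new[j] -= 1
                new[i] += 1
                key = tuple(new)
                out[key] = out.get(key, Fraction(0)) + \
                    coeff * mono[j] * Fraction(2 * i + 1, 2)
    return {k: v for k, v in out.items() if v != 0}

def commutator(m, n, poly):
    a = apply_first_order(m, apply_first_order(n, poly))
    b = apply_first_order(n, apply_first_order(m, poly))
    return {k: a.get(k, Fraction(0)) - b.get(k, Fraction(0))
            for k in set(a) | set(b)
            if a.get(k, Fraction(0)) != b.get(k, Fraction(0))}

# Test polynomial t_0 * t_2^2 + t_1 * t_3:
m1 = [0] * N; m1[0], m1[2] = 1, 2
m2 = [0] * N; m2[1], m2[3] = 1, 1
poly = {tuple(m1): Fraction(1), tuple(m2): Fraction(1)}

# [L_m, L_n] = (m - n) L_{m+n} on the first-order parts, for m, n >= 0:
for m, n in [(0, 1), (1, 2), (0, 3), (2, 3)]:
    assert commutator(m, n, poly) == \
        {k: (m - n) * v for k, v in apply_first_order(m + n, poly).items()}
```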
4.2 A Few More Amplitudes
For illustrative purposes, we calculate a few more amplitudes. They render the integrable
hierarchy structure, the Virasoro constraints and how to solve them more concrete.
4.2.1 Amplitudes on the Disk
We have already indicated that on the disk only the third power of the elementary boundary
vertex operator \sigma has a non-zero correlation function, and that it equals one, \langle \sigma^3 \rangle = 1. The
disk bulk-boundary one-point function \langle \tau_0 \sigma \rangle is also one, by a choice of normalization. Amplitudes
14These generators are rescaled by a factor of 2/3 compared to section 3, in order to reach a standard normalization for the Virasoro algebra.
15The translation of variables and normalizations is: L^{there,ext}_n = (3/2)^n L^{ext}_n, t^{there}_n = 3^{-n}(2n + 1)!! \, t_n and s^{there}_{n-1} = (2/3)^{n-1} n! \, s_n.
involving extended boundary vertex operators are computed through the reduction formula
(3.22). A non-trivial example is:

\langle \tau_2 \sigma^5 \rangle = \frac{10}{3} \langle \sigma^b_2 \sigma^4 \rangle = \frac{40}{3} ,   (4.9)

where we used the recursion relations (3.12) and (3.22), as well as the 6 choices of factorization.
After taking into account the different normalization in footnote 15, this agrees with a
more generic formula in [11]. Another interesting correlation function is \langle \tau_2 \tau_0 \sigma\sigma\sigma \rangle. It can
be computed through the puncture equation (in the first line below) and/or the L_1 constraint
(in the second line below):

\langle \tau_2 \tau_0 \sigma\sigma\sigma \rangle = \frac{5}{3} \langle \tau_1 \sigma\sigma\sigma \rangle = \frac{10}{3} \langle \sigma\sigma\sigma \rangle
= \frac{1}{3} \langle \tau_1 \sigma\sigma\sigma \rangle + 2 \langle \tau_0 \sigma^b_2 \sigma\sigma \rangle = \frac{2}{3} \langle \sigma\sigma\sigma \rangle + \frac{8}{3} \langle \sigma\sigma\sigma \rangle = \frac{10}{3} \langle \sigma\sigma\sigma \rangle .   (4.10)

The two ways of computing are in agreement.
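The arithmetic of (4.9)-(4.10) can be cross-checked with exact fractions (our own illustration, using \langle \sigma^b_2 \sigma^4 \rangle = 4 from (3.22), \langle \tau_0 \sigma^b_2 \sigma\sigma \rangle = 4/3 from (3.18), and \langle \tau_1 \sigma^3 \rangle = 2 from the dilaton equation (3.17)):

```python
from fractions import Fraction

F = Fraction
sigma3 = F(1)                   # <sigma^3> = 1, the disk normalization (3.14)
ext_2_4 = F(4)                  # <sigma^b_2 sigma^4>, from the reduction (3.22)
tau0_s2_ss = F(4, 3) * sigma3   # <tau_0 sigma^b_2 sigma sigma>, cf. (3.18)
tau1_s3 = 3 * F(2, 3) * sigma3  # <tau_1 sigma^3> via the dilaton equation (3.17)

# (4.9): five boundary contact terms of tau_2, each with coefficient 2/3:
amp_49 = 5 * F(2, 3) * ext_2_4
assert amp_49 == F(40, 3)

# (4.10): puncture-equation route versus L_1-constraint route:
puncture_route = F(5, 3) * tau1_s3
l1_route = F(1, 3) * tau1_s3 + 2 * tau0_s2_ss
assert puncture_route == l1_route == F(10, 3)
```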
4.2.2 Higher Order Amplitudes
Amplitudes that are higher order in the string coupling exhibit qualitatively new phenomena.
We illustrate a few. We first compute amplitudes corresponding to cylinder diagrams, with two
boundaries and genus zero. An interesting amplitude that involves a closed-open factorization
due to a node can once again be computed in two manners:

\langle \tau_2 \tau_0 \tau_0 \sigma \rangle = \frac{5}{3} \langle \tau_1 \tau_0 \sigma \rangle = \frac{5}{3} \langle \tau_0 \sigma \rangle
= \frac{2}{3} \langle \tau_1 \tau_0 \sigma \rangle + \frac{1}{9} \langle \tau_0^3 \rangle^c \langle \tau_0 \sigma \rangle + \frac{2}{3} \langle \tau_0 \tau_0 \sigma^b_2 \rangle
= \frac{2}{3} \langle \tau_0 \sigma \rangle + \frac{1}{9} \langle \tau_0^3 \rangle^c \langle \tau_0 \sigma \rangle + \frac{8}{9} \langle \tau_0 \sigma \rangle \langle \tau_0 \sigma \rangle .   (4.11)

Both ways of computing the correlator lead to the same result, given the normalization of
the closed three-point function \langle \tau_0^3 \rangle^c as well as the bulk-boundary one-point function \langle \tau_0 \sigma \rangle.
Finally, we compute an order O(u^1) amplitude. It involves the one-loop closed one-point
function \langle \tau_1 \rangle^c:

\langle \tau_3 \sigma \rangle = \frac{2}{3} \langle \sigma^b_3 \rangle + \langle \sigma^b_2 \sigma \rangle + \frac{1}{9} (1 + \langle \tau_1 \rangle) \langle \tau_0 \sigma \rangle
= \left( \left( \frac{2}{3} \right)^3 + \frac{2}{3} \right) \langle \sigma^3 \rangle + \frac{1}{9} (1 + \langle \tau_1 \rangle) \langle \tau_0 \sigma \rangle .   (4.12)

Needless to say, many more results can be generated, e.g. by computer. We have provided a
few telling illustrations that provide insight into the foundations of the integrable hierarchy.
5 Conclusions

In the spirit of the solution of the bulk theory [18] and building on earlier mathematical work [11,12], we have solved two-dimensional topological gravity on Riemann surfaces with boundary. By making use of an extended set of boundary vertex operators, we rendered the representation of the contact algebra on the boundary linear. Only in a second step is the more complicated degeneration of surfaces with boundary taken into account, and the non-linear realization of the (half) Virasoro algebra found [12]. The picture in which the solution of the theory is provided through contact interactions is a welcome intuitive complement to the geometric and matrix model approaches.

While we have provided a compelling global picture, there are many details that remain to be worked out. It would be good to find the geometric counterpart to the extended set of boundary operators. The link between (the expectation values of) the conformal field theory fields implicit in our analysis [18] and the sections of vector bundles of open topological gravity can be clarified (e.g. by exploiting references [15,20]). The analysis of the contact terms in terms of an integration over a degeneration region of the moduli space of open Riemann surfaces would be interesting. It will also be instructive to compare our analysis to the geometric derivation of the topological recursion relation through closed and open factorization [11], intuitively reviewed in [15].

Another research direction is to exploit the insights developed here and apply them to more general theories. The generalization to the extended closed theory [23] comes to mind, but mostly that to open spin r curves. Geometric [24], integrable [25,26], matrix model [27,28] and conformal field theory insights [29] could be complemented by the perspective developed in this paper.

The study of these topological theories of gravity is worthwhile in its own right. It occasionally interfaces fruitfully with recent developments. For instance, the KdV integrable hierarchy governing topological gravity also permeates the two-dimensional JT-gravity holographic dual of a peculiar (SYK) one-dimensional quantum system; see e.g. [30] and references therein. We believe that the further study of these elementary solvable systems, their integrable hierarchy, but also their various manifestations in superficially different mathematical structures like topology, matrices and conformal field theory, is worthwhile, and may eventually contribute to our understanding of quantum gravity.
References

[1] E. Brezin and V. A. Kazakov, "Exactly Solvable Field Theories of Closed Strings," Phys. Lett. B 236 (1990) 144. doi:10.1016/0370-2693(90)90818-Q
[2] M. R. Douglas and S. H. Shenker, "Strings in Less Than One-Dimension," Nucl. Phys. B 335 (1990) 635. doi:10.1016/0550-3213(90)90522-F
[3] D. J. Gross and A. A. Migdal, "Nonperturbative Two-Dimensional Quantum Gravity," Phys. Rev. Lett. 64 (1990) 127. doi:10.1103/PhysRevLett.64.127
[4] V. G. Knizhnik, A. M. Polyakov and A. B. Zamolodchikov, "Fractal Structure of 2D Quantum Gravity," Mod. Phys. Lett. A 3 (1988) 819. doi:10.1142/S0217732388000982
[5] F. David, "Conformal Field Theories Coupled to 2D Gravity in the Conformal Gauge," Mod. Phys. Lett. A 3 (1988) 1651. doi:10.1142/S0217732388001975
[6] J. Distler and H. Kawai, "Conformal Field Theory and 2D Quantum Gravity," Nucl. Phys. B 321 (1989) 509. doi:10.1016/0550-3213(89)90354-4
[7] E. Witten, "On the Structure of the Topological Phase of Two-dimensional Gravity," Nucl. Phys. B 340 (1990) 281. doi:10.1016/0550-3213(90)90449-N
[8] M. Kontsevich, "Intersection theory on the moduli space of curves and the matrix Airy function," Commun. Math. Phys. 147 (1992) 1. doi:10.1007/BF02099526
[9] S. Dalley, C. V. Johnson, T. R. Morris and A. Watterstam, "Unitary matrix models and 2-D quantum gravity," Mod. Phys. Lett. A 7 (1992) 2753. doi:10.1142/S0217732392002226 [hep-th/9206060]
[10] C. V. Johnson, "On integrable c < 1 open string theory," Nucl. Phys. B 414 (1994) 239. doi:10.1016/0550-3213(94)90430-8 [hep-th/9301112]
[11] R. Pandharipande, J. P. Solomon and R. J. Tessler, "Intersection theory on moduli of disks, open KdV and Virasoro," arXiv:1409.2191 [math.SG]
[12] A. Buryak, "Open intersection numbers and the wave function of the KdV hierarchy," Moscow Math. J. 16 (2016) no.1, 27 [arXiv:1409.7957 [math-ph]]
[13] A. Alexandrov, "Open intersection numbers, Kontsevich-Penner model and cut-and-join operators," JHEP 1508 (2015) 028. doi:10.1007/JHEP08(2015)028 [arXiv:1412.3772 [hep-th]]
[14] A. Buryak and R. J. Tessler, "Matrix Models and A Proof of the Open Analog of Witten's Conjecture," Commun. Math. Phys. 353 (2017) no.3, 1299. doi:10.1007/s00220-017-2899-5 [arXiv:1501.07888 [math.SG]]
[15] R. Dijkgraaf and E. Witten, "Developments in Topological Gravity," arXiv:1804.03275 [hep-th]
[16] K. Aleshkin and V. Belavin, "Open minimal strings and open Gelfand-Dickey hierarchies," JHEP 1902 (2019) 043. doi:10.1007/JHEP02(2019)043 [arXiv:1811.04066 [hep-th]]
[17] A. Alexandrov, H. Muraki and C. Rim, "From minimal gravity to open intersection theory," arXiv:1904.06885 [hep-th]
[18] E. P. Verlinde and H. L. Verlinde, "A Solution of Two-dimensional Topological Quantum Gravity," Nucl. Phys. B 348 (1991) 457. doi:10.1016/0550-3213(91)90200-H
[19] D. Gaiotto and L. Rastelli, "A Paradigm of open/closed duality: Liouville D-branes and the Kontsevich model," JHEP 0507 (2005) 053. doi:10.1088/1126-6708/2005/07/053 [hep-th/0312196]
[20] R. Dijkgraaf, H. L. Verlinde and E. P. Verlinde, "Notes on topological string theory and 2-D quantum gravity," PUPT-1217.
[21] J. Polchinski, "String theory. Vol. 1: An introduction to the bosonic string," Cambridge University Press (2001). doi:10.1017/CBO9780511816079
[22] B. Zwiebach, "Oriented open-closed string theory revisited," Annals Phys. 267 (1998) 193. doi:10.1006/aphy.1998.5803 [arXiv:hep-th/9705241]
[23] A. Buryak, E. Clader and R. J. Tessler, "Closed extended r-spin theory and the Gelfand-Dickey wave function," Journal of Geometry and Physics 137, 132 [arXiv:1710.04829v3 [math.AG]]
[24] C. Faber, S. Shadrin and D. Zvonkine, "Tautological relations and the r-spin Witten conjecture," math/0612510.
[25] A. Buryak, E. Clader and R. J. Tessler, "Open r-spin theory and the Gelfand-Dickey wave function," arXiv:1809.02536 [math.SG]
[26] M. Bertola and D. Yang, "The partition function of the extended r-reduced Kadomtsev-Petviashvili hierarchy," J. Phys. A 48 (2015) no.19, 195205. doi:10.1088/1751-8113/48/19/195205 [arXiv:1411.5717 [math-ph]]
[27] E. Brezin and S. Hikami, "The intersection numbers of the p-spin curves from random matrix theory," JHEP 1302 (2013) 035. doi:10.1007/JHEP02(2013)035 [arXiv:1212.6096 [math-ph]]
[28] S. K. Ashok and J. Troost, "Topological Open/Closed String Dualities: Matrix Models and Wave Functions," JHEP 09 (2019) 064. doi:10.1007/JHEP09(2019)064 [arXiv:1907.02410 [hep-th]]
[29] H. Muraki and C. Rim, "Open KdV hierarchy of 2d minimal gravity of Lee-Yang series," arXiv:1808.07304 [hep-th]
[30] K. Okuyama and K. Sakai, "JT gravity, KdV equations and macroscopic loop operators," JHEP 01 (2020) 156. doi:10.1007/JHEP01(2020)156 [arXiv:1911.01659 [hep-th]]
9dAzT4oBgHgl3EQfFPqV/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

9dFLT4oBgHgl3EQfuS8F/vector_store/index.faiss ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ff632b5ef15dca00f0f2d6333deb331d1305c63cc71e136e370907fc8e2bd1e0
size 7536685
9tE4T4oBgHgl3EQfDQub/content/2301.04868v1.pdf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:fada408f84d19368f4bdf4892b31e1a5a6f7a8056dd2d72f220890d293399b53
size 876213

9tE4T4oBgHgl3EQfDQub/vector_store/index.faiss ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:e28a196339a72e57822564349563b19f8008303de844f2697286ec8cebc4c391
size 7405613

9tE4T4oBgHgl3EQfDQub/vector_store/index.pkl ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:f3133721d086302e8ff47f50f931fa1c16f22ff6424681b0f0e3676e4f69f5c2
size 232743
A9E0T4oBgHgl3EQfxwJo/content/tmp_files/2301.02650v1.pdf.txt ADDED
Model-Agnostic Hierarchical Attention for 3D Object Detection

Manli Shu1*, Le Xue2, Ning Yu2, Roberto Martín-Martín2,3, Juan Carlos Niebles2, Caiming Xiong2, Ran Xu2
1 University of Maryland, 2 Salesforce Research, 3 UT Austin

Abstract

Transformers as versatile network architectures have recently seen great success in 3D point cloud object detection. However, the lack of hierarchy in a plain transformer makes it difficult to learn features at different scales and restrains its ability to extract localized features. This limitation leaves them with imbalanced performance on objects of different sizes, with inferior performance on smaller ones. In this work, we propose two novel attention mechanisms as modularized hierarchical designs for transformer-based 3D detectors. To enable feature learning at different scales, we propose Simple Multi-Scale Attention, which builds multi-scale tokens from a single-scale input feature. For localized feature aggregation, we propose Size-Adaptive Local Attention with adaptive attention ranges for every bounding box proposal. Both of our attention modules are model-agnostic network layers that can be plugged into existing point cloud transformers for end-to-end training. We evaluate our method on two widely used indoor 3D point cloud object detection benchmarks. By plugging our proposed modules into the state-of-the-art transformer-based 3D detector, we improve the previous best results on both benchmarks, with the largest improvement margin on small objects.1

*Work done during an internship at Salesforce. [email protected].
1The code and models will be available at https://github.com/salesforce/Hierarchical_Point_Attention.

1. Introduction

3D point cloud data provides accurate geometric and spatial information, which is important to computer vision applications such as autonomous driving and augmented reality. Different from image data, which has a grid-like structure, point clouds consist of unordered irregular points. Due to such unique properties of point clouds, previous works have proposed various deep network architectures for point cloud understanding [7,8,22-25,34,37,48,50]. With the success of transformers in natural language processing [4,26,40] and 2D vision [5,14,39], attention-based architectures for point clouds [20,38,46,49,52,53,55] are explored in recent
Figure 1. Visualization of the attention weights (panels contrast groundtruths and predictions, with attention weights shown from high to low for plain attention vs. our attention). With our hierarchical attentions, the object center has higher attention weights on points that belong to the object, and the predicted bounding box is better aligned with the groundtruth. Our multi-scale attention extracts features at different scales, which helps distinguish object boundaries. Our size-adaptive local attention aggregates features at the object level and helps refine the bounding box proposals.
works and have seen great success in 3D point cloud object detection [16,19,32,45]. Several properties of transformers make them ideal for learning on raw point clouds. For example, their permutation-invariant property is necessary for modeling unordered sets like point clouds, and their attention mechanism can model long-range relationships that help capture the global context for point cloud learning.

Despite the advantages of transformers for point clouds, we find the state-of-the-art transformer detector to have imbalanced performance across different object sizes, with the lowest average precision on small objects (see Section 4.3). We speculate the inferior performance on small objects can be due to two factors. Firstly, to make the computation feasible, transformer detectors use point cloud features consisting of a small set of points compared to the original point cloud.

arXiv:2301.02650v1 [cs.CV] 6 Jan 2023
The extensively downsampled point cloud loses geometric details, which has a larger impact on small objects. Secondly, plain transformers (e.g., Transformer [40], ViT [5]) extract features at the global scale throughout the network, which does not support explicit localized feature learning.

Motivated by the above observations, we expect existing point cloud transformers to benefit from a hierarchical feature learning strategy, which allows multi-scale feature learning and supports localized feature aggregation. Nonetheless, considering the computation intensity of point cloud transformers, it is inefficient to use higher-resolution (i.e., higher point density) point cloud features throughout the network. Furthermore, due to the irregularity of point clouds, it is non-trivial to integrate hierarchical designs and multi-scale features into transformers for point cloud object detection.

Our approach. In this work, we aim to improve transformer-based 3D object detectors with modularized hierarchical designs. We propose two attention modules for multi-scale feature learning and size-adaptive local feature aggregation. Our attention modules are model-agnostic and can be plugged into existing point cloud transformers for end-to-end training.

We first propose Simple Multi-Scale Attention (MS-A). It builds higher-resolution point features from the single-scale input feature with a learnable upsampling strategy and uses both features in the attention function. To reduce the computation and parameter overhead, we transform the multi-scale features into multi-scale tokens and perform multi-scale token aggregation [30] within a multi-head attention module. The second module is Size-Adaptive Local Attention (Local-A), which learns localized object-level features for each object candidate. It assigns larger attention regions to object candidates with larger bounding box proposals. The local attention regions are defined by their corresponding intermediate bounding box proposals.

We evaluate our method on two widely used indoor 3D object detection benchmarks: ScanNetV2 [3] and SUN RGB-D [36]. We plug our attention modules into the state-of-the-art transformer-based 3D detector and perform end-to-end training. Our method improves the previous best result by over 1% in mAP@0.25 and over 2% in mAP@0.5 on ScanNetV2. Furthermore, our size-aware evaluation shows we have the largest performance gain on small objects, with a 2.5% increase in mAP_S. We summarize our main contributions as follows:

• We propose Simple Multi-Scale Attention (MS-A) to enable multi-scale feature learning on single-scale features.
• We present Size-Adaptive Local Attention (Local-A) for local feature aggregation within bounding box proposals.
• We conduct experiments on two widely used indoor 3D detection benchmarks and surpass the previous best results on both benchmarks.
2. Related Work

Network architectures for point cloud learning. Existing network architectures for point cloud learning can be roughly divided into two categories based on their point cloud representation: grid-based and point-based; in between, there also exist hybrid architectures that operate on both representations [9,33,51,53,57]. Grid-based methods project the irregular point clouds into grid-like structures, such as 3D voxels. With the grid-like structure, existing works have proposed a variety of 3D-convolution-based architectures [7,18,31,42]. Point-based methods, on the other hand, directly learn features from the raw point cloud. Within this category, graph-based methods [8,35,44,47,56] use a graph to model the relationships among the points. Another line of work models a point cloud as a set of points and extracts features through set abstraction [17,23,25,41]. Recent works explore transformer architectures for point-based learning [16,19,20,53,55], where each point is fed into the transformer as a token and the attention mechanism learns point features at a global scale. While previous methods improve point cloud learning by developing new backbones and modifying the overall network architecture, our work focuses on the attention mechanism of the point cloud transformer. Instead of proposing new architectures for point cloud learning, we aim to provide a model-agnostic solution.

Point cloud object detection. One major challenge in point cloud object detection is extracting object features. In 2D object detection, a common practice for extracting object features is to use a region proposal network (RPN) [29] to generate dense bounding box proposals (i.e., object candidates) in a top-down manner and then extract features for each object candidate. However, in 3D vision, generating dense 3D bounding box proposals for point cloud data is inefficient due to the irregularity and sparsity of point clouds. Previous works [2,57] address this issue by projecting point clouds into 2D bird's-eye views or voxels and then applying an RPN. However, such projection operations can result in the loss of geometric information or introduce quantization errors. Another line of work seeks to generate 3D proposals in a bottom-up manner (i.e., point-based) [16,19,21,34]. VoteNet [21] samples a set of points from a point cloud as the initial object candidates and then assigns points to each object candidate through voting. Object features of each candidate are learned by aggregating features within its corresponding vote cluster (i.e., group). Instead of voting and grouping, follow-up works [16,19] propose to use a transformer to automatically model the relationship between the object candidates and the point cloud. Although point-based methods do not have quantization errors caused by voxelization, to make the computation feasible, a point cloud needs to be extensively downsampled at the beginning of the model. Such downsampling also causes a loss of geometric information, while it is important for object detection to have fine-grained features to make accurate predictions. Our work is based on point-based transformer detectors. We address the downsampling issue by building higher-resolution features without increasing the computation budget.

Hierarchical designs for 2D and 3D vision transformers. Extensive work has been done to adapt transformers for visual recognition. One direction is to borrow the hierarchical design and inductive biases from convolutional neural networks (ConvNets) [10]. In 2D vision, one line of ConvNet-based hierarchical designs [6,11,43] produces multi-scale feature maps for 2D images by progressively decreasing the resolution and expanding the feature channels. Swin Transformer [15] adopts the weight-sharing idea of ConvNets and proposes efficient self-attention with shifted windows. Shunted self-attention [30] attends to features at different scales through multi-scale token aggregation. In 3D vision, hierarchical designs for point cloud transformers are explored in previous works, where self-attention is applied to local regions (specified by the k nearest neighbors [55] or a given radius [20]), and downsampling operations are performed after every encoding stage following the hierarchical design of PointNet++ [25]. Patchformer [53] proposes a multi-scale attention block that extracts features at multiple granularities, but it requires voxelization of the point cloud. Different from previous works, we pack our hierarchical design into model-agnostic attention modules that can be plugged into any existing architecture and enable both multi-scale and localized feature learning.
3. Method

In this section, we first discuss the background, including a brief introduction to the task of point cloud object detection, an overview of point-based 3D detection methods, and the attention mechanism. Next, we dive into the detailed designs of our proposed attention modules.

3.1. Background

Point cloud object detection. Given a point cloud P_raw with a set of P points {p_i}_{i=1}^P, each point p_i ∈ R^3 is represented by its 3-dimensional coordinate. 3D object detection on point clouds aims to predict a set of bounding boxes for the objects in the scene, including their locations (as the centers of the bounding boxes), the size and orientation of each bounding box, and the semantic class of the corresponding object. Note that due to the computation limit, the point cloud is downsampled at an early stage of a model to a subset of P_raw, which contains N (N << P) points. P = SA(P_raw) = {p_i}_{i=1}^N contains the aggregated groups of points around N group centers, where SA (set abstraction) is the aggregation function, and the group centers are sampled from the raw point cloud using Furthest Point Sample (FPS) [23], a random sampling algorithm that provides good coverage of the entire point cloud.
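The FPS subroutine referenced above can be sketched as a greedy loop: after a random seed point, each iteration picks the point furthest from the set already chosen. The following is a minimal NumPy illustration of the algorithm (detection codebases typically use batched CUDA kernels instead):

```python
import numpy as np

def furthest_point_sample(points, n_samples, seed=0):
    """Greedy furthest point sampling over an (P, 3) array of xyz coordinates.

    Returns the indices of n_samples points that spread out over the cloud.
    """
    rng = np.random.default_rng(seed)
    num_points = points.shape[0]
    chosen = np.empty(n_samples, dtype=np.int64)
    chosen[0] = rng.integers(num_points)          # random initial center
    # distance from every point to its nearest chosen center so far
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for i in range(1, n_samples):
        chosen[i] = int(np.argmax(dist))          # furthest from current set
        new_dist = np.linalg.norm(points - points[chosen[i]], axis=1)
        dist = np.minimum(dist, new_dist)         # update nearest-center distances
    return chosen
```

Because a chosen point's distance drops to zero, it is never selected twice, and the sample spreads toward the extremes of the cloud.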
Point-based 3D object detectors. Our method is built on point-based 3D object detectors [16,21,34], which detect 3D objects in point clouds in a bottom-up manner. Compared to other 3D detectors that generate box proposals in a top-down manner on the bird's-eye view or voxelized point clouds [2], point-based methods work directly on the irregular point cloud and do not cause loss of information or quantization errors. In addition, point-based methods are suitable for more efficient single-stage object detection [1,13,28].

The feature representation of the input point cloud {z_i}_{i=1}^N, z_i ∈ R^d, is first obtained using a backbone model (e.g., PointNet++ [25]), where d is the feature dimension. Point-based detectors generate bounding box predictions starting with M (M < N) initial object candidates {q_i}_{i=1}^M, q_i ∈ R^C, sampled from the point cloud as object centers. A common approach for sampling the candidates is Furthest Point Sample (FPS). Once the initial candidates are obtained, the detector extracts features for every object candidate. Attention-based methods [16] learn features by doing self-attention among the object candidates, and cross-attention between the candidates (i.e., queries) and the point features {z_i}_{i=1}^N. The learned features of the object candidates are then passed to prediction heads, which predict the attributes of the bounding box for each object candidate. The attributes of a 3D bounding box include its location (box center) ĉ ∈ R^3, size (H/W/D dimensions) d̂ ∈ R^3, orientation (heading angle) â ∈ R, and the semantic label of the object ŝ. With these parameterizations, we can represent a bounding box proposal as b̂ = {ĉ, d̂, â, ŝ}. The detailed parameterizations of a bounding box are included in Appendix A.2.
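The proposal b̂ = {ĉ, d̂, â, ŝ} can be mirrored by a small container type. This is an illustrative sketch only; the field names are ours, not from the paper:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class BoxProposal:
    center: np.ndarray   # c-hat in R^3: box center location
    size: np.ndarray     # d-hat in R^3: H/W/D dimensions
    heading: float       # a-hat: orientation (heading angle)
    label: int           # s-hat: semantic class of the object
```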
Attention mechanism is the basic building block of transformers. The attention function takes in query (Q), key (K), and value (V) as the input. The output of the attention function is a weighted sum of the values, with the attention weights being the scaled dot-product between the key and query:

    Attn(Q, K, V) = softmax(QK^T / √d_h) V,    (1)

where d_h is the hidden dimension of the attention layer. For self-attention, Q ∈ R^{d_h}, K ∈ R^{d_h}, and V ∈ R^{d_v} are transformed from the input X ∈ R^d via linear projection with parameter matrices W_i^Q ∈ R^{d×d_h}, W_i^K ∈ R^{d×d_h}, and W_i^V ∈ R^{d×d_v}, respectively. For cross-attention, Q, K, and V can have different sources.

In practice, transformers adopt the multi-head attention design, where multiple attention functions are applied in parallel across different attention heads. The input of each attention head is a segment of the layer's input. Specifically, the query, key, and value are split along the hidden dimension into (Q_i, K_i, V_i)_{i=1}^h, with Q_i ∈ R^{d_h/h}, K_i ∈ R^{d_h/h}, V_i ∈ R^{d_v/h}, where h is the number of attention heads. The final output of the multi-head attention layer is the projection of the concatenated outputs of all attention heads:

    MultiHead(Q, K, V) = Concat({Attn(Q_0, K_0, V_0); ...; Attn(Q_{h-1}, K_{h-1}, V_{h-1})}) W^O,    (2)

where the first term denotes the concatenation of the head outputs and W^O is the output projection matrix.

Figure 2. An illustration of our hierarchical attention modules. (a) Simple Multi-Scale Attention (MS-A) learns features at different scales within the multi-head cross-attention module. It constructs high-resolution (i.e., point density) point features from the single-scale input point features and uses keys and values of both scales. (b) Size-Adaptive Local Attention (Local-A) extracts localized features for each object candidate by restricting the attention range to be inside its bounding box proposal. The attention range (the token lengths of key and value) is adaptive for each object candidate (query), and we perform padding or truncating to allow batch processing.
354
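The head-splitting and concatenation of Eq. (2) can be sketched as follows (a NumPy illustration with made-up shapes; `W_O` stands in for the learned output projection):

```python
import numpy as np

def attn(Q, K, V):
    # Scaled dot-product attention, Eq. (1).
    s = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(s - s.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ V

def multi_head(Q, K, V, W_O, h):
    # Eq. (2): split Q/K/V along the hidden dimension into h heads,
    # run attention per head, concatenate, then project with W_O.
    heads = [attn(Qi, Ki, Vi)
             for Qi, Ki, Vi in zip(np.split(Q, h, -1),
                                   np.split(K, h, -1),
                                   np.split(V, h, -1))]
    return np.concatenate(heads, axis=-1) @ W_O

# Toy shapes: M=4 queries, N=10 tokens, hidden dim 16, h=4 heads.
Q, K = np.random.randn(4, 16), np.random.randn(10, 16)
V, W_O = np.random.randn(10, 16), np.random.randn(16, 16)
out = multi_head(Q, K, V, W_O, h=4)   # shape (4, 16)
```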
3.2. Simple Multi-Scale Attention

When applying transformers to point-based 3D object detection, the cross-attention models the relationship between object candidates and all other points within the point cloud. The intuition is that, for each object candidate, every point within the point cloud (i.e., scene) either belongs to the object or can provide context information for the object. Therefore, it makes sense to gather all point features for every object candidate, and the importance of a point to the object candidate can be determined by the attention weight.

However, due to the computation overhead of the attention function, the actual number of points (i.e., tokens) that a model is trained on is typically set to 1024 [16, 19], whereas the raw point cloud usually contains tens of thousands of points [3, 36]. Such extensive downsampling of the point cloud causes a loss of detailed geometric information and fine-grained features, which are important for dense prediction tasks like object detection.

To this end, we propose Simple Multi-Scale Attention (MS-A), which builds higher-resolution (i.e., higher point density) feature maps from the single-scale feature input. It then uses features of both scales as the keys and values in the cross-attention between object candidates and other points. The multi-scale feature aggregation is realized through multi-scale token aggregation, where we use the keys and values of different scales in different subsets of attention heads. Our goal is to create a higher-resolution feature map that provides fine-grained geometric details of the point cloud.
The first step of our multi-scale attention is to obtain a higher-resolution feature map from the single-scale input. We propose a learnable upsampling operation. Given the layer's input point cloud features {z_i}^N_{i=1}, z_i ∈ R^d, we want to create a feature map with 2N points. To get the locations (i.e., coordinates) of the 2N points, we use FPS to sample 2N points from the raw point cloud, {p_i}^{2N}_{i=1}, p_i ∈ R^3. Next, for each sampled point p_i, we search for its three nearest neighbors (in Euclidean distance) in the input feature map {z_i}^N_{i=1}, denoted as {z^0_i, z^1_i, z^2_i}. Then we calculate a weighted interpolation of the three point features, weighted by the inverse of their distances to the sampled point. The interpolated feature is then projected into the feature representation of the sampled point. The upsampled point feature map can be written as:

    {z̃_i}^{2N}_{i=1},  z̃_i = Φ_θ(interpolate({z^0_i, z^1_i, z^2_i})),        (3)

where Φ_θ is a learnable projection function parameterized by θ. We choose an MLP as our projection function.
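The upsampling step of Eq. (3) can be sketched as follows. This is an illustrative NumPy version only: FPS is replaced by random sampling for brevity, and the learnable projection Φ_θ is stood in for by a fixed random matrix `W`:

```python
import numpy as np

def upsample(raw_xyz, feat_xyz, feats, W, n_out):
    # Sketch of Eq. (3): pick n_out target locations from the raw cloud
    # (the paper uses FPS; random choice here for brevity), then for each
    # target interpolate its 3 nearest input features weighted by inverse
    # distance, and project with a learnable map (here: a fixed W).
    idx = np.random.choice(len(raw_xyz), n_out, replace=False)
    targets = raw_xyz[idx]
    out = np.empty((n_out, W.shape[1]))
    for i, p in enumerate(targets):
        d = np.linalg.norm(feat_xyz - p, axis=1)       # distances to inputs
        nn = np.argsort(d)[:3]                         # 3 nearest neighbors
        w = 1.0 / (d[nn] + 1e-8)
        w /= w.sum()                                   # inverse-distance weights
        out[i] = (w[:, None] * feats[nn]).sum(0) @ W   # interpolate + project
    return targets, out

raw_xyz = np.random.rand(1000, 3)     # raw point cloud
feat_xyz = raw_xyz[:64]               # N input feature locations
feats = np.random.randn(64, 32)       # N input features, d = 32
W = np.random.randn(32, 32)           # stand-in for the MLP Φ_θ
xyz2, feats2 = upsample(raw_xyz, feat_xyz, feats, W, n_out=128)  # 2N points
```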
After the upsampling, we have two sets of point features of different scales, {z_i}^N_{i=1} and {z̃_i}^{2N}_{i=1}. To avoid an increase in computation, we perform multi-head cross-attention on both sets of point features in a single pass by using features of different scales on different attention heads. We divide the attention heads evenly into two groups, and use {z_i}^N_{i=1} to obtain K and V in the first group while using the other set for the second group. Both groups share the same set of queries transformed from {q_i}^M_{i=1}. Since the input and output of this module are the same as those of a plain attention module, we can plug MS-A into any attention-based model to enable feature learning at different scales. In practice, we apply MS-A only at the first layer of a transformer, which makes minimal modifications to the network and introduces little computation overhead.
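The head grouping described above can be sketched as follows (a NumPy illustration with toy shapes; in the real module K and V come from learned projections, which we omit here):

```python
import numpy as np

def attn(Q, K, V):
    # Scaled dot-product attention, Eq. (1).
    s = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(s - s.max(-1, keepdims=True))
    return (w / w.sum(-1, keepdims=True)) @ V

def ms_attention(Q, KV_coarse, KV_fine, h):
    # Shared queries; the first h/2 heads attend to the N-point (coarse)
    # key/value set, the last h/2 heads to the 2N-point (fine) set.
    outs = []
    for i, Qi in enumerate(np.split(Q, h, -1)):
        K, V = KV_coarse if i < h // 2 else KV_fine
        outs.append(attn(Qi, np.split(K, h, -1)[i], np.split(V, h, -1)[i]))
    return np.concatenate(outs, -1)

Q = np.random.randn(8, 16)                                   # 8 object candidates
coarse = (np.random.randn(64, 16), np.random.randn(64, 16))  # N = 64 points
fine = (np.random.randn(128, 16), np.random.randn(128, 16))  # 2N = 128 points
out = ms_attention(Q, coarse, fine, h=4)                     # shape (8, 16)
```

Because the output shape matches plain multi-head attention, the module is a drop-in replacement.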
3.3. Size-Adaptive Local Attention

Although the attention mechanism can model the relationship between every point pair, it is not guaranteed that the learned model will pay more attention to points that are important to an object (e.g., those belonging to the object) than to the ones that are not. The lack of hierarchy in transformers, on the other hand, does not support explicit localized feature extraction. Different from existing local attentions that are performed within a fixed region, we propose Size-Adaptive Local Attention (Local-A), which defines local regions based on the size of bounding box proposals.
We first generate intermediate bounding box proposals {b̂_i}^M_{i=1} with the features of the object candidates ({q_i}^M_{i=1}). We then perform cross-attention between every candidate q_i and the points sampled from within its corresponding box proposal b̂_i. Therefore, we have customized size-adaptive local regions for every query point. Every input object candidate q^l_i ∈ R^d is updated by Local-A as:

    q^{l+1}_i = Attn(Q^l_i, K_i, V_i), where        (4)
    Q^l_i = q^l_i W^Q,  K_i = Z_i W^K,  V_i = Z_i W^V, with        (5)
    Z_i = {z^k_i | pos(z^k_i) in b̂_i},  b̂_i = Pred^l_box(q^l_i).        (6)

In Eq. (6), we use pos(·) to denote the coordinate of a point in the 3D space, and Z_i is the set of points inside box b̂_i. Note that the point features {z_i}^N_{i=1} are extracted by the backbone network and are not updated during the feature learning of object candidates. Pred^l_box is the prediction head at layer l that generates intermediate box predictions.
Since object candidates (i.e., queries) will have different sets of keys and values depending on the sizes of their bounding box proposals, the number of K and V tokens also differs for each object candidate. To allow batch computation, we set a maximum number of points (N_local) for the sampling process and use N_local as a fixed token length for every query point. For bounding boxes that contain fewer than N_local points, we pad the point sequence with an unused token to N_local and mask the unused tokens out in the cross-attention function; for those containing more than N_local points, we randomly discard points and truncate the sequence to have N_local points as keys and values. Lastly, in the case where the bounding box is empty, we perform a ball query [23] around the object candidate to sample N_local points.

Same as MS-A, Local-A does not pose additional requirements on the module's input; therefore we can apply it at any layer of a transformer. Specifically, we apply Local-A at the end of a transformer, where bounding box proposals are in general more accurate.
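The pad-and-truncate batching described above can be sketched as follows (an illustrative NumPy version; the hypothetical `local_tokens` helper uses axis-aligned min/max box corners, and zero rows plus a boolean mask stand in for a learned unused token):

```python
import numpy as np

def local_tokens(points, feats, box, n_local):
    # Gather features of points inside `box` (min/max corners), then pad
    # or truncate to a fixed token length n_local; `mask` marks real tokens.
    lo, hi = box
    inside = np.all((points >= lo) & (points <= hi), axis=1)
    idx = np.flatnonzero(inside)
    if len(idx) > n_local:                      # truncate: random subset
        idx = np.random.choice(idx, n_local, replace=False)
    toks = np.zeros((n_local, feats.shape[1]))  # zero rows act as pad tokens
    mask = np.zeros(n_local, dtype=bool)
    toks[:len(idx)] = feats[idx]
    mask[:len(idx)] = True                      # True = real token
    return toks, mask

points = np.random.rand(256, 3)
feats = np.random.randn(256, 16)
box = (np.array([0.2, 0.2, 0.2]), np.array([0.8, 0.8, 0.8]))
toks, mask = local_tokens(points, feats, box, n_local=16)
```

In the attention function, positions where `mask` is False would be excluded from the softmax.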
4. Experiments

In this section, we first evaluate our method on two widely used indoor point cloud detection datasets, ScanNetV2 and SUN RGB-D. Next, we provide qualitative and quantitative analyses of our method, including visualizations of the bounding box predictions and attention weights, and evaluations using our proposed size-aware metrics. Lastly, we include ablation studies on the design choices of our attention modules. We include more experiments and ablation studies in Appendix A.1, including analyses of the inference speed and the number of parameters of each individual attention module.
4.1. Main Results

Datasets. ScanNetV2 [3] consists of 1513 reconstructed meshes of hundreds of indoor scenes. It contains rich annotations for various 3D scene understanding tasks, including object classification, semantic segmentation, and object detection. For point cloud object detection, it provides axis-aligned bounding boxes with 18 object categories. We follow the official dataset split by using 1201 samples for training and 312 samples for testing. SUN RGB-D [36] is a single-view RGB-D dataset with 10335 samples. For 3D object detection, it provides oriented bounding box annotations with 37 object categories, while we follow the standard evaluation protocol [21] and only use the 10 common categories. The training split contains 5285 samples and the testing set contains 5050 samples.
Methods | #Params | Backbone | [email protected] | [email protected]
VoteNet [21] | - | PointNet++ | 62.9 | 39.9
H3DNet [54] | - | PointNet++ | 64.4 | 43.4
H3DNet [54] | - | 4×PointNet++ | 67.2 | 48.1
3DETR [19] | - | transformer | 65.0 | 47.0
Pointformer [20] | - | transformer | 64.1 | 42.6
Group-Free6,256 [16] | 13.0M | PointNet++ | 67.3 (66.3) | 48.9 (48.5)
w/ MS + Local (Ours) | 15.0M | PointNet++ | 67.9 (67.1) (↑ 0.6) | 51.4 (49.8) (↑ 2.5)
RepSurf-U6,256 [27] | 13.1M | PointNet++ | 68.8 ( - ) | 50.5 ( - )
RepSurf-U6,256 (reproduce) | 13.1M | PointNet++ | 68.0 (67.4) | 50.2 (48.7)
w/ MS + Local (Ours) | 15.1M | PointNet++ | 69.5 (68.8) (↑ 1.5) | 52.5 (51.1) (↑ 2.3)
Group-Free12,512 [16] | 26.9M | PointNet++w2x | 69.1 (68.6) | 52.8 (51.8)
w/ MS + Local (Ours) | 28.9M | PointNet++w2x | 70.3 (69.2) (↑ 1.2) | 54.6 (53.2) (↑ 1.8)
RepSurf-U12,512 [27] | 27.1M | PointNet++w2x | 71.2 ( - ) | 54.8 ( - )
RepSurf-U12,512 (reproduce) | 27.1M | PointNet++w2x | 70.8 (70.2) | 54.4 (53.6)
w/ MS + Local (Ours) | 29.1M | PointNet++w2x | 71.7 (71.0) (↑ 0.9) | 56.5 (54.8) (↑ 2.1)

Table 1. Performance of object detection on ScanNetV2. We follow the standard protocol [21] by reporting the best results over 5 × 5 trials (5 trainings, each with 5 testings) and including the averaged results in brackets. Group-FreeL,O denotes the variant with L decoder layers and O object candidates. The same notation applies to RepSurf-U. The detection code of RepSurf is not published, so we implement our own version of RepSurf-U and apply our method to it. We include the results of our implementation of RepSurf-U.
Methods | [email protected] | [email protected]
VoteNet [21] | 59.1 | 35.8
H3DNet [54] | - | -
H3DNet [54] | 60.1 | 39.0
3DETR [19] | 59.1 | 32.7
Pointformer [20] | 61.1 | 36.6
Group-Free6,256 [16] | 63.0 (62.6) | 45.2 (44.4)
w/ MS + Local (Ours) | 63.8 (63.2) (↑ 0.8) | 46.6 (45.7) (↑ 1.4)
RepSurf-U6,256 [27] | 64.3 ( - ) | 45.9 ( - )
RepSurf-U6,256 (repd.) | 64.0 (63.3) | 45.7 (45.2)
w/ MS + Local (Ours) | 64.5 (63.8) (↑ 0.5) | 47.5 (46.1) (↑ 1.8)

Table 2. Performance of object detection on SUN RGB-D. "repd." stands for the reproduced results of our implementation. "-" means the official result is not available.
Evaluation metrics. For both datasets, we follow the standard evaluation protocol [21] and use the mean Average Precision (mAP) as the evaluation metric. We report mAP scores under two different Intersection over Union (IoU) thresholds: [email protected] and [email protected]. In addition, in Section 4.3, to evaluate model performance across different object sizes, we follow the practice in 2D vision [12] and implement our own size-aware metrics that measure the mAP on small, medium, and large objects respectively. On account of the randomness in point cloud training and inference, we train a model 5 times and test each model 5 times. We report both the best and the average results among the 25 trials.
Baselines. We validate our method by applying it to existing transformer point cloud detectors. Group-Free [16] extracts features for object candidates using a transformer decoder with plain attention. We include two configurations of Group-Free in our comparison: Group-Free6,256 samples a total of 256 object candidates for feature learning and bounding box prediction, using a transformer decoder with 6 layers; Group-Free12,512 is the largest configuration, which has 12 transformer layers and 512 object candidates. RepSurf-U [27] proposes a novel multi-surface (umbrella curvature) representation of point clouds that can explicitly describe the local geometry. For object detection, RepSurf-U adopts the transformer decoder of Group-Free and replaces its backbone with one that extracts features on both point clouds and the surface representations. The official implementation and the averaged results of RepSurf-U for object detection are not publicly available, so we include the results of our own implementation of RepSurf-U.

We also include the performance of previous point-based 3D detectors for comparison. VoteNet [21] aggregates features for object candidates through end-to-end optimizable Hough Voting. H3DNet [54] proposes a hybrid set of geometric primitives for object detection and trains multiple individual backbones for each primitive. 3DETR [19] solves point cloud object detection as a set-to-set problem using a transformer encoder-decoder network. Pointformer [20] proposes a hierarchical transformer-based point cloud backbone and adopts the voting algorithm of VoteNet for object detection.
Implementation details. For a baseline model with L transformer layers, we enable multi-scale feature learning by replacing the cross-attention of the first layer with MS-A. After the L-th layer, we append an additional transformer layer to perform local feature aggregation, which consists of Local-A and a feedforward layer. We follow the original training settings of the baseline models [16, 27]. The detailed hyperparameter settings can be found in Appendix A.2.
Results. From Table 1, on ScanNetV2, we observe consistent improvements in point cloud transformer detectors when equipped with our attention modules. By applying MS-A and Local-A to Group-Free, we achieve on-par performance with the state-of-the-art RepSurf-U detector. In addition, we can further improve RepSurf-U by over 1% in [email protected] and over 2% in [email protected] on varying model configurations. Table 2 shows a similar trend on SUN RGB-D, where our attention modules boost the [email protected] of Group-Free to surpass RepSurf-U, and can further improve the state-of-the-art method by 0.5% in [email protected] and 1.8% in [email protected].
4.2. Qualitative Results

In Figure 3, we provide qualitative results on both datasets. The visualized results are of our methods applied to the Group-Free detectors. The qualitative results suggest that our model is able to detect and classify objects of different scales even in complex scenarios containing more than 10 objects (e.g., the example in the bottom row). By looking into the cross-attention weights in the transformer detector, we find that object candidates tend to have higher correlations with points that belong to their corresponding objects.
4.3. Performance on objects of different sizes.

In addition to the standard evaluation metrics, we are interested in examining models' performance across different object sizes. Inspired by the size-aware metrics in 2D detection [12], we implement our own version of size-aware metrics for 3D detection. We conduct this analysis on ScanNetV2, on which we calculate the volume of all the objects in all samples. We set the threshold for mAPS as the 30th percentile of the volumes of all objects, and use the 70th percentile as the threshold for mAPL. More details about these metrics are included in Appendix A.2.
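The percentile thresholds described above can be computed as in the following sketch (illustrative only; `volumes` here is random stand-in data, not ScanNetV2 statistics):

```python
import numpy as np

def size_buckets(volumes):
    # Thresholds for the size-aware metrics: the 30th percentile of
    # object volumes separates small from medium objects, and the
    # 70th percentile separates medium from large.
    t_small = np.percentile(volumes, 30)
    t_large = np.percentile(volumes, 70)
    labels = np.where(volumes < t_small, "S",
                      np.where(volumes < t_large, "M", "L"))
    return t_small, t_large, labels

volumes = np.random.rand(1000) * 5.0   # stand-in object volumes
t_s, t_l, labels = size_buckets(volumes)
```

mAPS, mAPM, and mAPL are then the mAP restricted to objects in each bucket.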
MS-A | Local-A | mAPS | mAPM | mAPL
- | - | 63.1 | 76.6 | 83.2
✓ | - | 65.0 | 77.5 | 83.9
- | ✓ | 65.2 | 78.6 | 83.9
✓ | ✓ | 65.6 (↑ 2.5) | 79.0 (↑ 2.4) | 84.3 (↑ 1.1)

Table 3. Performance on different size categories on ScanNetV2. We define the S/M/L thresholds based on the statistics (volume distribution) of ScanNetV2 objects. The configuration in the first row denotes the Group-Free12,512 baseline.
In Table 3, we evaluate our methods using the size-aware metrics. We report the average result over 25 trials. The first row denotes the Group-Free12,512 baseline. Firstly, by comparing mAPS to mAPL, we notice that the baseline has imbalanced performance across different object sizes. Looking at the improvement margins, we find our method to have the most performance gain on small and medium-sized objects. The result suggests that hierarchical designs can aid fine-grained and localized feature learning for point cloud transformer detectors and help models detect smaller objects.
4.4. Ablation Study

In this subsection, we first conduct an ablation study on the stand-alone effects of our multi-scale attention and size-adaptive local attention. Next, we include empirical analyses of the design choices of our attention modules. If not otherwise specified, experiments in this subsection are conducted on ScanNetV2 with the Group-Free12,512 baseline. Without loss of generality, the results in this subsection are the averaged numbers over 25 trials.

The stand-alone effects of MS-A and Local-A. Table 4 shows the stand-alone performance of our proposed attention modules. Compared to the plain attention baseline, both of our attention modules prove effective. When combined, we find the two modules to be complementary to each other and to bring a more significant performance gain.

MS-A | Local-A | [email protected] | [email protected]
- | - | 68.6 | 51.8
✓ | - | 68.9 | 52.5
- | ✓ | 68.9 | 52.9
✓ | ✓ | 69.2 | 53.2

Table 4. The stand-alone effect of our attention modules. The configuration in the first row denotes the Group-Free12,512 baseline. The results are averaged over 25 trials.
+ The results are averaged over 25 trials.
810
+ The maximum number of points (Nlocal) in Local-A.
811
+ In Local-A, for each object candidate (i.e., query), we sam-
812
+ ple a set of points within its corresponding bounding box
813
+ proposal and use the point features as the key and value
814
+ for this object candidate in the cross-attention function. As
815
+ introduced in Section 3.3, we cap the number of sampled
816
+ points with Nlocal to allow batch computation.
817
+ We provide an empirical analysis of the effects of Nlocal
818
+ on Local-A. From Table 5, we find that too little number
819
+ of points (e.g., Nlocal = 8) for Local-A results in a per-
820
+ formance drop. On the other hand, as Nlocal continues to
821
+ increase, we do not observe a significant performance gain
822
+ compared to Nlocal = 16. Intuitively, a small Nlocal means
823
+ the points within each bounding box are sampled sparsely,
824
+ which can be too sparse to provide enough information about
825
+ 7
826
+
827
+ Scene
828
+ Groundtruth
829
+ Prediction
830
+ Attention
831
+ Figure 3. Qualitative results on SUN RGB-D (top) and ScanNetV2 (bottom). The color of a bounding box in the middle two columns
832
+ stands for the semantic label of the object. In the last column, we draw both the groundtruth (in green) and the prediction (in blue) of the
833
+ object. We highlight the points that belong to an object for better visualization. In the last column, we visualize the attention weight of the
834
+ last transformer layer (before applying Local-A). We visualize the cross-attention weight between an object candidate and the point cloud.
835
+ Nlocal
836
837
838
+ mAPS
839
+ mAPM
840
+ mAPL
841
+ 8
842
+ 67.8
843
+ 51.1
844
+ 64.6
845
+ 78.0
846
+ 82.8
847
+ 16
848
+ 68.9
849
+ 52.9
850
+ 65.2
851
+ 78.6
852
+ 83.9
853
+ 24
854
+ 68.9
855
+ 53.0
856
+ 65.4
857
+ 78.5
858
+ 84.0
859
+ 32
860
+ 68.3
861
+ 52.1
862
+ 64.7
863
+ 77.8
864
+ 84.3
865
+ Table 5. The effect of Nlocal in Local-A. When there are enough
866
+ points, a larger Nlocal means the points are sampled more densely
867
+ within each bounding box proposal.
868
+ any object. This explains why Nlocal = 8 does not work
869
+ well. However, on the other hand, a large Nlocal may only
870
+ benefit large objects and has little effect on smaller objects,
871
+ because the latter are padded with unused tokens.
872
MS-A with different feature resolutions. In Section 3, we propose learnable upsampling for MS-A to build higher-resolution point features from the single-scale input. In the same spirit, a parameterized downsampling procedure can be realized through conventional set abstraction [23], which aggregates point features within local groups and produces a feature map with fewer points (i.e., lower resolution). Intuitively, a higher point density of the feature map provides more fine-grained features. To study the effects of feature maps of different granularity, we conduct an empirical analysis of MS-A using different sets of multi-scale feature maps representing point clouds of varying granularity.

In Table 6, we examine the performance of two multi-scale choices in comparison with the single-scale baseline. The result suggests that coarse features (s = 0.5×) do not benefit transformer detectors. This is expected because transformers do not have limited receptive fields and thus do not rely on a coarse-grained feature map to learn global context.

Feature Scales s | [email protected] | [email protected]
[1×] | 68.6 | 51.8
[1×, 2×] | 68.9 | 52.5
[0.5×, 1×, 2×] | 67.9 | 51.7

Table 6. Simple Multi-Scale Attention with different feature scales. Feature scale s× means the feature map contains s × N points, with N being the original number of points. A larger s denotes a feature map with higher point density (i.e., resolution).
5. Conclusion

In this work, we present Simple Multi-Scale Attention and Size-Adaptive Local Attention, two model-agnostic modules that bring hierarchical designs to existing transformer-based 3D detectors. We enable multi-scale feature learning and explicit localized feature aggregation through improved attention functions, which are generic modules that can be applied to any existing attention-based network for end-to-end training. We improve the state-of-the-art transformer detector on two challenging indoor 3D detection benchmarks, with the largest improvement margin on small objects.

As our attention modules promote fine-grained feature learning, which is important to various dense prediction vision tasks, one direction for future work is to adapt our attention modules to other point cloud learning problems such as segmentation. Another direction is to introduce more efficient attention mechanisms into the multi-scale attention to further bring down the computation overhead.
References

[1] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, 2020. 3
[2] Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, and Tian Xia. Multi-view 3d object detection network for autonomous driving. In CVPR, 2017. 2, 3
[3] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas A. Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In CVPR, 2017. 2, 4, 5
[4] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT (1), 2019. 1
[5] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021. 1, 2
[6] Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers. In ICCV, 2021. 3
[7] Benjamin Graham, Martin Engelcke, and Laurens van der Maaten. 3d semantic segmentation with submanifold sparse convolutional networks. In CVPR, 2018. 1, 2
[8] Loïc Landrieu and Martin Simonovsky. Large-scale point cloud semantic segmentation with superpoint graphs. In CVPR, 2018. 1, 2
[9] Alex H. Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom. Pointpillars: Fast encoders for object detection from point clouds. In CVPR, 2019. 2
[10] Yann LeCun, Bernhard E. Boser, John S. Denker, Donnie Henderson, Richard E. Howard, Wayne E. Hubbard, and Lawrence D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Comput., 1989. 3
[11] Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Mangalam, Bo Xiong, Jitendra Malik, and Christoph Feichtenhofer. Mvitv2: Improved multiscale vision transformers for classification and detection. In CVPR, 2022. 3
[12] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014. 6, 7
[13] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott E. Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: Single shot multibox detector. In ECCV, 2016. 3
[14] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021. 1
[15] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021. 3
[16] Ze Liu, Zheng Zhang, Yue Cao, Han Hu, and Xin Tong. Group-free 3d object detection via transformers. In ICCV, 2021. 1, 2, 3, 4, 6, 7, 11
[17] Xu Ma, Can Qin, Haoxuan You, Haoxi Ran, and Yun Fu. Rethinking network design and local geometry in point cloud: A simple residual MLP framework. In ICLR, 2022. 2
[18] Daniel Maturana and Sebastian Scherer. Voxnet: A 3d convolutional neural network for real-time object recognition. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015. 2
[19] Ishan Misra, Rohit Girdhar, and Armand Joulin. An end-to-end transformer model for 3d object detection. In ICCV, 2021. 1, 2, 4, 6, 11, 12, 13
[20] Xuran Pan, Zhuofan Xia, Shiji Song, Li Erran Li, and Gao Huang. 3d object detection with pointformer. In CVPR, 2021. 1, 2, 3, 6, 11, 12, 13
[21] Charles R. Qi, Or Litany, Kaiming He, and Leonidas J. Guibas. Deep hough voting for 3d object detection in point clouds. In ICCV, 2019. 2, 3, 5, 6, 12, 13
[22] Charles R. Qi, Wei Liu, Chenxia Wu, Hao Su, and Leonidas J. Guibas. Frustum pointnets for 3d object detection from RGB-D data. In CVPR, 2018. 1
[23] Charles Ruizhongtai Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR, 2017. 1, 2, 3, 5, 8
[24] Charles Ruizhongtai Qi, Hao Su, Matthias Nießner, Angela Dai, Mengyuan Yan, and Leonidas J. Guibas. Volumetric and multi-view cnns for object classification on 3d data. In CVPR, 2016. 1
[25] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J. Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In NIPS, 2017. 1, 2, 3
[26] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21, 2020. 1
[27] Haoxi Ran, Jun Liu, and Chengjie Wang. Surface representation for point clouds. In CVPR, 2022. 6, 7, 11
[28] Joseph Redmon, Santosh Kumar Divvala, Ross B. Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In CVPR, 2016. 3
[29] Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015. 2
[30] Sucheng Ren, Daquan Zhou, Shengfeng He, Jiashi Feng, and Xinchao Wang. Shunted self-attention via multi-scale token aggregation. In CVPR, 2022. 2, 3
[31] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger. Octnet: Learning deep 3d representations at high resolutions. In CVPR, 2017. 2
[32] Hualian Sheng, Sijia Cai, Yuan Liu, Bing Deng, Jianqiang Huang, Xian-Sheng Hua, and Min-Jian Zhao. Improving 3d object detection with channel-wise transformer. In ICCV, 2021. 1
[33] Shaoshuai Shi, Chaoxu Guo, Li Jiang, Zhe Wang, Jianping Shi, Xiaogang Wang, and Hongsheng Li. PV-RCNN: Point-voxel feature set abstraction for 3d object detection. In CVPR, 2020. 2
[34] Shaoshuai Shi, Xiaogang Wang, and Hongsheng Li. Pointrcnn: 3d object proposal generation and detection from point cloud. In CVPR, 2019. 1, 2, 3
[35] Martin Simonovsky and Nikos Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In CVPR, 2017. 2
[36] Shuran Song, Samuel P. Lichtenberg, and Jianxiong Xiao. SUN RGB-D: A RGB-D scene understanding benchmark suite. In CVPR, 2015. 2, 4, 5
[37] Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, and Jan Kautz. Splatnet: Sparse lattice networks for point cloud processing. In CVPR, 2018. 1
[38] Anirud Thyagharajan, Benjamin Ummenhofer, Prashant Laddha, Om Ji Omer, and Sreenivas Subramoney. Segment-fusion: Hierarchical context fusion for robust 3d semantic segmentation. In CVPR, 2022. 1
[39] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In ICML, 2021. 1
[40] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017. 1, 2
[41] Haiyang Wang, Shaoshuai Shi, Ze Yang, Rongyao Fang, Qi Qian, Hongsheng Li, Bernt Schiele, and Liwei Wang. Rbgnet: Ray-based grouping for 3d object detection. In CVPR, 2022. 2
[42] Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, and Xin Tong. O-CNN: Octree-based convolutional neural networks for 3d shape analysis. ACM Trans. Graph., 36(4):72:1–72:11, 2017. 2
[43] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In ICCV, 2021. 3
[44] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, and Justin M. Solomon. Dynamic graph CNN for learning on point clouds. ACM Trans. Graph., 38(5):146:1–146:12, 2019. 2
[45] Qian Xie, Yu-Kun Lai, Jing Wu, Zhoutao Wang, Yiming Zhang, Kai Xu, and Jun Wang. Mlcvnet: Multi-level context votenet for 3d object detection. In CVPR, 2020. 1
[46] Saining Xie, Sainan Liu, Zeyu Chen, and Zhuowen Tu. Attentional shapecontextnet for point cloud recognition. In CVPR, 2018. 1
[47] Qiangeng Xu, Xudong Sun, Cho-Ying Wu, Panqu Wang, and Ulrich Neumann. Grid-gcn for fast and scalable point cloud learning. In CVPR, 2020. 2
[48] Yifan Xu, Tianqi Fan, Mingye Xu, Long Zeng, and Yu Qiao.
1096
+ Spidercnn: Deep learning on point sets with parameterized
1097
+ convolutional filters. In ECCV, 2018. 1
1098
+ [49] Jiancheng Yang, Qiang Zhang, Bingbing Ni, Linguo Li, Jinx-
1099
+ ian Liu, Mengdie Zhou, and Qi Tian. Modeling point clouds
1100
+ with self-attention and gumbel subset sampling. In CVPR,
1101
+ 2019. 1
1102
+ [50] Yaoqing Yang, Chen Feng, Yiru Shen, and Dong Tian. Fold-
1103
+ ingnet: Point cloud auto-encoder via deep grid deformation.
1104
+ In CVPR, 2018. 1
1105
+ [51] Maosheng Ye, Shuangjie Xu, and Tongyi Cao. Hvnet: Hybrid
1106
+ voxel network for lidar based 3d object detection. In CVPR,
1107
+ 2020. 2
1108
+ [52] Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie
1109
+ Zhou, and Jiwen Lu. Point-bert: Pre-training 3d point cloud
1110
+ transformers with masked point modeling. In CVPR, 2022. 1
1111
+ [53] Cheng Zhang, Haocheng Wan, Xinyi Shen, and Zizhao Wu.
1112
+ Patchformer: An efficient point transformer with patch atten-
1113
+ tion. In CVPR, 2022. 1, 2, 3
1114
+ [54] Zaiwei Zhang, Bo Sun, Haitao Yang, and Qixing Huang.
1115
+ H3dnet: 3d object detection using hybrid geometric primi-
1116
+ tives. In ECCV, 2020. 6, 12, 13
1117
+ [55] Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip H. S. Torr, and
1118
+ Vladlen Koltun. Point transformer. In ICCV, 2021. 1, 2, 3
1119
+ [56] Haoran Zhou, Yidan Feng, Mingsheng Fang, Mingqiang Wei,
1120
+ Jing Qin, and Tong Lu. Adaptive graph convolution for point
1121
+ cloud analysis. In ICCV, 2021. 2
1122
+ [57] Yin Zhou and Oncel Tuzel. Voxelnet: End-to-end learning
1123
+ for point cloud based 3d object detection. In CVPR, 2018. 2
1124
+ A. Appendix
+ A.1. More Experiments
+ The placement of Simple Multi-scale Attention. We design simple multi-scale attention as a compact network layer to enable hierarchical feature learning. As it can be inserted at any place within a network, we are interested in finding out how the placement of the multi-scale attention layer affects a model's performance.
1133
+ Layers | [email protected] | [email protected]
+ [0] | 68.9 | 52.5
+ [0, 4, 8] | 68.8 | 52.4
+ [0, 3, 6, 9] | 68.9 | 52.3
+ [0, 2, 4, 6, 8, 10] | 68.7 | 52.6
+ Table 7. Different placements of the simple multi-scale attention layer. Layers = [i] means we replace the ith layer of the transformer decoder with our MS-A layer. The best results are in bold, and the second-best results are underlined.
1152
+ We consider different strategies to place MS-A within the transformer decoder of Group-Free. We divide the 12-layer decoder into several stages and place MS-A at the first layer of each stage. Specifically, our default setting uses a single MS-A at the first decoder layer, and we also try placing MS-A by dividing the decoder evenly into 3, 4, and 6 stages. From the results in Table 7, we do not observe a significant benefit of using more than one multi-scale layer. We conjecture this is because the up-scaled feature map in our multi-scale attention is obtained through interpolation and a simple linear projection. The up-scaled point features obtained in this way may mainly provide more accurate geometric information through a higher point density, while having little semantic difference from the original input features. We expect such fine-grained geometric information to be particularly helpful at the beginning of the decoding stage (i.e., Layers = [0]), yet it may be less useful as the object features go deeper in the decoder and become more abstract.
1172
+ Per-category mAP on ScanNetV2 and SUN RGB-D. We include the detailed per-category mAP on both datasets in Table 9, Table 10, Table 11, and Table 12. For the results in this paragraph, we follow the baselines [16, 19, 20, 27] and report the result of the best trial.
1177
+ Inference Speed. We analyze the parameter and computation overhead of each of our attention modules. We measure the inference speeds for all model configurations on the same machine with a single A100 GPU. In Table 8, we can see that replacing plain attention with MS-A results in little parameter increase. While applying Local-A leads to a larger parameter increase, the Local-A module itself contains the same number of parameters as a plain cross-attention; the increase is mainly due to the additional feed-forward layer, learnable positional embeddings, etc. In terms of inference speed, we find MS-A to cause the more substantial latency. Such latency is caused by applying the attention function on keys/values with twice as many tokens (from 1024 to 2048). A future direction is to incorporate more efficient attention mechanisms into the multi-scale attention function.
1194
+ MS-A | Local-A | #Params (M) | Inference Speed (ms/frame)
+ - | - | 26.9 | 186
+ ✓ | - | 27.0 (+0.1) | 225
+ - | ✓ | 28.8 (+1.9) | 191
+ ✓ | ✓ | 28.9 (+2.0) | 232
+ Table 8. Ablating the parameter and computation overhead of individual attention modules.
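The ms/frame numbers above come from a standard wall-clock protocol: warm up, then average the forward-pass time over many frames. The sketch below is illustrative only (the `model_fn` stand-in is hypothetical, not the paper's detector); note that when timing on a GPU the device must be synchronized (e.g., `torch.cuda.synchronize()` in PyTorch) before each clock read, otherwise asynchronous kernels distort the result.

```python
import time

def ms_per_frame(model_fn, frames, warmup=3):
    """Average wall-clock latency in ms/frame, excluding warm-up runs."""
    for frame in frames[:warmup]:        # warm-up: caches, allocator, lazy init
        model_fn(frame)
    start = time.perf_counter()
    for frame in frames:
        model_fn(frame)
    elapsed = time.perf_counter() - start
    return elapsed * 1000.0 / len(frames)

# toy stand-in for a detector forward pass over a batch of point frames
latency = ms_per_frame(lambda pts: sum(pts), [[1.0, 2.0, 3.0]] * 100)
```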
1218
+ A.2. Implementation Details
+ We include implementation details covering several aspects in this paragraph. In addition, we include our source code in the supplementary material, containing the full implementation of our attention modules.
+ Training Details. Group-Free baseline. When applying our method to this baseline, we follow the original training settings. Specifically, on ScanNetV2, the models are trained for 400 epochs on 4 GPUs with a batch size of 32 (8 on each GPU). We use the same optimizer with the same learning rates and weight decays as the baseline training. On SUN RGB-D, models are trained for 600 epochs on 4 GPUs with the same learning rate and weight decay as the baseline training on this dataset.
+ RepSurf-U baseline. The official implementation and training details of this baseline are not published. We implement our own version of the RepSurf-U detector, for which we mostly follow the training setup of Group-Free and have done a grid search for the hyperparameters. Different from Group-Free, we train RepSurf-U models on ScanNetV2 and SUN RGB-D using a weight decay of 0.01 for all model parameters, because we find it to achieve better performance on our reproduced RepSurf-U. The learning rate and other hyperparameters remain the same as for Group-Free on both datasets. When applying our method to the reproduced model, we do not change the hyperparameter configurations.
1244
+ Bounding box parameterization. In this paragraph, we include a brief introduction to the bounding box parameterization used in our baselines. First, the predicted box center ĉ for each object candidate q is obtained by adding an offset to the coordinate of q. In this way, by predicting the center, the actual prediction made by a detector is this offset value. The size d̂ of a box consists of its height, width, and depth. One way to predict d̂ is to directly regress the values of H, W, and D. Another way is to divide a range of sizes into several bins and make a classification prediction that determines which "bin" the object belongs to. The final size prediction is obtained by adding the quantized size (i.e., the bin) to a "residual" term, which is also predicted by the model with another prediction head. The bounding box orientation â is likewise parameterized as the combination of a quantized value and a residual term. Lastly, the prediction of the semantic label is a common classification problem that parameterizes a semantic label as a one-hot vector.
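The center-offset and bin-plus-residual decoding described above can be sketched in a few lines. This is a minimal illustration, not the baselines' actual API: the function name, argument layout, and the two toy size bins are all hypothetical.

```python
def decode_box(candidate_xyz, center_offset, size_logits, size_residuals, mean_sizes):
    """Decode one box from the offset + (bin, residual) parameterization."""
    # predicted center c_hat: candidate coordinate plus the regressed offset
    center = [q + o for q, o in zip(candidate_xyz, center_offset)]
    # size d_hat: pick the highest-scoring bin, then add its predicted residual
    b = max(range(len(size_logits)), key=lambda i: size_logits[i])
    size = [m + r for m, r in zip(mean_sizes[b], size_residuals[b])]
    return center, size

center, size = decode_box(
    candidate_xyz=[1.0, 2.0, 0.5],
    center_offset=[0.1, -0.2, 0.0],
    size_logits=[0.2, 1.5],                               # two hypothetical size bins
    size_residuals=[[0.0, 0.0, 0.0], [0.05, -0.1, 0.02]],
    mean_sizes=[[0.5, 0.5, 0.5], [2.0, 1.0, 0.8]],
)
# center ≈ [1.1, 1.8, 0.5], size ≈ [2.05, 0.9, 0.82]
```

The orientation head follows the same bin-plus-residual pattern, and the semantic label is a plain argmax over class logits.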
1263
+ Size-Aware Evaluation Metrics. For a quantitative analysis of the model's performance on objects of different sizes, we implement our own size-aware evaluation metrics, namely mAPS, mAPM and mAPL. For each metric, we only calculate the mAP score among objects that fall into the corresponding size category (i.e., small, medium, or large). We conduct the size-aware evaluation on ScanNetV2, where we determine the thresholds for dividing object size categories based on the statistics of this dataset. Specifically, we take the 1201 training samples and record the volume (v = H × W × D) of every groundtruth bounding box of every sample (see Figure 4). Among a total of 15733 groundtruth bounding boxes, we take the 30th (v = 0.155) and 70th (v = 0.526) percentiles as the thresholds for dividing small and large objects.
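The bucketing just described can be sketched as below. The thresholds 0.155 and 0.526 come from the text; the nearest-rank percentile helper is one simple convention for deriving such thresholds, the section does not specify which percentile method was actually used.

```python
def nearest_rank_percentile(sorted_vals, p):
    """One simple percentile convention over an ascending-sorted list."""
    idx = int(round(p / 100.0 * (len(sorted_vals) - 1)))
    return sorted_vals[max(0, min(len(sorted_vals) - 1, idx))]

def size_category(volume, t_small=0.155, t_large=0.526):
    """Bucket a box volume v = H * W * D with the ScanNetV2-derived thresholds."""
    if volume <= t_small:
        return "small"
    return "large" if volume > t_large else "medium"
```

During evaluation, each groundtruth box contributes only to the mAP of its own bucket, e.g. `size_category(0.3)` returns `"medium"`, so that box is counted in mAPM only.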
1279
+ Figure 4. Volume distribution of the object groundtruth bounding boxes in ScanNetV2. We highlight the thresholds for small objects (v ≤ 0.155, the 30th percentile) and large objects (v > 0.526, the 70th percentile).
1297
+ methods | backbone | cab bed chair sofa tabl door wind bkshf pic cntr desk curt fridg showr toil sink bath ofurn | mAP
+ VoteNet [21] | PointNet++ | 47.7 88.7 89.5 89.3 62.1 54.1 40.8 54.3 12.0 63.9 69.4 52.0 52.5 73.3 95.9 52.0 92.5 42.4 | 62.9
+ H3DNet [54] | 4×PointNet++ | 49.4 88.6 91.8 90.2 64.9 61.0 51.9 54.9 18.6 62.0 75.9 57.3 57.2 75.3 97.9 67.4 92.5 53.6 | 67.2
+ 3DETR [19] | transformer | 49.4 83.6 90.9 89.8 67.6 52.4 39.6 56.4 15.2 55.9 79.2 58.3 57.6 67.6 97.2 70.6 92.2 53.0 | 65.0
+ Pointformer [20] | Pointformer | 46.7 88.4 90.5 88.7 65.7 55.0 47.7 55.8 18.0 63.8 69.1 55.4 48.5 66.2 98.9 61.5 86.7 47.4 | 64.1
+ GroupFree6,256 | PointNet++ | 54.1 86.2 92.0 84.8 67.8 55.8 46.9 48.5 15.0 59.4 80.4 64.2 57.2 76.3 97.6 76.8 92.5 55.0 | 67.3
+ w/ MS + Local | PointNet++ | 55.9 88.6 93.6 90.8 68.2 59.0 44.2 50.3 14.6 63.0 85.0 62.8 58.5 68.6 97.6 73.2 92.4 56.4 | 67.9
+ RepSurf-U6,256 | PointNet++ | 55.5 87.7 93.4 85.9 69.1 57.3 48.8 50.0 16.5 61.0 81.6 66.2 59.0 77.5 99.2 78.2 94.0 56.8 | 68.8
+ RepSurf-U6,256 (repd.) | PointNet++ | 57.4 89.6 93.2 87.4 70.2 58.8 46.6 47.4 18.1 63.4 78.2 70.4 46.5 81.0 99.8 69.4 90.8 55.5 | 68.0
+ w/ MS + Local | PointNet++ | 51.2 89.5 93.4 87.5 71.8 60.5 49.0 57.7 21.9 65.2 82.1 70.3 53.3 80.2 98.2 68.8 91.9 58.2 | 69.5
+ GroupFree12,512 | PointNet++w2x | 52.1 91.9 93.6 88.0 70.7 60.7 53.7 62.4 16.1 58.5 80.9 67.9 47.0 76.3 99.6 72.0 95.3 56.4 | 69.1
+ w/ MS + Local | PointNet++ | 53.7 91.9 93.4 88.8 72.1 61.3 52.8 58.6 17.4 70.8 83.3 69.9 56.5 75.6 98.5 70.3 94.4 56.9 | 70.3
+ RepSurf-U12,512 | PointNet++w2x | 54.6 94.0 96.2 90.5 73.2 62.7 55.7 64.5 18.6 60.9 83.1 69.9 49.4 78.4 99.4 74.5 97.6 58.3 | 71.2
+ RepSurf-U12,512 (repd.) | PointNet++w2x | 54.5 90.7 93.4 87.6 76.3 64.4 54.4 61.4 19.0 62.2 84.0 69.2 48.8 79.2 99.8 75.9 92.2 62.0 | 70.8
+ w/ MS + Local | PointNet++w2x | 58.0 89.3 94.1 86.5 74.3 62.4 60.2 57.9 21.7 67.9 85.3 74.4 53.5 75.9 99.6 74.6 91.6 63.7 | 71.7
+ Table 9. Performance of [email protected] in each category on ScanNetV2.
1412
+ methods | backbone | cab bed chair sofa tabl door wind bkshf pic cntr desk curt fridg showr toil sink bath ofurn | mAP
+ VoteNet [21] | PointNet++ | 14.6 77.8 73.1 80.5 46.5 25.1 16.0 41.8 2.5 22.3 33.3 25.0 31.0 17.6 87.8 23.0 81.6 18.7 | 39.9
+ H3DNet [54] | 4×PointNet++ | 20.5 79.7 80.1 79.6 56.2 29.0 21.3 45.5 4.2 33.5 50.6 37.3 41.4 37.0 89.1 35.1 90.2 35.4 | 48.1
+ GroupFree6,256 | PointNet++ | 23.0 78.4 78.9 68.7 55.1 35.3 23.6 39.4 7.5 27.2 66.4 43.3 43.0 41.2 89.7 38.0 83.4 37.3 | 48.9
+ w/ MS + Local | PointNet++ | 27.3 80.8 83.3 85.3 60.2 39.7 21.7 40.4 7.6 41.7 61.5 42.9 42.3 26.2 96.1 38.5 89.5 39.7 | 51.4
+ RepSurf-U6,256 | PointNet++ | 24.9 79.6 80.1 70.4 56.4 36.7 25.5 41.4 8.8 28.7 68.0 45.2 45.0 42.7 91.3 40.1 85.1 39.2 | 50.5
+ RepSurf-U6,256 (repd.) | PointNet++ | 24.3 82.6 82.6 71.3 55.9 38.3 18.6 40.3 11.2 44.0 60.7 45.1 35.7 36.6 97.1 34.6 84.6 39.8 | 50.2
+ w/ MS + Local | PointNet++ | 27.1 80.9 83.0 77.1 58.0 45.8 24.8 50.8 10.5 31.9 67.7 44.6 40.6 34.9 97.7 38.3 87.3 44.6 | 52.5
+ GroupFree12,512 | PointNet++w2x | 26.0 81.3 82.9 70.7 62.2 41.7 26.5 55.8 7.8 34.7 67.2 43.9 44.3 44.1 92.8 37.4 89.7 40.6 | 52.8
+ w/ MS + Local | PointNet++ | 31.0 81.0 85.0 79.4 61.1 44.5 27.9 50.6 10.1 45.0 61.2 54.1 39.5 43.5 91.7 45.9 89.3 42.4 | 54.6
+ RepSurf-U12,512 | PointNet++w2x | 28.5 83.5 84.8 72.6 64.0 43.6 28.3 57.8 9.6 37.0 69.7 45.9 46.4 46.1 94.9 39.1 92.1 42.6 | 54.8
+ RepSurf-U12,512 (repd.) | PointNet++w2x | 27.6 82.7 85.3 68.8 60.6 44.0 27.3 56.7 9.6 39.6 63.7 53.8 43.0 42.4 99.8 38.8 88.7 47.3 | 54.4
+ w/ MS + Local | PointNet++w2x | 29.3 83.6 85.7 78.7 66.2 45.6 30.4 59.8 10.4 34.2 60.0 60.8 48.1 45.3 99.9 44.5 87.1 48.4 | 56.5
+ Table 10. Performance of [email protected] in each category on ScanNetV2.
1521
+ methods | backbone | bathtub bed bkshf chair desk drser nigtstd sofa table toilet | mAP
+ VoteNet [21] | PointNet++ | 75.5 85.6 31.9 77.4 24.8 27.9 58.6 67.4 51.1 90.5 | 59.1
+ H3DNet [54] | 4×PointNet++ | 73.8 85.6 31.0 76.7 29.6 33.4 65.5 66.5 50.8 88.2 | 60.1
+ 3DETR [19] | transformer | 69.8 84.6 28.5 72.4 34.3 29.6 61.4 65.3 52.6 91.0 | 61.1
+ Pointformer [20] | Pointformer | 80.1 84.3 32.0 76.2 27.0 37.4 64.0 64.9 51.5 92.2 | 61.1
+ GroupFree6,256 | PointNet++ | 80.0 87.8 32.5 79.4 32.6 36.0 66.7 70.0 53.8 91.1 | 63.0
+ w/ MS + Local | PointNet++ | 83.2 86.7 34.5 79.0 31.9 39.3 66.0 70.6 55.6 90.8 | 63.8
+ RepSurf-U6,256 | PointNet++ | 81.1 89.3 34.4 80.4 33.5 37.3 68.1 71.4 54.8 92.3 | 64.3
+ RepSurf-U6,256 (repd.) | PointNet++ | 79.5 87.5 33.8 79.4 32.7 40.2 69.0 70.3 55.4 92.1 | 64.0
+ w/ MS + Local | PointNet++ | 79.9 87.0 36.8 79.5 33.8 41.4 67.4 71.2 55.3 92.4 | 64.5
+ Table 11. Performance of [email protected] in each category on SUN RGB-D.
1615
+ methods | backbone | bathtub bed bkshf chair desk drser nigtstd sofa table toilet | mAP
+ VoteNet [21] | PointNet++ | 45.4 53.4 6.8 56.5 5.9 12.0 38.6 49.1 21.3 68.5 | 35.8
+ H3DNet [54] | 4×PointNet++ | 47.6 52.9 8.6 60.1 8.4 20.6 45.6 50.4 27.1 69.1 | 39.0
+ GroupFree6,256 | PointNet++ | 64.0 67.1 12.4 62.6 14.5 21.9 49.8 58.2 29.2 72.2 | 45.2
+ w/ MS + Local | PointNet++ | 66.2 67.4 10.8 63.6 15.0 24.7 56.7 56.1 30.8 74.3 | 46.6
+ RepSurf-U6,256 | PointNet++ | 65.2 67.5 13.2 63.4 15.0 22.4 50.9 58.8 30.0 72.7 | 45.9
+ RepSurf-U6,256 (repd.) | PointNet++ | 61.4 66.8 11.3 64.0 14.8 24.2 51.8 59.0 31.6 71.7 | 45.7
+ w/ MS + Local | PointNet++ | 62.2 67.6 16.6 65.0 15.0 24.2 57.0 59.0 30.9 77.7 | 47.5
+ Table 12. Performance of [email protected] in each category on SUN RGB-D.
A9E0T4oBgHgl3EQfxwJo/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
ANFKT4oBgHgl3EQfVi5k/content/2301.11788v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f7ebe37fb75b7f85c5a052e292b93d2cb72a53bf6ad734ea69ee4350640b1375
+ size 785323
ANFKT4oBgHgl3EQfVi5k/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d93c07efb69a2b69584abe6be68b64dd2166cc2b00ed1aee9beafe6f0e57a873
+ size 3276845
AtFLT4oBgHgl3EQfFC_E/content/2301.11986v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f593e8b58c5bdf9a3badb5c63c2c52a085ef321ca73df98b63ddfd51627bdb4a
+ size 4896472
AtFLT4oBgHgl3EQfFC_E/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c0176931c5366a57e2b29bc83b258b70f3b344a7bc9fb6eed39688e5e3e07391
+ size 288244
CNE1T4oBgHgl3EQfDwNk/content/tmp_files/2301.02881v1.pdf.txt ADDED
@@ -0,0 +1,1204 @@
1
+ Moment of inertia of slowly rotating anisotropic neutron stars in f(R, T) gravity
2
+ Juan M. Z. Pretel1, ∗
3
+ 1Centro Brasileiro de Pesquisas F´ısicas, Rua Dr. Xavier Sigaud,
4
+ 150 URCA, Rio de Janeiro CEP 22290-180, RJ, Brazil
5
+ (Dated: January 10, 2023)
6
+ Within the framework of f(R, T) theories of gravity, we investigate the hydrostatic equilibrium of
7
+ anisotropic neutron stars with a physically relevant equation of state (EoS) for the radial pressure.
8
+ In particular, we focus on the f(R, T) = R + 2βT model, where β is a minimal coupling constant.
9
+ In the slowly rotating approximation, we derive the modified TOV equations and the expression for
10
+ the relativistic moment of inertia. The main properties of neutron stars, such as radius, mass and
11
+ moment of inertia, are studied in detail. Our results revel that the main consequence of the 2βT term
12
+ is a substantial increase in the surface radius for low enough central densities. Nevertheless, such
13
+ a term slightly modifies the total gravitational mass and moment of inertia of the slowly rotating
14
+ stars. Furthermore, the changes are noticeable when anisotropy is incorporated into the stellar fluid,
15
+ and it is possible to obtain higher masses that are consistent with the current observational data.
16
+ I.
17
+ INTRODUCTION
18
+ Despite the great success of General Relativity (GR) in
19
+ predicting various gravitational phenomena tested in the
20
+ solar system [1] and in strong-field situations (such as the
21
+ final stage of compact-object binaries [2, 3]), it could not
22
+ help to identify the nature of dark energy and other puz-
23
+ zles. In other words, there are still many open problems
24
+ in modern cosmology and it is well known that GR is not
25
+ the only theory of gravity [4]. Indeed, it has been shown
26
+ that GR is not renormalizable as a quantum field theory
27
+ unless higher-order curvature invariants are included in
28
+ its action [5, 6]. Furthermore, GR requires modifications
29
+ at small time and length scales or at energies comparable
30
+ with the Planck energy scales. In that regard, it has been
31
+ argued that the early-time inflation and the late-time ac-
32
+ celerated expansion of the Universe can be an effect of
33
+ the modification of the geometric theory formulated by
34
+ Einstein [7–10].
35
+ One of the simplest ways to modify GR is by re-
36
+ placing the Ricci scalar R in the standard Einstein-
37
+ Hilbert action by an arbitrary function of R, this is, the
38
+ so-called f(R) theories of gravity [11, 12].
39
+ Extensive
40
+ and detailed reviews on the cosmological implications
41
+ of such theories can be found in Refs. [13–16]. On the
42
+ other hand, at astrophysical level, these theories basically
43
+ change the Tolman-Oppenheimer-Volkoff (TOV) equa-
44
+ tions and hence the astrophysical properties of compact
45
+ stars, such as mass-radius relations, maximum masses, or
46
+ moment of inertia are somehow altered. See Ref. [17] for
47
+ a broad overview about relativistic and non-relativistic
48
+ stars within the context of modified theories of gravity
49
+ formulated in both metric and metric-affine approaches.
50
+ In most of the works reported in the literature about
51
+ internal structure of compact stars in GR and modified
52
+ theories of gravity it is very common to assume that such
53
+ stars are made up of an isotropic perfect fluid. Never-
54
55
+ theless, there are strong arguments indicating that the
56
+ impact of anisotropy (this is, unequal radial and tangen-
57
+ tial pressures) cannot be neglected when we deal with
58
+ nuclear matter at very high densities and pressures, for
59
+ instance, see Refs. [18–24] and references therein. In that
60
+ regard, it has been shown that the presence of anisotropy
61
+ can lead to significant changes in the main characteris-
62
+ tics of compact stars [21–23, 25–31]. Within the frame-
63
+ work of extended theories of gravity, it is also important
64
+ to mention that non-rotating anisotropic compact stars
65
+ have been recently studied by some authors in Refs. [32–
66
+ 50]. In addition, in the context of scalar-tensor theory
67
+ of gravity, slowly rotating anisotropic neutron stars have
68
+ been investigated in Ref. [51].
69
+ Harko and collaborators [52] have proposed a gener-
70
+ alization of f(R) modified theories of gravity in order
71
+ to introduce a coupling between geometry and matter,
72
+ namely f(R, T) gravity, where T denotes the trace of the
73
+ energy-momentum tensor. Indeed, the simplest and most
74
+ studied model involving a minimal matter-gravity cou-
75
+ pling is given by f(R, T) = R+2βT gravity. The cosmo-
76
+ logical aspects of this model have been recently explored
77
+ in Refs. [53–57], while other authors have investigated
78
+ the astrophysical consequences of the 2βT term on the
79
+ equilibrium structure of isotropic [58–65] and anisotropic
80
+ [37–42] compact stars. A characteristic of this model is
81
+ that R = 0 outside a compact star, and hence the ex-
82
+ terior spacetime is still described by the Schwarzschild
83
+ exterior solution.
84
+ As a result, it has been shown that
85
+ for high enough central densities the contributions of the
86
+ 2βT term are irrelevant, whereas below a certain cen-
87
+ tral density value the radius of an isotropic compact star
88
+ undergoes substantial deviations from GR [62, 63].
89
+ To determine the equilibrium configurations and mo-
90
+ ment of inertia of slowly rotating anisotropic stars up to
91
+ first order in the angular velocity, we will employ a phys-
92
+ ically motivated functional relation σ (defined as the dif-
93
+ ference between radial and tangential pressure) for the
94
+ anisotropy profile known in the literature as quasi-local
95
+ ansatz [25]. Moreover, we will follow a procedure anal-
96
+ ogous to that carried out by Hartle in GR [66] in order
97
+ arXiv:2301.02881v1 [gr-qc] 7 Jan 2023
98
+
99
+ 2
100
+ to obtain the modified version of the differential equation
101
+ which governs the difference between the angular velocity
102
+ of the star and the angular velocity of the local inertial
103
+ frames.
104
+ To achieve our results, the present work is organized
105
+ as follows: In Sec. II we briefly review f(R, T) gravity
106
+ and we present the corresponding relativistic equations
107
+ for the f(R, T) = R + 2βT model. In Sec. III we de-
108
+ rive the modified TOV equations for anisotropic stellar
109
+ configurations by adopting a non-rotating and slowly ro-
110
+ tating metric. Section IV presents a well-known EoS to
111
+ describe neutron stars as well as the anisotropy ansatz.
112
+ In Sec. V we discuss our numerical results, and finally,
113
+ our conclusions are presented in Sec. VI. In this paper
114
+ we will use a geometric unit system and the sign conven-
115
+ tion (−, +, +, +). However, our results will be given in
116
+ physical units.
117
+ II.
118
+ BASIC FORMALISM OF f(R, T) GRAVITY
119
+ A more general formulation of f(R) modified theories
120
+ of gravity consists in the inclusion of an explicit gravity-
121
+ matter coupling by means of an arbitrary function of the
122
+ Ricci scalar R and the trace of the energy-momentum
123
+ tensor T. Thus, the modified Einstein-Hilbert action in
124
+ f(R, T) gravity is given by [52]
125
+ S =
126
+ 1
127
+ 16π
128
+
129
+ f(R, T)√−gd4x +
130
+
131
+ Lm
132
+ √−gd4x,
133
+ (1)
134
+ where g is the determinant of the spacetime metric gµν
135
+ and Lm denotes the Lagrangian density for matter fields.
136
+ The corresponding field equations in f(R, T) gravity can
137
+ be obtained from the variation of the action (1) with
138
+ respect to the metric:
139
+ fR(R, T)Rµν − 1
140
+ 2f(R, T)gµν + [gµν□ − ∇µ∇ν]fR(R, T)
141
+ = 8πTµν − (Tµν + Θµν)fT (R, T),
142
+ (2)
143
+ where Rµν is the Ricci tensor, Tµν the energy-momentum
144
+ tensor, fR ≡ ∂f/∂R, fT ≡ ∂f/∂T, □ ≡ ∇µ∇µ is the
145
+ d’Alembertian operator with ∇µ standing for the covari-
146
+ ant derivative, and the tensor Θµν is defined in terms of
147
+ the variation of Tµν with respect to the metric, namely
148
+ Θµν ≡ gαβ δTαβ
149
+ δgµν
150
+ = −2Tµν + gµνLm − 2gαβ
151
+ ∂2Lm
152
+ ∂gµν∂gαβ .
153
+ (3)
154
+ Just as in f(R) gravity [11, 12], in f(R, T) theories the
155
+ Ricci scalar is also a dynamical entity which is described
156
+ by a differential equation obtained by taking the trace of
157
+ the field equations (2), this is
158
+ 3□fR(R, T) + RfR(R, T) − 2f(R, T)
159
+ = 8πT − (T + Θ)fT (R, T),
160
+ (4)
161
+ where we have denoted Θ = Θ µ
162
+ µ . In addition, the four-
163
+ divergence of Eq. (2) yields [67]
164
+ ∇µTµν =
165
+ fT (R, T)
166
+ 8π − fT (R, T)
167
+
168
+ (Tµν + Θµν)∇µ ln fT (R, T)
169
+ + ∇µΘµν − 1
170
+ 2gµν∇µT
171
+
172
+ .
173
+ (5)
174
+ In order to obtain numerical solutions that describe compact stars, one has to specify the particular model of f(R, T) gravity. In that regard, we consider the simplest model involving a minimal matter-gravity coupling proposed by Harko et al. [52], i.e. f(R, T) = R + 2βT gravity, which has been the most studied model of f(R, T) gravity at both astrophysical and cosmological scales. As a consequence, Eqs. (2), (4) and (5) can be written as follows:
+ G_{\mu\nu} = 8\pi T_{\mu\nu} + \beta T g_{\mu\nu} - 2\beta (T_{\mu\nu} + \Theta_{\mu\nu}),   (6)
+ R = -8\pi T - 2\beta (T - \Theta),   (7)
+ \nabla^\mu T_{\mu\nu} = \frac{2\beta}{8\pi - 2\beta} \left[ \nabla^\mu \Theta_{\mu\nu} - \frac{1}{2} g_{\mu\nu} \nabla^\mu T \right],   (8)
+ where G_{\mu\nu} is the Einstein tensor.
+ III. MODIFIED TOV EQUATIONS
+ A. Non-rotating stars
+ We shall assume that the matter source is described by an anisotropic perfect fluid with energy density ρ, radial pressure p_r and tangential pressure p_t. Under these assumptions, the energy-momentum tensor is given by
+ T_{\mu\nu} = (\rho + p_t) u_\mu u_\nu + p_t g_{\mu\nu} - \sigma k_\mu k_\nu,   (9)
+ with u^\mu being the four-velocity of the fluid, which satisfies the normalization property u^\mu u_\mu = -1, k^\mu a unit radial four-vector so that k^\mu k_\mu = 1, and \sigma \equiv p_t - p_r the anisotropy factor.
+ In addition, we consider that the interior spacetime of the spherically symmetric stellar configuration is described by the standard line element
+ ds^2 = -e^{2\psi} dt^2 + e^{2\lambda} dr^2 + r^2 (d\theta^2 + \sin^2\theta\, d\varphi^2),   (10)
+ where x^\mu = (t, r, \theta, \varphi) are the Schwarzschild-like coordinates, and the metric potentials ψ and λ are functions only of the radial coordinate in a hydrostatic equilibrium situation. Consequently, we can write u^\mu = e^{-\psi} \delta^\mu_0, k^\mu = e^{-\lambda} \delta^\mu_1, and the trace of the energy-momentum tensor (9) takes the form T = -\rho + 3p_r + 2\sigma.
+ Within the context of anisotropic fluids in f(R, T) gravity, the most adopted choice in the literature for the matter Lagrangian density is given by L_m = P, where P \equiv (p_r + 2p_t)/3. For more details about this choice, see Refs. [37–40, 42]. Under this consideration, \Theta_{\mu\nu} = -2 T_{\mu\nu} + P g_{\mu\nu} and Eqs. (6), (7) and (8) become
+ G_{\mu\nu} = 8\pi T_{\mu\nu} + \beta T g_{\mu\nu} + 2\beta (T_{\mu\nu} - P g_{\mu\nu}),   (11)
+ R = -8\pi T - 2\beta (3T - 4P),   (12)
+ \nabla^\mu T_{\mu\nu} = \frac{2\beta}{8\pi + 2\beta} \partial_\nu \left( P - \frac{1}{2} T \right).   (13)
+ For the metric (10) and energy-momentum tensor (9), the non-zero components of the field equations (11) are explicitly given by
+ \frac{1}{r^2} \frac{d}{dr} \left( r e^{-2\lambda} \right) - \frac{1}{r^2} = -8\pi\rho + \beta \left( -3\rho + p_r + \frac{2}{3}\sigma \right),   (14)
+ e^{-2\lambda} \left( \frac{2}{r} \psi' + \frac{1}{r^2} \right) - \frac{1}{r^2} = 8\pi p_r + \beta \left( -\rho + 3p_r + \frac{2}{3}\sigma \right),   (15)
+ e^{-2\lambda} \left[ \psi'' + \psi'^2 - \psi'\lambda' + \frac{1}{r} (\psi' - \lambda') \right] = 8\pi (p_r + \sigma) + \beta \left( -\rho + 3p_r + \frac{8}{3}\sigma \right),   (16)
+ (16)
285
+ where the prime represents differentiation with respect
286
+ to the radial coordinate. Moreover, Eq. (13) implies that
287
+ dpr
288
+ dr = − (ρ + pr)ψ′ + 2
289
+ r σ
290
+ +
291
+ β
292
+ 8π + 2β
293
+ d
294
+ dr
295
+
296
+ ρ − pr − 2
297
+
298
+
299
+ .
300
+ (17)
301
+ Eq. (14) leads to
+ r e^{-2\lambda} = r - \int r^2 \left[ 8\pi\rho + \beta \left( 3\rho - p_r - \frac{2}{3}\sigma \right) \right] dr,   (18)
+ or alternatively,
+ e^{-2\lambda} = 1 - \frac{2m}{r},   (19)
+ where m(r) represents the gravitational mass within a sphere of radius r, given by
+ m(r) = 4\pi \int_0^r \bar{r}^2 \rho(\bar{r})\, d\bar{r} + \frac{\beta}{2} \int_0^r \bar{r}^2 \left[ 3\rho(\bar{r}) - p_r(\bar{r}) - \frac{2}{3}\sigma(\bar{r}) \right] d\bar{r}.   (20)
+ At the surface, where the radial pressure vanishes, M \equiv m(r_{sur}) is the total mass of the anisotropic compact star. From our anisotropic version (20), we can see that by setting σ = 0 one recovers the mass function for the isotropic case given in Ref. [63]. In view of Eq. (19), from Eq. (15) we obtain
+ \psi' = \left[ \frac{m}{r^2} + 4\pi r p_r + \frac{\beta r}{2} \left( -\rho + 3p_r + \frac{2}{3}\sigma \right) \right] \left( 1 - \frac{2m}{r} \right)^{-1},   (21)
+ and hence the relativistic structure of an anisotropic compact star within the context of f(R, T) = R + 2βT gravity is described by the modified TOV equations:
+ \frac{dm}{dr} = 4\pi r^2 \rho + \frac{\beta r^2}{2} \left( 3\rho - p_r - \frac{2}{3}\sigma \right),   (22)
+ \frac{dp_r}{dr} = -\frac{\rho + p_r}{1 + a} \left[ \frac{m}{r^2} + 4\pi r p_r + \frac{\beta r}{2} \left( 3p_r - \rho + \frac{2}{3}\sigma \right) \right] \left( 1 - \frac{2m}{r} \right)^{-1} + \frac{a}{1 + a} \frac{d\rho}{dr} + \frac{2}{1 + a} \left( \frac{\sigma}{r} - \frac{a}{3} \frac{d\sigma}{dr} \right),   (23)
+ \frac{d\psi}{dr} = \frac{1}{\rho + p_r} \left[ -(1 + a) \frac{dp_r}{dr} + a \frac{d\rho}{dr} + 2 \left( \frac{\sigma}{r} - \frac{a}{3} \frac{d\sigma}{dr} \right) \right],   (24)
+ where we have defined a \equiv \beta/(8\pi + 2\beta). As expected, the modified TOV equations in the isotropic scenario are retrieved when p_r = p_t [63]. Furthermore, when the minimal coupling constant vanishes (that is, β = 0), we recover the standard TOV equations for anisotropic stars in GR [23].
+ Given an EoS for the radial pressure p_r = p_r(ρ) and an anisotropy relation for σ, Eqs. (22) and (23) can be integrated by guaranteeing regularity at the center of the star for a given value of the central energy density. In addition, according to Eq. (12), we notice that R = 0 in the outer region of the star. This means that we can still use the Schwarzschild vacuum solution to describe the exterior spacetime, so that the interior solution is matched at the boundary r = r_{sur} to the exterior Schwarzschild solution. Thus, the system of equations (22)-(24) can be solved by imposing the following boundary conditions:
+ m(0) = 0, \qquad \rho(0) = \rho_c, \qquad \psi(r_{sur}) = \frac{1}{2} \ln \left( 1 - \frac{2M}{r_{sur}} \right).   (25)
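As a concrete illustration of how Eqs. (22)-(23) with the central conditions of (25) can be integrated, the following Python sketch uses a simple Euler scheme in geometrized units (G = c = M_sun = 1). The polytropic EoS p_r = Kρ² and the numerical values of K, ρ_c and the step size are illustrative assumptions standing in for the SLy EoS used in the paper, and the isotropic case σ = 0 is taken so that Eq. (23) closes without an anisotropy profile.

```python
import numpy as np

# Sketch: modified TOV equations (22)-(23) for sigma = 0, geometrized
# units G = c = M_sun = 1. The polytrope p = K*rho^2 and the numbers
# below are illustrative assumptions (the paper uses the SLy EoS).
beta = -0.01                       # coupling of f(R,T) = R + 2*beta*T
a = beta / (8*np.pi + 2*beta)      # parameter a defined below Eq. (24)
K = 100.0                          # polytropic constant (assumption)

def p_of_rho(rho):
    return K * rho**2

def rho_of_p(p):
    return np.sqrt(max(p, 0.0) / K)

def solve_star(rho_c, dr=1e-3):
    """Euler integration of Eqs. (22)-(23) with m(0) = 0, rho(0) = rho_c."""
    r = dr
    p = p_of_rho(rho_c)
    m = (4.0/3.0) * np.pi * r**3 * rho_c     # regular center, cf. Eq. (25)
    while p > 1e-12:
        rho = rho_of_p(p)
        # dm/dr, Eq. (22) with sigma = 0
        dmdr = 4*np.pi*r**2*rho + 0.5*beta*r**2*(3*rho - p)
        # dp/dr, Eq. (23) with sigma = 0; the a*drho/dr term is folded
        # in analytically using drho/dp = 1/(2*K*rho) for the polytrope
        grav = (m/r**2 + 4*np.pi*r*p + 0.5*beta*r*(3*p - rho)) / (1 - 2*m/r)
        dpdr = -(rho + p) * grav / (1 + a - a/(2*K*rho))
        m += dr * dmdr
        p += dr * dpdr
        r += dr
    return m, r     # total mass M = m(r_sur) and surface radius r_sur

M, R = solve_star(rho_c=1.28e-3)   # central density is an assumption
```

Setting β = 0 recovers the standard TOV system; in these units a radius of 10 corresponds to roughly 14.8 km.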
+ B. Slowly rotating stars
+ In the slowly rotating approximation [66], i.e., when rotational corrections appear at first order in the angular velocity of the star Ω, the spacetime metric (10) is replaced by its slowly rotating counterpart [66, 68]
+ ds^2 = -e^{2\psi(r)} dt^2 + e^{2\lambda(r)} dr^2 + r^2 (d\theta^2 + \sin^2\theta\, d\varphi^2) - 2\omega(r, \theta) r^2 \sin^2\theta\, dt\, d\varphi,   (26)
+ where ω(r, θ) stands for the angular velocity of the local inertial frames dragged by the stellar rotation. In other words, if a particle is dropped from rest at a great distance from the rotating star, it would experience an ever-increasing drag in the direction of rotation of the star as it approaches. In fact, it is convenient to define the difference ϖ ≡ Ω − ω as the coordinate angular velocity of the fluid element at (r, θ) seen by a freely falling observer [66].
+ Since Ω is the angular velocity of the fluid as seen by an observer at rest at some spacetime point (t, r, θ, φ), one finds that the four-velocity up to linear terms in Ω is given by u^\mu = (e^{-\psi}, 0, 0, \Omega e^{-\psi}). To this order, the spherical symmetry is still preserved and it is possible to extend the validity of the TOV equations (22)-(24). Nevertheless, the 03-component of the field equations contributes an additional differential equation for the angular velocity ω(r, θ). By retaining only first-order terms in the angular velocity, we have T_{03} = -[\varpi(\rho + p_t) + \omega p_t] r^2 \sin^2\theta, and hence Eq. (11) gives the following expression:
+ G_{03} = -\left\{ 2(4\pi + \beta)(\rho + p_t)\varpi + 8\pi\omega p_t + \beta \left( -\rho + \frac{1}{3} p_r + \frac{8}{3} p_t \right) \omega \right\} r^2 \sin^2\theta,   (27)
+ or alternatively,
+ \frac{e^{\psi - \lambda}}{r^4} \frac{\partial}{\partial r} \left[ e^{-(\psi + \lambda)} r^4 \frac{\partial\varpi}{\partial r} \right] + \frac{1}{r^2 \sin^3\theta} \frac{\partial}{\partial\theta} \left[ \sin^3\theta \frac{\partial\varpi}{\partial\theta} \right] = 4(4\pi + \beta)(\rho + p_t)\varpi.   (28)
+ Following the procedure carried out by Hartle in GR [66] and Staykov et al. in R^2 gravity [68], we expand ϖ in the form
+ \varpi(r, \theta) = \sum_{l=1}^{\infty} \varpi_l(r) \left( -\frac{1}{\sin\theta} \frac{dP_l}{d\theta} \right),   (29)
+ where P_l are Legendre polynomials. In view of Eq. (29), we can write
+ \frac{\partial}{\partial\theta} \left[ \sin^3\theta \frac{\partial\varpi}{\partial\theta} \right] = \sum_l \varpi_l(r) \left[ (\cos^2\theta - \sin^2\theta) \frac{dP_l}{d\theta} - \sin\theta \cos\theta \frac{d^2P_l}{d\theta^2} - \sin^2\theta \frac{d^3P_l}{d\theta^3} \right] = \sum_l \varpi_l(r) \left[ l(l+1) - 2 \right] \sin^2\theta \frac{dP_l}{d\theta},   (30)
+ where we have used the Legendre differential equation
+ \frac{d^2P_l}{d\theta^2} + \frac{\cos\theta}{\sin\theta} \frac{dP_l}{d\theta} + l(l+1) P_l = 0.   (31)
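The reduction in Eq. (30) rests on the identity d/dθ[sin³θ dΦ_l/dθ] = [l(l+1) − 2] sin²θ dP_l/dθ for the angular factor Φ_l ≡ −(1/sinθ) dP_l/dθ of the expansion (29), which follows from the Legendre equation (31). A quick symbolic check of this identity (a verification sketch, not part of the paper) can be done with SymPy:

```python
import sympy as sp

theta = sp.symbols('theta')
for l in (1, 2, 3):
    P = sp.legendre(l, sp.cos(theta))           # Legendre polynomial P_l(cos(theta))
    Phi = -sp.diff(P, theta) / sp.sin(theta)    # angular factor in the expansion (29)
    lhs = sp.diff(sp.sin(theta)**3 * sp.diff(Phi, theta), theta)
    rhs = (l*(l + 1) - 2) * sp.sin(theta)**2 * sp.diff(P, theta)
    # identity used in the last equality of Eq. (30)
    assert sp.simplify(lhs - rhs) == 0
```

For l = 1 one has Φ_1 = 1, so both sides vanish identically, which is why ϖ becomes θ-independent when only the l = 1 term survives.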
+ Thus, after substituting Eqs. (29) and (30) into (28), we get
+ \frac{e^{\psi - \lambda}}{r^4} \frac{d}{dr} \left[ e^{-(\psi + \lambda)} r^4 \frac{d\varpi_l}{dr} \right] - \frac{l(l+1) - 2}{r^2} \varpi_l = 4(4\pi + \beta)(\rho + p_t)\varpi_l.   (32)
+ At great distances from the stellar surface, where spacetime must be asymptotically flat, the solution of Eq. (32) assumes the form \varpi_l(r) \to c_1 r^{-l-2} + c_2 r^{l-1}. Furthermore, the dragging angular velocity is expected to behave as \omega \to 2J/r^3 (or alternatively, \varpi \to \Omega - 2J/r^3) for r \to \infty, where J is the angular momentum carried by the star (see Ref. [69] for more details). Therefore, by comparison we can see that all coefficients in the Legendre expansion vanish except for l = 1. This means that ϖ is a function of r only, and Eq. (32) reduces to
+ \frac{e^{\psi - \lambda}}{r^4} \frac{d}{dr} \left[ e^{-(\psi + \lambda)} r^4 \frac{d\varpi}{dr} \right] = 4(4\pi + \beta)(\rho + p_t)\varpi,   (33)
+ and taking into account that e^{-(\psi + \lambda)} = 1 at the edge of the star and beyond, the last equation can be integrated to give
+ \left[ r^4 \frac{d\varpi}{dr} \right]_{r_{sur}} = 4(4\pi + \beta) \int_0^{r_{sur}} (\rho + p_t) r^4 e^{\lambda - \psi} \varpi\, dr.   (34)
+ From Eq. (34) we can obtain the relativistic moment of inertia of a slowly rotating anisotropic compact star in f(R, T) = R + 2βT gravity by means of the expression
+ I = \frac{2}{3}(4\pi + \beta) \int_0^{r_{sur}} (\rho + p_r + \sigma) e^{\lambda - \psi} r^4 \left( \frac{\varpi}{\Omega} \right) dr,   (35)
+ and hence the angular momentum J = IΩ can be written as
+ J = \frac{2}{3}(4\pi + \beta) \int_0^{r_{sur}} \frac{\rho + p_r + \sigma}{\sqrt{1 - 2m/r}} (\Omega - \omega) e^{-\psi} r^4\, dr.   (36)
+ It can be seen that the above result reduces to the pure general relativistic expression when β = 0. Furthermore, when both parameters β and σ vanish, Eq. (36) reduces to the expression given in Ref. [69] for isotropic compact stars in Einstein gravity. Analogously to GR, the differential equation (33) will be integrated from the origin at r = 0 with an arbitrary choice of the central value ϖ(0) and with vanishing slope, i.e., dϖ/dr = 0. Once the solution for ϖ(r) is found, we can then compute the moment of inertia via the integral (35).
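Once the radial profiles are known on a grid, the integral (35) is a one-dimensional quadrature. The sketch below evaluates it with the trapezoidal rule; the trivial flat-spacetime test profile (ρ = const, p_r = σ = 0, λ = ψ = 0, ϖ/Ω = 1, β = 0) is only a consistency check against the analytic value (2/3)(4π) r_sur⁵/5, not a stellar model.

```python
import numpy as np

def moment_of_inertia(r, rho, p_r, sigma, lam, psi, varpi_over_Omega, beta):
    """Trapezoidal-rule evaluation of Eq. (35):
    I = (2/3)(4*pi + beta) * int (rho + p_r + sigma) e^(lam - psi) r^4 (varpi/Omega) dr."""
    integrand = (rho + p_r + sigma) * np.exp(lam - psi) * r**4 * varpi_over_Omega
    return (2.0/3.0) * (4.0*np.pi + beta) * np.sum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))

# Consistency check on a trivial profile (not a physical star):
# rho = 1, p_r = sigma = 0, lambda = psi = 0, varpi/Omega = 1, beta = 0
# gives I = (2/3)(4*pi) * int_0^1 r^4 dr = 8*pi/15.
r = np.linspace(0.0, 1.0, 2001)
ones, zeros = np.ones_like(r), np.zeros_like(r)
I_num = moment_of_inertia(r, ones, zeros, zeros, zeros, zeros, ones, beta=0.0)
I_exact = 8.0*np.pi/15.0
```

In a full calculation the arrays would come from the solutions of Eqs. (22)-(24) and (33); note that e^λ = (1 − 2m/r)^{−1/2} by Eq. (19).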
+ IV. EQUATION OF STATE AND ANISOTROPY ANSATZ
+ Just as in the construction of anisotropic compact stars in GR, to close the system of Eqs. (22)-(24) one needs to specify a barotropic EoS (which relates the radial pressure to the mass density by means of the equation p_r = p_r(ρ)) and also assign an anisotropy function σ, since there is now an extra degree of freedom p_t. Alternatively, it is possible to assign one EoS for the radial pressure and another for the tangential pressure. For instance, an approach for the study of anisotropic fluids has recently been carried out within the context of Newtonian gravity in Ref. [70] and in conventional GR [71], where both the radial and tangential pressures satisfy a polytropic EoS.
+ In this work, we will follow the first procedure described in the previous paragraph in order to deal with anisotropic neutron stars within the framework of f(R, T) gravity. Indeed, for the radial pressure we use a well-known and physically relevant EoS which is compatible with the constraints of the GW170817 event (the first detection of gravitational waves from a binary neutron star inspiral [72]), namely, the soft SLy EoS [73]. This EoS is based on the SLy effective nucleon-nucleon interaction, which is suitable for the description of strong interactions in the nucleon component of dense neutron-star matter. Such a unified EoS describes both the neutron-star crust and the liquid core (which is assumed to have a "minimal" npeµ composition), and it can be represented by the following analytical expression:
+ \zeta(\xi) = \frac{a_1 + a_2\xi + a_3\xi^3}{1 + a_4\xi} f(a_5(\xi - a_6)) + (a_7 + a_8\xi) f(a_9(a_{10} - \xi)) + (a_{11} + a_{12}\xi) f(a_{13}(a_{14} - \xi)) + (a_{15} + a_{16}\xi) f(a_{17}(a_{18} - \xi)),   (37)
+ where \zeta \equiv \log(p_r / \mathrm{dyn\,cm^{-2}}), \xi \equiv \log(\rho / \mathrm{g\,cm^{-3}}), and f(x) \equiv 1/(e^x + 1). The values a_i are fitting parameters and can be found in Ref. [74].
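The fit (37) is straightforward to code. The coefficients a_1…a_18 must be taken from Ref. [74] and are not reproduced here; the zero placeholders in the check below only exercise the formula, for which ζ is then identically zero.

```python
import math

def fermi(x):
    """f(x) = 1/(e^x + 1), the smoothing function appearing in Eq. (37)."""
    return 1.0 / (math.exp(x) + 1.0)

def zeta_sly(xi, a):
    """Analytical SLy fit of Eq. (37): zeta = log10(p_r / dyn cm^-2) as a
    function of xi = log10(rho / g cm^-3). `a` holds the 18 coefficients
    a_1..a_18 of Ref. [74] (indexed here as a[0]..a[17]); the values are
    NOT reproduced in this sketch."""
    return ((a[0] + a[1]*xi + a[2]*xi**3) / (1.0 + a[3]*xi) * fermi(a[4]*(xi - a[5]))
            + (a[6] + a[7]*xi) * fermi(a[8]*(a[9] - xi))
            + (a[10] + a[11]*xi) * fermi(a[12]*(a[13] - xi))
            + (a[14] + a[15]*xi) * fermi(a[16]*(a[17] - xi)))
```

With the actual coefficients of Ref. [74], `10**zeta_sly(xi, a)` gives the radial pressure in dyn/cm² for a density of 10^xi g/cm³.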
+ In addition, we adopt the anisotropy ansatz proposed by Horvat et al. [25] to model anisotropic matter inside compact stars, namely
+ \sigma = \alpha p_r \mu = \alpha p_r \left( 1 - e^{-2\lambda} \right),   (38)
+ with \mu(r) \equiv 2m/r being the compactness of the star. The advantage of this ansatz is that the stellar fluid becomes isotropic at the origin, since µ ∼ r² when r → 0. It is also commonly known as the quasi-local ansatz in the literature [25], where α controls the amount of anisotropy inside the star and in principle can assume positive or negative values [23, 25, 33, 50, 51, 75, 76]. Note that in the Newtonian limit, when the pressure contribution to the energy density is negligible, the effect of anisotropy vanishes in the hydrostatic equilibrium equation. Regardless of the particular functional form of the anisotropy model, we must emphasize that physically relevant solutions correspond to p_r, p_t ≥ 0 for r ≤ r_{sur}.
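The quasi-local ansatz (38) translates directly into code. The helper below (a sketch in geometrized units, not taken from the paper) makes the limiting behaviors explicit: σ vanishes with the compactness at the center, and whenever p_r or α is zero.

```python
def anisotropy_horvat(alpha, p_r, m, r):
    """Eq. (38): sigma = alpha * p_r * mu with compactness mu = 2*m/r
    (equivalently mu = 1 - e^(-2*lambda) by Eq. (19)). Geometrized units."""
    if r == 0.0:
        return 0.0   # mu ~ r^2 as r -> 0, so the fluid is isotropic at the origin
    return alpha * p_r * (2.0 * m / r)
```

Inside a TOV integration this function would be evaluated at every step from the current m(r) and p_r(r), closing the system (22)-(24).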
+ V. NUMERICAL RESULTS AND DISCUSSION
+ Given an EoS for the radial pressure, we numerically integrate the modified TOV equations (22)-(24) with boundary conditions (25) from the stellar center to the surface r = r_{sur}, where the radial pressure vanishes. In addition, we have to specify a particular value for the coupling constant β and for the anisotropy parameter α which appears in Eq. (38). For instance, for a central mass density ρ_c = 2.0 × 10^{18} kg/m³ with the SLy EoS (37), Fig. 1 illustrates the mass function and anisotropy factor as functions of the radial coordinate for β = −0.01 and several values of α. The left plot reveals an increase in gravitational mass and a decrease in radius as α increases. Moreover, from the right plot we can see that the anisotropy vanishes at the center (which is a required condition in order to guarantee regularity), is more pronounced in the intermediate regions, and vanishes again at the stellar surface.
+ For the anisotropy function (38), the left panel of Fig. 2 displays the mass-radius relations for anisotropic neutron stars with the SLy EoS in f(R, T) = R + 2βT gravity for three particular values of the coupling constant β and different values of α. Here the total gravitational mass of each configuration is given by M = m(r_{sur}), and the isotropic case in Einstein gravity has been included for comparison purposes as a black solid line. The mass-radius relation exhibits substantial deviations from GR mainly in the low-mass region. On the other hand, anisotropy introduces considerable changes only in the high-mass region. We remark that the 2βT term together with the presence of anisotropies (with positive values of α) allows us to obtain maximum masses greater than 2.0 M⊙. As a consequence, the introduction of anisotropies in f(R, T) = R + 2βT gravity gives rise to massive neutron stars that are in good agreement with the millisecond pulsar observations [77, 78]. From NICER and XMM-Newton data [79], the radius measurement for a 1.4 M⊙ neutron star is 12.45 ± 0.65 km and, according to the mass-radius diagram, our results consistently describe this star when β = −0.01 (see blue curves). Furthermore, it should be noted that the parameter β = −0.01 is the one that best fits the mass-radius constraint from the GW170817 event (see the filled cyan region). Nevertheless, the massive pulsar J0740+6620 (whose radius is 12.35 ± 0.75 km [79]) could be described only when β = −0.03 and α = 0.4.
+ It is worth commenting that the value of the parameter α could be constrained, but that will depend on the particular compact star observed in the Universe. For instance, the range α ∈ [−0.4, −0.2] consistently describes the millisecond pulsar J1614-2230 regardless of the value of β. However, for highly massive neutron stars whose masses are greater than 2.0 M⊙, positive values of α will be required. For PSR J0740+6620, whose gravitational mass is 2.08 M⊙, the best value for α is 0.2. In fact, this constraint will depend not only on the modified theory of gravity but also on the equation of state adopted for the radial pressure.
+ According to the right panel of Fig. 2, the parameter β slightly modifies the total gravitational mass; however, the effect of anisotropy introduces more relevant changes. To better analyze the effects that arise as a result of the modification of Einstein's theory as well as the incorporation of anisotropies, in Fig. 3 we show the behavior of the surface radius as a function of the central density. From the left plot we can conclude that the radius is significantly altered due to the 2βT term in the low-central-density region, while anisotropy slightly modifies the radius of the stars. The right plot corresponds to the pure general relativistic case, and it can be observed that the radius undergoes more significant modifications with respect to its isotropic counterpart if the values for |α|
+ FIG. 1. Radial behaviour of the mass function (left panel) and the anisotropy factor (right panel) in the framework of f(R, T) = R + 2βT gravity for β = −0.01 and different values of α. The SLy EoS (37) is valid from 10^{11} kg/m³ up to the maximum density reachable within neutron stars [73], and in these plots we have considered ρ_c = 2.0 × 10^{18} kg/m³. The isotropic case is recovered when the anisotropy parameter vanishes (that is, α = 0). We can observe that the gravitational mass increases and the radius decreases as α increases. In addition, the anisotropy is more pronounced in the intermediate regions and vanishes at the stellar center as expected.
+ are larger than those considered in the left plot.
+ Eq. (33) is first solved in the interior region from the center to the surface of the star by considering an arbitrary value for ϖ and with vanishing slope at r = 0. Then the same equation is solved in the exterior spacetime from the surface to a distance sufficiently far from the star, where ϖ(r) → Ω. In Fig. 4 we display the radial profile of these solutions for the central mass density considered above. We observe that ϖ(r) is an increasing function of the radial coordinate, whereas ω(r) is a decreasing function, and hence the largest rate of dragging of local inertial frames always occurs at the stellar center. Furthermore, appreciable effects (mainly in the interior region of the stellar configuration) can be noted on the frame-dragging angular velocity due to the inclusion of anisotropies.
+ Once ϖ(r) is known for each stellar configuration, we can then determine the moment of inertia by means of Eq. (35). Figure 5 presents the moment of inertia as a function of the total gravitational mass in GR and within the context of f(R, T) = R + 2βT gravity for β = −0.01. It can be observed that the moment of inertia undergoes only irrelevant changes from GR; however, it can change significantly due to anisotropies in the high-mass region.
+ VI. CONCLUSIONS
+ In this work we have investigated slowly rotating anisotropic neutron stars in f(R, T) = R + 2βT gravity, where the degree of modification with respect to GR is measured by the coupling constant β. The modified TOV equations and the moment of inertia have been derived within the context of anisotropic fluids by retaining only first-order terms in the angular velocity as measured by a distant observer (Ω). Notice that, within this linear approximation, the moment of inertia can be calculated from the structure of a non-rotating configuration, since the TOV equations describing the static background are still valid. In addition, we have adopted the anisotropy ansatz proposed by Horvat and collaborators [25], in which a dimensionless parameter α measures the degree of anisotropy within the neutron star.
+ We have analyzed the consequences of the extra term 2βT together with anisotropies on the properties of neutron stars such as radius, mass, frame-dragging angular velocity and moment of inertia. Indeed, our results reveal that the radius deviates considerably from GR in the low-central-density region; however, the total gravitational mass and the moment of inertia undergo only slight modifications due to the effects generated by the minimal matter-gravity coupling. Furthermore, the presence of anisotropy generates substantial changes both in the mass and in the moment of inertia with respect to the isotropic case. The appreciable effects due to the inclusion of anisotropy occur mainly in the higher-central-density region, that is, for large masses (near the maximum-mass configuration).
+ ACKNOWLEDGMENTS
+ JMZP acknowledges financial support from the PCI program of the Brazilian agency "Conselho Nacional de Desenvolvimento Científico e Tecnológico"–CNPq.
+ FIG. 2. Mass-radius diagrams (left panel) and mass-central density relations (right panel) for anisotropic neutron stars with the SLy EoS (37) in f(R, T) = R + 2βT gravity for β = −0.01 (blue curves), β = −0.02 (orange curves) and β = −0.03 (in green). The solid lines correspond to α = 0 (that is, isotropic solutions), and the pure GR case (β = 0) is shown in both plots as a benchmark by a black line. The magenta horizontal band stands for the observational measurement for the millisecond pulsar J1614-2230 reported in Ref. [77]. The filled cyan region is the mass-radius constraint from the GW170817 event. The radius of PSR J0740+6620 from NICER and XMM-Newton data [79] is indicated by the top brown dot with its respective error bars. Moreover, the bottom brown dot represents the radius estimate for a 1.4 M⊙ neutron star [79].
+ bars. Moreover, the bottom brown dot represents the radius estimate for a 1.4 M⊙ neutron star [79].
937
+ α = -0.4
938
+ α = -0.2
939
+ α = 0
940
+ α = 0.2
941
+ α = 0.4
942
+ 17.6
943
+ 17.8
944
+ 18.0
945
+ 18.2
946
+ 18.4
947
+ 18.6
948
+ 18.8
949
+ 10
950
+ 15
951
+ 20
952
+ 25
953
+ 30
954
+ Log ρc [kg/m3]
955
+ rsur [km]
956
+ 17.76
957
+ 17.82
958
+ 17.88
959
+ 13
960
+ 15
961
+ 17
962
+ α = -1.0
963
+ α = -0.5
964
+ α = 0
965
+ α = 0.5
966
+ α = 1.0
967
+ 17.6
968
+ 17.8
969
+ 18.0
970
+ 18.2
971
+ 18.4
972
+ 18.6
973
+ 9
974
+ 10
975
+ 11
976
+ 12
977
+ 13
978
+ 14
979
+ Log ρc [kg/m3]
980
+ rsur [km]
981
+ FIG. 3.
982
+ Surface radius as a function of the central mass density. On the left panel, different styles and colors of the curves
983
+ correspond to different values of the parameters β and α as in Fig. 2. The most substantial deviations from GR take place at
984
+ low central densities, whereas for large central densities the changes are very slight due to the 2βT term. On the right panel
985
+ we display the modifications of the radius due to the inclusion of anisotropies when β = 0, where we have considered larger
986
+ values for |α| in order to appreciate the changes in radius as a consequence of anisotropy. We can mainly observe three regions
987
+ where the radius can decrease or increase depending on the value of α.
988
+ FIG. 4. Left panel: Numerical solution of the differential equation (33) for a given central mass density ρ_c = 2.0 × 10^{18} kg/m³ in f(R, T) = R + 2βT gravity with β = −0.01 and different values of the free parameter α. The dotted lines represent the solutions of the exterior region, and as expected ϖ → Ω at great distances from the stellar surface. Right panel: Ratio of the frame-dragging angular velocity to the angular velocity of the star, namely ω(r)/Ω = 1 − ϖ(r)/Ω. Notice that the solution of the exterior problem provides the asymptotic behavior of ω(r).
+ FIG. 5. Left panel: Moment of inertia of slowly rotating anisotropic neutron stars as a function of the total mass within the context of f(R, T) = R + 2βT gravity for β = −0.01, in blue. Different styles of the curves correspond to different values of the anisotropy parameter α. Results based on Einstein's theory have been included for comparison purposes and are represented by the black curves. We can appreciate that the moment of inertia is modified very slightly by the 2βT term; however, the anisotropies introduce relevant changes in the large-mass region. The right plot is a magnification of the left one.
+ [1] C. M. Will, Living Rev. Relativ. 17, 4 (2014).
+ [2] B. P. Abbott et al. (LIGO Scientific and Virgo Collaborations), Phys. Rev. Lett. 116, 221101 (2016).
+ [3] B. P. Abbott et al. (LIGO Scientific and Virgo Collaborations), Phys. Rev. Lett. 123, 011102 (2019).
+ [4] E. N. Saridakis et al., arXiv:2105.12582 [gr-qc] (2021).
+ [5] K. S. Stelle, Phys. Rev. D 16, 953 (1977).
+ [6] G. A. Vilkovisky, Class. Quantum Grav. 9, 895 (1992).
+ [7] A. Starobinsky, Physics Letters B 91, 99 (1980).
+ [8] S. Capozziello, Int. J. Mod. Phys. D 11, 483 (2002).
+ [9] S. M. Carroll et al., Phys. Rev. D 70, 043528 (2004).
+ [10] S. Nojiri and S. D. Odintsov, Int. J. Geom. Meth. Mod. Phys. 4, 115 (2007).
+ [11] T. P. Sotiriou and V. Faraoni, Rev. Mod. Phys. 82, 451 (2010).
+ [12] A. De Felice and S. Tsujikawa, Living Reviews in Relativity 13, 3 (2010).
+ [13] S. Capozziello and M. De Laurentis, Phys. Rep. 509, 167 (2011).
+ [14] S. Nojiri and S. D. Odintsov, Phys. Rep. 505, 59 (2011).
+ [15] T. Clifton, P. G. Ferreira, A. Padilla, and C. Skordis, Phys. Rep. 513, 1 (2012).
+ [16] S. Nojiri, S. Odintsov, and V. Oikonomou, Physics Reports 692, 1 (2017).
+ [17] G. J. Olmo, D. Rubiera-Garcia, and A. Wojnar, Physics Reports 876, 1 (2020).
+ [18] L. Herrera and N. Santos, Phys. Rep. 286, 53 (1997).
+ [19] A. A. Isayev, Phys. Rev. D 96, 083007 (2017).
+ [20] B. V. Ivanov, Eur. Phys. J. C 77, 738 (2017).
+ [21] S. K. Maurya, A. Banerjee, and S. Hansraj, Phys. Rev. D 97, 044022 (2018).
+ [22] B. Biswas and S. Bose, Phys. Rev. D 99, 104002 (2019).
+ [23] J. M. Z. Pretel, Eur. Phys. J. C 80, 726 (2020).
+ [24] G. H. Bordbar and M. Karami, Eur. Phys. J. C 82, 74 (2022).
+ [25] D. Horvat, S. Ilijić, and A. Marunović, Class. Quantum Grav. 28, 025009 (2010).
+ [26] A. Rahmansyah et al., Eur. Phys. J. C 80, 769 (2020).
+ [27] Z. Roupas and G. G. L. Nashed, Eur. Phys. J. C 80, 905 (2020).
+ [28] S. Das et al., Annals of Physics 433, 168597 (2021).
+ [29] S. Das et al., Gen. Relativ. Gravit. 53, 25 (2021).
+ [30] Z. Roupas, Astrophys. Space Sci. 366, 9 (2021).
+ [31] S. Das, B. K. Parida, and R. Sharma, Eur. Phys. J. C 82, 136 (2022).
+ [32] M. F. Shamir and P. S. Zia, Eur. Phys. J. C 77, 448 (2017).
+ [33] V. Folomeev, Phys. Rev. D 97, 124009 (2018).
+ [34] G. Mustafa, M. F. Shamir, and X. Tie-Cheng, Phys. Rev. D 101, 104013 (2020).
+ [35] G. G. L. Nashed and S. Capozziello, Eur. Phys. J. C 81, 481 (2021).
+ [36] G. G. L. Nashed, S. D. Odintsov, and V. K. Oikonomou, Eur. Phys. J. C 81, 528 (2021).
+ [37] D. Deb et al., MNRAS 485, 5652 (2019).
+ [38] S. K. Maurya et al., Phys. Rev. D 100, 044014 (2019).
+ [39] S. Biswas, D. Shee, B. K. Guha, and S. Ray, Eur. Phys. J. C 80, 175 (2020).
+ [40] S. K. Maurya and F. Tello-Ortiz, Annals of Physics 414, 168070 (2020).
+ [41] P. Rej, P. Bhar, and M. Govender, Eur. Phys. J. C 81, 316 (2021).
+ [42] S. Biswas, D. Deb, S. Ray, and B. K. Guha, Annals of Physics 428, 168429 (2021).
+ [43] D. Vernieri, Phys. Rev. D 100, 104021 (2019).
+ [44] C. E. Mota et al., Class. Quantum Grav. 39, 085008 (2022).
+ [45] A. Ashraf et al., Annals of Physics 422, 168322 (2020).
+ [46] T. Tangphati, A. Pradhan, A. Errehymy, and A. Banerjee, Physics Letters B 819, 136423 (2021).
+ [47] T. Tangphati, A. Pradhan, A. Banerjee, and G. Panotopoulos, Physics of the Dark Universe 33, 100877 (2021).
+ [48] G. G. L. Nashed, Astrophys. J. 919, 113 (2021).
+ [49] J. Solanki and J. L. Said, Eur. Phys. J. C 82, 35 (2022).
+ [50] J. M. Z. Pretel and S. B. Duarte, Class. Quantum Grav. 39, 155003 (2022).
+ [51] H. O. Silva, C. F. B. Macedo, E. Berti, and L. C. B. Crispino, Class. Quantum Grav. 32, 145008 (2015).
+ [52] T. Harko, F. S. N. Lobo, S. Nojiri, and S. D. Odintsov, Phys. Rev. D 84, 024020 (2011).
+ [53] H. Shabani and A. H. Ziaie, Eur. Phys. J. C 78, 397 (2018).
+ [54] P. S. Debnath, Int. J. Geom. Meth. Mod. Phys. 16, 1950005 (2019).
+ [55] S. Bhattacharjee and P. Sahoo, Physics of the Dark Universe 28, 100537 (2020).
+ [56] S. Bhattacharjee, J. R. L. Santos, P. H. R. S. Moraes, and P. K. Sahoo, Eur. Phys. J. Plus 135, 576 (2020).
+ [57] M. Gamonal, Physics of the Dark Universe 31, 100768 (2021).
+ [58] P. Moraes, J. D. Arbañil, and M. Malheiro, JCAP 2016, 005 (2016).
+ [59] A. Das, F. Rahaman, B. K. Guha, and S. Ray, Eur. Phys. J. C 76, 654 (2016).
+ [60] D. Deb, F. Rahaman, S. Ray, and B. Guha, JCAP 2018, 044 (2018).
+ [61] D. Deb, S. V. Ketov, M. Khlopov, and S. Ray, JCAP 2019, 070 (2019).
+ [62] R. Lobato et al., JCAP 2020, 039 (2020).
+ [63] J. M. Z. Pretel, S. E. Jorás, R. R. R. Reis, and J. D. Arbañil, JCAP 2021, 064 (2021).
+ [64] J. M. Z. Pretel, T. Tangphati, A. Banerjee, and A. Pradhan, Chinese Phys. C 46, 115103 (2022).
+ [65] J. Bora and U. D. Goswami, Physics of the Dark Universe 38, 101132 (2022).
+ [66] J. B. Hartle, Astrophys. J. 150, 1005 (1967).
+ [67] J. Barrientos O. and G. F. Rubilar, Phys. Rev. D 90, 028501 (2014).
+ [68] K. V. Staykov, D. D. Doneva, S. S. Yazadjiev, and K. D. Kokkotas, JCAP 2014, 006 (2014).
+ [69] N. K. Glendenning, Compact Stars: Nuclear Physics, Particle Physics, and General Relativity, 2nd ed. (Astron. Astrophys. Library, Springer, New York, 2000).
+ [70] G. Abellán, E. Fuenmayor, and L. Herrera, Physics of the Dark Universe 28, 100549 (2020).
+ [71] G. Abell´an, E. Fuenmayor, E. Contreras, and L. Herrera,
1194
+ Physics of the Dark Universe 30, 100632 (2020).
1195
+ [72] B. P. Abbott et al., Phys. Rev. Lett. 119, 161101 (2017).
1196
+ [73] F. Douchin and P. Haensel, A&A 380, 151 (2001).
1197
+ [74] P. Haensel and A. Y. Potekhin, A&A 428, 191 (2004).
1198
+ [75] D. D. Doneva and S. S. Yazadjiev, Phys. Rev. D 85,
1199
+ 124023 (2012).
1200
+ [76] K. Yagi and N. Yunes, Phys. Rev. D 91, 123008 (2015).
1201
+ [77] P. B. Demorest et al., Nature 467, 1081 (2010).
1202
+ [78] H. T. Cromartie et al., Nature Astronomy 4, 72 (2020).
1203
+ [79] M. C. Miller et al., Astrophys. J. Lett. 918, L28 (2021).
1204
+
CNE1T4oBgHgl3EQfDwNk/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
D9AzT4oBgHgl3EQfif2Z/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:09a6d7a73d7d24e71df2ae98215ca17bfaf3d88973d5cf727d5064e960ba23bc
3
+ size 61060
DNE2T4oBgHgl3EQfoQhP/content/tmp_files/2301.04016v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
DNE2T4oBgHgl3EQfoQhP/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
EtE1T4oBgHgl3EQfEgOL/content/tmp_files/2301.02891v1.pdf.txt ADDED
@@ -0,0 +1,1149 @@
1
+ Geometric quantum discord and coherence in a dipolar interacting magnetic system
2
+ Clebson Cruz,1, ∗ Maron F. Anka,2, † Hamid-Reza Rastegar-Sedehi,3, ‡ and Cleidson Castro4, §
3
+ 1Grupo de Informa¸c˜ao Quˆantica e F´ısica Estat´ıstica, Centro de Ciˆencias Exatas e das Tecnologias,
4
+ Universidade Federal do Oeste da Bahia - Campus Reitor Edgard Santos. Rua Bertioga,
5
+ 892, Morada Nobre I, 47810-059 Barreiras, Bahia, Brasil.
6
+ 2Instituto de F´ısica, Universidade Federal Fluminense,
7
+ Av. Gal. Milton Tavares de Souza s/n, 24210-346 Niter´oi, Rio de Janeiro, Brasil.
8
+ 3Department of Physics, College of Sciences, Jahrom University, Jahrom 74135-111, Iran
9
+ 4Centro de Forma¸c˜ao de Professores, Universidade Federal do Recˆoncavo da Bahia,
10
+ Avenida Nestor de Mello Pita, 535 Amargosa, Bahia, Brazil.
11
+ (Dated: January 10, 2023)
12
+ The study of low-dimensional metal complexes has revealed fascinating characteristics regarding the ground-
13
+ state crossover shown by spin-gaped systems. In this context, this work explores the effect of the quantum-level
14
+ crossing, induced by the magnetic anisotropies of dipolar interacting systems, on the quantum discord and
15
+ coherence of the system. The analytical expressions for the quantum discord, based on Schatten 1-norm, and
16
+ the l1 trace-norm quantum coherence for dinuclear spin-1/2 systems, are provided in terms of the magnetic
17
+ anisotropies. The results show that, while the quantum discord has a clear signature of the quantum level-
18
+ crossing, the basis dependence of the quantum coherence hides the crossover regarding the measured basis. In
19
+ addition, the global quantum coherence is wholly stored within the correlations of the system, regardless of its
20
+ reference basis.
21
+ Keywords: Dipolar Interaction; Quantum discord; Quantum Coherence; Quantum-level crossing.
22
+ I.
23
+ INTRODUCTION
24
+ The study of the quantum properties of composite systems has led to a revolution in the development of emerging quantum
25
+ technologies [1–3]. The new generation of quantum devices explores physical properties associated with quantum correlations
26
+ between particles [4, 5] and superposition principle for the system states [1, 6, 7]. In this scenario, the characterization of the
27
+ quantumness of the physical systems is of paramount importance since the existence of quantum correlations and coherence are
28
+ a valuable resource for several quantum tasks [4, 8, 9].
29
+ However, the characterization of quantum correlations is a rather complicated task from the theoretical [10] and experimental
30
+ [11] point of view. This scenario is aggravated in Condensed Matter systems, where the number of interacting components in
31
+ the system is usually on the order of the Avogadro number [12]. Nevertheless, there are a few exceptions, like low-dimensional
32
+ metal complexes (LDMC), for which full knowledge about their quantum properties can be obtained through the corresponding
33
+ analytical solutions [4, 6, 13–20]. In such solid-state systems, intra-molecular interactions are strong enough to suppress ex-
34
+ trinsic and intermolecular interactions [4, 13, 14, 21]. Therefore, their quantum features exhibit high stability against external
35
+ perturbations such as temperatures [4, 13, 14, 16, 21], magnetic fields [4, 6, 16, 22], and pressures [6, 15]. These characteris-
36
+ tics make these systems promising platforms for the development of emerging quantum technologies [4, 23–26]. In regard to
37
+ these possible applications, the study of dipolar interacting magnetic systems has received considerable attention in the quantum
38
+ information literature [17, 27–31].
39
+ In this work, we present a theoretical study of quantum correlations and coherence for a dipolar interacting magnetic system,
40
+ exploring the effects of magnetic anisotropies on the quantumness of the system. As a result, this study provides the literature with
41
+ analytical expressions, in terms of the magnetic anisotropies, for the quantum discord, based on Schatten 1-norm, and the l1
42
+ trace-norm quantum coherence written in an arbitrary basis, defined by the co-latitude and longitude angles of the Bloch sphere
43
+ representation. According to the findings, the behavior of the quantum discord carries a noteworthy signature of the quantum
44
+ level-crossing, caused by population changes resulting from the alteration of Boltzmann weights arising from the change of the
45
+ magnetic anisotropies of the system. On the other hand, the basis dependency of quantum coherence is detrimental in terms of
46
+ recognizing this crossover. In this regard, the measurement of the average quantum coherence is numerically obtained in order
47
+ to obtain a basis-independent perspective for this quantum resource. The results not only demonstrate that the average coherence
48
+ ∗Electronic address: [email protected]
49
+ †Electronic address: [email protected]ff.br
50
+ ‡Electronic address: [email protected]
51
+ §Electronic address: [email protected]
52
+ arXiv:2301.02891v1 [quant-ph] 7 Jan 2023
53
+
55
+ is completely stored within the correlations of the system, yet they also demonstrate that it is possible to retrieve the signature of
56
+ the energy-level crossover present on the quantum discord measurement. Furthermore, the findings show how dipolar interaction
57
+ coupling magnetic anisotropies impact quantum correlations and coherence in a dinuclear spin-1/2 system. Thus, the dipolar
58
+ interaction model is a viable foundation for quantum technologies based on quantum discord and coherence.
59
+ II.
60
+ DINUCLEAR METAL COMPLEX WITH DIPOLAR INTERACTION
61
+ The class of dinuclear metal complexes undergoes several types of magnetic coupling [12]. Among these are Heisenberg
62
+ exchange, which is isotropic under rotations in spin space [4, 6, 20, 21], and Dzyaloshinskii-Moriya (DM) interaction [32–
63
+ 34], which accounts for weak ferromagnetism in some antiferromagnetic materials [12]. A ubiquitous example of anisotropic
64
+ coupling in LDMCs is the dipolar interaction [17, 27–31]. This coupling arises from the influence of a magnetic field yielded
65
+ by one of the magnetic moments in the other ones [12]. In particular, for dinuclear metal complexes, the Hamiltonian which
66
+ describes this interaction is given by:
67
+ H = −1
68
+ 3
69
+ ⃗S T
70
+ A ·
71
+ ↔D · ⃗S B ,
72
+ (1)
73
+ where ⃗S j = {S x
74
+ j, S y
75
+ j, S z
76
+ j} are the spin operators and
77
+ ↔D = diag(∆ − 3ϵ, ∆ + 3ϵ, −2∆) is a diagonal tensor, with ϵ and ∆ being the
78
+ rhombic and axial parameters, respectively, related to the magnetic anisotropies in the dipolar model [12]. In particular, ∆ is
79
+ related to the zero-field splitting of the energy levels [35]. Considering the Hamiltonian, Eq. (1), written in the S^z eigenbasis,
+ ∆ > 0 indicates that the spins lie along the z-axis, while ∆ < 0 indicates that the spins lie in the x − y plane.
+ Considering a dinuclear metal complex with a d9 electronic configuration, Eq. (1) describes two coupled spin-1/2 particles
+ in the corresponding S^z eigenbasis {|00⟩, |01⟩, |10⟩, |11⟩}:
83
+ H = (1/6) [ ∆, 0, 0, 3ϵ ; 0, −∆, −∆, 0 ; 0, −∆, −∆, 0 ; 3ϵ, 0, 0, ∆ ] ,   (2)
95
+ The energy levels of the system, in terms of the coupling parameters, are
+ E1 = 0 ,   E2 = −2∆/6 ,   E3 = (∆ + 3ϵ)/6 ,   E4 = (∆ − 3ϵ)/6 .   (3)
101
+ From the thermal equilibrium, the density matrix for the coupled system is described by the Gibbs form ρAB = Z−1e−H/kBT,
102
+ where
103
+ Z = Tr(e^{−H/kBT}) = 2 e^{β∆/6} cosh(β∆/6) + 2 e^{−β∆/6} cosh(βϵ/2)   (4)
113
+ is the canonical partition function, with kB representing Boltzmann's constant and β = 1/kBT. Thus, the dinuclear density matrix at sites
114
+ labeled by A and B can be written in the S z eigenbasis as the so-called X-shaped mixed state
115
+ ρAB = (e^{−β∆/6}/Z) [ cosh(βϵ/2), 0, 0, −sinh(βϵ/2) ;
+ 0, e^{β∆/3} cosh(β∆/6), e^{β∆/3} sinh(β∆/6), 0 ;
+ 0, e^{β∆/3} sinh(β∆/6), e^{β∆/3} cosh(β∆/6), 0 ;
+ −sinh(βϵ/2), 0, 0, cosh(βϵ/2) ] .   (5)
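As a numerical cross-check of Eqs. (2), (4), and (5), the thermal state can be built directly; the snippet below is our own sketch (not code from the paper), with kB = 1 and ∆, ϵ given in units of kBT.

```python
# Numerical sketch (not from the paper): build the dipolar Hamiltonian of
# Eq. (2), form the Gibbs state of Eq. (5), and check the closed form of Z.
import numpy as np

def hamiltonian(delta, eps):
    """Two-spin dipolar Hamiltonian, Eq. (2), in the {|00>,|01>,|10>,|11>} basis."""
    return (1 / 6) * np.array([[delta, 0, 0, 3 * eps],
                               [0, -delta, -delta, 0],
                               [0, -delta, -delta, 0],
                               [3 * eps, 0, 0, delta]])

def gibbs_state(delta, eps, beta):
    """rho = exp(-beta H)/Z via the spectral decomposition of H."""
    w, v = np.linalg.eigh(hamiltonian(delta, eps))
    boltz = np.exp(-beta * w)
    return (v * boltz) @ v.T / boltz.sum(), boltz.sum()

beta, delta, eps = 1.0, 5.0, 2.0  # illustrative values (an assumption)
rho, Z = gibbs_state(delta, eps, beta)
Z_closed = (2 * np.exp(beta * delta / 6) * np.cosh(beta * delta / 6)
            + 2 * np.exp(-beta * delta / 6) * np.cosh(beta * eps / 2))
print(np.isclose(Z, Z_closed), np.isclose(np.trace(rho), 1.0))  # prints: True True
```

The agreement of the numerical trace with the closed form of Eq. (4) confirms that the eigenvalues of Eq. (2) are 0, −∆/3, and (∆ ± 3ϵ)/6.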
162
+ The density matrix eigenvalues (populations) and their corresponding eigenvectors can be written as:
+ PΨ− = 1 / [ 1 + e^{β∆/3} + 2 e^{−β∆/6} cosh(βϵ/2) ]  → |Ψ−⟩ ,   (6)
+ PΨ+ = 1 / [ 1 + e^{−β∆/3} + 2 e^{−β∆/2} cosh(βϵ/2) ]  → |Ψ+⟩ ,   (7)
+ PΦ+ = 1 / [ 1 + e^{βϵ} + e^{β(∆+ϵ)/2} + e^{β(∆+3ϵ)/6} ]  → |Φ+⟩ ,   (8)
+ PΦ− = e^{βϵ} / [ 1 + e^{βϵ} + e^{β(∆+ϵ)/2} + e^{β(∆+3ϵ)/6} ]  → |Φ−⟩ ,   (9)
194
+
196
+ where
197
+ |Ψ±⟩ = (|01⟩ ± |10⟩)/√2 ,   |Φ±⟩ = (|00⟩ ± |11⟩)/√2   (10)
206
+ are the so-called Bell states, which represent the maximally entangled states for a bipartite system [36].
207
+ The study of LDMC has attracted the attention of both theoretical and experimental condensed matter physics communities
208
+ due to the fascinating properties of their ground states [6, 37]. In the presence of an external magnetic field, these systems
+ typically show a quantum level-crossing between the ground state and the first excited state when the field reaches a critical value,
+ since the external field splits the energy levels and changes their corresponding populations. However, since the dipolar
211
+ interaction arises from the influence of the magnetic field created by one of the magnetic moments in the other, the splitting in
212
+ energy levels is ruled by the axial (∆) and rhombic (ϵ) parameters, as can be seen in Eq. (3). In this regard, Fig. 1 shows the
213
+ populations as a function of the ratio between the magnetic anisotropies and the energy scale factor kBT.
214
+ [Figure 1: four panels of population curves (PΨ−, PΨ+, PΦ+, PΦ−) versus ∆/kBT and ϵ/kBT, with an inset of the energy levels; details in the caption below.]
278
+ FIG. 1: (Color online) Populations, Eqs. (6)–(9), as a function of the ratio between the magnetic anisotropies and the energy scale factor kBT.
279
+ (a) Axial dependence considering the rhombic parameter ϵ/kB = 5 K (left) and ϵ/kB = −5 K (right). (b) Rhombic dependence considering the
280
+ axial parameter ∆/kB = 5 K (left) and ∆/kB = −5 K (right). The inset shows the magnetic anisotropy dependence on the energy levels.
281
+ As can be seen, in agreement with Eq. (3), when the spins are in the x − y plane (∆ < 0) with positive rhombic parameter
+ (ϵ > 0), the ground state is |Φ−⟩ (with population PΦ−), and there is no quantum level crossing. Thus, the
+ system remains in the ground state. However, changing the sign of the rhombic parameter induces a quantum level crossing
+ between the states |Φ−⟩ and |Φ+⟩ (with population PΦ+). Moreover, for the spins oriented along the z-axis (∆ > 0), it is possible to
+ observe a quantum level crossing between the state |Φ−⟩ (if ϵ > 0) or |Φ+⟩ (if ϵ < 0) and the state |Ψ+⟩ (with population PΨ+) by
286
+ increasing the ratio ∆/kBT to the critical point ∆ = |ϵ|.
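The crossover just described can be checked numerically from the Boltzmann weights; the helper below is our own illustrative sketch (parameter values are assumptions), using the eigenenergies of Eq. (2) with kB = 1.

```python
# Illustrative sketch (our own, not the authors' code): Boltzmann populations
# of the Bell-state eigenbasis, Eqs. (6)-(9), and the ground-state change
# across the critical point Delta = |eps| (here for Delta > 0, eps > 0).
import numpy as np

def populations(delta, eps, beta=1.0):
    # Eigenenergies of Eq. (2): E(Psi-) = 0, E(Psi+) = -Delta/3,
    # E(Phi+/-) = (Delta +/- 3*eps)/6.
    energies = {'Psi-': 0.0, 'Psi+': -delta / 3,
                'Phi+': (delta + 3 * eps) / 6, 'Phi-': (delta - 3 * eps) / 6}
    weights = {k: np.exp(-beta * e) for k, e in energies.items()}
    Z = sum(weights.values())
    return {k: w / Z for k, w in weights.items()}

eps = 2.0
below = populations(delta=1.0, eps=eps)    # Delta < |eps|: |Phi-> dominates
above = populations(delta=10.0, eps=eps)   # Delta > |eps|: |Psi+> dominates
print(max(below, key=below.get), max(above, key=above.get))  # prints: Phi- Psi+
```

Sweeping ∆ through |ϵ| swaps the most populated level, which is the level crossing highlighted in Fig. 1.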
287
+ In reference [29], the authors study the effect of the magnetic anisotropies, described by the axial and rhombic parameters,
288
+ on the nonlocal correlations of a dipolar interacting system of two spins-1/2, identified by the Peres-Horodecki separability
289
+ criterion [38, 39]. In addition, they explore the change in the ground state on the thermal entanglement for the teleportation
290
+ process. However, although quantum entanglement provides one path toward the characterization of nonlocal correlations, it
291
+ does not encompass all quantum correlations in the system [21, 40–51]. Therefore, in order to expand this result, the following
292
+ section presents a study of the quantum correlations and coherence described by the Schatten 1-norm geometric quantum discord
293
+ and the l1 trace-norm quantum coherence.
294
+
295
354
+ III.
355
+ QUANTUM DISCORD
356
+ Quantum discord has been defined as a measurement of the quantumness of correlations in a quantum system. It was first
+ introduced as an entropic measurement of genuinely quantum correlations in a quantum state, defined as the difference between
+ the total and the classical correlation [40], Q(ρAB) = I(ρA : ρB) − C(ρAB), where I(ρA : ρB) = S(ρA) + S(ρB) − S(ρAB) represents
+ the mutual information between the subsystems A and B, and C(ρAB) is the classical correlation of the composite system ρAB,
+ defined as C(ρAB) = max_{Bk} [ S(ρA) − Σ_k p_k S(ρ_k) ], with the maximization taken over positive operator-valued measurements
+ (POVMs) {Bk} performed locally only on subsystem B. However, this analytical maximization over POVMs is an arduous task
364
+ even for a two-qubit system [10, 11, 40, 41, 47, 51]. In this scenario, the class of entropic measurements of correlations, such as
365
+ the entropic quantum discord, is defined as nondeterministic polynomial time (NP-complete) problems [10]. Consequently, only
366
+ a few results for the analytical expression of entropic quantum discord, and only for certain classes of states are exact solutions
367
+ known [41, 43, 45, 46, 49, 50, 52]. Due to this fact, alternative measurements of quantum correlations have been proposed
368
+ [41, 43, 45–50, 52–60], especially quantifiers based on geometric arguments [47–49, 53, 57–60].
369
+ Geometric approaches are widely used to characterize and quantify quantum resources in a wide variety of quantum systems
370
+ [61]. In particular, the Schatten 1-norm quantum discord [4, 21, 47, 48, 62], is a reliable geometric-based quantifier of the
371
+ amount of quantum correlations in metal complexes [4, 21, 62, 63]. The so-called geometric quantum discord can be defined in
372
+ terms of the minimal distance between a set ω of closest classical-quantum states ρc [21, 47, 48], given by:
373
+ ρc = Σ_k p_k Π_k^{(A)} ⊗ ρ_k^{(B)} ,   (11)
+ where 0 ≤ p_k ≤ 1 and Σ_k p_k = 1; {Π_k^{(A)}} defines a set of orthogonal projectors for a given subsystem A, and ρ_k^{(B)} is the reduced
386
+ density matrix for the subsystem B [47, 48]. Therefore, the geometric quantum discord can be expressed as
387
+ QG(ρAB) = min_ω ∥ρAB − ρc∥ ,   (12)
+ where ∥M∥ = Tr[√(M†M)] is the so-called 1-norm, and ρAB is the given quantum state at thermal equilibrium, Eq. (5).
395
+ Therefore, considering the given dinuclear magnetic system of spins-1/2 in a quantum spin-lattice, ruled by a dipolar Hamil-
396
+ tonian H, Eq.(1), the invariance under π rotation around a given spin axis (Z2 symmetry) [12, 46] allow us to compute the
397
+ geometric quantum discord, based on Schatten 1-norm, for the two-qubit X state, Eq. (5), as [53, 64]
398
+ QG(ρAB) = (1/2) √[ ( φ1² max{φ2², φ3²} − φ2² min{φ1², φ3²} ) / ( max{φ2², φ3²} − min{φ1², φ3²} + φ1² − φ2² ) ] ,   (13)
+ where
+ φ1 = [ e^{β∆/6} |e^{β∆/3} − 1| + 2 |sinh(βϵ/2)| ] / | 2 cosh(βϵ/2) + e^{β∆/6} + e^{β∆/2} | ,   (14)
+ φ2 = [ e^{β∆/6} |e^{β∆/3} − 1| − 2 |sinh(βϵ/2)| ] / | 2 cosh(βϵ/2) + e^{β∆/6} + e^{β∆/2} | ,   (15)
+ φ3 = 2 / [ e^{β∆/3} cosh(β∆/6) sech(βϵ/2) + 1 ] − 1 .   (16)
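Since both marginals of Eq. (5) are maximally mixed, the thermal state is Bell-diagonal, and for such states the Schatten 1-norm discord reduces to half the intermediate value among |c1|, |c2|, |c3|, where c_i = Tr[ρ(σ_i ⊗ σ_i)]. The sketch below is our own numerical check (parameter values are assumptions), evaluating the discord directly from the thermal state.

```python
# Our own numerical sketch of Eq. (13): for this Bell-diagonal thermal state
# the Schatten 1-norm discord equals half the intermediate of |c1|,|c2|,|c3|,
# the correlation-matrix elements c_i = Tr[rho (sigma_i x sigma_i)].
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def thermal_state(delta, eps, beta):
    H = (1 / 6) * np.array([[delta, 0, 0, 3 * eps],
                            [0, -delta, -delta, 0],
                            [0, -delta, -delta, 0],
                            [3 * eps, 0, 0, delta]])
    w, v = np.linalg.eigh(H)
    boltz = np.exp(-beta * w)
    return (v * boltz) @ v.T / boltz.sum()

def geometric_discord(delta, eps, beta):
    rho = thermal_state(delta, eps, beta)
    cs = sorted(abs(np.trace(rho @ np.kron(s, s)).real) for s in (sx, sy, sz))
    return 0.5 * cs[1]  # half the intermediate correlation value

print(round(geometric_discord(delta=5.0, eps=2.0, beta=10.0), 3))  # prints: 0.5
```

At low temperature the state approaches a pure Bell state and the discord saturates at 1/2, in agreement with the text; at high temperature it tends to zero.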
462
+ Considering the dipolar magnetic system in thermal equilibrium described by Eq. (5), it is possible to examine how the
463
+ magnetic anisotropies, represented by the axial (∆) and rhombic (ϵ) coupling parameters, affects the thermal quantum discord
464
+ in the system. Fig. 2 shows the geometric quantum discord, based on Schatten 1-norm, Eq. (13), as a function of the ratio
465
+ ∆/kBT and ϵ/kBT. As expected, the quantum discord reaches its maximum (saturated) value of 1/2 as T approaches zero. As
466
+ the temperature rises, the value of quantum discord decreases inexorably and goes to zero when T ≫ |∆| and T ≫ |ϵ|. On the
467
+ other hand, for spins in the x − y plane (∆ < 0), it is sufficient that T ≫ |ϵ| for the discord to reach its minimum value.
+ However, if the spins lie along the z-axis (∆ > 0), one can increase the quantum discord by increasing the axial parameter ∆ even
469
+ when T ≫ |ϵ|.
470
+ Furthermore, regarding the magnetic anisotropies, the quantum discord presents a signature of the quantum level crossing in
471
+ the dipolar interacting system, highlighted on the solid white line in Fig. 2. Considering the spins oriented in the z-axis (∆ > 0),
472
+ the zero-field splitting leads the system to a quantum level crossing in the critical boundary ∆ = |ϵ|, where it is possible to
473
+ detect a crossover between the states |Ψ+⟩ and |Φ−⟩, if ϵ > 0, or |Φ+⟩, if ϵ < 0. Moreover, for the spins oriented in the x − y
474
+
476
+ plane (∆ < 0), it is possible to observe a quantum level crossing between the states |Φ−⟩ and |Φ+⟩ at the critical boundary ϵ = 0.
+ On the other hand, the degree of quantum discord in the system can be increased by moving QG(ρAB) up its gradient,
+ perpendicularly to the crossing boundary, which occurs for values in which |ϵ| ≫ kBT (for ∆ < 0), ∆ ≫ kBT (for |ϵ| ≪ ∆), and |ϵ| ≫ ∆,
+ corresponding to the lightest region in Fig. 2. Therefore, by controlling the axial (∆) and rhombic (ϵ) anisotropies it is
+ possible to manage the degree of quantum discord in the dipolar interacting system.
481
+ In addition, in order to compare quantum discord to the level of entanglement in the system under investigation, we use the
482
+ concurrence measure. Typically, concurrence is used to assess entanglement in bipartite systems, and it can be easily computed
483
+ for any two-qubit system. The thermal concurrence examines the resemblance between the considered quantum state in thermal
484
+ equilibrium and its spin-flipped density matrix, ρ̄ = ρAB (σy ⊗ σy) ρ*AB (σy ⊗ σy). In particular, for the X-shaped density matrix, Eq.
486
+ (5), the concurrence is analytically defined as
487
+ C(ρAB) := max{0, A, B},
488
+ (17)
489
+ where
490
+ A = (e^{−β∆/6}/Z) [ e^{β∆/3} |sinh(β∆/6)| − cosh(βϵ/2) ] ,   (18)
+ B = (e^{−β∆/6}/Z) [ |sinh(βϵ/2)| − e^{β∆/3} cosh(β∆/6) ] .   (19)
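For an X-shaped state, the concurrence also has the standard closed form C = 2 max{0, |ρ23| − √(ρ11 ρ44), |ρ14| − √(ρ22 ρ33)}, equivalent to Eqs. (17)-(19) up to normalization; the sketch below (our own, with assumed parameter values) applies it to the thermal state of Eq. (5).

```python
# Our own sketch: standard X-state concurrence applied to the thermal state;
# consistent with Eqs. (17)-(19) up to a factor-of-2 normalization.
import numpy as np

def thermal_state(delta, eps, beta):
    H = (1 / 6) * np.array([[delta, 0, 0, 3 * eps],
                            [0, -delta, -delta, 0],
                            [0, -delta, -delta, 0],
                            [3 * eps, 0, 0, delta]])
    w, v = np.linalg.eigh(H)
    boltz = np.exp(-beta * w)
    return (v * boltz) @ v.T / boltz.sum()

def concurrence(rho):
    a = abs(rho[1, 2]) - np.sqrt(rho[0, 0] * rho[3, 3])
    b = abs(rho[0, 3]) - np.sqrt(rho[1, 1] * rho[2, 2])
    return max(0.0, 2 * a, 2 * b)

print(round(concurrence(thermal_state(5.0, 2.0, beta=10.0)), 3))  # near-pure Bell state
print(round(concurrence(thermal_state(5.0, 2.0, beta=0.01)), 3))  # separable at high T
```

The first case is close to maximal entanglement, while the high-temperature state falls inside the C(ρAB) = 0 region discussed above, where only the discord survives.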
518
+ The dashed green line in Fig. 2 denotes the boundary given by C(ρAB) = 0. Inside this region, the concurrence is zero, and the
+ state of the system is separable. However, within the region where entanglement is absent, the quantum discord of the system
+ is still considerably greater than zero, ensuring the presence of quantum-correlated states even when the system is in a separable
+ state. On the other hand, at low temperatures, both the entanglement and the quantum discord vanish at the
+ quantum level-crossing boundary. In this scenario, the existence or absence of entanglement and,
523
+ therefore, quantum correlations, is dependent on its ground state, which might vary in response to magnetic anisotropies. Thus,
524
+ the variation of Boltzmann’s weights, Eqs. (6)-(9), associated with the occupancy of the energy levels, is the physical mechanism
525
+ responsible for the abrupt change in the quantum correlations near the energy-level crossover.
526
+ [Figure 2: density plot over the axes ϵ/kBT and ∆/kBT, with ground-state regions labeled PΨ+, PΦ+, and PΦ−; details in the caption below.]
533
+ FIG. 2: (Color online) Quantum Discord, based on Schatten 1-norm, for a dipolar interacting magnetic system, Eq. (13), as a function of
534
+ the ratios ∆/kBT and ϵ/kBT. The solid white line denotes the boundary between the quantum level crossings. The dashed green line is the
535
+ boundary given by the concurrence, Eq. (17), C(ρAB) = 0, inside which the entanglement of the system is absent.
536
+ IV.
537
+ QUANTUM COHERENCE
538
+ Similar to the approach proposed for the entanglement theory, where the quantum entanglement can be characterized by the
539
+ distance between a state of interest (ρ) and a set of states closed under local operations and classical communication (separable
540
+ states) [38, 61, 65], Baumgratz et al. [65] provided the mathematical tools for quantifying the amount of quantum coherence
541
+ in a quantum system. Considering a d-dimensional Hilbert space, quantum coherence can be obtained from the minimal value
542
+
544
+ of a distance measurement D(ρ, σ) between the considered quantum state ρ and a set {σ = Σ_{k=1}^{d} c_k |k⟩⟨k| ∈ I} of incoherent states,
546
+ where the reference basis {|k⟩}{k=1,...,d} can be adequately defined considering the physics of the problem under investigation or
547
+ the task that requires this quantum resource [6, 65, 66]. In this scenario, since the non-vanishing off-diagonal terms of the
548
+ density operator ρ, which characterizes the quantum state of the system of interest, constitute the superposition from the chosen
549
+ reference basis [61, 65], the authors established a reliable measurement of quantum coherence through the l1 trace norm as [65]
550
+ C_{l1} = min_{σ∈I} ∥ρ − σ∥_{l1} = Σ_{i≠j} |⟨i|ρ|j⟩| .   (20)
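Equation (20) is a direct sum over the moduli of the off-diagonal elements; a minimal sketch (our own, in the S^z eigenbasis):

```python
# Minimal sketch of Eq. (20) (our own): l1-norm coherence as the sum of the
# moduli of the off-diagonal elements of rho in the chosen reference basis.
import numpy as np

def l1_coherence(rho):
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

psi_plus = np.array([0, 1, 1, 0]) / np.sqrt(2)   # Bell state |Psi+>
rho_bell = np.outer(psi_plus, psi_plus)
print(round(l1_coherence(rho_bell), 10))   # prints: 1.0 (coherent superposition)
print(round(l1_coherence(np.eye(4) / 4), 10))  # prints: 0.0 (incoherent mixture)
```

The Bell state carries coherence in the S^z basis, while the maximally mixed state is incoherent in every basis.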
556
+ Since coherence is a quantity that is reliant on the basis on which it is measured, it is essential to choose a reference basis for
557
+ the system within a metrology setting [61, 65]. In this scenario, the basis of an arbitrary quantum state can be altered by means
558
+ of unitary operations [36, 61]. In particular, for two-level systems such as spin-1/2, any reference basis can be obtained from the
559
+ unitary transformation
560
+ U(θ, φ) = [ cos(θ/2), −e^{iφ} sin(θ/2) ; e^{−iφ} sin(θ/2), cos(θ/2) ] ,   (21)
580
+ where the θ and φ angles are the spherical equivalents of the co-latitude with respect to the z-axis, and the longitude concerning
581
+ the x-axis in a Bloch sphere representation, respectively [36, 67]. In this regard, the unitary transformation for the bipartite state
582
+ given by Eq. (5) is ρ^{θ,φ}_AB = ÛAB(θ, φ) ρAB Û†AB(θ, φ) [67], where
584
+ ÛAB(θ, φ) = U(θ, φ) ⊗ U(θ, φ) =
+ [ cos²(θ/2), −e^{iφ} sin(θ/2) cos(θ/2), −e^{iφ} sin(θ/2) cos(θ/2), e^{2iφ} sin²(θ/2) ;
+ e^{−iφ} sin(θ/2) cos(θ/2), cos²(θ/2), −sin²(θ/2), −e^{iφ} sin(θ/2) cos(θ/2) ;
+ e^{−iφ} sin(θ/2) cos(θ/2), −sin²(θ/2), cos²(θ/2), −e^{iφ} sin(θ/2) cos(θ/2) ;
+ e^{−2iφ} sin²(θ/2), e^{−iφ} sin(θ/2) cos(θ/2), e^{−iφ} sin(θ/2) cos(θ/2), cos²(θ/2) ]   (22)
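Equation (22) is simply the Kronecker product of two copies of Eq. (21); a short sketch (our own) that also verifies unitarity:

```python
# Sketch (our own): Eq. (22) as the Kronecker product U(theta,phi) x U(theta,phi).
import numpy as np

def U(theta, phi):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -np.exp(1j * phi) * s],
                     [np.exp(-1j * phi) * s, c]])

def U_AB(theta, phi):
    u = U(theta, phi)
    return np.kron(u, u)

u2 = U_AB(0.7, 1.3)  # arbitrary co-latitude and longitude angles (assumed values)
print(np.allclose(u2 @ u2.conj().T, np.eye(4)))  # prints: True (unitary)
```

Because ÛAB is unitary for any {θ, φ}, the rotated state ρ^{θ,φ}_AB remains a valid density matrix with the same eigenvalues.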
676
+ By varying the co-latitude and longitude angles {θ, φ}, one can obtain the bipartite state ρAB, Eq. (5), in any reference basis.
677
+ Using the unitary transformation for the bipartite states, Eq. (22), in Eq. (5), one can obtain the representation of the density
678
+ operator for the dipolar interacting magnetic system of two spins-1/2 written in an arbitrary basis as
679
+ ρ^{θ,φ}_AB = (e^{−β∆/6}/4Z) [ ϱ11, ϱ12, ϱ12, ϱ14 ;
+ ϱ*12, ϱ22, ϱ23, −ϱ12 ;
+ ϱ*12, ϱ23, ϱ22, −ϱ12 ;
+ ϱ*14, −ϱ*12, −ϱ*12, ϱ11 ] ,   (23)
706
+ where
707
+ ϱ11 = 2 sin²(θ) [ sinh(β∆/2) + cosh(β∆/2) − sinh(βϵ/2) cos(2φ) ] + cosh(βϵ/2) [ cos(2θ) + 3 ] ,   (24)
+ ϱ12 = e^{−3iφ} sin(θ) { 2 e^{2iφ} cos(θ) [ cosh(βϵ/2) − e^{β∆/2} ] + e^{4iφ} sinh(βϵ/2) [ cos(θ) + 1 ] + sinh(βϵ/2) [ cos(θ) − 1 ] } ,   (25)
+ ϱ14 = 2 e^{−2iφ} sin²(θ) [ cosh(βϵ/2) − e^{β∆/2} ] − 4 e^{−4iφ} sinh(βϵ/2) sin⁴(θ/2) − 4 sinh(βϵ/2) cos⁴(θ/2) ,   (26)
+ ϱ22 = 2 { e^{β∆/2} cos²(θ) + e^{β∆/6} + sin²(θ) [ sinh(βϵ/2) cos(2φ) + cosh(βϵ/2) ] } ,   (27)
+ ϱ23 = 2 e^{β∆/2} cos²(θ) − 2 e^{β∆/6} + 2 sin²(θ) [ sinh(βϵ/2) cos(2φ) + cosh(βϵ/2) ] .   (28)
799
+ The diagonal entries of Eq. (23) are real and the trace is equal to 1. In addition, hermiticity restricts the off-diagonal elements
+ to two independent complex numbers, ϱ12 and ϱ14, with ϱij equal to the complex conjugate of ϱji, which ensures real eigenvalues.
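These consistency requirements can be checked numerically. The sketch below is our own illustration, not part of the paper: it assembles the matrix of Eq. (23) from the entries of Eqs. (24)-(28), leaving out the e−β∆/6/4Z prefactor (so the trace fixes the normalization instead of being 1), and verifies that the result is Hermitian and that its trace is independent of {θ, φ}, as it must be for a unitary change of basis.

```python
import math, cmath

def entries(beta, D, eps, th, ph):
    """Matrix elements of Eqs. (24)-(28); D plays the role of Delta, eps of the rhombic parameter."""
    sh, ch, e = math.sinh, math.cosh, math.exp
    bD2, bD6, be2 = beta * D / 2, beta * D / 6, beta * eps / 2
    r11 = 2 * math.sin(th)**2 * (sh(bD2) + ch(bD2) - sh(be2) * math.cos(2 * ph)) \
          + ch(be2) * (math.cos(2 * th) + 3)
    r12 = cmath.exp(-3j * ph) * math.sin(th) * (
          2 * cmath.exp(2j * ph) * math.cos(th) * (ch(be2) - e(bD2))
          + cmath.exp(4j * ph) * sh(be2) * (math.cos(th) + 1)
          + sh(be2) * (math.cos(th) - 1))
    r14 = 2 * cmath.exp(-2j * ph) * math.sin(th)**2 * (ch(be2) - e(bD2)) \
          - 4 * cmath.exp(-4j * ph) * sh(be2) * math.sin(th / 2)**4 \
          - 4 * sh(be2) * math.cos(th / 2)**4
    r22 = 2 * (e(bD2) * math.cos(th)**2 + e(bD6)
               + math.sin(th)**2 * (sh(be2) * math.cos(2 * ph) + ch(be2)))
    r23 = 2 * e(bD2) * math.cos(th)**2 - 2 * e(bD6) \
          + 2 * math.sin(th)**2 * (sh(be2) * math.cos(2 * ph) + ch(be2))
    return r11, r12, r14, r22, r23

def rho_unnormalized(beta, D, eps, th, ph):
    """Matrix of Eq. (23) without the exp(-beta*D/6)/(4Z) prefactor."""
    r11, r12, r14, r22, r23 = entries(beta, D, eps, th, ph)
    c = lambda z: complex(z).conjugate()
    return [[r11,     r12,     r12,     r14],
            [c(r12),  r22,     r23,    -r12],
            [c(r12),  r23,     r22,    -r12],
            [c(r14), -c(r12), -c(r12),  r11]]

beta, D, eps = 1.0, 0.7, -0.4
traces = []
for th, ph in [(0.0, 0.0), (0.3, 1.1), (math.pi / 2, 0.5), (2.1, 2.9)]:
    M = rho_unnormalized(beta, D, eps, th, ph)
    # Hermiticity: M[i][j] must equal conj(M[j][i])
    assert all(abs(M[i][j] - complex(M[j][i]).conjugate()) < 1e-12
               for i in range(4) for j in range(4))
    traces.append(sum(M[k][k] for k in range(4)).real)
# The trace does not depend on the chosen basis angles
assert max(traces) - min(traces) < 1e-10
```

The constant trace evaluates to 4eβ∆/2 + 4eβ∆/6 + 8 cosh(βϵ/2), which is what fixes the normalization 4Z eβ∆/6 of Eq. (23).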
801
+ Thus, from Eqs. (20) and (23), it is possible to write an analytical expression for the normalized quantum coherence in an
802
+ arbitrary basis, defined by the co-latitude and longitude angles {θ, φ}, as:
803
+ C{θ,φ}l1 = (e−β∆/6/6|Z|) ( 4|ϱ12| + |ϱ14| + |ϱ23| ) .   (29)
810
+ In order to examine the relationship between quantum coherence and quantum correlations, a metric known as correlated
+ coherence was recently established [67–69]. Quantum correlated coherence is a measure of coherence in which all local
+ components have been eliminated, i.e., all the coherence in the system is stored in the quantum correlations. For any given
+ quantum state ρ, the correlated contribution to the quantum coherence may be calculated by subtracting the local coherences
+ of the subsystems ρA = TrB(ρ) and ρB = TrA(ρ) from the overall coherence [68, 69]. Thus, the definition of correlated coherence
+ according to the l1-norm of coherence is:
818
+ Ccc(ρ{θ,φ}AB) := Cl1(ρ) − Cl1(ρA) − Cl1(ρB) .   (30)
821
+ Considering the density matrix of the dipolar interacting magnetic system written in an arbitrary basis, Eq. (23), the reduced
+ density matrices of the local subsystems are ρA = ρB = I/2, the maximally mixed state. Thus, regardless of the basis, the local
+ subsystems remain maximally mixed, since the maximally mixed state is basis invariant [36]. Consequently, the local contribution
+ to the quantum coherence in this dipolar interacting system is always null, and the global coherence of the system, Eq. (29),
+ is entirely stored in the quantum correlations of the system, regardless of the reference basis. Therefore, for any combination of
+ the co-latitude and longitude angles {θ, φ}, the unitary transformation, Eq. (22), gives a direct connection between the overall
+ and the correlated degrees of coherence.
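As a concrete illustration of Eq. (30) (our own sketch, not taken from the paper), the code below computes the normalized l1-norm coherence and the correlated coherence for a sample two-qubit X state with maximally mixed marginals; the numbers a, b, w, z are illustrative choices. Since Cl1(ρA) = Cl1(ρB) = 0, the correlated coherence coincides with the total coherence, as argued above.

```python
def cl1(rho):
    """Normalized l1-norm coherence: sum of |off-diagonal| divided by (d - 1)."""
    d = len(rho)
    return sum(abs(rho[i][j]) for i in range(d) for j in range(d) if i != j) / (d - 1)

def reduced(rho, keep):
    """Partial trace of a two-qubit state; keep='A' traces out B, keep='B' traces out A."""
    out = [[0j, 0j], [0j, 0j]]
    for m in range(2):
        for n in range(2):
            for k in range(2):
                if keep == 'A':
                    out[m][n] += rho[2 * m + k][2 * n + k]
                else:
                    out[m][n] += rho[2 * k + m][2 * k + n]
    return out

def correlated_coherence(rho):
    """Eq. (30): total coherence minus the local coherences."""
    return cl1(rho) - cl1(reduced(rho, 'A')) - cl1(reduced(rho, 'B'))

# Sample X state; 2a + 2b = 1 guarantees unit trace
a, b, w, z = 0.3, 0.2, 0.1, -0.15
rho = [[a, 0, 0, w],
       [0, b, z, 0],
       [0, z, b, 0],
       [w, 0, 0, a]]

for sub in ('A', 'B'):
    r = reduced(rho, sub)
    assert abs(r[0][0] - 0.5) < 1e-12 and abs(r[0][1]) < 1e-12  # maximally mixed marginal
assert abs(correlated_coherence(rho) - cl1(rho)) < 1e-12
print(cl1(rho))  # equals (2|w| + 2|z|)/3 for this X state
```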
828
+ A. Axial Coherence
830
+ In particular, due to the rotational symmetry of the dipolar interaction, the density matrix is invariant when both spins are rotated
+ by an angle π along any given spin axis. Thus, choosing the co-latitude angle as θ = nπ (n = {0, 1, 2, ...}), regardless of the
+ longitude angle φ, one obtains the density matrix in the X-shaped form described in Eq. (5). On the other hand, by applying the
+ unitary transformation for the bipartite states, Eq. (22), with {θ = π/2; φ = nπ} and {θ = π/2; φ = nπ/2} to Eq. (5), one obtains
+ the density matrix in the S(x) and S(y) eigenbases, respectively.
835
+ ρ{X,Y}AB = (e−β∆/6/2Z) ×
+ ⎡ eβ∆/2 + e∓βϵ/2     0                 0                 (eβ∆/2 − e∓βϵ/2) ⎤
+ ⎢ 0                  eβ∆/6 + e±βϵ/2    e±βϵ/2 − eβ∆/6    0                ⎥
+ ⎢ 0                  e±βϵ/2 − eβ∆/6    eβ∆/6 + e±βϵ/2    0                ⎥
+ ⎣ (eβ∆/2 − e∓βϵ/2)   0                 0                 eβ∆/2 + e∓βϵ/2   ⎦ .   (31)
862
+ As can be seen, due to the symmetry of the X-shaped density matrices [70], the X-structure of the operator is preserved.
863
+ Therefore, from Eqs. (5), (20) and (31), one can obtain the analytical expressions for the normalized axial quantum coherences
864
+ as
865
+ C{Z}l1 = (2/3|Z|) [ eβ∆/6 |sinh(β∆/6)| + e−β∆/6 |sinh(βϵ/2)| ] ,   (32)
+ C{X,Y}l1 = (e−β∆/6/3|Z|) [ |eβ∆/2 − e∓βϵ/2| + |eβ∆/6 − e±βϵ/2| ] .   (33)
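The axial coherences of Eqs. (32)-(33) can be evaluated directly. In the sketch below, which is our own illustration, the partition function is assumed to take the form Z = eβ∆/3 + 1 + 2e−β∆/6 cosh(βϵ/2), inferred from the unit-trace condition of Eq. (23); the original Z is defined earlier in the paper.

```python
import math

def partition(beta, D, eps):
    # Assumed form of Z, consistent with Tr(rho) = 1 for Eq. (23)
    return math.exp(beta * D / 3) + 1 + 2 * math.exp(-beta * D / 6) * math.cosh(beta * eps / 2)

def c_z(beta, D, eps):
    """Eq. (32): normalized axial coherence in the S(z) eigenbasis."""
    Z = partition(beta, D, eps)
    return 2 / (3 * abs(Z)) * (math.exp(beta * D / 6) * abs(math.sinh(beta * D / 6))
                               + math.exp(-beta * D / 6) * abs(math.sinh(beta * eps / 2)))

def c_xy(beta, D, eps, axis):
    """Eq. (33): S(x) eigenbasis takes the upper signs, S(y) the lower ones."""
    s = -1 if axis == 'x' else +1   # sign of the eps exponent in the first term
    Z = partition(beta, D, eps)
    return math.exp(-beta * D / 6) / (3 * abs(Z)) * (
        abs(math.exp(beta * D / 2) - math.exp(s * beta * eps / 2))
        + abs(math.exp(beta * D / 6) - math.exp(-s * beta * eps / 2)))

beta, D = 1.0, 0.8
# For a null rhombic parameter (eps = 0) the x and y coherences coincide
assert abs(c_xy(beta, D, 0.0, 'x') - c_xy(beta, D, 0.0, 'y')) < 1e-12
# The normalized coherences stay within [0, 1] over a range of anisotropies
for eps in (-2.0, -0.5, 0.0, 0.5, 2.0):
    for val in (c_z(beta, D, eps), c_xy(beta, D, eps, 'x'), c_xy(beta, D, eps, 'y')):
        assert 0.0 <= val <= 1.0
```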
892
+ Fig. 3 shows the axial quantum coherence in the S(i) spin eigenbasis, where i = {x, y, z}. Different from the behavior observed
+ for quantum discord (see Fig. 2), the axial quantum coherence is not sensitive to the quantum level crossing. The quantum
+ coherence in each axis {x, y, z} is minimized at only one energy-level crossover. As can be seen, considering the spins oriented
+ along the z-axis (∆ > 0), the axial coherence in the S(x) eigenbasis is minimized on the critical boundary ∆ = −ϵ (with ϵ < 0), where
+ it is possible to detect a crossover between the states |Ψ+⟩ and |Φ+⟩, while the coherence in the S(y) eigenbasis is minimized on the
+ critical boundary ∆ = ϵ (with ϵ > 0), where it is possible to detect a crossover between the states |Ψ+⟩ and |Φ−⟩ (see Fig. 2). As
+ expected from Eq. (33), if the rhombic parameter is null (ϵ = 0), C{X}l1 = C{Y}l1. On the other hand, for the spins oriented in the x−y
+ plane (∆ < 0), it is possible to observe that C{Z}l1 is minimized at the quantum level crossing between the states |Φ+⟩ and |Φ−⟩ on
+ the critical boundary ϵ = 0.
905
+ As shown in Fig. 3, the basis dependence of the quantum coherence hides the energy-level crossover in this dipolar interacting
+ system, depending on the measured basis. Therefore, the basis dependence of the quantum coherence defined by Baumgratz et al.
+ [65] can be unfavorable for recognizing the quantum level crossing caused by population changes resulting from the alteration
+ of the Boltzmann weights, Eqs. (6)-(9), arising from the change of the magnetic anisotropies of the dipolar interacting system.
909
+ B. Average Coherence
911
+ Since the coherence formulated in the quantum resource theory is a basis-dependent measurement [8, 61, 66], it is natural to
+ define a basis-independent measurement [71–75]. Recent research has shown, using relative entropies as distance measurements
+ of quantum correlations, that basis-independent measurements of entropic quantum coherence are precisely identical
914
+
915
+ [Figure 3 here: density plots (a)-(c) of the axial quantum coherence in the S(x), S(y), and S(z) eigenbases as functions of ∆/kBT and ϵ/kBT.]
969
+ FIG. 3: (Color online) Axial quantum coherence based on l1 trace norm, for a dipolar interacting magnetic system, as a function of the ratios
970
+ ∆/kBT and ϵ/kBT. The dashed white line represents the minimum value for the axial quantum coherence.
971
+ to entropic discord [74]. On the other hand, a possible basis-free measurement of quantum coherence for a quantum system
+ can be obtained from a geometrical standpoint by averaging the coherence of a state across all reference bases [71–73, 75].
+ From a theoretical point of view, this measurement corresponds to averaging the coherence on a standard basis across all equiv-
+ alent states ρ{θ,φ}AB = ÛAB(θ, φ) ρAB Û†AB(θ, φ). Therefore, as any two-qubit reference basis can be created by applying the unitary
+ operation described in Eq. (22), the average quantum coherence can be obtained from Eq. (29) as
978
+ ⟨Cl1⟩ = (1/4π) ∫0^2π ∫0^π sin(θ) C{θ,φ}l1 dθ dφ .   (34)
990
+ It is worth mentioning that these integrals are not trivial to solve, and an analytical expression for the average coherence is not
+ presented. However, Eq. (34) can be integrated numerically by any quadrature method [76]. In this scenario, Eq. (34) is estimated
+ using the Clenshaw-Curtis rule on adaptively refined subintervals of the integration area [76, 77], since such numerical integration
+ algorithms are as efficient and effective as conventional algorithms for well-behaved integrands such as Eqs. (29) and (30) [76].
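The spherical average of Eq. (34) can also be sketched with a plain midpoint rule rather than the adaptive Clenshaw-Curtis scheme used in the paper. In the illustration below, the single-qubit rotation convention, the sample X state, and the grid size are our own choices: the state is rotated by u(θ, φ) ⊗ u(θ, φ), its l1-norm coherence is evaluated in the rotated basis, and the result is averaged over the sphere with the sin(θ) weight.

```python
import cmath, math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def dagger(A):
    n = len(A)
    return [[complex(A[j][i]).conjugate() for j in range(n)] for i in range(n)]

def kron(a, b):
    return [[a[i // 2][j // 2] * b[i % 2][j % 2] for j in range(4)] for i in range(4)]

def cl1(rho):
    d = len(rho)
    return sum(abs(rho[i][j]) for i in range(d) for j in range(d) if i != j) / (d - 1)

def u(th, ph):
    # One common single-qubit basis rotation (convention assumed, not from the paper)
    c, s = math.cos(th / 2), math.sin(th / 2)
    return [[c, cmath.exp(-1j * ph) * s], [-cmath.exp(1j * ph) * s, c]]

def average_coherence(rho, n=60):
    """Midpoint-rule estimate of Eq. (34): (1/4pi) * integral of sin(th) Cl1 dth dph."""
    total = 0.0
    for i in range(n):
        th = math.pi * (i + 0.5) / n
        for j in range(2 * n):
            ph = 2 * math.pi * (j + 0.5) / (2 * n)
            U = kron(u(th, ph), u(th, ph))
            rot = matmul(matmul(U, rho), dagger(U))
            total += math.sin(th) * cl1(rot)
    return total * (math.pi / n) * (math.pi / n) / (4 * math.pi)

# Sample X state with maximally mixed marginals (illustrative numbers)
rho = [[0.3, 0, 0, 0.1], [0, 0.2, -0.15, 0], [0, -0.15, 0.2, 0], [0.1, 0, 0, 0.3]]
iden = [[0.25 if i == j else 0.0 for j in range(4)] for i in range(4)]
assert average_coherence(iden, n=8) < 1e-12   # maximally mixed: no coherence in any basis
print(average_coherence(rho, n=40))
```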
995
+ Fig. 4 shows the average quantum coherence for the dipolar interacting magnetic system. The solid white line represents the
+ threshold at which the quantum level crossing, described in the previous sections, actually occurs. As expected from Fig. 3,
+ when the temperature rises, reaching the regime kBT ≫ |∆| and kBT ≫ |ϵ|, the coherence reaches its lowest point and tends to
+ zero. However, the behavior of the average coherence is completely different from that observed in the axial (basis-dependent)
+ coherence shown in Fig. 3.
1000
+ Moreover, although unified frameworks based on relative entropic measurements have shown that basis-independent entropic
+ quantum coherence is equivalent to entropic discord [74], this is not true for this geometrical approach. However, although the
+ contour lines of the average coherence are quite different from those of the discord presented in Fig. 2, the average coherence is
+ still able to identify the signature of the energy-level crossing that was seen in the measurement of the quantum discord. This
+ result is due to the fact that the global coherence is totally stored within the correlations of the system, and its average behavior
+ is affected by the presence of genuine quantum correlations measured by the quantum discord.
1006
+ In addition, the entanglement of the system is absent within the area enclosed by the dashed green line, which denotes the
+ boundary given by the concurrence, Eq. (17), C(ρAB) = 0. Thus, as one would anticipate from the observation of the quantum
+ discord in Fig. 2, even in the absence of entanglement, the average coherence, which is completely stored in the correlations of
+ the system, is noticeably distinct from zero.
1010
+ V. CONCLUSIONS
1012
+ In summary, this paper explored the influence of magnetic anisotropies on the quantumness of a dipolar interacting magnetic
+ system via a theoretical examination of the geometric quantum discord, measured by the Schatten 1-norm, and the l1 trace-norm
+ quantum coherence. The analytical formulations for these quantum information quantifiers were obtained in terms of the magnetic
+ anisotropies. In this scenario, the effects of the dipolar coupling constants on these quantifiers are highlighted. It is demonstrated
+ that the presence of dipolar anisotropies increases the degree to which the system possesses quantum correlations and coherence.
1017
+
1018
+ 9
1019
+ 0
1020
+ 5
1021
+ 10
1022
+ 0
1023
+ 5
1024
+ 10
1025
+ - 10
1026
+ - 5
1027
+ - 10
1028
+ - 5
1029
+ Δ
1030
+ kB T
1031
+ ϵ
1032
+ kB T
1033
+ 0.1
1034
+ 0.2
1035
+ 0.3
1036
+ 0.4
1037
+ 0.5
1038
+ 0.6
1039
+ 0.7
1040
+ PΨ+
1041
+ PΦ+
1042
+ PΦ-
1043
+ FIG. 4: (Color online) Average quantum coherence based on l1 trace norm, for a dipolar interacting magnetic system, as a function of the ratios
1044
+ ∆/kBT and ϵ/kBT. The solid white line represents the boundary between the quantum level crossings. The dashed green line is the boundary
1045
+ given by the concurrence, Eq. (17), C(ρAB) = 0, inside which the entanglement of the system is absent.
1046
+ As another remarkable result, it is proved that the global coherence, expressed in an arbitrary reference basis, determined by
1047
+ the co-latitude and longitude angles of the Bloch sphere representation, is totally stored within the correlations of the system.
1048
+ Moreover, according to the results, the behavior of the quantum discord contains a notable hallmark of the quantum level
+ crossing in the system, in contrast to the basis-dependent axial quantum coherence, which hides the energy-level crossover
+ depending on the measured basis.
1051
+ Therefore, the basis dependence of the quantum coherence defined by Baumgratz et al. [65] might be deleterious in identifying
+ the level crossing owing to population changes originating from the change of the Boltzmann weights due to the modification
+ of the magnetic anisotropies of the studied system. In this regard, the average quantum coherence was numerically obtained
+ in order to gain a viewpoint independent of the reference basis, revealing that the average coherence is able to extract the
+ signature of the energy-level crossover present in the measurement of the quantum discord.
1056
+ Finally, the presented findings shed light on the ways in which the magnetic anisotropies caused by the dipolar interaction
+ coupling of a dinuclear spin-1/2 system influence quantum correlations and coherence. Therefore, the dipolar interaction model
+ is an excellent candidate platform for quantum technologies based on quantum resources such as quantum coherence and
+ quantum discord.
1060
+ ACKNOWLEDGEMENTS
1061
+ C. Cruz gratefully acknowledges Mario Reis for the valuable discussions. M. F. Anka thanks FAPERJ for financial support.
1062
+ [1] M. Mohseni, P. Read, H. Neven, S. Boixo, V. Denchev, R. Babbush, A. Fowler, V. Smelyanskiy, and J. Martinis, Nature 543, 171 (2017).
1063
+ [2] M. Atzori and R. Sessoli, Journal of the American Chemical Society 141, 11339 (2019).
1064
+ [3] I. H. Deutsch, PRX Quantum 1, 020101 (2020).
1065
+ [4] C. Cruz, M. F. Anka, M. S. Reis, R. Bachelard, and A. C. Santos, Quantum Science and Technology 7, 025020 (2022).
1066
+ [5] G. L. Giorgi and S. Campbell, J. Phys. B: At. Mol. Opt. Phys. 48, 035501 (2015).
1067
+ [6] C. Cruz and M. Anka, EPL (Europhysics Letters) 130, 30006 (2020).
1068
+ [7] F. Caravelli, G. Coulter-De Wit, L. P. Garc´ıa-Pintos, and A. Hamma, Physical Review Research 2, 023095 (2020).
1069
+ [8] A. Streltsov, G. Adesso, and M. B. Plenio, Reviews of Modern Physics 89, 041003 (2017).
1070
+ [9] F. Sapienza, F. Cerisola, and A. J. Roncaglia, Nature communications 10, 1 (2019).
1071
+ [10] Y. Huang, New journal of physics 16, 033027 (2014).
1072
+ [11] M. Cramer, M. Plenio, and H. Wunderlich, Physical review letters 106, 020401 (2011).
1073
+ [12] M. Reis (Academic Press, Boston, 2013), ISBN 978-0-12-405545-2.
1074
+ [13] A. M. Souza, D. O. Soares-Pinto, R. S. Sarthour, I. S. Oliveira, M. S. Reis, P. Brandao, and A. M. dos Santos, Physical Review B 79,
1075
+ 054408 (2009).
1076
+
1077
1078
+ [14] M. S. Reis, S. Soriano, A. M. dos Santos, B. C. Sales, D. Soares-Pinto, and P. Brandao, EPL (Europhysics Letters) 100, 50001 (2012).
1079
+ [15] C. Cruz, Á. Alves, R. dos Santos, D. Soares-Pinto, J. de Jesus, J. de Almeida, and M. Reis, EPL (Europhysics Letters) 117, 20004 (2017).
1080
+ [16] H. Čenčariková and J. Strečka, Physical Review B 102, 184419 (2020).
1081
+ [17] E. I. Kuznetsova and M. A. Yurischev, Quantum Information Processing 12, 3587 (2013).
1082
+ [18] M. A. Yurishchev, Physical Review B 84, 024418 (2011).
1083
+ [19] S. Aldoshin, E. Fel’dman, and M. Yurishchev, Low Temperature Physics 40, 3 (2014).
1084
+ [20] C. Cruz, H.-R. Rastegar-Sedehi, M. F. Anka, T. R. de Oliveira, and M. Reis, arXiv preprint arXiv:2208.14548 (2022).
1085
+ [21] C. Cruz, D. O. Soares-Pinto, P. Brandao, A. M. dos Santos, and M. S. Reis, EPL (Europhysics Letters) 113, 40004 (2016).
1086
+ [22] A. M. Souza, M. S. Reis, D. O. Soares-Pinto, I. S. Oliveira, and R. S. Sarthour, Physical Review B 77, 104402 (2008).
1087
+ [23] M. R. Wasielewski, M. D. Forbes, N. L. Frank, K. Kowalski, G. D. Scholes, J. Yuen-Zhou, M. A. Baldo, D. E. Freedman, R. H. Goldsmith,
1088
+ T. Goodson, et al., Nature Reviews Chemistry pp. 1–15 (2020).
1089
+ [24] A. Gaita-Ariño, F. Luis, S. Hill, and E. Coronado, Nature chemistry 11, 301 (2019).
+ [25] Y. A. Mezenov, A. A. Krasilin, V. P. Dzyuba, A. Nominé, and V. A. Milichko, Advanced Science 6, 1900506 (2019).
1091
+ [26] E. Moreno-Pineda, C. Godfrin, F. Balestro, W. Wernsdorfer, and M. Ruben, Chemical Society Reviews 47, 501 (2018).
1092
+ [27] D. F. Pinto and J. Maziero, Quantum Information Processing 17, 1 (2018).
1093
+ [28] D. F. Pinto and J. Maziero, Quantum Information Processing 20, 1 (2021).
1094
+ [29] C. Castro, O. Duarte, D. Pires, D. Soares-Pinto, and M. Reis, Physics Letters A 380, 1571 (2016).
1095
+ [30] A. A. Mohamed, H. Hessian, and H. Eleuch, Physica Scripta 95, 075104 (2020).
1096
+ [31] R. Muthuganesan and V. Chandrasekar, Physica Scripta 96, 125113 (2021).
1097
+ [32] R. Hoshikawa, K. Yoshida, R. Mitsuhashi, M. Mikuriya, T. Okuno, and H. Sakiyama, Molecules 26, 897 (2021).
1098
+ [33] M.-A. Bouammali, N. Suaud, R. Maurice, and N. Guihéry, The Journal of Chemical Physics 155, 164305 (2021).
+ [34] M.-A. Bouammali, N. Suaud, C. Martins, R. Maurice, and N. Guihéry, The Journal of Chemical Physics 154, 134301 (2021).
1100
+ [35] E. Moreno-Pineda and W. Wernsdorfer, Nature Reviews Physics 3, 645 (2021).
1101
+ [36] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information: 10th Anniversary Edition (Cambridge University
1102
+ Press, New York, NY, USA, 2011), 10th ed., ISBN 1107002176, 9781107002173.
1103
+ [37] T. Chakraborty and C. Mitra, Journal of Physics: Condensed Matter 31, 475802 (2019).
1104
+ [38] R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Reviews of modern physics 81, 865 (2009).
1105
+ [39] A. Peres, Phys. Rev. Lett. 77, 1413 (1996).
1106
+ [40] H. Ollivier and W. H. Zurek, Physical Review Letters 88, 017901 (2001).
1107
+ [41] V. Vedral, Physical Review Letters 90, 050401 (2003).
1108
+ [42] B.-Q. Liu, L.-A. Wu, G.-M. Zeng, J.-M. Song, W. Luo, Y. Lei, G.-A. Sun, B. Chen, and S.-M. Peng, Physics Letters A 378, 3441 (2014).
1109
+ [43] Z. Ma, Z. Chen, F. F. Fanchini, and S.-M. Fei, Scientific Reports 5 (2015).
1110
+ [44] D. Girolami and G. Adesso, Physical Review A 83, 052108 (2011).
1111
+ [45] T. Nakano, M. Piani, and G. Adesso, Physical Review A 88, 012117 (2013).
1112
+ [46] M. Sarandy, Physical Review A 80, 022108 (2009).
1113
+ [47] F. Paula, T. R. de Oliveira, and M. Sarandy, Physical Review A 87, 064101 (2013).
1114
+ [48] J. Montealegre, F. Paula, A. Saguia, and M. Sarandy, Physical Review A 87, 042115 (2013).
1115
+ [49] S. Luo, Physical Review A 77, 042303 (2008).
1116
+ [50] A. Datta, A. Shaji, and C. M. Caves, Physical Review Letters 100, 050502 (2008).
1117
+ [51] L. Henderson and V. Vedral, Journal of Physics A: Mathematical and General 34, 6899 (2001).
1118
+ [52] A. Brodutch and D. R. Terno, Physical Review A 81, 062103 (2010).
1119
+ [53] P. C. Obando, F. M. Paula, and M. S. Sarandy, Physical Review A 92, 032307 (2015).
1120
+ [54] D. Girolami, T. Tufarelli, and G. Adesso, Physical review letters 110, 240402 (2013).
1121
+ [55] D. Girolami, A. M. Souza, V. Giovannetti, T. Tufarelli, J. G. Filgueiras, R. S. Sarthour, D. O. Soares-Pinto, I. S. Oliveira, and G. Adesso,
1122
+ Physical Review Letters 112, 210401 (2014).
1123
+ [56] D. Girolami, A. M. Souza, V. Giovannetti, T. Tufarelli, J. G. Filgueiras, R. S. Sarthour, D. O. Soares-Pinto, I. S. Oliveira, and G. Adesso,
1124
+ Physical Review Letters 112, 210401 (2014).
1125
+ [57] B. Dakić, V. Vedral, and Č. Brukner, Physical Review Letters 105, 190502 (2010).
1126
+ [58] M. Piani, Physical Review A 86, 034101 (2012).
1127
+ [59] F. Paula, A. Saguia, T. R. de Oliveira, and M. Sarandy, EPL (Europhysics Letters) 108, 10003 (2014).
1128
+ [60] D. Spehner, F. Illuminati, M. Orszag, and W. Roga, arXiv preprint arXiv:1611.03449 (2016).
1129
+ [61] M.-L. Hu, X. Hu, J. Wang, Y. Peng, Y.-R. Zhang, and H. Fan, Physics Reports (2018).
1130
+ [62] Y. Khedif, S. Haddadi, M. Daoud, H. Dolatkhah, and M. R. Pourkarimi, Quantum Information Processing 21, 1 (2022).
1131
+ [63] C. Cruz, arXiv preprint arXiv:1610.05255 (2016).
1132
+ [64] F. Ciccarello, T. Tufarelli, and V. Giovannetti, New Journal of Physics 16, 013038 (2014).
1133
+ [65] T. Baumgratz, M. Cramer, and M. Plenio, Physical review letters 113, 140401 (2014).
1134
+ [66] A. Streltsov, G. Adesso, and M. B. Plenio, Reviews of Modern Physics 89, 041003 (2017).
1135
+ [67] C. Filgueiras, O. Rojas, and M. Rojas, Annalen der Physik 532, 2000207 (2020).
1136
+ [68] T. Kraft and M. Piani, Journal of Physics A: Mathematical and Theoretical 51, 414013 (2018).
1137
+ [69] K. C. Tan, H. Kwon, C.-Y. Park, and H. Jeong, Physical Review A 94, 022329 (2016).
1138
+ [70] A. Rau, Journal of Physics A: Mathematical and Theoretical 42, 412002 (2009).
1139
+ [71] X.-Y. Liu and M.-L. Hu, Physica A: Statistical Mechanics and its Applications 609, 128308 (2023).
1140
+ [72] S. Luo and Y. Sun, Physics Letters A 383, 2869 (2019).
1141
+ [73] S. Cheng and M. J. Hall, Physical Review A 92, 042101 (2015).
1142
+
1143
1144
+ [74] Y. Yao, X. Xiao, L. Ge, and C. Sun, Physical Review A 92, 022112 (2015).
1145
+ [75] S. Designolle, R. Uola, K. Luoma, and N. Brunner, Physical Review Letters 126, 220404 (2021).
1146
+ [76] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical recipes 3rd edition: The art of scientific computing
1147
+ (Cambridge university press, 2007).
1148
+ [77] G. Liu and S. Xiang, Applied Mathematics and Computation 340, 251 (2019).
1149
+