jackkuo committed
Commit 9e3368a · verified · 1 Parent(s): 4f4da1f

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. -9E0T4oBgHgl3EQfxAH-/content/tmp_files/2301.02642v1.pdf.txt +931 -0
  2. -9E0T4oBgHgl3EQfxAH-/content/tmp_files/load_file.txt +0 -0
  3. -dE3T4oBgHgl3EQfrgpm/content/2301.04660v1.pdf +3 -0
  4. .gitattributes +62 -0
  5. 1NE2T4oBgHgl3EQf5Ahd/content/tmp_files/2301.04186v1.pdf.txt +495 -0
  6. 1NE2T4oBgHgl3EQf5Ahd/content/tmp_files/load_file.txt +277 -0
  7. 1NE4T4oBgHgl3EQfzQ1E/content/tmp_files/2301.05272v1.pdf.txt +605 -0
  8. 1NE4T4oBgHgl3EQfzQ1E/content/tmp_files/load_file.txt +405 -0
  9. 1dE1T4oBgHgl3EQfRwNN/content/2301.03056v1.pdf +3 -0
  10. 1dE1T4oBgHgl3EQfRwNN/vector_store/index.pkl +3 -0
  11. 3NE1T4oBgHgl3EQfAQKK/content/tmp_files/2301.02837v1.pdf.txt +875 -0
  12. 3NE1T4oBgHgl3EQfAQKK/content/tmp_files/load_file.txt +0 -0
  13. 49FKT4oBgHgl3EQf9y5B/content/tmp_files/2301.11955v1.pdf.txt +1414 -0
  14. 49FKT4oBgHgl3EQf9y5B/content/tmp_files/load_file.txt +0 -0
  15. 5dE1T4oBgHgl3EQfTAM3/content/2301.03072v1.pdf +3 -0
  16. 5dE1T4oBgHgl3EQfTAM3/vector_store/index.faiss +3 -0
  17. 7NE2T4oBgHgl3EQfkwf2/content/2301.03983v1.pdf +3 -0
  18. 7NE2T4oBgHgl3EQfkwf2/vector_store/index.pkl +3 -0
  19. 8NE4T4oBgHgl3EQfdAy1/content/tmp_files/2301.05088v1.pdf.txt +1260 -0
  20. 8NE4T4oBgHgl3EQfdAy1/content/tmp_files/load_file.txt +0 -0
  21. 8tAyT4oBgHgl3EQfc_f6/content/tmp_files/2301.00296v1.pdf.txt +579 -0
  22. 8tAyT4oBgHgl3EQfc_f6/content/tmp_files/load_file.txt +310 -0
  23. 9NAzT4oBgHgl3EQfFPpp/content/tmp_files/2301.01007v1.pdf.txt +2688 -0
  24. 9NAzT4oBgHgl3EQfFPpp/content/tmp_files/load_file.txt +0 -0
  25. 9dAzT4oBgHgl3EQfSfu4/vector_store/index.pkl +3 -0
  26. 9tE5T4oBgHgl3EQfRQ5h/content/2301.05519v1.pdf +3 -0
  27. ANFLT4oBgHgl3EQfEi_Y/content/tmp_files/2301.11984v1.pdf.txt +1549 -0
  28. ANFLT4oBgHgl3EQfEi_Y/content/tmp_files/load_file.txt +0 -0
  29. AdE0T4oBgHgl3EQfxgLI/content/2301.02648v1.pdf +3 -0
  30. AdE1T4oBgHgl3EQfpAU_/content/2301.03326v1.pdf +3 -0
  31. AdE1T4oBgHgl3EQfpAU_/vector_store/index.faiss +3 -0
  32. AdE1T4oBgHgl3EQfpAU_/vector_store/index.pkl +3 -0
  33. B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf +3 -0
  34. B9E0T4oBgHgl3EQfgAGB/vector_store/index.faiss +3 -0
  35. B9E0T4oBgHgl3EQfgAGB/vector_store/index.pkl +3 -0
  36. BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf +0 -0
  37. BNFRT4oBgHgl3EQfuzgt/content/tmp_files/2301.13632v1.pdf.txt +34 -0
  38. BNFRT4oBgHgl3EQfuzgt/content/tmp_files/load_file.txt +44 -0
  39. CdE1T4oBgHgl3EQfDwOw/content/2301.02882v1.pdf +3 -0
  40. CdE1T4oBgHgl3EQfDwOw/vector_store/index.faiss +3 -0
  41. CdE1T4oBgHgl3EQfDwOw/vector_store/index.pkl +3 -0
  42. JtAzT4oBgHgl3EQfj_0_/content/2301.01524v1.pdf +3 -0
  43. JtAzT4oBgHgl3EQfj_0_/vector_store/index.faiss +3 -0
  44. JtAzT4oBgHgl3EQfj_0_/vector_store/index.pkl +3 -0
  45. LdAyT4oBgHgl3EQfsvlL/vector_store/index.faiss +3 -0
  46. M9AyT4oBgHgl3EQfUPcx/content/2301.00120v1.pdf +3 -0
  47. M9AyT4oBgHgl3EQfUPcx/vector_store/index.faiss +3 -0
  48. M9AyT4oBgHgl3EQfUPcx/vector_store/index.pkl +3 -0
  49. OdFJT4oBgHgl3EQf0y3g/vector_store/index.faiss +3 -0
  50. OdFRT4oBgHgl3EQf4zhC/content/tmp_files/2301.13670v1.pdf.txt +1563 -0
-9E0T4oBgHgl3EQfxAH-/content/tmp_files/2301.02642v1.pdf.txt ADDED
@@ -0,0 +1,931 @@
+ Triple-stream Deep Metric Learning of Great Ape Behavioural Actions
+ Otto Brookes1, Majid Mirmehdi1, Hjalmar Kühl2, Tilo Burghardt1
+ 1Department of Computer Science, University of Bristol, United Kingdom
+ 2Evolutionary and Anthropocene Ecology, iDiv, Leipzig, Germany
+
+ Keywords: Animal Biometrics, Multi-stream Deep Metric Learning, Animal Behaviour, Great Apes, PanAf-500 Dataset
+
+ Abstract: We propose the first metric learning system for the recognition of great ape behavioural actions. Our proposed triple-stream embedding architecture works on camera trap videos taken directly in the wild and demonstrates that the utilisation of an explicit DensePose-C chimpanzee body part segmentation stream effectively complements traditional RGB appearance and optical flow streams. We evaluate system variants with different feature fusion techniques and long-tail recognition approaches. Results and ablations show performance improvements of ∼12% in top-1 accuracy over previous results achieved on the PanAf-500 dataset containing 180,000 manually annotated frames across nine behavioural actions. Furthermore, we provide a qualitative analysis of our findings and augment the metric learning system with long-tail recognition techniques, showing that average per-class accuracy – critical in the domain – can be improved by ∼23% compared to the literature on that dataset. Finally, since our embedding spaces are constructed as metric, we provide first data-driven visualisations of the great ape behavioural action spaces, revealing emerging geometry and topology. We hope that the work sparks further interest in this vital application area of computer vision for the benefit of endangered great apes. We provide all key source code and network weights alongside this publication.
+ [Figure 1 diagram: RGB, optical flow, and DensePose-C inputs each pass through a ResNet-50 embedding model with shared weights; the three 128-dimensional embeddings (x0 ... x128) of anchor, positive, and negative samples are fused and optimised with a triplet-based metric learning loss L_Triplet over pairwise distances d.]
+
+ Figure 1: System Overview. Our proposed triple-stream metric learning approach utilises all RGB appearance, optical flow, and DensePose-C segmentations of chimps in videos. Exploiting hybrid reciprocal triplet and cross entropy losses, the model is then trained to map embeddings representing great ape behavioural actions onto a metric space, where semantically similar representations are geometrically close, forming natural clusters. This pipeline improves on state-of-the-art classification performance and allows for visualisations of the underpinning space of behavioural actions. (best viewed zoomed)
+ 1 INTRODUCTION
+
+ As the climate crisis gathers pace, the threat to many endangered species grows ever more perilous (Almond et al., 2022). All species of great apes are, for instance, listed as endangered or critically endangered according to the IUCN Red List (IUCN, 2022). Consequently, there is urgent need for methods that can help to monitor population status and assess the effectiveness of conservation interventions (Kühl and Burghardt, 2013; Congdon et al., 2022; Tuia et al., 2022). This includes the recognition of behaviours and variation therein, as an integral part of biological diversity (Dominoni et al., 2020; Carvalho et al., 2022).
+
+ arXiv:2301.02642v1 [cs.CV] 6 Jan 2023
+ Previous works have employed deep neural networks which leverage multiple modalities, such as RGB, optical flow, and audio (Sakib and Burghardt, 2020; Bain et al., 2021), for the classification of great ape behaviours and actions. However, higher-level abstractions such as pose or body part information have remained unexplored for addressing this task. In response, we propose utilising the latter together with RGB and optical flow in a triple-stream metric learning system (see Fig. 1) for improved classification results and domain visualisations relevant to biologists.
+ [Figure 2 residue: a log-scale bar chart, y-axis "# Samples (log)", x-axis the nine behavioural action classes: hanging, walking, sitting on back, standing, sitting, climbing up, camera interaction, running, climbing down.]
+
+ Figure 2: Behavioural Actions in the PanAf-500 Data. Examples of each one of the nine behavioural action classes (top) and their distribution across the approx. 180k frames in the dataset (bottom). Note the imbalance of two orders of magnitude in the distribution. (best viewed zoomed)
+ Great Ape Activities - This paper will focus on great ape activity recognition, where the coarse activity classes used are illustrated in Fig. 2 for the utilised PanAf-500 dataset (see Sec. 3). Note that computer vision would traditionally categorise these classes as actions, whilst in the biological realm they represent behaviour (or aspects thereof) often captured in ethograms (Nishida et al., 1999; Zamma and Matsusaka, 2015). For clarity, in this paper we will refer to these classes as behavioural actions, recognising historical traditions in both disciplines.
+
+ We will approach the classification task via a deep metric learning system (Karaderi et al., 2022) that embeds inputs into a latent space and uses geometric distances to form distributions that align with the semantic similarity captured by the classes (Hermans et al., 2017; Musgrave et al., 2020). A major advantage over standard supervised systems is that sample distances in visualisations of the latent space always relate to learned similarity and, thus, are more naturally interpretable by experts. We will also analyse the role that additional DensePose-Chimp information (Sanakoyeu et al., 2020) can play in improving recognition performance compared to systems that utilise RGB and optical flow only. Lastly, as shown by Sakib and Burghardt (Sakib and Burghardt, 2020), there are significant challenges in correctly classifying behavioural actions which occur infrequently and form the distribution tail (see Fig. 2). To address this, we will employ three long-tailed recognition (LTR) techniques to improve performance on tail classes: (i) logit adjustment (Menon et al., 2020); (ii) class-balanced focal loss (Cui et al., 2019); and (iii) weight balancing (Alshammari et al., 2022).
+
+ In summary, our contributions are as follows: (i) we implement the first deep metric learning system for recognising great ape behavioural actions; (ii) we show that utilising explicit pose information has a significant positive effect on recognition performance in this domain; and (iii) we establish that existing LTR techniques can be applied in a metric learning setting to improve performance on tail classes for the problem. The proposed approaches improve the state-of-the-art performance benchmarks with respect to top-1 (∼85%) and average per-class (∼65%) accuracy on the PanAf-500 dataset.
+ 2 RELATED WORK
+
+ Action recognition aims to classify actions observed in video (Kalfaoglu et al., 2020; Shaikh and Chai, 2021). Learning spatio-temporal features characteristic for actions (Simonyan and Zisserman, 2014) via various deep learning paradigms forms the approach of choice in the domain of human action recognition (HAR). We will briefly review concepts from this field, before discussing specific relevant great ape behavioural action recognition and LTR methods.
+
+ Human Action Recognition - Although there are numerous deep learning approaches to action recognition (Zhou et al., 2018; Lin et al., 2019; Tran et al., 2019; Kalfaoglu et al., 2020; Pan et al., 2019; Majd and Safabakhsh, 2020; Sharir et al., 2021; Zhang et al., 2021a), this work focuses on multi-stream architectures, which address key aspects of the action recognition problem (e.g., spatial and temporal) independently and explicitly. Feichtenhofer et al. (Feichtenhofer et al., 2019) introduced the SlowFast architecture, which employs two streams, each operating at a different frame rate: a slow, low frame-rate pathway captures spatial information, while the fast, high frame-rate pathway captures fine temporal detail. Other types of multi-stream networks process different visual modalities. Simonyan and Zisserman (Simonyan and Zisserman, 2014) introduced a two-stream network that processes RGB and optical flow to exploit spatial and temporal semantics, respectively. Since then, several networks that utilise additional modalities, such as motion saliency (Zong et al., 2021) and audio (Wang et al., 2021), have been introduced. Recently, the introduction of pose, which is critical for the perception of actions (Le et al., 2022), has shown promising results in multi-stream architectures (Hong et al., 2019; Hayakawa and Dariush, 2020; Duan et al., 2021; Li et al., 2022). In particular, the DensePose format provides an opportunity to exploit fine-grained, segmentation map-based pose representations for action recognition. Hayakawa et al. (Hayakawa and Dariush, 2020) combine RGB and DensePose estimations in a two-stream network and demonstrate strong performance on egocentric footage of humans. Whilst such significant progress has been made in the domain of HAR, research into great ape behavioural action recognition is still in its infancy and few systems have been tested on natural datasets.
+
+ Great Ape Domain - To date, two systems have attempted automated great ape behavioural action recognition; both are multi-stream architectures. The first (Sakib and Burghardt, 2020) is based on the two-stream convolutional architecture by Simonyan et al. (Simonyan and Zisserman, 2014) and used 3D ResNet-18s for feature extraction and LSTM-based fusion of RGB and optical flow features. They report a top-1 accuracy of 73.52% across the nine behavioural actions in the PanAf-500 dataset (see Sec. 3) and a relatively low average per-class accuracy (42.33%), highlighting the issue of tail class performance. The second, proposed by Bain et al. (Bain et al., 2021), is a deep learning system that requires both audio and video inputs and detects two specific behaviours: buttress drumming and nut cracking. Their system utilised a 3D ResNet-18 and a 2D ResNet-18 for extraction of visual and assisting audio features, respectively, in different streams. They achieved an average precision of 87% for buttress drumming and 85% for nut cracking on their unpublished dataset. However, the multi-modal method is not applicable to all camera trap settings since many older models do not provide audio; it cannot be utilised on the PanAf-500 dataset since many clips there do not contain audio.
+ Long-tailed Recognition - Most naturally recorded data exhibits long-tailed class distributions (Liu et al., 2019). This is true of great ape camera-trap footage, which is dominated by commonly occurring behaviours - even with only the nine classes of the PanAf-500 data, the distribution shows a clear tail (see Fig. 2). Without addressing this issue, models trained on such data often exhibit poor performance on rare classes. Various counter-measures have been proposed (Verma et al., 2018; Kang et al., 2019; Zhang et al., 2021b). Class-balanced losses assign additional weights, typically determined by inverse class frequencies, to samples from rare classes and have yielded strong results when coupled with techniques to reduce per-class redundancy (Cui et al., 2019). Similarly, logit adjustment uses class frequencies to directly offset output logits in favour of minority classes during training (Menon et al., 2020). An orthogonal approach, based on the observation that weight norms for rare classes are smaller in naively trained classifiers, is to perform weight balancing (Alshammari et al., 2022). These techniques have achieved strong results on several LTR benchmarks.
+
+ Before detailing how we use triple-stream metric learning with explicit DensePose-Chimp processing and LTR extensions for behavioural action recognition, we will briefly outline the utilised dataset.
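+ The two frequency-based re-weighting ideas mentioned above can be sketched numerically. The following is an illustrative NumPy sketch, not code from the paper: inverse-frequency weights and the "effective number of samples" weights of Cui et al. (2019); the normalisation to mean 1 is our own convention for comparability.

```python
import numpy as np

# Illustrative sketch (not from the paper) of two class re-weighting
# schemes used in long-tailed recognition.
def inverse_frequency_weights(counts):
    """Weight each class by 1 / n_c, normalised to mean 1."""
    counts = np.asarray(counts, dtype=float)
    w = 1.0 / counts
    return w * len(counts) / w.sum()

def effective_number_weights(counts, beta=0.99):
    """Cui et al. (2019): weight proportional to (1 - beta) / (1 - beta**n_c)."""
    counts = np.asarray(counts, dtype=float)
    w = (1.0 - beta) / (1.0 - np.power(beta, counts))
    return w * len(counts) / w.sum()

# Toy long-tailed distribution: one head class, one tail class.
counts = [10000, 100]
w_inv = inverse_frequency_weights(counts)
w_eff = effective_number_weights(counts)
```

+ In both schemes the tail class receives the larger weight; the effective-number variant saturates for very frequent classes, so it penalises head classes less aggressively than raw inverse frequency.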
+ 3 DATASET
+
+ The Pan-African dataset, gathered by the Pan African Programme: 'The Cultured Chimpanzee', comprises ∼20,000 videos from footage gathered at 39 study sites spanning 15 African countries. Here we utilise a 500-video subset, PanAf-500, specifically ground-truth labelled for use in computer vision under reproducible and comparable benchmarks. It includes frame-by-frame annotations for full-body locations of great apes and nine behavioural actions (Sakib and Burghardt, 2020) across approximately 180k frames (see Fig. 3). Fig. 2 displays the behavioural action classes in focus together with their distribution. We utilised the PanAf-500 dataset for all experiments and employ the same training and test partitions described in (Sakib and Burghardt, 2020).
+ 4 METHOD
+
+ The proposed system utilises three visual modalities as input: RGB, optical flow, and DensePose-C estimations (Sanakoyeu et al., 2020), as illustrated in Fig. 1. All optical flow images are pre-computed using OpenCV's implementation of the Dual TV-L1 algorithm (Zach et al., 2007). We employ the model developed by Sanakoyeu et al. (Sanakoyeu et al., 2020)
+ Figure 3: Frame-by-frame Ground Truth Annotations. Four still frames from PanAf-500 videos with annotations of location (green boxes) and behavioural actions (visualised as text) of the apes in-frame. (best viewed zoomed)
+
+ [Figure 4 diagram residue: the triple-stream feature extractors (three ResNet-50s) produce 2048 x 5 x 8 x 8 feature maps which are either concatenated (b x 6144 x 5 x 8 x 8) and passed through Conv3D, MaxPool3D, AdaptiveAvgPool3D and fully connected layers Fc1/Fc2 (b x 1024) for convolutional fusion, or combined by element-wise multiplication of the 128-D stream embeddings followed by L2 normalisation.]
+
+ Figure 4: Fusion Head Schematics. A component breakdown of fusion by element-wise multiplication (left) and convolutional fusion (right) as applied for our work to explore their impact on performance.
+ to generate DensePose-C segmentations describing chimpanzee pose. The model predicts dense correspondences between image pixels and a 3-D object mesh, where each mesh represents a chimpanzee body part specified by a selector I, with local surface coordinates within each mesh indexed by U and V. Frame-by-frame application to each of the PanAf-500 videos yields DensePose-C estimates expressed in IUV coordinates.
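+ A minimal sketch of how such per-frame IUV output can be consumed downstream. The (3, H, W) layout and the convention that channel 0 (the I selector) stores a body-part index with 0 = background are assumptions for illustration, not the DensePose-C API itself.

```python
import numpy as np

# Hypothetical helpers over a DensePose-style IUV map of shape (3, H, W):
# channel 0 = part index I (0 = background), channels 1/2 = U/V coordinates.
def part_mask(iuv, part_index):
    """Boolean (H, W) mask of pixels assigned to one body part."""
    return iuv[0] == part_index

def foreground_uv(iuv):
    """Mean (U, V) surface coordinates over all non-background pixels."""
    fg = iuv[0] > 0
    return iuv[1][fg].mean(), iuv[2][fg].mean()

# Toy 4x4 frame: body part 3 occupies the top-left 2x2 corner.
iuv = np.zeros((3, 4, 4))
iuv[0, :2, :2] = 3     # I: part index
iuv[1, :2, :2] = 0.5   # U coordinate
iuv[2, :2, :2] = 0.25  # V coordinate
```

+ Stacking such IUV frames over a snippet gives the third input tensor alongside RGB and optical flow.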
+ Each of the three input modalities is fed into a 3D ResNet-50 (Du Tran et al., 2017) backbone; together these act as a feature extractor (see Fig. 1). The input tensors into the backbones are 3D since inputs are processed in snippets, that is, each stream accepts a sequence of n consecutive RGB frames, optical flow images, or IUV coordinates, respectively. The final fully-connected layer outputs an n-dimensional encoding for each stream. These are fused into a single embedding using three popular approaches: (i) simple averaging across streams; (ii) convolutional fusion, whereby stream features are concatenated and passed to a 3D convolutional layer as a volume; and (iii) element-wise multiplication of all three embedding vectors followed by L2 normalisation. The latter two approaches are illustrated in detail in Fig. 4. A linear layer at the end of the fusion head finally outputs the unified embedding as logits. Whilst this system was trained via metric learning - visually sketched in Fig. 1 (right) - a k-NN classifier is used to perform inference in the embedding space during evaluation.
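+ The averaging and element-wise fusion heads and the k-NN evaluation step can be sketched on toy 128-D embeddings as follows. This is an illustrative NumPy sketch under our own simplifications, not the released training code; the epsilon in the normaliser is an added numerical guard.

```python
import numpy as np

# Sketches of two fusion heads from Sec. 4 plus k-NN inference.
def fuse_average(streams):
    """(i) Simple averaging across the stacked stream embeddings."""
    return np.mean(streams, axis=0)

def fuse_elementwise(streams):
    """(iii) Element-wise multiplication followed by L2 normalisation."""
    fused = np.prod(streams, axis=0)
    return fused / (np.linalg.norm(fused) + 1e-12)

def knn_predict(query, bank, labels, k=3):
    """Majority vote among the k nearest embeddings in the metric space."""
    dists = np.linalg.norm(bank - query, axis=1)
    votes = labels[np.argsort(dists)[:k]]
    return np.bincount(votes).argmax()

rng = np.random.default_rng(0)
streams = rng.normal(size=(3, 128))          # RGB, flow, DensePose-C encodings
embedding = fuse_elementwise(streams)         # unit-norm fused embedding

# Toy embedding bank: class 1 samples lie near the all-ones vector.
bank = np.vstack([np.zeros(128), np.ones(128), np.ones(128) * 1.1])
labels = np.array([0, 1, 1])
pred = knn_predict(np.ones(128) * 0.9, bank, labels, k=3)
```

+ Convolutional fusion additionally introduces learnable parameters, which, as shown in Sec. 5.4, helps blend information for classes with few samples.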
+ Let the parameters of this network fθ(·) be denoted by θ. Furthermore, let fθ(x) = x be the shorthand for referring to embeddings. Our metric learning objective is, thus, to minimise the distance between anchor-positive embedding pairs d(x_a, x_p) and maximise the distance between anchor-negative embedding pairs d(x_a, x_n), where d represents the Euclidean distance. Instead of using the standard triplet loss (Hermans et al., 2017) L_{TL}, we use an improved version (Andrew et al., 2021), where the model is optimised via a hybrid reciprocal triplet and softmax cross-entropy loss:
+
+     L_{RC} = L_{CE} + \lambda L_{RT} .        (1)
+
+ It is assembled from two components balanced by λ = 0.1 as given in (Andrew et al., 2021). The two components themselves are evaluated as:
+
+     L_{RT} = d(x_a, x_p) + \frac{1}{d(x_a, x_n)} ,        (2)
+
+     L_{CE} = -\log \left( \frac{e^{x_y}}{\sum_{i=1}^{C} e^{x_i}} \right) ,        (3)
+
+ where C denotes the total number of classes and y are the class labels.
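+ Eqs. (1)-(3) can be checked numerically. The following is a plain NumPy illustration of the hybrid loss, not the released training code; the max-subtraction in the cross-entropy is an added numerical-stability step.

```python
import numpy as np

def d(a, b):
    """Euclidean distance between two embeddings."""
    return np.linalg.norm(a - b)

def reciprocal_triplet(xa, xp, xn):
    """Eq. (2): pull the positive close, push the negative away."""
    return d(xa, xp) + 1.0 / d(xa, xn)

def cross_entropy(logits, y):
    """Eq. (3): softmax cross-entropy on the fused logits."""
    z = logits - logits.max()            # numerical stability
    return -(z[y] - np.log(np.exp(z).sum()))

def hybrid_loss(xa, xp, xn, logits, y, lam=0.1):
    """Eq. (1): L_RC = L_CE + lambda * L_RT with lambda = 0.1."""
    return cross_entropy(logits, y) + lam * reciprocal_triplet(xa, xp, xn)

xa = np.array([0.0, 0.0])
xp = np.array([0.0, 1.0])   # distance 1 from the anchor
xn = np.array([2.0, 0.0])   # distance 2 from the anchor
loss = hybrid_loss(xa, xp, xn, logits=np.zeros(9), y=0)
```

+ Note that, unlike the margin-based triplet loss, the reciprocal term needs no margin hyper-parameter: the 1/d(x_a, x_n) term decays smoothly as the negative moves away.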
+ In order to extend this system into the LTR domain, we substitute the softmax cross-entropy term for losses calculated using: (i) cross-entropy softmax with logit adjustment (Menon et al., 2020), L_{LA}; (ii) class-balanced focal loss (Cui et al., 2019), L_{CB}; and (iii) class-balanced focal loss with weight balancing (Alshammari et al., 2022). The first two losses are evaluated as follows:
+
+     L_{LA} = -\log \left( \frac{e^{x_y + \tau \cdot \log \pi_y}}{\sum_{i=1}^{C} e^{x_i + \tau \cdot \log \pi_i}} \right) ,        (4)
+
+     L_{CB} = -\frac{1-\beta}{1-\beta^{n_y}} \sum_{i=1}^{C} (1 - p_i)^{\gamma} \log(p_i) ,        (5)
+
+ where π represents the class priors (i.e., class frequencies in the training set), the temperature factor is τ = 1, β = 0.99 is the re-weighting hyper-parameter, n_y is the number of samples in class y, γ = 1 is the focal loss hyper-parameter, and p_i = σ(x_i). Balancing the network weights θ is performed via a MaxNorm constraint \|\theta_{l,i}\|_2^2 \leq \delta^2, \forall i, as given in (Alshammari et al., 2022), imposed on each class filter i in the last layer l of the network, where δ is the L2-norm ball radius. We will reference an L_{CB}-based optimisation where weight balancing is performed via L_{WB}.
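+ The LTR substitutions can be sketched as follows. These are NumPy illustrations under stated simplifications, not the released code: Eq. (4) is shown directly; the class-balanced focal term is shown for the target class only rather than the full per-class sigmoid sum of Eq. (5); the MaxNorm projection follows the constraint form above.

```python
import numpy as np

def logit_adjusted_ce(logits, y, priors, tau=1.0):
    """Eq. (4): cross-entropy on logits offset by tau * log(class prior)."""
    z = logits + tau * np.log(priors)
    z = z - z.max()                      # numerical stability
    return -(z[y] - np.log(np.exp(z).sum()))

def class_balanced_focal(probs, y, n_y, beta=0.99, gamma=1.0):
    """Simplified Eq. (5): effective-number weight (1-beta)/(1-beta**n_y)
    times the focal term for the target class y."""
    weight = (1.0 - beta) / (1.0 - beta ** n_y)
    return -weight * (1.0 - probs[y]) ** gamma * np.log(probs[y])

def maxnorm_project(W, delta=1.0):
    """Weight balancing: clip each class filter's L2 norm to delta."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.minimum(1.0, delta / np.maximum(norms, 1e-12))

# With uniform priors, Eq. (4) reduces to plain cross-entropy.
uniform = np.full(4, 0.25)
la = logit_adjusted_ce(np.zeros(4), y=0, priors=uniform)
```

+ The uniform-prior check makes the mechanism explicit: the adjustment only changes the loss when class frequencies actually differ, biasing the decision boundary towards minority classes.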
+ Methodologically, the described architecture approaches the learning of great ape behavioural actions via five key capabilities: 1) utilisation of multiple relevant input modalities across an entire video snippet; 2) effective streamed content encoding; 3) fusion into a single embedding space; 4) metric space optimisation so that distances naturally reflect semantic similarity; and 5) taking into account class imbalances common to the domain content.
+ 5 EXPERIMENTS
+
+ 5.1 General Training Setup
+
+ We train our architecture via SGD optimisation using batch size 32 and learning rate 10^-4. Feature extractor backbones are initialised with Kinetics-400 (Kay et al., 2017) pre-trained weights, and training runs are distributed over 8 Tesla V100 GPUs for 100 epochs.
+ 5.2 Baselines and Stream Ablations
+
+ As shown in Tab. 1, we first establish performance benchmarks for one- and two-stream baseline architectures of our system (rows 2–5) against the current state-of-the-art (row 1), which uses a ResNet-18 backbone with focal loss L_{FL}, SGD, and LSTM-based frame fusion (Sakib and Burghardt, 2020). As expected, we confirmed that - using identical setups and losses - adding an optical flow stream is beneficial in the great ape domain, mirroring HAR results (see rows 2 vs 4, and 3 vs 5). Additionally, models trained using L_{RC} consistently outperformed standard triplet loss L_{TL} scenarios (see rows 2 vs 3, and 4 vs 5). Finally, a dual-stream version of our proposed architecture trained with L_{RC} outperforms the state-of-the-art by a small margin (see rows 1 vs 5).
+ 5.3 Triple-Stream Recognition
+
+ As given in Tab. 1 rows 6–8, our proposed triple-stream architecture significantly outperforms all baselines with regards to top-1 accuracy, achieving up to 85.86%. Thus, explicit DensePose-C information appears to be a useful information source for boosting behavioural action recognition in great apes. However,
+ Table 1: Behavioural Action Recognition Benchmarks. Top-1 and average per-class (C-Avg) accuracy performance on the PanAf-500 dataset for the current state-of-the-art (row 1), single and dual-stream baselines (rows 2–5), and our triple-stream networks (rows 6–8) for different fusion methodologies and losses tested.
+
+   #  Models/Streams   Fusion  Loss    Top-1    C-Avg
+   Sakib et al. 2020
+   1  RGB+OF           LSTM    L_FL    73.52%   42.33%
+   Up to Dual-Stream
+   2  RGB only         None    L_TL    55.50%   32.67%
+   3  RGB only         None    L_RC    74.24%   55.76%
+   4  RGB+OF           Avg     L_TL    62.90%   39.10%
+   5  RGB+OF           Avg     L_RC    75.02%   61.97%
+   Triple-Stream (Ours)
+   6  RGB+OF+DP        Avg     L_RC    81.71%   46.61%
+   7  RGB+OF+DP        Conv    L_RC    82.04%   56.31%
+   8  RGB+OF+DP        Elem    L_RC    85.86%   50.50%
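+ The two metrics reported above behave very differently under class imbalance, which is why rows 6–8 can win on one and lose on the other. A pure-Python sketch of both (our own illustration, using a toy long-tailed example):

```python
# Top-1 weights every frame equally, so head classes dominate it; C-Avg is
# the unweighted mean of per-class recalls, so each tail class counts as
# much as each head class.
def top1_accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def class_average_accuracy(y_true, y_pred):
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# Toy long-tailed case: 8 head-class frames, 2 tail-class frames. A model
# that always predicts the head class scores high top-1 but poor C-Avg.
y_true = [0] * 8 + [1] * 2
y_pred = [0] * 10
```

+ Here the degenerate head-only predictor reaches 80% top-1 yet only 50% C-Avg, mirroring the pattern seen between rows 5 and 6 of Tab. 1.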
+ without LTR techniques all our triple-stream models are significantly outperformed by a dual-stream setting (row 5) with regards to average per-class accuracy. This reduction is caused by significantly poorer performance on minority classes (see Sec. 5.4).
+
+ Since the learned behavioural action embeddings are constructed as metric from the outset, they can be visualised meaningfully – we note that such data-driven visualisations are novel in the primatology domain. Fig. 5 depicts such learned spaces for our data and architecture where, independent of stream cardinality, embeddings cluster the training data cleanly. This is of course expected given above 99% top-1 training accuracy in all settings. Yet, behavioural actions of great apes are highly intricate as well as variable and, even with approx. 144,000 training frames used, the model clearly shows signs of overfitting. As a result, test set embeddings exhibit significant cluster overlap. Sample groups representing sitting, standing, and walking, for instance, blend into one another. In addition to overfitting, this also highlights the transitional nature of these often temporally adjacent and smoothly changing actions. Thus, future temporally transitional ground truth labelling may be needed to represent great ape behavioural actions in the PanAf-500 dataset more authentically.
+ 5.4
594
+ Fusing Streams
595
+ When looking at the impact of information fusion
596
+ methods on performance in more detail, we find that
597
+ benchmarks vary significantly (see Tab. 1 rows 6–8)
598
+ when we test averaging, element-wise multiplication,
599
+ and convolutional fusion, as described in Sec. 4. Re-
600
+ sults show that convolution and element-wise mul-
601
+ tiplication improve performance slightly across both
602
+ metrics when compared with averaging: top-1 accu-
603
+
604
+ camera interaction
605
+ climbing up
606
+ climbing down
607
+ hanging
608
+ running
609
+ sitting
610
+ sitting on back
611
+ standing
612
+ walking
613
+ Behavioural actions
614
+ Single Stream (RGB)
615
+ Kinetics pretrained (no training)
616
+ Training
617
+ Training
618
+ Test
619
+ Dual Stream (RGB+OF)
620
+ Triple Stream (AllThree)
621
+ Figure 5: Visualisations of Great Ape Behavioural Action Spaces. A 2D t-SNE (Wattenberg et al., 2016) visualisation of
622
+ the 128-dimensional training (top-right) and test (bottom-right) embeddings produced by the single, dual and three-stream
623
+ network with convolutional fusion. We can see that training set embeddings from all classes are clustered cleanly. In contrast,
624
+ test set embeddings show significant overlap and only embeddings from majority classes form distinct clusters. This is
625
+ consistent with the high top-1 accuracy and relatively low average per-class accuracy reported in Tab. 1
626
+ racy improves by 0.33% and 4.1%, respectively (see
627
+ rows 6–8). However, the most significant gains are
628
+ observed with respect to average per class accuracy
629
+ which increases by 3.44% for element-wise multipli-
630
+ cation and 9.7% for convolutional fusion. Learnable
631
+ parameters in the convolution method clearly help
632
+ blend information even when fewer samples
633
+ are available for training. Building on this improve-
634
+ ment, we will next investigate the impact of LTR
635
+ methods in order to benefit tail class performance.
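The three fusion variants compared here can be sketched as follows. This is a minimal NumPy illustration rather than the paper's implementation: the 128-dimensional embedding size and the three streams follow the text, while the random inputs and the 1x1 mixing weights are assumptions standing in for learned quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 128                                           # embedding size used in the paper
streams = [rng.normal(size=D) for _ in range(3)]  # RGB, optical flow, DensePose

# (1) Averaging: parameter-free mean of the stream embeddings.
fused_avg = np.mean(streams, axis=0)

# (2) Element-wise multiplication: also parameter-free.
fused_mul = streams[0] * streams[1] * streams[2]

# (3) Convolutional fusion: stack streams as channels and mix them with a
# learnable 1x1 convolution (here a random 3->1 channel weight as a stand-in
# for trained parameters).
stacked = np.stack(streams)            # shape (3, 128)
w = rng.normal(size=(1, 3))            # learnable mixing weights
fused_conv = (w @ stacked).squeeze(0)  # shape (128,)

assert fused_avg.shape == fused_mul.shape == fused_conv.shape == (D,)
```

The learnable weights in variant (3) are what lets convolutional fusion re-weight streams per channel, which is consistent with the gains reported over the parameter-free alternatives.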
636
+ 5.5
637
+ Long-tail Recognition
638
+ When grouping behavioural actions into head (cov-
639
+ ering sitting, standing, and walking) and remain-
640
+ ing tail classes based on frequency in the data (see
641
+ Fig. 2), a significant performance gap becomes appar-
642
+ ent even when using the so far best C-Avg performing
643
+ model (see Tab. 2 row 1). Employing LTR techniques
644
+ can, however, reduce this gap and improve average
645
+ per-class accuracy further as quantified across rows
646
+ 2–4 in Tab. 2. Fig. 6 shows t-SNE visualisations of
647
+ the three LTR triple-stream approaches when trained
648
+ with convolutional feature fusion. Particularly for the
649
+ class-balanced approaches and weight-balancing se-
650
+ tups (two rightmost), tail class clusters appear more
651
+ clearly separated and class overlap is generally re-
+ duced. Thus, for the great ape domain, underrepre-
+ sented classes are indeed an effective source of infor-
+ mation for improving action separability in general.
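Of the LTR techniques evaluated here, logit adjustment is the simplest to sketch. The following is a hedged illustration of the post-hoc variant of Menon et al. (2020) cited in the references; the class names and counts are toy assumptions, not the PanAf-500 statistics.

```python
import numpy as np

def logit_adjusted_scores(logits, class_counts, tau=1.0):
    """Post-hoc logit adjustment: subtract tau * log(prior) so that rare
    (tail) classes are no longer penalised by their low class prior."""
    priors = np.asarray(class_counts, dtype=float)
    priors /= priors.sum()
    return logits - tau * np.log(priors)

# Toy example: 'sitting' dominates the data, 'running' is a tail class.
counts = [900, 80, 20]              # sitting, standing, running (illustrative)
logits = np.array([2.0, 1.9, 1.8])  # raw classifier scores
adjusted = logit_adjusted_scores(logits, counts)

# After adjustment the tail class can win despite a slightly lower raw logit.
assert np.argmax(adjusted) == 2
```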
655
+ 6
656
+ CONCLUSION
657
+ In this work we introduced the first deep metric learn-
658
+ ing system for great ape behavioural action recogni-
659
+ tion. We demonstrated that the proposed triple-stream
660
+ architecture can provide leading state-of-the-art per-
661
+ formance when tested on the PanAf-500 camera trap
662
+ dataset covering 180,000 annotated frames across 500
663
+ videos taken in the wild. We demonstrated that the ad-
664
+ dition of a DensePose-C chimpanzee pose estimation
665
+ stream into the embedding architecture is highly ef-
666
+ fective and leads to system performance of 85.86%
667
+ top-1 accuracy on the data.
668
+ We also showed that
669
+ adding LTR techniques that address poor tail class
670
+ performance to the system can improve the average
671
+ per-class accuracy to 65.66% on the dataset. Despite
672
+ these improvements, we note that both larger anno-
+ tated datasets to counteract overfitting and more
674
+ temporally blended forms of annotation (e.g. action
675
+ transition annotations) would benefit the authenticity
676
+ of data-driven great ape behavioural representations.
677
+ We hope that the research presented here sparks fur-
678
+ ther interest in this vital application area for the bene-
679
+ fit of endangered species such as great apes.
680
+ ACKNOWLEDGEMENTS
681
+ We thank the Pan African Programme:
682
+ ‘The Cultured
683
+ Chimpanzee’ team and its collaborators for allowing the use
684
+ of their data for this paper. We thank Amelie Pettrich, An-
685
+ tonio Buzharevski, Eva Martinez Garcia, Ivana Kirchmair,
686
+
687
+ [Figure 6 legend: behavioural action classes as in Fig. 5, shown on test
+ embeddings for the non-LTR baseline and the logit adjustment,
+ CB (+focal loss) and weight balanced variants.]
701
+ Figure 6: Long-tail Test Embeddings. A 2D t-SNE visualisation of the 128-dimensional test embeddings produced by the
702
+ three-stream network with convolutional fusion alone (leftmost) and augmented with each LTR technique; (i) logit adjustment
703
+ (ii) CB (+focal loss) and (iii) weight balancing. All LTR-augmented methods improve clustering of embeddings belonging to
704
+ tail classes. They appear more clearly separated and exhibit less overlap when compared with the non-LTR method.
705
+ Sebastian Schütte, Linda Gerlach and Fabina Haas. We also
706
+ thank management and support staff across all sites; specif-
707
+ ically Yasmin Moebius, Geoffrey Muhanguzi, Martha Rob-
708
+ bins, Henk Eshuis, Sergio Marrocoli and John Hart. Thanks
709
+ to the team at https://www.chimpandsee.org particularly
710
+ Briana Harder, Anja Landsmann, Laura K. Lynn, Zuzana
711
+ Macháčková, Heidi Pfund, Kristeena Sigler and Jane Wid-
712
+ ness. The work that allowed for the collection of the dataset
713
+ was funded by the Max Planck Society, Max Planck Society
714
+ Innovation Fund, and Heinz L. Krekeler. In this respect we
715
+ would like to thank: Ministre des Eaux et Forêts, Ministère
+ de l'Enseignement supérieur et de la Recherche scientifique
+ in Côte d'Ivoire; Institut Congolais pour la Conservation de
+ la Nature, Ministère de la Recherche Scientifique in Demo-
719
+ cratic Republic of Congo; Forestry Development Authority
720
+ in Liberia; Direction Des Eaux Et Forêts, Chasses Et Con-
721
+ servation Des Sols in Senegal; Makerere University Biolog-
722
+ ical Field Station, Uganda National Council for Science and
723
+ Technology, Uganda Wildlife Authority, National Forestry
724
+ Authority in Uganda; National Institute for Forestry De-
725
+ velopment and Protected Area Management, Ministry of
726
+ Agriculture and Forests, Ministry of Fisheries and Environ-
727
+ ment in Equatorial Guinea. This work was supported by the
728
+ UKRI CDT in Interactive AI under grant EP/S022937/1.
729
+ Table 2: LTR-enabled Behavioural Action Recognition Benchmarks.
+ Average per-class accuracy for our triple-stream network with
+ convolutional fusion for the best performing non-LTR method (row 1) and
+ three LTR approaches (rows 2–4) targeting poor tail class performance.
+
+   Method/Loss               C-Avg   Head    Tail
+   Non-LTR Triple-Stream
+   1  LRC                    56.31   80.57   44.78
+   LTR Triple-Stream
+   2  LLA                    61.76   83.22   50.70
+   3  LCB                    63.56   77.60   55.95
+   4  LWB                    65.66   82.55   56.26
762
+ REFERENCES
763
+ Almond, R., Grooten, M., Juffe Bignoli, D., and Petersen,
764
+ T. (2022).
765
+ WWF (2022) Living Planet Report 2022 -
+ Building a Nature-Positive Society. 1
767
+ Alshammari, S., Wang, Y.-X., Ramanan, D., and Kong, S.
768
+ (2022). Long-tailed recognition via weight balancing.
769
+ In CVPR, pages 6897–6907. 2, 3, 4, 5
770
+ Andrew, W., Gao, J., Mullan, S., Campbell, N., Dowsey,
771
+ A. W., and Burghardt, T. (2021). Visual identification
772
+ of individual holstein-friesian cattle via deep metric
773
+ learning. Computers and Electronics in Agriculture,
774
+ 185:106133. 4
775
+ Bain, M., Nagrani, A., Schofield, D., Berdugo, S., Bessa, J.,
776
+ Owen, J., Hockings, K. J., Matsuzawa, T., Hayashi,
777
+ M., Biro, D., et al. (2021). Automated audiovisual
778
+ behavior recognition in wild primates.
779
+ Science ad-
780
+ vances, 7(46):eabi4883. 2, 3
781
+ Carvalho, S., Wessling, E. G., Abwe, E. E., Almeida-
782
+ Warren, K., Arandjelovic, M., Boesch, C., Danquah,
783
+ E., Diallo, M. S., Hobaiter, C., Hockings, K., et al.
784
+ (2022). Using nonhuman culture in conservation re-
785
+ quires careful and concerted action. Conservation Let-
786
+ ters, 15(2):e12860. 1
787
+ Congdon, J., Hosseini, M., Gading, E., Masousi, M.,
788
+ Franke, M., and MacDonald, S. (2022). The future
789
+ of artificial intelligence in monitoring animal identifi-
790
+ cation, health, and behaviour. 1
791
+ Cui, Y., Jia, M., Lin, T.-Y., Song, Y., and Belongie, S.
792
+ (2019). Class-balanced loss based on effective num-
793
+ ber of samples. In CVPR, pages 9268–9277. 2, 3,
794
+ 4
795
+ Dominoni, D. M., Halfwerk, W., Baird, E., Buxton, R. T.,
796
+ Fernández-Juricic, E., Fristrup, K. M., McKenna,
797
+ M. F., Mennitt, D. J., Perkin, E. K., Seymoure, B. M.,
798
+ et al. (2020). Why conservation biology can benefit
799
+ from sensory ecology. Nature Ecology & Evolution,
800
+ 4(4):502–511. 1
801
+ Du Tran, H. W., Torresani, L., Ray, J., Lecun, Y., and Paluri,
802
+ M. (2017). A closer look at spatiotemporal convolu-
+ tions for action recognition. 4
804
+ Duan, M., Qiu, H., Zhang, Z., and Wu, Y. (2021). Ntu-
805
+
806
+ densepose: A new benchmark for dense pose action
807
+ recognition. In Big Data, pages 3170–3175. IEEE. 3
808
+ Feichtenhofer, C., Fan, H., Malik, J., and He, K. (2019).
809
+ Slowfast networks for video recognition.
810
+ In ICCV,
811
+ pages 6202–6211. 3
812
+ Hayakawa, J. and Dariush, B. (2020).
813
+ Recognition and
814
+ 3d localization of pedestrian actions from monocular
815
+ video. In ITSC, pages 1–7. IEEE. 3
816
+ Hermans, A., Beyer, L., and Leibe, B. (2017). In defense
817
+ of the triplet loss for person re-identification. arXiv
818
+ preprint arXiv:1703.07737. 2, 4
819
+ Hong, J., Cho, B., Hong, Y. W., and Byun, H. (2019).
820
+ Contextual action cues from camera sensor for multi-
821
+ stream action recognition. Sensors, 19(6):1382. 3
822
+ IUCN (2022). Iucn red list of threatened species version
823
+ 2022.1. 1
824
+ Kalfaoglu, M. E., Kalkan, S., and Alatan, A. A. (2020).
825
+ Late temporal modeling in 3d cnn architectures with
826
+ bert for action recognition. In ECCV, pages 731–747.
827
+ Springer. 2
828
+ Kang, B., Xie, S., Rohrbach, M., Yan, Z., Gordo, A., Feng,
829
+ J., and Kalantidis, Y. (2019). Decoupling representa-
830
+ tion and classifier for long-tailed recognition. arXiv
831
+ preprint arXiv:1910.09217. 3
832
+ Karaderi, T., Burghardt, T., Hsiang, A. Y., Ramaer, J., and
833
+ Schmidt, D. N. (2022). Visual microfossil identifica-
834
+ tion via deep metric learning. In ICPRAI, pages 34–
835
+ 46. Springer. 2
836
+ Kay, W., Carreira, J., Simonyan, K., Zhang, B., Hillier, C.,
837
+ Vijayanarasimhan, S., Viola, F., Green, T., Back, T.,
838
+ Natsev, P., et al. (2017). The kinetics human action
839
+ video dataset. arXiv preprint arXiv:1705.06950. 5
840
+ Kühl, H. S. and Burghardt, T. (2013).
841
+ Animal biomet-
842
+ rics: quantifying and detecting phenotypic appear-
843
+ ance. TREE, 28(7):432–441. 1
844
+ Le, V.-T., Tran-Trung, K., and Hoang, V. T. (2022). A com-
845
+ prehensive review of recent deep learning techniques
846
+ for human activity recognition. Computational Intel-
847
+ ligence and Neuroscience, 2022. 3
848
+ Li, Y., Lu, Z., Xiong, X., and Huang, J. (2022). Perf-net:
849
+ Pose empowered rgb-flow net. In WACV, pages 513–
850
+ 522. 3
851
+ Lin, J., Gan, C., and Han, S. (2019). Tsm: Temporal shift
852
+ module for efficient video understanding. In ICCV,
853
+ pages 7083–7093. 2
854
+ Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., and Yu,
855
+ S. X. (2019). Large-scale long-tailed recognition in
856
+ an open world. In CVPR, pages 2537–2546. 3
857
+ Majd, M. and Safabakhsh, R. (2020). Correlational con-
858
+ volutional lstm for human action recognition. Neuro-
859
+ computing, 396:224–229. 2
860
+ Menon, A. K., Jayasumana, S., Rawat, A. S., Jain, H., Veit,
861
+ A., and Kumar, S. (2020). Long-tail learning via logit
862
+ adjustment. arXiv preprint arXiv:2007.07314. 2, 3, 4
863
+ Musgrave, K., Belongie, S., and Lim, S.-N. (2020). Pytorch
864
+ metric learning. 2
865
+ Nishida, T., Kano, T., Goodall, J., McGrew, W. C., and
866
+ Nakamura, M. (1999).
867
+ Ethogram and ethnography
868
+ of mahale chimpanzees.
869
+ Anthropological Science,
870
+ 107(2):141–188. 2
871
+ Pan, Y., Xu, J., Wang, M., Ye, J., Wang, F., Bai, K., and
872
+ Xu, Z. (2019). Compressing recurrent neural networks
873
+ with tensor ring for action recognition. In AAAI, vol-
874
+ ume 33, pages 4683–4690. 2
875
+ Sakib, F. and Burghardt, T. (2020). Visual recognition of
876
+ great ape behaviours in the wild. VAIB. 2, 3, 5
877
+ Sanakoyeu, A., Khalidov, V., McCarthy, M. S., Vedaldi, A.,
878
+ and Neverova, N. (2020). Transferring dense pose to
879
+ proximal animal classes. In CVPR, pages 5233–5242.
880
+ 2, 3, 4
881
+ Shaikh, M. B. and Chai, D. (2021). Rgb-d data-based action
882
+ recognition: A review. Sensors, 21(12):4246. 2
883
+ Sharir, G., Noy, A., and Zelnik-Manor, L. (2021). An image
884
+ is worth 16x16 words, what is a video worth? arXiv
885
+ preprint arXiv:2103.13915. 2
886
+ Simonyan, K. and Zisserman, A. (2014). Two-stream con-
887
+ volutional networks for action recognition in videos.
888
+ NeurIPS, 27. 2, 3
889
+ Tran, D., Wang, H., Torresani, L., and Feiszli, M. (2019).
890
+ Video classification with channel-separated convolu-
891
+ tional networks. In ICCV, pages 5552–5561. 2
892
+ Tuia, D., Kellenberger, B., Beery, S., Costelloe, B. R.,
893
+ Zuffi, S., Risse, B., Mathis, A., Mathis, M. W., van
894
+ Langevelde, F., Burghardt, T., et al. (2022). Perspec-
895
+ tives in machine learning for wildlife conservation.
896
+ Nature communications, 13(1):1–15. 1
897
+ Verma, V., Lamb, A., Beckham, C., Najafi, A., Courville,
898
+ A., Mitliagkas, I., and Bengio, Y. (2018). Manifold
899
+ mixup: learning better representations by interpolat-
900
+ ing hidden states. 3
901
+ Wang, L., Yuan, X., Zong, M., Ma, Y., Ji, W., Liu, M.,
902
+ and Wang, R. (2021). Multi-cue based four-stream 3d
903
+ resnets for video-based action recognition. Informa-
904
+ tion Sciences, 575:654–665. 3
905
+ Wattenberg, M., Viégas, F., and Johnson, I. (2016). How to
+ use t-SNE effectively. Distill. 6
907
+ Zach, C., Pock, T., and Bischof, H. (2007). A duality based
908
+ approach for realtime tv-l 1 optical flow. In DAGM,
909
+ pages 214–223. Springer. 3
910
+ Zamma, K. and Matsusaka, T. (2015).
911
+ Ethograms and
912
+ the diversity of behaviors, page 510–518. Cambridge
913
+ University Press. 2
914
+ Zhang, Y., Li, X., Liu, C., Shuai, B., Zhu, Y., Brattoli, B.,
915
+ Chen, H., Marsic, I., and Tighe, J. (2021a).
916
+ Vidtr:
917
+ Video transformer without convolutions.
918
+ In ICCV,
919
+ pages 13577–13587. 2
920
+ Zhang, Y., Wei, X.-S., Zhou, B., and Wu, J. (2021b). Bag
921
+ of tricks for long-tailed visual recognition with deep
922
+ convolutional neural networks. In AAAI, volume 35,
923
+ pages 3447–3455. 3
924
+ Zhou, B., Andonian, A., Oliva, A., and Torralba, A. (2018).
925
+ Temporal relational reasoning in videos. In ECCV,
926
+ pages 803–818. 2
927
+ Zong, M., Wang, R., Chen, X., Chen, Z., and Gong, Y.
928
+ (2021). Motion saliency based multi-stream multiplier
929
+ resnets for action recognition. Image and Vision Com-
930
+ puting, 107:104108. 3
931
+
-9E0T4oBgHgl3EQfxAH-/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
-dE3T4oBgHgl3EQfrgpm/content/2301.04660v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:98cd0b29886e8fe4a68b0e5a9caf17647d18f34810f2bb47b45b52dad7b96fda
3
+ size 1565316
.gitattributes CHANGED
@@ -5969,3 +5969,65 @@ QNE4T4oBgHgl3EQfkw0x/content/2301.05153v1.pdf filter=lfs diff=lfs merge=lfs -tex
5969
  19E1T4oBgHgl3EQflQTp/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5970
  RdE4T4oBgHgl3EQflQ1W/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5971
  B9AzT4oBgHgl3EQfh_2U/content/2301.01493v1.pdf filter=lfs diff=lfs merge=lfs -text
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
5972
+ -dE3T4oBgHgl3EQfrgpm/content/2301.04660v1.pdf filter=lfs diff=lfs merge=lfs -text
5973
+ LdAyT4oBgHgl3EQfsvlL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5974
+ 5dE1T4oBgHgl3EQfTAM3/content/2301.03072v1.pdf filter=lfs diff=lfs merge=lfs -text
5975
+ 5dE1T4oBgHgl3EQfTAM3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5976
+ M9AyT4oBgHgl3EQfUPcx/content/2301.00120v1.pdf filter=lfs diff=lfs merge=lfs -text
5977
+ btE2T4oBgHgl3EQfaQc9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5978
+ xtAyT4oBgHgl3EQfOvaI/content/2301.00011v1.pdf filter=lfs diff=lfs merge=lfs -text
5979
+ cNFQT4oBgHgl3EQfiDZt/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5980
+ B9E0T4oBgHgl3EQfgAGB/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5981
+ rdFAT4oBgHgl3EQffx02/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5982
+ B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf filter=lfs diff=lfs merge=lfs -text
5983
+ kNFKT4oBgHgl3EQfxS4R/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5984
+ OdFJT4oBgHgl3EQf0y3g/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5985
+ pdE2T4oBgHgl3EQf0Qhs/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5986
+ m9E2T4oBgHgl3EQfJgYM/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5987
+ y9AzT4oBgHgl3EQfePxr/content/2301.01433v1.pdf filter=lfs diff=lfs merge=lfs -text
5988
+ s9E0T4oBgHgl3EQfrwEw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5989
+ _9AzT4oBgHgl3EQfvv1V/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5990
+ ctAyT4oBgHgl3EQfwfma/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5991
+ rdFAT4oBgHgl3EQffx02/content/2301.08583v1.pdf filter=lfs diff=lfs merge=lfs -text
5992
+ hNE2T4oBgHgl3EQfHgYi/content/2301.03668v1.pdf filter=lfs diff=lfs merge=lfs -text
5993
+ ctAyT4oBgHgl3EQfwfma/content/2301.00650v1.pdf filter=lfs diff=lfs merge=lfs -text
5994
+ 9tE5T4oBgHgl3EQfRQ5h/content/2301.05519v1.pdf filter=lfs diff=lfs merge=lfs -text
5995
+ ptFKT4oBgHgl3EQfyy7U/content/2301.11909v1.pdf filter=lfs diff=lfs merge=lfs -text
5996
+ JtAzT4oBgHgl3EQfj_0_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5997
+ kNFKT4oBgHgl3EQfxS4R/content/2301.11902v1.pdf filter=lfs diff=lfs merge=lfs -text
5998
+ sdAzT4oBgHgl3EQfcvxN/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5999
+ WdE0T4oBgHgl3EQf3ALu/content/2301.02721v1.pdf filter=lfs diff=lfs merge=lfs -text
6000
+ UNFLT4oBgHgl3EQfQy8p/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6001
+ AdE0T4oBgHgl3EQfxgLI/content/2301.02648v1.pdf filter=lfs diff=lfs merge=lfs -text
6002
+ JtAzT4oBgHgl3EQfj_0_/content/2301.01524v1.pdf filter=lfs diff=lfs merge=lfs -text
6003
+ y9AzT4oBgHgl3EQfePxr/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6004
+ c9AyT4oBgHgl3EQfXfd-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6005
+ atAyT4oBgHgl3EQfW_e5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6006
+ rdE4T4oBgHgl3EQfVAyt/content/2301.05021v1.pdf filter=lfs diff=lfs merge=lfs -text
6007
+ fdAzT4oBgHgl3EQfafyY/content/2301.01370v1.pdf filter=lfs diff=lfs merge=lfs -text
6008
+ CdE1T4oBgHgl3EQfDwOw/content/2301.02882v1.pdf filter=lfs diff=lfs merge=lfs -text
6009
+ bNE2T4oBgHgl3EQfaAeA/content/2301.03870v1.pdf filter=lfs diff=lfs merge=lfs -text
6010
+ 7NE2T4oBgHgl3EQfkwf2/content/2301.03983v1.pdf filter=lfs diff=lfs merge=lfs -text
6011
+ dNFST4oBgHgl3EQfEjgF/content/2301.13714v1.pdf filter=lfs diff=lfs merge=lfs -text
6012
+ rdE4T4oBgHgl3EQfVAyt/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6013
+ S9E0T4oBgHgl3EQf2AKe/content/2301.02707v1.pdf filter=lfs diff=lfs merge=lfs -text
6014
+ o9FQT4oBgHgl3EQfrjZh/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6015
+ htE2T4oBgHgl3EQfHgbc/content/2301.03670v1.pdf filter=lfs diff=lfs merge=lfs -text
6016
+ AdE1T4oBgHgl3EQfpAU_/content/2301.03326v1.pdf filter=lfs diff=lfs merge=lfs -text
6017
+ edE0T4oBgHgl3EQfogFH/content/2301.02526v1.pdf filter=lfs diff=lfs merge=lfs -text
6018
+ idFKT4oBgHgl3EQfvi5N/content/2301.11895v1.pdf filter=lfs diff=lfs merge=lfs -text
6019
+ c9A0T4oBgHgl3EQfGv_2/content/2301.02053v1.pdf filter=lfs diff=lfs merge=lfs -text
6020
+ ytAzT4oBgHgl3EQf7_7g/content/2301.01899v1.pdf filter=lfs diff=lfs merge=lfs -text
6021
+ b9E4T4oBgHgl3EQfPgyg/content/2301.04974v1.pdf filter=lfs diff=lfs merge=lfs -text
6022
+ TdE2T4oBgHgl3EQftAiy/content/2301.04066v1.pdf filter=lfs diff=lfs merge=lfs -text
6023
+ AdE1T4oBgHgl3EQfpAU_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6024
+ hdE4T4oBgHgl3EQfSAzu/content/2301.04996v1.pdf filter=lfs diff=lfs merge=lfs -text
6025
+ 1dE1T4oBgHgl3EQfRwNN/content/2301.03056v1.pdf filter=lfs diff=lfs merge=lfs -text
6026
+ k9AyT4oBgHgl3EQfyfkg/content/2301.00683v1.pdf filter=lfs diff=lfs merge=lfs -text
6027
+ CdE1T4oBgHgl3EQfDwOw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6028
+ fdAzT4oBgHgl3EQfafyY/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6029
+ htE2T4oBgHgl3EQfHgbc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6030
+ dNFST4oBgHgl3EQfEjgF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6031
+ bNE2T4oBgHgl3EQfaAeA/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6032
+ M9AyT4oBgHgl3EQfUPcx/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6033
+ r9E4T4oBgHgl3EQfVQyb/content/2301.05023v1.pdf filter=lfs diff=lfs merge=lfs -text
1NE2T4oBgHgl3EQf5Ahd/content/tmp_files/2301.04186v1.pdf.txt ADDED
@@ -0,0 +1,495 @@
1
+ Elliptic flow measurement of J/ψ in PHENIX Run14
2
+ Au+Au at √sNN = 200 GeV ∗
3
+ Luis Bichon III (for the PHENIX collaboration,
4
+ https://doi.org/10.5281/zenodo.7430208)
5
+ Department of Physics and Astronomy, Vanderbilt University, Nashville, TN
6
+ 37235 USA
7
+ We obtain the first measurement of J/ψ elliptic flow at RHIC energies
8
+ in forward rapidity using data from the PHENIX detector and applying
9
+ an event plane method. The analysis uses 19 billion events from
10
+ the PHENIX experiment’s Run 14 Au + Au dataset at √sNN = 200 GeV.
11
+ PHENIX has measured a J/ψ v2 in a centrality range of 10 − 60% that is
12
+ consistent with zero. Taken together with results from the LHC, the measurement
+ of v2, which is consistent with zero, may indicate that J/ψ production
+ by coalescence is not significant at forward rapidity at RHIC energy.
15
+ 1. Introduction
16
+ The QGP has been found to exhibit a nearly perfect fluid behavior [1].
17
+ This behavior manifests itself as strong correlations between particles pro-
18
+ duced in nuclear collisions. Presently, the detailed interactions of the heavy
19
+ quarks in the QGP medium are under investigation and, because heavy fla-
+ vor quarks have relatively large masses, they may not be thermalized
21
+ and flow with the medium.
22
+ The production of J/ψ in p+p collisions is
23
+ theoretically well understood because they are produced in hard scattering
24
+ processes. This feature in addition to their production in hard scattering
25
+ events in the initial stages of the collision make them ideal probes for testing
26
+ the properties of the QGP medium. However, in nucleus+nucleus collisions
27
+ some of the produced J/ψ mesons may be dissolved by the QGP, which may
28
+ create anisotropies in the observed J/ψ azimuthal distributions due to the
29
+ different path length in the medium. Additionally, a similar signal may be
30
+ created if the J/ψ thermalizes inside the medium and follows the pressure
31
+ gradients as lighter particles do, or the J/ψ may dissociate, and the charm
32
+ ∗ Presented at the 29th International Conference on Ultrarelativistic Nucleus-Nucleus
33
+ Collisions (Quark Matter 2022)
34
+ (1)
35
+ arXiv:2301.04186v1 [nucl-ex] 10 Jan 2023
36
+
37
+ 2
38
+ QM˙Proceedings˙Bichon
39
+ printed on January 12, 2023
40
+ quarks could equilibrate which could lead to J/ψ regeneration. We present
41
+ a preliminary result for J/ψ v2 using the PHENIX Run14 Au+Au dataset
42
+ at √sNN = 200 GeV.
43
+ 2. Data Analysis & Methodology
44
+ 2.1. Dataset and Detectors
45
+ In this analysis, we use the Run 14 Au+Au Muon Arm dataset at
46
+ √sNN = 200 GeV containing 19 billion events. The dimuon decay channel
47
+ is used to reconstruct candidate J/ψ mesons. The PHENIX experiment has
48
+ a unique coverage at forward rapidity with muon identification. This in ad-
49
+ dition to the large dataset of Au+Au collisions collected in 2014 allows for
50
+ a statistically improved measurement of J/ψ elliptic flow at RHIC energies.
51
+ The key detector in this analysis is the Forward Silicon Vertex Detector
52
+ (FVTX). With the FVTX, an increase in precision vertexing capabilities
53
+ was added to the muon spectrometers, enabling the rejection of muons from
54
+ the decay of relatively long-lived particles, the rejection of muons from the
55
+ decays of relatively long-lived particles, and an additional way of determin-
56
+ ing the event plane [2].
57
+ 2.2. Combinatorial Background Subtraction
58
+ To obtain a pure signal for the J/ψ from dimuon mass distributions we
59
+ employ event-mixing as the standard method of removing the background
60
+ dimuons.
61
+ For this event-mixing method, the background is constructed
62
+ from dimuon pairs of opposite sign, but the single muons come from differ-
63
+ ent events. Mixed event dimuon pairs are only formed if two events have a
64
+ centrality closer than 5%, a Z vertex closer than 0.75 cm, and an event plane
65
+ angle closer than π/20 rad.
66
+ Using events instead of individual dimuons
67
+ allows us to increase the likelihood that we are using combinatorial back-
+ ground dimuons. A normalization factor must be applied to the background,
+ which can be obtained from the ratio of like-sign pairs from the same
70
+ event to like-sign pairs from mixed events. The signal is then obtained by
71
+ the subtraction of the normalized background from the foreground.
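The mixing cuts and subtraction described above can be sketched as follows. The event fields and histogram values are illustrative assumptions; only the cut values and the like-sign normalization ratio come from the text.

```python
import numpy as np

def can_mix(ev_a, ev_b):
    """Two events are mixed only if they are close in centrality (< 5%),
    z-vertex (< 0.75 cm) and event-plane angle (< pi/20 rad)."""
    return (abs(ev_a["cent"] - ev_b["cent"]) < 5.0
            and abs(ev_a["zvtx"] - ev_b["zvtx"]) < 0.75
            and abs(ev_a["psi"] - ev_b["psi"]) < np.pi / 20)

def subtract_background(fg_hist, mixed_hist, same_like, mixed_like):
    """Signal = foreground - norm * mixed-event background, where the
    normalisation is the ratio of same-event to mixed-event like-sign pairs."""
    norm = same_like / mixed_like
    return fg_hist - norm * mixed_hist

# Hypothetical events and mass-histogram bins:
ev1 = {"cent": 30.0, "zvtx": 1.0, "psi": 0.10}
ev2 = {"cent": 32.0, "zvtx": 1.5, "psi": 0.05}
assert can_mix(ev1, ev2)

sig = subtract_background(np.array([100.0, 400.0, 120.0]),   # foreground
                          np.array([90.0, 300.0, 110.0]),    # mixed background
                          same_like=50.0, mixed_like=100.0)
```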
72
+ 2.3. Fitting the Dimuon Mass Distribution
73
+ In the fitting of the mass distributions, we assume the shape of the
74
+ J/ψ signal to be a Crystal Ball function, and given the statistical precision
75
+ of the dataset, we also apply the same shape to the Ψ(2S) to avoid their
76
+ inclusion in the higher mass J/ψ region. The parameters of the Crystal Ball
77
+ function are obtained using J/ψ embedded Monte Carlo simulation data.
78
+ We produce simulated mass distributions for low/high pT and South/North
79
+
80
83
+ arm rapidities, fitting the distributions allowing for the function to have
84
+ free (α, n, ¯x, and σ) parameters. The J/ψ count for each distribution is
85
+ obtained by the integral of the J/ψ crystal ball function in the fit (see Figure
86
+ 1).
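For reference, the Crystal Ball line shape assumed for the J/ψ (and ψ(2S)) peaks can be written down directly. The parameter values below are taken from a fitted South-arm panel of Figure 1; the implementation itself is a generic sketch, not the analysis code.

```python
import numpy as np

def crystal_ball(x, xbar, sigma, alpha, n, amp=1.0):
    """Crystal Ball line shape: a Gaussian core with a power-law low-side
    tail, continuous at the matching point t = -alpha."""
    t = (x - xbar) / sigma
    a = (n / abs(alpha)) ** n * np.exp(-alpha ** 2 / 2)
    b = n / abs(alpha) - abs(alpha)
    core = np.exp(-t ** 2 / 2)
    tail = a * (b - t) ** (-n)
    return amp * np.where(t > -alpha, core, tail)

# At the fitted peak position the shape reduces to the Gaussian maximum:
x = np.array([3.125])  # GeV/c^2, South-arm peak position from the fit
val = crystal_ball(x, xbar=3.125, sigma=0.135, alpha=1.19, n=3.94)[0]
assert np.isclose(val, 1.0)
```

In the analysis the J/ψ count is then the integral of this fitted shape over the peak region.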
87
+ Fig. 1. Mass distributions using mixed-event subtraction for the unweighted “stan-
88
+ dard” set. These are binned by pT in each column, and rapidity+∆φ angle for
89
+ each row. The green/dashed curve is a Crystal Ball fitted to the J/ψ peak, the
90
+ blue/dashed-dot curve is a Crystal Ball fitted to the ψ(2S) peak, the red/dotted
91
+ curve is an exponential fitted to the remaining background after subtraction, and
92
+ the black/solid curve is the total fit.
93
+
94
+ [Figure 1 panels: dimuon invariant-mass distributions for the South and North
+ arms in four pT bins (0–1, 1–2, 2–3 and 3–5 GeV/c) and two ∆φ selections, each
+ annotated with the fitted peak position (x̄ ≈ 3.12–3.16 GeV/c²), width
+ (σ ≈ 0.14–0.16 GeV/c²), tail parameters (α, n) and the extracted J/ψ yields;
+ the rendered plot content is not recoverable as text.]
371
+ 2.4. Event Plane Method and Measuring v2
372
+ We primarily use the In/Out ratio method, an event plane method [3] that
+ measures v2 from the J/ψ counts in bins of ∆φ relative to the event plane.
+ The method splits the ∆φ distribution into two bins, one in plane with the
+ event plane and the other out of plane, and extracts v2 from the difference
+ between them. If the J/ψ shows no preference for either plane, the measured
+ v2 is consistent with zero.
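+ In this two-bin form the method reduces to a closed-form expression. A minimal sketch, assuming 90°-wide in-plane and out-of-plane bins and a simple event-plane resolution correction (the exact bin edges and resolution treatment are assumptions, not given in the text):

```python
import math

def v2_in_out(n_in, n_out, resolution=1.0):
    """v2 from J/psi counts in two Delta-phi bins.

    For dN/dDphi ~ 1 + 2 v2 cos(2 Dphi), integrating over 90-degree-wide
    in-plane and out-of-plane bins gives
        (N_in - N_out) / (N_in + N_out) = (4 / pi) * v2_obs,
    so v2_obs = (pi / 4) * (N_in - N_out) / (N_in + N_out).
    The observed value is then corrected by the event-plane resolution.
    """
    v2_obs = (math.pi / 4.0) * (n_in - n_out) / (n_in + n_out)
    return v2_obs / resolution

# Equal counts in both bins -> no in/out preference -> v2 = 0.
print(v2_in_out(1000, 1000))  # 0.0
```

With `n_in = n_out` the method returns zero, matching the statement above that no preference for either plane corresponds to a v2 around zero.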
378
+ 2.5. Systematic Uncertainties
379
+ The systematic uncertainties are determined by varying several aspects of
+ the analysis. So far, we have changed the primary detector of the analysis
+ from the FVTX to the Central Arm Spectrometers (CNT), which cover a
+ different pseudorapidity range. We have also used a different method for the
+ combinatorial background subtraction, the like-sign method, which builds the
+ background from dimuon pairs of the same sign (µ+µ+ and µ−µ−) taken from
+ the same event. The uncertainty in the normalization factor of the
+ event-mixing method is incorporated as well. The last source comes from the
+ mass fitting of the dimuon distribution, where the shape of the continuum
+ was assumed to be an exponential function; the uncertainty of this
+ assumption is explored by assuming no continuum contribution in the J/ψ
+ mass region.
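+ One common way to bookkeep variations like these, sketched here purely as an assumption since the text does not specify the combination rule, is to take the shift in v2 under each variation and sum the shifts in quadrature, treating the sources as uncorrelated (all numbers below are placeholders, not measured values):

```python
import math

# Placeholder v2 values: a baseline and one value per variation described
# above (CNT event plane, like-sign background, normalization, mass fit).
baseline_v2 = 0.010
varied_v2 = {
    "CNT event plane": 0.014,
    "like-sign background": 0.008,
    "mixing normalization": 0.011,
    "no-continuum mass fit": 0.012,
}

def quadrature_systematic(baseline, varied):
    """Combine per-source shifts |v2_varied - v2_baseline| in quadrature."""
    return math.sqrt(sum((v - baseline) ** 2 for v in varied.values()))

sigma_syst = quadrature_systematic(baseline_v2, varied_v2)
print(f"sigma_syst = {sigma_syst:.4f}")  # sigma_syst = 0.0050
```

A more conservative choice, closer in spirit to the treatment described in Sec. 3, would sum the shifts linearly rather than in quadrature.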
393
+ 3. Results
394
+ Figure 2 shows the pT-dependent J/ψ v2. The measurement from this
+ analysis, PHENIX Run 14 at forward rapidity in the 10–60% centrality range,
+ is shown in red. The STAR measurement at mid-rapidity in the 10–40%
+ centrality range is shown in black, and the ALICE result at forward
+ rapidity in the 20–40% centrality range is shown in blue. Boxes surrounding
+ the data points represent systematic uncertainties.
401
+ PHENIX observes a larger suppression of the J/ψ yield at forward rapidity
+ than at mid-rapidity. This is contrary to expectations, because the effects
+ that dissolve the J/ψ have been determined to be stronger at mid-rapidity
+ [4]. To understand this observation we begin with the production of c¯c
+ pairs. At RHIC, the majority of c¯c pairs per event in central collisions
+ are produced at mid-rapidity. At LHC energies, where many more c¯c pairs
+ per event are produced in central collisions, less suppression is observed
+ [5]. To explain this behavior, theoretical models require a contribution of
+ coalescence via a recombination mechanism between charm and anticharm
+ quarks [6]. It was found that the strength of this coalescence
411
415
+ effect increases with the initial number of produced c¯c pairs relative to
+ the total number of quarks, and therefore with the collision energy.
417
+ At LHC energies, a nonzero v2 is observed; this is in line with J/ψ
+ formed by coalescence in the QGP medium carrying the azimuthal anisotropy
+ of the system [7]. At RHIC energies, STAR has measured a v2 that is
+ consistent with zero, but the limited statistics leave the result
+ inconclusive [8]. If coalescence is the dominant mechanism for a nonzero
+ J/ψ v2, it should follow that systems in which fewer c¯c pairs are formed
+ have a smaller azimuthal anisotropy.
424
+ Fig. 2. pT-dependent J/ψ v2. The PHENIX result (light gray/red circles)
+ is compared to STAR [8] (black stars) and ALICE [7] (gray/blue squares).
426
+ From the figure we can see the clear nonzero v2 measured by ALICE.
427
+ Although the ALICE measurement is at a much higher energy, we know
428
+
429
+ [Figure 2: J/ψ v2 vs pT [GeV/c]. Legend: Au+Au → J/ψ + X, √sNN = 200 GeV:
+ PHENIX Run14, 10–60%, 1.2 < |y| < 2.2; STAR, 10–40%, |y| < 1 (PRL 111,
+ 052301 (2013)); Pb+Pb → J/ψ + X, √sNN = 5.02 TeV: ALICE, 20–40%,
+ 2.5 < |y| < 4.4 (JHEP 10 (2020) 141). PHENIX preliminary.]
455
+ that v2 of the J/ψ does not scale simply with energy, so the clearly
+ nonzero ALICE result makes for a meaningful comparison with our
+ measurement, which shows a v2 consistent with zero across all pT bins. The
+ systematic uncertainties were conservatively
459
+ estimated, not taking into account cancellations or correlations of uncer-
460
+ tainties from different sources. Additional data from Run 16 of RHIC will
461
+ be included in the final results, and we expect that both statistical and
462
+ systematic uncertainties will be significantly reduced.
463
+ 4. Conclusion and Outlook
464
+ We have presented the PHENIX Run 14 pT-dependent J/ψ v2 at forward
+ rapidity at √sNN = 200 GeV. PHENIX has measured a J/ψ v2 that is
+ consistent with zero. The ALICE result, with its clearly nonzero v2, is
+ distinctly different from our measurement, while the forward and
+ mid-rapidity results at RHIC are consistent with each other, although the
+ uncertainties are still large. In the future, we will incorporate Run 16 data
470
+ in our measurement, essentially doubling the current dataset and reducing
471
+ statistical uncertainties accordingly. We also plan to study open heavy flavor
472
+ v2 to obtain a more complete understanding of the heavy flavor dynamics
473
+ at RHIC.
474
+ REFERENCES
+ [1] Ulrich Heinz. The strongly coupled quark–gluon plasma created at RHIC.
+ Journal of Physics A: Mathematical and Theoretical, 42(21):214003, May 2009.
+ [2] C. Aidala et al. The PHENIX forward silicon vertex detector. Nuclear
+ Instruments and Methods in Physics Research Section A: Accelerators,
+ Spectrometers, Detectors and Associated Equipment, 755:44–61, Aug 2014.
+ [3] A. M. Poskanzer and S. A. Voloshin. Methods for analyzing anisotropic
+ flow in relativistic nuclear collisions. Physical Review C, 58(3):1671–1678,
+ Sep 1998.
+ [4] A. Adare et al. J/ψ suppression at forward rapidity in Au+Au collisions
+ at √sNN = 200 GeV. Physical Review C, 84:054912, Nov 2011.
+ [5] Anton Andronic, Peter Braun-Munzinger, Krzysztof Redlich, and Johanna
+ Stachel. Decoding the phase structure of QCD via particle production at
+ high energy. Nature, 561(7723):321–330, Sep 2018.
+ [6] H. Pereira Da Costa et al. Charmonium production in Pb–Pb collisions
+ with ALICE at the LHC. Nuclear Physics A, 956:705–708, Dec 2016.
+ [7] S. Acharya et al. J/ψ elliptic and triangular flow in Pb-Pb collisions at
+ √sNN = 5.02 TeV. Journal of High Energy Physics, 2020(10), Oct 2020.
+ [8] L. Adamczyk et al. Measurement of J/ψ Azimuthal Anisotropy in Au+Au
+ Collisions at √sNN = 200 GeV. Physical Review Letters, 111(5), Aug 2013.
495
+
142
+ page_content='5 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
143
+ page_content='5 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
144
+ page_content='5 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
145
+ page_content='5 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
146
+ page_content='5 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
147
+ page_content='5 Mass [GeV/c2] Mass [GeV/c2 Mass [GeV/c2] Mass [GeV/c2] 160 150 100 40 π-2 X= 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
148
+ page_content='147 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
149
+ page_content='006 GeV/c2 x=3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
150
+ page_content='147 ±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
151
+ page_content='006 GeV/c2 x=3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
152
+ page_content='159±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
153
+ page_content='006 GeV/c2 x= 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
154
+ page_content='159 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
155
+ page_content='006 GeV/c2 VI 140 = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
156
+ page_content='160 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
157
+ page_content='002 GeV/c2 = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
158
+ page_content='160 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
159
+ page_content='002 GeV/c2 = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
160
+ page_content='146 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
161
+ page_content='006 GeV/c2 g = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
162
+ page_content='146 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
163
+ page_content='006 GeV/c2 α = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
164
+ page_content='63 α = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
165
+ page_content='63 80 α = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
166
+ page_content='17 30 α = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
167
+ page_content='17 120 n = 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
168
+ page_content='60 100 n = 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
169
+ page_content='60 n = 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
170
+ page_content='55 n = 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
171
+ page_content='55 > 100 60 "j/ = 893±60 Nj/=474±35 π-4 20 PH米ENIX 50 PH米ENIX PH米ENIX PH米ENIX North Arm - 60E preliminary preliminary preliminary 10- preliminary 40 20 20E 50 20— 2 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
172
+ page_content='5 3 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
173
+ page_content='5 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
174
+ page_content='5 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
175
+ page_content='5 3 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
176
+ page_content='5 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
177
+ page_content='5 2 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
178
+ page_content='5 3 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
179
+ page_content='5 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
180
+ page_content='5 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
181
+ page_content='5 3 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
182
+ page_content='5 Mass [GeV/c2] Mass [GeV/c2] Mass [GeV/c2]4 QM˙Proceedings˙Bichon printed on January 12, 2023 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
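The fitted quantities quoted in the panels above (peak mass, width σ, and tail parameters α and n) are characteristic of a Crystal Ball line shape, which is commonly used for quarkonium mass peaks. The source text does not name the fit function, so the following is a hedged sketch only:

```python
import math

def crystal_ball(x, mu, sigma, alpha, n):
    """Unnormalized Crystal Ball shape: Gaussian core with a
    power-law tail below mu - alpha*sigma (alpha, n > 0 assumed)."""
    t = (x - mu) / sigma
    if t > -alpha:
        # Gaussian core
        return math.exp(-0.5 * t * t)
    # Power-law tail, matched in value to the core at t = -alpha
    a = (n / alpha) ** n * math.exp(-0.5 * alpha * alpha)
    b = n / alpha - alpha
    return a * (b - t) ** (-n)
```

With the quoted values (mu = 3.147 GeV/c2, sigma = 0.160 GeV/c2, alpha = 0.63, n = 7.60), the function peaks at the J/ψ mass and falls off as a power law on the low-mass side, where radiative losses distort the peak.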
183
+ page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
184
+ page_content=' Event Plane Method and Measuring v2 We are primarily using the In/Out ratio method, which is an event plane method [3] that uses the counts of the J/ψ in bins of ∆φ to measure v2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
185
+ page_content=' The In/Out ratio method splits the distributions into 2 bins of ∆φ one in plane with the event plane and the other out of plane.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
186
+ page_content=' We measure v2 using this method by looking at the difference between these bins.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
187
+ page_content=' If there is no preference in either plane, we would observe a flow around zero.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
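The In/Out ratio idea described above can be sketched with the standard two-bin event-plane formula of Poskanzer and Voloshin [3]. The event-plane resolution correction is left as a free parameter here, since its value is not given in the text:

```python
import math

def v2_in_out(n_in, n_out, resolution=1.0):
    """Elliptic flow v2 from J/psi counts in two Delta-phi bins:
    in plane (n_in) and out of plane (n_out), divided by the
    event-plane resolution correction."""
    v2_observed = (math.pi / 4.0) * (n_in - n_out) / (n_in + n_out)
    return v2_observed / resolution
```

Equal in-plane and out-of-plane counts give v2 = 0, matching the statement above that no preference in either plane corresponds to a flow around zero.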
188
+ page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
189
+ page_content='5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
190
+ page_content=' Systematic Uncertainties The systematic uncertainties are determined by changing various aspects of the analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
191
+ page_content=' As of this time, we have employed changing the primary detector of the analysis from the FVTX to the Central Arm Spectrometers (CNT), which covers a different pseudorapidity range.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
192
+ page_content=' We have used a different method for our combinatorial background subtraction, the like-sign method, which constructs the background with dimuon pairs of the same sign (µ+µ+ and µ−µ−) that come from the same event.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
193
+ page_content=' The uncertainty in the normalization factor in the event-mixing method was also incorporated into the systematic uncertainty.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
194
+ page_content=' The last systematic uncertainty we consider comes from the mass fitting of the dimuon distribution, where the shape of the continuum distribution was assumed to be an exponential function, and the uncertainty in this assumption can be explored by assuming no continuum contribution in the J/ψ mass region.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
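The like-sign subtraction described in this passage can be sketched as follows. The geometric-mean normalization 2*sqrt(N++ * N--) is the conventional estimator and is an assumption here, since the text does not state the exact normalization used in the analysis:

```python
import math

def like_sign_background(n_plus_plus, n_minus_minus):
    """Combinatorial-background estimate from same-event like-sign
    dimuon pairs (mu+mu+ and mu-mu-), via the geometric mean."""
    return 2.0 * math.sqrt(n_plus_plus * n_minus_minus)
```

Subtracting this estimate from the opposite-sign dimuon spectrum leaves the correlated signal plus physical continuum, which is then fitted as described.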
195
+ page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
196
+ page_content=' Results Figure 2 shows the pT -dependent J/ψ v2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
197
+ page_content=' The measurement in this analysis for PHENIX Run 14 at forward rapidity in a centrality range of 10-60% is shown in red.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
198
+ page_content=' The measurement made by STAR at mid-rapidity and in a centrality range of 10-40% is shown in black.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
199
+ page_content=' The ALICE result at forward rapidity in a centrality range of 20-40% is shown in blue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
200
+ page_content=' Boxes surrounding the data points represent systematic uncertainties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
201
+ page_content=' PHENIX observes a larger suppression of J/ψ yield in forward rapidity when compared to mid-rapidity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
202
+ page_content=' This is contrary to expectations, because effects that dissolve the J/ψ have been determined to be stronger at mid-rapidity [4].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
203
+ page_content=' To understand this observation we begin by looking into the production of c¯c pairs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
204
+ page_content=' The majority of c¯c pairs per event in central collisions at RHIC are produced at mid-rapidity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
205
+ page_content=' At LHC energies, less suppression is observed, where many more c¯c pairs per event in central collisions are produced [5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
206
+ page_content=' To explain this behavior, theoretical models require a contribution of coalescence via a recombination mechanism between charm and anticharm quarks [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
207
+ page_content=' It was found that the strength of this coalescence effect increases with the initial number of produced c¯c pairs relative to the total number of quarks, increasing with the collision energy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
208
+ page_content=' At LHC energies, a nonzero v2 is observed; this is in line with J/ψ formed by coalescence in the QGP medium, carrying the azimuthal anisotropy of the system [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
209
+ page_content=' At RHIC energies, STAR has measured v2 that is consistent with zero, but due to limited statistics remains inconclusive [8].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
210
+ page_content=' With coalescence being the dominant mechanism for nonzero J/ψ v2 it should follow that systems where fewer c¯c pairs are formed should have a smaller azimuthal anisotropy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
211
+ page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
212
+ page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
213
+ page_content=' Plot of pT dependent J/ψ v2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
214
+ page_content=' The PHENIX result in light gray/red/circle is compared to STAR [8] in black/star and ALICE [7] in gray/blue/square.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
215
+ page_content=' From the figure we can see the clear nonzero v2 measured by ALICE.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
216
+ page_content='[Figure 2 legend residue: Au+Au → J/ψ + X at √sNN = 200 GeV, PHENIX Run14, 10-60%, 1.2 < |y| < 2.2; STAR, 10-40%, |y| < 1 (PRL 111, 052301 (2013)); Pb+Pb → J/ψ + X at √sNN = 5.02 TeV, ALICE, 20-40%, 2.5 < |y| < 4.4 (JHEP 10 (2020) 141); PHENIX preliminary; x axis pT [GeV/c].]' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
+ page_content=' Although the ALICE measurement is at a much higher energy, we know v2 does not scale with energy for J/ψ, so it makes for a good comparison that the ALICE result, which is clearly nonzero, is different from our measurement.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
235
+ page_content=' In our measurement, we see a v2 that is clearly consistent with zero across all pT bins.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
236
+ page_content=' The systematic uncertainties were conservatively estimated, not taking into account cancellations or correlations of uncer- tainties from different sources.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
237
+ page_content=' Additional data from Run 16 of RHIC will be included in the final results, and we expect that both statistical and systematic uncertainties will be significantly reduced.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
238
+ page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
239
+ page_content=' Conclusion and Outlook We have presented PHENIX Run 14 pT -dependent J/ψ v2 at forward rapidity at √sNN = 200 GeV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
240
+ page_content=' PHENIX has measured a J/ψ v2 that is consistent with zero.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
241
+ page_content=' We have determined that the ALICE result, where there is clearly nonzero v2, is distinctly different from our measurement, and that forward and mid-rapidity results at RHIC are consistent, but the uncertainties are still large.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
242
+ page_content=' In the future, we will incorporate Run 16 data in our measurement, essentially doubling the current dataset and reducing statistical uncertainties accordingly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
243
+ page_content=' We also plan to study open heavy flavor v2 to obtain a more complete understanding of the heavy flavor dynamics at RHIC.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
244
+ page_content=' REFERENCES [1] Ulrich Heinz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
245
+ page_content=' The strongly coupled quark–gluon plasma created at RHIC.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
246
+ page_content=' Journal of Physics A: Mathematical and Theoretical, 42(21):214003, May 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
247
+ page_content=' [2] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
248
+ page_content=' Aidala et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
249
+ page_content=' The PHENIX forward silicon vertex detector.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
250
+ page_content=' Nuclear Instru- ments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 755:44–61, Aug 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
251
+ page_content=' [3] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
252
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
253
+ page_content=' Poskanzer and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
254
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
255
+ page_content=' Voloshin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
256
+ page_content=' Methods for analyzing anisotropic flow in relativistic nuclear collisions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
257
+ page_content=' Physical Review C, 58(3):1671–1678, Sep 1998.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
258
+ page_content=' [4] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
259
+ page_content=' Adare et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
260
+ page_content=' J/ψ suppression at forward rapidity in Au+Au collisions at √sNN = 200 GeV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
261
+ page_content=' Physical Review C, 84:054912, Nov 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
262
+ page_content=' [5] Anton Andronic, Peter Braun-Munzinger, Krzysztof Redlich, and Johanna Stachel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
263
+ page_content=' Decoding the phase structure of QCD via particle production at high energy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
264
+ page_content=' Nature, 561(7723):321–330, Sep 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
265
+ page_content=' [6] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
266
+ page_content=' Pereira Da Costa et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
267
+ page_content=' Charmonium production in Pb–Pb collisions with ALICE at the LHC.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
268
+ page_content=' Nuclear Physics A, 956:705–708, Dec 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
269
+ page_content=' [7] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
270
+ page_content=' Acharya et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
271
+ page_content=' J/ψ elliptic and triangular flow in Pb-Pb collisions at √sNN = 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
272
+ page_content='02 TeV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
273
+ page_content=' Journal of High Energy Physics, 2020(10), Oct 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
274
+ page_content=' [8] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
275
+ page_content=' Adamczyk et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
276
+ page_content=' Measurement of J/ψ Azimuthal Anisotropy in Au+Au Collisions at √sNN = 200 GeV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
277
+ page_content=' Physical Review Letters, 111(5), Aug 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE2T4oBgHgl3EQf5Ahd/content/2301.04186v1.pdf'}
1NE4T4oBgHgl3EQfzQ1E/content/tmp_files/2301.05272v1.pdf.txt ADDED
@@ -0,0 +1,605 @@
1
+ Inaccessible Neural Language Models Could
2
+ Reinvigorate Linguistic Nativism
3
+ Patrick Perrine
4
+ California Polytechnic State University
5
+ 1 Grand Ave, San Luis Obispo, CA, 93410
6
7
+ Abstract
8
+ Large Language Models (LLMs) have been mak-
9
+ ing big waves in the machine learning community
10
+ within the past few years. The impressive scalabil-
11
+ ity of LLMs due to the advent of deep learning can
12
+ be seen as a continuation of empiricist linguistic
13
+ methods, as opposed to rule-based linguistic meth-
14
+ ods that are grounded in a nativist perspective. Cur-
15
+ rent LLMs are generally inaccessible to resource-
16
+ constrained researchers, due to a variety of factors
17
+ including closed source code. This work argues
18
+ that this lack of accessibility could instill a nativist
19
+ bias in researchers new to computational linguis-
20
+ tics, given that new researchers may only have rule-
21
+ based, nativist approaches to study to produce new
22
+ work. Also, given that there are numerous critics
23
+ of deep learning claiming that LLMs and related
24
+ methods may soon lose their relevancy, we spec-
25
+ ulate that such an event could trigger a new wave
26
+ of nativism in the language processing community.
27
+ To prevent such a dramatic shift and placing favor
28
+ in hybrid methods of rules and deep learning, we
29
+ call upon researchers to open source their LLM
30
+ code wherever possible to allow both empircist and
31
+ hybrid approaches to remain accessible.
32
+ 1
33
+ Introduction
34
Large Language Models (LLMs) have been a popular topic of research among the academic community (Srivastava et al., 2022). The promise of a near-general purpose neural model for a variety of language processing tasks is indeed an attractive one (Xu et al., 2022). Deep learning has made significant developments in language tasks such as conversational language understanding (Tur et al., 2018), spoken/text-based dialog systems (Celikyilmaz et al., 2018), and natural language generation from images (He and Deng, 2018). Large Language Models can be viewed as the natural progression away from the rigid rule-based systems that we have had since the 1950's (Chiticariu et al., 2013), continuing the empiricist mentality of statistical natural language processing without the potentially costly and context-specific activity of feature engineering (Collobert et al., 2011). However, with large corporations touting their ever-growing, state-of-the-art models under closed-source code and payment walls, these large language models are becoming less accessible. Some organizations have acknowledged the potential harms that deep learning models could cause by establishing ethical frameworks (Ashurst et al., 2022; Weidinger et al., 2022), but there are still growing concerns regarding accessibility and the resulting false/irreproducible science (Kapoor and Narayanan, 2022).

This criticism of empiricist methods is not new in linguistics-based science: Chomsky's Poverty of the Stimulus Argument (Stich, 1978) has a rich history of discussion and debate amongst linguists, scientists, and philosophers (Laurence and Margolis, 2001). In this work, we will briefly introduce this debate over language learning between nativists and empiricists, relate these topics to research in natural language processing, and discuss how the current state of this research is reinforcing an imbalance between the two perspectives. We intend to deliver a neutral ground of analysis, as we agree that a hybrid approach to NLP research can lead to strong results. The current bias towards the highly popular, but inaccessible, empiricist methods utilizing LLMs could lead to a new wave of nativism in natural language processing work, following a large backlash against such empirical methods.
2 Background

We now provide a holistic background on the linguistic and scientific developments that encompass this issue.
arXiv:2301.05272v1 [cs.CL] 12 Jan 2023
2.1 The Three Waves of Modern NLP

We will give a brief background on the three main waves of modern natural language processing research: the rule-based theories popularized by Noam Chomsky (Chomsky, 1965), the statistics-based empiricist experiments (Jelinek, 1976), and today's popular methodology of deep learning for natural language processing (Collobert et al., 2011). The first wave is considered to be under a nativist perspective (Laurence and Margolis, 2001), whereas the latter waves are in support of an empiricist lens (Frank et al., 2019).
2.1.1 Rule-based NLP

The concept of viewing language as a static system of rules to determine interpretation has been present since as early as the 1830's (Humboldt, 1836). Noam Chomsky popularized this perspective in the domain of linguistics as a challenge to an existing overbearance of empiricist methods (Chomsky, 1956b; Laurence and Margolis, 2001).

This rule-based approach to linguistics dominated the field for decades, following Chomsky's multiple works emphasizing and reinforcing this doctrine (Chomsky, 1956a, 1957, 1963, 1965; Chomsky and Halle, 1968). Being based in propositional logic and a fixed content, rule-based methods are arguably rather accessible to researchers with limited resources. These methods continued to be prevalent in the field until the 1970's, when statistical methods were proven to be very useful.
2.1.2 Statistical NLP

The roots of statistical language processing stem from Andrey Markov's efforts in computing bigram and trigram probabilities (Jurafsky and Martin, 2022) of vowel/consonant predictions, using a novel as a corpus, in 1913 (Markov, 2006). This n-gram approach was later applied to predicting sequences of English words (Shannon, 1948). This popularized the notion of using Markov chains in a variety of applications within and outside of linguistics.
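Markov's bigram idea, estimating the probability of the next symbol from counts of adjacent pairs, can be sketched in a few lines. The following is an illustrative maximum-likelihood estimate only; the toy corpus, the function name, and the unsmoothed counting are our own assumptions for the sketch, not the smoothed models used in the later speech recognition work cited below:

```python
from collections import Counter

def bigram_probs(tokens):
    # Count each word in a context position and each adjacent pair, then
    # form the maximum-likelihood estimate P(w2 | w1) = c(w1, w2) / c(w1).
    context_counts = Counter(tokens[:-1])
    pair_counts = Counter(zip(tokens[:-1], tokens[1:]))
    return {(w1, w2): n / context_counts[w1]
            for (w1, w2), n in pair_counts.items()}

corpus = "the cat sat on the mat the cat ran".split()
probs = bigram_probs(corpus)
print(probs[("the", "cat")])  # "cat" follows 2 of the 3 context occurrences of "the"
```

An n-gram model generalizes this to longer contexts, and sampling repeatedly from these conditional probabilities yields the finite-state Markov process that Chomsky objected to.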
Chomsky specifically challenged this use of finite-state Markov processes, the processes that formed n-gram based approaches, as useless for serving as a comprehensive cognitive model of grammatical knowledge in humans (Chomsky, 1956b, 1957; Miller and Chomsky, 1963). This hindered the progress of probabilistic approaches in linguistics.

Over a decade later, statistical language processing was revitalized due in part to a series of successful experiments using n-gram models for speech recognition (Baker, 1975a,b; Jelinek, 1976; Bahl et al., 1983; Jelinek et al., 1990). These empiricist-based experiments showed that Chomsky's nativist theories do not extend to recognizing speech in real time as previously proposed (Chomsky and Halle, 1968).

This marked a shift towards looking at language processing through an empirical lens, where a hypothesis test primarily guides the experimentation process rather than theoretical insights (Manning and Schutze, 1999). After the successful statistical speech recognition experiments of the mid 1970's, statistical NLP reigned as the dominant approach for decades.
2.1.3 ML-based NLP

Researchers soon began to use shallow neural networks to reinforce statistical methodologies in NLP. In the late 2000's, the advent of deeper neural networks for NLP began to stir when scalable, hierarchical language models (Morin and Bengio, 2005; Mnih and Hinton, 2008) and increased computing power became available for use by researchers.

Alongside these developments, researchers grew weary of having to hand-engineer features for neural networks to learn from, as this can be a costly and rather context-specific task (Collobert et al., 2011). It was in the 2010's that deep learning became known more globally (LeCun et al., 2015), with NLP being a highly prominent application for deep neural networks. This sparked the current practice of training large language models in an effort to create a general model for many language tasks (Srivastava et al., 2022). In essence, the empiricist era of NLP has persisted to today through the evolution of deep learning practices. Some applications of deep learning outside of language have even used empiricist terms such as tabula rasa very openly (Silver et al., 2017). The use of deep neural networks for language tasks has been confirmed to reinforce empiricist ideology (Frank et al., 2019).
3 Deep Learning Can Be Inaccessible

Deep learning as a science has been under fire for a number of reasons. While there have been encouraging results across many application domains of deep learning and positive insights about its role in advancing empiricism (Buckner, 2018), deep learning has garnered skepticism from both inside and outside of its community (Marcus, 2018; Buckner, 2019).

These criticisms of deep NLP can stem from a lack of open sourcing of model code and also data (Klein et al., 2017; Fadel et al., 2019; Chen et al., 2021; Guo et al., 2022; Xu et al., 2022). These issues are not exclusive to language processing, as other domains have reasons to leave aspects of their experimentation private or inaccessible when publishing (Siegle et al., 2015; Suresha et al., 2018; Farooq and Hafeez, 2020; Zuin et al., 2020; Guo et al., 2022).

We now focus on issues with closed-source large language models due to their popularity and the recent claims of greater intelligence (even sentience), as opposed to other models (y Arcas, 2022).
4 Potential Harms

4.1 Potential Harms of Open-Sourcing LLMs

To offer a well-rounded argument in favor of open-sourcing LLMs, we will briefly cover some intuitions behind close-sourcing them in terms of potential harms.

LLMs could be repurposed for malicious purposes, particularly in generative tasks. LLMs have been seen to learn negative human biases/patterns in speech such as hate speech, discrimination, and the promotion of misinformation (Schramowski et al., 2022). If a powerful, pre-trained LLM is made open source, then it could be repurposed as an engine to cause harm across the internet at great scale (Weidinger et al., 2022). It could also be argued that open sourcing LLM code that has been deployed to end-users could pose security risks (Chang et al., 2020).
We counter the argument of potential LLM misuse by malicious parties by arguing that such models, or derivatives of them, should not be published in any form, open or closed source. We argue that LLM experimental papers that indicate such potential to cause harm at scale should be filtered out at the publication review stage, something that has been discussed in the deep learning community as of late (Ashurst et al., 2022). We also counter the security concern argument by noting that it could hold true for all open source software that is deployable, not just LLMs.
4.2 Potential Harms from Continued Close-Sourcing of LLMs

We argue that there are more potential harms in the continued prevalence of closed-source LLM code than in open sourcing it.
4.2.1 Nativist Biases

Given that LLM experiments are becoming so large, costly, and complex, it is difficult to argue that an independent researcher can stake a claim in this subfield. With top publication venues focusing heavily on empiricist experimentation (Russell and Norvig, 2021), researchers outside the typical corporate scope of research could be incentivized to explore nativist, rule-based approaches to solve problems in the NLP domain. If it is in the empiricist group's best interest to foster growth in their methodologies and not opposing methods, steps should be taken to make their approaches accessible. Also, for hybrid methods to function, an ML-based solution must be accessible to combine with the ruleset from the nativist side. This trend could be fostering a new generation of Chomsky-following nativist NLP researchers, which would not bode well for empiricists if the public begins to lose interest in deep learning methods for NLP.
4.2.2 Lack of Reproducibility

We mention reproducibility and will further clarify its meaning due to a recent, yet broader problem in deep learning research: the reproducibility crisis (Kapoor and Narayanan, 2022). Not only are large language models becoming difficult to reproduce; results from other areas of ML are becoming difficult to reproduce as well (de Freitas Pereira et al., 2022; Towers et al., 2022). Initiatives to measure reproducibility across publication venues have been created, such as the ML Reproducibility Challenge. LLM experiments have been specifically reviewed to have a questionable amount of reproducibility (Crane, 2018; Wieling et al., 2018; Cahyawijaya et al., 2022; Silva et al., 2022). There is also implied to be a significant amount of computational irreproducibility in LLM experimentation, given model complexity and data; however, we leave this exploration for future work.

There is some hope in the form of positive reproducibility reports in deep learning (Gibson and Cano, 2022). However, this growing amount of "bad press" for deep learning, specifically LLMs, could cause the public to begin distrusting LLM research. This, again, could trigger a revisiting of Chomsky's rule-based theories of language.
4.2.3 Issues in NLP Education

Given the previously mentioned issues, this lack of accessibility could affect the education of NLP methods. If students do not have access to the code of LLMs, it could be difficult for them to learn to implement complex language model code of their own and to keep up with the state of the art. A lack of reproducibility could also be disenfranchising to a young, empiricist NLP researcher, leading them to pursue nativist approaches. These issues could reinforce the use of statistical, pre-deep-learning techniques in the classroom, but it is difficult to argue that publication venues are interested in shallow neural network experimentation at this time. These issues combine to form an uneven playing field for students to study NLP in empiricist and hybrid forms. After studying NLP formally, they may be inclined to commit to nativist methods or even reinforce their popularity at scale.
5 Potential Solution

We ask that publication venues merit open source LLM experiments significantly higher than they do currently. We believe that this would mitigate the issues discussed previously in this work. There seem to be developments occurring now in the deep learning publication space to help implement this in a proper form of governance (Ashurst et al., 2022).
6 Conclusion

In this work, we provided a comprehensive history of natural language processing methodologies over roughly the past century. We then used this narrative to lead into today's deep learning practices used in language processing, and current issues with the excessive closed sourcing of code for LLMs. It is our hope that this work inspires researchers and reviewers to champion open source language model code in order to pave the way for a more balanced research space.
References

Carolyn Ashurst, Emmie Hine, Paul Sedille, and Alexis Carlier. 2022. AI ethics statements: Analysis and lessons learnt from NeurIPS broader impact statements. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 2047–2056.

Lalit R. Bahl, Frederick Jelinek, and Robert L. Mercer. 1983. A maximum likelihood approach to continuous speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, (2):179–190.

James Baker. 1975a. The DRAGON system: an overview. IEEE Transactions on Acoustics, Speech, and Signal Processing, 23(1):24–29.

James K. Baker. 1975b. Stochastic Modeling as a Means of Automatic Speech Recognition. Ph.D. thesis, USA.

Cameron Buckner. 2018. Empiricism without magic: Transformational abstraction in deep convolutional neural networks. Synthese, 195(12):5339–5372.

Cameron Buckner. 2019. Deep learning: A philosophical introduction. Philosophy Compass, 14(10):e12625.

Samuel Cahyawijaya, Alham Fikri Aji, Holy Lovenia, Genta Indra Winata, Bryan Wilie, Rahmad Mahendra, Fajri Koto, David Moeljadi, Karissa Vincentio, Ade Romadhony, et al. 2022. NusaCrowd: A call for open and reproducible NLP research in Indonesian languages. arXiv preprint arXiv:2207.10524.

Asli Celikyilmaz, Li Deng, and Dilek Hakkani-Tür. 2018. Deep learning in spoken and text-based dialog systems. In Deep Learning in Natural Language Processing, pages 49–78. Springer.

Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, Wenwu Zhu, and Junzhou Huang. 2020. A restricted black-box adversarial framework towards attacking graph embedding models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 3389–3396.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.

Laura Chiticariu, Yunyao Li, and Frederick Reiss. 2013. Rule-based information extraction is dead! Long live rule-based information extraction systems! In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 827–832.

Noam Chomsky. 1956a. The logical structure of linguistic theory. Synthese, 40(2):317–352.

Noam Chomsky. 1956b. Three models for the description of language. IRE Transactions on Information Theory, 2(3):113–124.

Noam Chomsky. 1957. Syntactic structures. In Syntactic Structures. De Gruyter Mouton.

Noam Chomsky. 1963. Formal properties of grammars. Handbook of Math. Psychology, 2:328–418.

Noam Chomsky. 1965. Aspects of the Theory of Syntax. Multilingual Matters: MIT Press.

Noam Chomsky and Morris Halle. 1968. The Sound Pattern of English.

Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537.

Matt Crane. 2018. Questionable answers in question answering research: Reproducibility and variability of published results. Transactions of the Association for Computational Linguistics, 6:241–252.

Tiago de Freitas Pereira, Dominic Schmidli, Yu Linghu, Xinyi Zhang, Sébastien Marcel, and Manuel Günther. 2022. Eight years of face recognition research: Reproducibility, achievements and open issues. arXiv preprint arXiv:2208.04040.

Ali Fadel, Ibraheem Tuffaha, Mahmoud Al-Ayyoub, et al. 2019. Arabic text diacritization using deep neural networks. In 2019 2nd International Conference on Computer Applications & Information Security (ICCAIS), pages 1–7. IEEE.

Muhammad Farooq and Abdul Hafeez. 2020. COVID-ResNet: A deep learning framework for screening of COVID19 from radiographs. arXiv preprint arXiv:2003.14395.

Stefan L. Frank, Padraic Monaghan, and Chara Tsoukala. 2019. Neural network models of language acquisition and processing. In Human Language: From Genes and Brain to Behavior, pages 277–293. MIT Press.

Perry Gibson and José Cano. 2022. Productive reproducible workflows for DNNs: A case study for industrial defect detection. arXiv preprint arXiv:2206.09359.

Yinpeng Guo, Liangyou Li, Xin Jiang, and Qun Liu. 2022. FreeTransfer-X: Safe and label-free cross-lingual transfer from off-the-shelf models. arXiv preprint arXiv:2206.06586.

Xiaodong He and Li Deng. 2018. Deep learning in natural language generation from images. In Deep Learning in Natural Language Processing, pages 289–307. Springer.

W. von Humboldt. 1836. Über die Verschiedenheit des menschlichen Sprachbaues. Berlin: Königliche Akademie der Wissenschaften.

Fred Jelinek, B. Merialdo, S. Roukos, M. Strauss, et al. 1990. Self-organized language modeling for speech recognition. In Readings in Speech Recognition.

Frederick Jelinek. 1976. Continuous speech recognition by statistical methods. Proceedings of the IEEE, 64(4):532–556.

Daniel Jurafsky and James H. Martin. 2022. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition (draft of January 12, 2022).

Sayash Kapoor and Arvind Narayanan. 2022. Leakage and the reproducibility crisis in ML-based science. arXiv preprint arXiv:2207.07048.

Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810.

Stephen Laurence and Eric Margolis. 2001. The poverty of the stimulus argument. British Journal for the Philosophy of Science, 52(2).

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature, 521(7553):436–444.

Christopher Manning and Hinrich Schutze. 1999. Foundations of Statistical Natural Language Processing. MIT Press.

Gary Marcus. 2018. Deep learning: A critical appraisal. arXiv preprint arXiv:1801.00631.

Andreĭ Andreevich Markov. 2006. An example of statistical investigation of the text Eugene Onegin concerning the connection of samples in chains. Science in Context, 19(4):591–600.

George A. Miller and Noam Chomsky. 1963. Finitary models of language users. In Handbook of Mathematical Psychology, vol. 2, ed. by R. Duncan Luce, Robert R. Bush, and Eugene Galanter, 419–491.

Andriy Mnih and Geoffrey E. Hinton. 2008. A scalable hierarchical distributed language model. Advances in Neural Information Processing Systems, 21.

Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In International Workshop on Artificial Intelligence and Statistics, pages 246–252. PMLR.

Stuart Russell and Peter Norvig. 2021. Artificial Intelligence: A Modern Approach. Pearson.

Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf, and Kristian Kersting. 2022. Large pre-trained language models contain human-like biases of what is right and wrong to do. Nature Machine Intelligence, 4(3):258–268.

Claude Elwood Shannon. 1948. A mathematical theory of communication. The Bell System Technical Journal, 27(3):379–423.

Joshua H. Siegle, Gregory J. Hale, Jonathan P. Newman, and Jakob Voigts. 2015. Neural ensemble communities: open-source approaches to hardware for large-scale electrophysiology. Current Opinion in Neurobiology, 32:53–59.

Marília Costa Rosendo Silva, Felipe Alves Siqueira, João Pedro Mantovani Tarrega, João Vitor Pataca Beinotti, Augusto Sousa Nunes, Miguel de Mattos Gardini, Vinícius Adolfo Pereira da Silva, Nádia Félix Felipe da Silva, and André Carlos Ponce de Leon Ferreira de Carvalho. 2022. No pattern, no recognition: a survey about reproducibility and distortion issues of text clustering and topic modeling. arXiv preprint arXiv:2208.01712.

David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. 2017. Mastering the game of Go without human knowledge. Nature, 550(7676):354–359.

Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615.

Stephen P. Stich. 1978. Empiricism, innateness, and linguistic universals. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 33(3):273–286.

S. Suresha, L. Kidziński, E. Halilaj, G. E. Gold, and S. L. Delp. 2018. Automated staging of knee osteoarthritis severity using deep neural networks. Osteoarthritis and Cartilage, 26:S441.

David Towers, Matthew Forshaw, Amir Atapour-Abarghouei, and Andrew Stephen McGough. 2022. Long-term reproducibility for neural architecture search. arXiv preprint arXiv:2207.04821.

Gokhan Tur, Asli Celikyilmaz, Xiaodong He, Dilek Hakkani-Tür, and Li Deng. 2018. Deep learning in conversational language understanding. In Deep Learning in Natural Language Processing, pages 23–48. Springer.

Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, et al. 2022. Taxonomy of risks posed by language models. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 214–229.

Martijn Wieling, Josine Rawee, and Gertjan van Noord. 2018. Reproducibility in computational linguistics: are we willing to share? Computational Linguistics, 44(4):641–649.

Frank F. Xu, Uri Alon, Graham Neubig, and Vincent Josua Hellendoorn. 2022. A systematic evaluation of large language models of code. In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, pages 1–10.

Blaise Agüera y Arcas. 2022. Do large language models understand us? Daedalus, 151(2):183–197.

Gianluca Zuin, Adriano Veloso, João Cândido Portinari, and Nivio Ziviani. 2020. Automatic tag recommendation for painting artworks using diachronic descriptions. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE.
1NE4T4oBgHgl3EQfzQ1E/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,405 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf,len=404
2
+ page_content='Inaccessible Neural Language Models Could Reinvigorate Linguistic Nativism Patrick Perrine California Polytechnic State University 1 Grand Ave, San Luis Obispo, CA, 93410 paperrin@calpoly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
3
+ page_content='edu Abstract Large Language Models (LLMs) have been mak- ing big waves in the machine learning community within the past few years.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
4
+ page_content=' The impressive scalabil- ity of LLMs due to the advent of deep learning can be seen as a continuation of empiricist lingusitic methods, as opposed to rule-based linguistic meth- ods that are grounded in a nativist perspective.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
5
+ page_content=' Cur- rent LLMs are generally inaccessible to resource- constrained researchers, due to a variety of factors including closed source code.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
6
+ page_content=' This work argues that this lack of accessibility could instill a nativist bias in researchers new to computational linguis- tics, given that new researchers may only have rule- based, nativist approaches to study to produce new work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
7
+ page_content=' Also, given that there are numerous critics of deep learning claiming that LLMs and related methods may soon lose their relevancy, we spec- ulate that such an event could trigger a new wave of nativism in the language processing community.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
8
+ page_content=' To prevent such a dramatic shift and placing favor in hybrid methods of rules and deep learning, we call upon researchers to open source their LLM code wherever possible to allow both empircist and hybrid approaches to remain accessible.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
9
+ page_content=' 1 Introduction Large Language Models (LLMs) have been a pop- ular topic of research among the academic com- munity (Srivastava et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
10
+ page_content=', 2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
11
+ page_content=' The promise of a near-general purpose neural model for a variety of language processing tasks is indeed an attrac- tive one (Xu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
12
+ page_content=', 2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
13
Deep learning has made significant developments in language tasks such as conversational language understanding (Tur et al., 2018), spoken/text-based dialog systems (Celikyilmaz et al., 2018), and natural language generation from images (He and Deng, 2018). Large language models can be viewed as the natural progression away from the rigid rule-based systems that we have had since the 1950s (Chiticariu et al., 2013), continuing the empiricist mentality of statistical natural language processing without the potentially costly and context-specific activity of feature engineering (Collobert et al., 2011). However, with large corporations touting their ever-growing, state-of-the-art models under closed-source code and payment walls, these large language models are becoming less accessible. Some organizations have acknowledged the potential harms that deep learning models could cause by establishing ethical frameworks (Ashurst et al., 2022; Weidinger et al., 2022), but there are still growing concerns regarding accessibility and the resulting false/irreproducible science (Kapoor and Narayanan, 2022).

This criticism of empiricist methods is not new in linguistics-based science: Chomsky's Poverty of the Stimulus Argument (Stich, 1978) has a rich history of discussion and debate amongst linguists, scientists, and philosophers (Laurence and Margolis, 2001). In this work, we briefly introduce this debate over language learning between nativists and empiricists, relate these topics to research in natural language processing, and discuss how the current state of this research is reinforcing an imbalance between the two perspectives. We intend to deliver a neutral ground of analysis, as we agree that a hybrid approach to NLP research can lead to strong results. The current bias towards the highly popular but inaccessible empiricist methods utilizing LLMs could lead to a new wave of nativism in natural language processing work, following a large backlash against such empirical methods.
27
2 Background

We now provide a holistic background on the linguistic and scientific developments that encompass this issue.
28
arXiv:2301.05272v1 [cs.CL] 12 Jan 2023

2.1 The Three Waves of Modern NLP

We will give a brief background on the three main waves of modern natural language processing research: the rule-based theories popularized by Noam Chomsky (Chomsky, 1965), the statistics-based empiricist experiments (Jelinek, 1976), and today's popular methodology of deep learning for natural language processing (Collobert et al., 2011). The first wave is considered to be under a nativist perspective (Laurence and Margolis, 2001), whereas the latter waves are in support of an empiricist lens (Frank et al., 2019).
35
2.1.1 Rule-based NLP

The concept of viewing language as a static system of rules to determine interpretation has been present as early as the 1830s (Humboldt, 1836). Noam Chomsky popularized this perspective in the domain of linguistics as a challenge to an existing overbearance of empiricist methods (Chomsky, 1956b; Laurence and Margolis, 2001). This rule-based approach to linguistics dominated the field for decades, following Chomsky's multiple works emphasizing and reinforcing this doctrine (Chomsky, 1956a, 1957, 1963, 1965; Chomsky and Halle, 1968). Being based in propositional logic and a fixed content, rule-based methods are arguably rather accessible to researchers with limited resources. These methods continued to be prevalent in the field until the 1970s, when statistical methods were proven to be very useful.
44
2.1.2 Statistical NLP

The roots of statistical language processing stem from Andrey Markov's efforts in computing bigram and trigram probabilities (Jurafsky and Martin, 2022) of vowel/consonant predictions using a novel as a corpus in 1913 (Markov, 2006). This n-gram approach was later applied to predicting sequences of English words (Shannon, 1948). This popularized the notion of using Markov chains for use in a variety of applications within and outside of linguistics.
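To make the n-gram idea concrete: the maximum-likelihood bigram estimate is simply P(w2 | w1) = count(w1, w2) / count(w1). A minimal sketch in Python, assuming a toy corpus (the function name and example data are our own, purely illustrative):

```python
from collections import Counter

def bigram_probs(tokens):
    """Maximum-likelihood bigram estimates: P(w2 | w1) = count(w1, w2) / count(w1)."""
    # Count each word in context position (every token except the last).
    contexts = Counter(tokens[:-1])
    # Count adjacent word pairs.
    pairs = Counter(zip(tokens, tokens[1:]))
    return {(w1, w2): n / contexts[w1] for (w1, w2), n in pairs.items()}

# Toy corpus: "the" is followed by "cat" once and by "mat" once.
tokens = "the cat sat on the mat".split()
probs = bigram_probs(tokens)
print(probs[("the", "cat")])  # 0.5
```

This is the same counting scheme Markov applied to vowel/consonant sequences, only over words instead of letters; Shannon's word-prediction experiments extend it in exactly this way.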
Chomsky specifically argued that finite-state Markov processes, the processes underlying n-gram approaches, were useless as a comprehensive cognitive model of grammatical knowledge in humans (Chomsky, 1956b, 1957; Miller and Chomsky, 1963). This hindered the progress of probabilistic approaches in linguistics.

Over a decade later, statistical language processing was revitalized due in part to a series of successful experiments using n-gram models for speech recognition (Baker, 1975a,b; Jelinek, 1976; Bahl et al., 1983; Jelinek et al., 1990). These empiricist-based experiments showed that Chomsky's nativist theories do not extend to recognizing speech in real time as previously proposed (Chomsky and Halle, 1968). This marked a shift towards looking at language processing through an empirical lens, where a hypothesis test primarily guides the experimentation process rather than theoretical insights (Manning and Schutze, 1999). After the successful statistical speech recognition experiments of the mid-1970s, statistical NLP reigned as the dominant approach for decades.
61
2.1.3 ML-based NLP

Researchers soon began to use shallow neural networks to reinforce statistical methodologies in NLP. In the late 2000s, the advent of deeper neural networks for NLP began to stir when scalable, hierarchical language models (Morin and Bengio, 2005; Mnih and Hinton, 2008) and increased computing power became available for use by researchers. Alongside these developments, researchers grew tired of having to hand-engineer features for neural networks to learn from, as this can be a costly and rather context-specific task (Collobert et al., 2011).

It was in the 2010s that deep learning became known more globally (LeCun et al., 2015), with NLP being a highly prominent application for deep neural networks. This sparked the current practice of training large language models in an effort to create a general model for many language tasks (Srivastava et al., 2022). In essence, the empiricist era of NLP has persisted to today through the evolution of deep learning practices. Some applications of deep learning outside of language have even used empiricist terms such as tabula rasa very openly (Silver et al., 2017). The use of deep neural networks for language tasks has been characterized as reinforcing empiricist ideology (Frank et al., 2019).
77
3 Deep Learning Can Be Inaccessible

Deep learning as a science has been under fire for a number of reasons. While there have been encouraging results across many application domains of deep learning and positive insights about their role in advancing empiricism (Buckner, 2018), deep learning has garnered skepticism from both inside and outside of its community (Marcus, 2018; Buckner, 2019). These criticisms of deep NLP can stem from a lack of open sourcing of model code and also data (Klein et al., 2017; Fadel et al., 2019; Chen et al., 2021; Guo et al., 2022; Xu et al., 2022). These issues are not exclusive to language processing, as other domains have reasons to leave aspects of their experimentation private or inaccessible when publishing (Siegle et al., 2015; Suresha et al., 2018; Farooq and Hafeez, 2020; Zuin et al., 2020; Guo et al., 2022). We now focus on issues with closed-source large language models due to their popularity and the recent claims of greater intelligence (even sentience), as opposed to other models (y Arcas, 2022).
100
4 Potential Harms

4.1 Potential Harms of Open-Sourcing LLMs

To offer a well-rounded argument in favor of open-sourcing LLMs, we will briefly cover some intuitions behind close-sourcing them in terms of potential harms. LLMs could be repurposed for malicious purposes, particularly in generative tasks. LLMs have been seen to learn negative human biases/patterns in speech such as hate speech, discrimination, and the promotion of misinformation (Schramowski et al., 2022). If a powerful, pre-trained LLM is made open source, then it could be repurposed as an engine to cause harm across the internet at great scale (Weidinger et al., 2022). It could also be argued that open sourcing LLM code that has been deployed to end-users could pose security risks (Chang et al., 2020).

We counter the argument of potential LLM misuse by malicious parties by arguing that such models, or derivatives of such, should not be published in any form, open or closed source. We argue that LLM experimental papers that indicate such potential to cause harm at scale should be filtered out at the publication review stage, something that has been discussed in the deep learning community as of late (Ashurst et al., 2022). We also counter the security concern argument by noting that this could hold true for all deployable open source software, not just LLMs.
113
4.2 Potential Harms from Continued Close-Sourcing of LLMs

We argue that there are more potential harms in the continued prevalence of closed-source LLM code than in open sourcing it.

4.2.1 Nativist Biases

Given that LLM experiments are becoming so large, costly, and complex, it is difficult to argue that an independent researcher can stake a claim in this subfield. With top publication venues focusing heavily on empiricist experimentation (Russell and Norvig, 2021), researchers outside the typical corporate scope of research could be incentivized to explore nativist, rule-based approaches to solve problems in the NLP domain. If it is in the empiricist group's best interest to foster growth in their methodologies and not opposing methods, steps should be taken to make their approaches accessible. Also, for hybrid methods to function, an ML-based solution should be made accessible to combine with the ruleset from the nativist side. This trend could be fostering a new generation of Chomsky-following nativist NLP researchers, which would not bode well for empiricists if the public begins to lose interest in deep learning methods for NLP.
122
4.2.2 Lack of Reproducibility

We mention reproducibility and will further clarify its meaning due to a recent, yet broader problem in deep learning research: the reproducibility crisis (Kapoor and Narayanan, 2022). Not only are large language models becoming difficult to reproduce; results from other areas of ML are becoming difficult to reproduce as well (de Freitas Pereira et al., 2022; Towers et al., 2022). Initiatives to measure reproducibility across publication venues have been created, such as the ML Reproducibility Challenge. LLM experiments have been specifically reviewed to have a questionable amount of reproducibility (Crane, 2018; Wieling et al., 2018; Cahyawijaya et al., 2022; Silva et al., 2022). There is also implied to be a significant amount of computational irreproducibility in LLM experimentation, given model complexity and data; however, we leave this exploration for future work.

There is some hope in the form of positive reproducibility reports in deep learning (Gibson and Cano, 2022). However, this growing amount of "bad press" for deep learning, specifically LLMs, could cause the public to begin distrusting LLM research. This, again, could trigger a revisiting of Chomsky's rule-based theories of language.
141
4.2.3 Issues in NLP Education

Given the previously mentioned issues, this lack of accessibility could affect the education of NLP methods. If students do not have access to the code of LLMs, it could be difficult for them to learn to implement complex language model code of their own and to keep up with the state of the art. A lack of reproducibility could also be disenfranchising to a young, empiricist NLP researcher, leading them to pursue nativist approaches. These issues could reinforce the use of statistical, pre-deep learning techniques in the classroom, but it is difficult to argue that publication venues are interested in shallow neural network experimentation at this time. These issues combine to form an uneven playing field for students to study NLP in empiricist and hybrid forms. After studying NLP formally, they may be inclined to commit to nativist methods or even reinforce the popularity of them at scale.
149
5 Potential Solution

We ask that publication venues merit open source LLM experiments significantly higher than they do currently. We believe that this would mitigate the issues discussed previously in this work. There seem to be developments occurring now in the deep learning publication space to help implement this in a proper form of governance (Ashurst et al., 2022).
153
6 Conclusion

In this work, we provided a comprehensive history of natural language processing methodologies over roughly the past century. We then used this narrative to lead into today's deep learning practices used in language processing, and current issues in an excessive closed sourcing of code for LLMs. It is our hope that this work inspires researchers and reviewers to champion open source language model code in order to pave the way for a more balanced research space.
156
References

Carolyn Ashurst, Emmie Hine, Paul Sedille, and Alexis Carlier. 2022. AI ethics statements: Analysis and lessons learnt from NeurIPS broader impact statements. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 2047–2056.

Lalit R Bahl, Frederick Jelinek, and Robert L Mercer. 1983. A maximum likelihood approach to continuous speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, (2):179–190.

James Baker. 1975a. The DRAGON system–an overview. IEEE Transactions on Acoustics, Speech, and Signal Processing, 23(1):24–29.

James K. Baker. 1975b. Stochastic Modeling as a Means of Automatic Speech Recognition. Ph.D. thesis, USA.

Cameron Buckner. 2018. Empiricism without magic: Transformational abstraction in deep convolutional neural networks. Synthese, 195(12):5339–5372.

Cameron Buckner. 2019. Deep learning: A philosophical introduction. Philosophy Compass, 14(10):e12625.

Samuel Cahyawijaya, Alham Fikri Aji, Holy Lovenia, Genta Indra Winata, Bryan Wilie, Rahmad Mahendra, Fajri Koto, David Moeljadi, Karissa Vincentio, Ade Romadhony, et al. 2022. NusaCrowd: A call for open and reproducible NLP research in Indonesian languages. arXiv preprint arXiv:2207.10524.

Asli Celikyilmaz, Li Deng, and Dilek Hakkani-Tür. 2018. Deep learning in spoken and text-based dialog systems. In Deep Learning in Natural Language Processing, pages 49–78. Springer.

Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, Wenwu Zhu, and Junzhou Huang. 2020. A restricted black-box adversarial framework towards attacking graph embedding models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 3389–3396.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.

Laura Chiticariu, Yunyao Li, and Frederick Reiss. 2013. Rule-based information extraction is dead! Long live rule-based information extraction systems! In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 827–832.

Noam Chomsky. 1956a. The logical structure of linguistic theory. Synthese, 40(2):317–352.

Noam Chomsky. 1956b. Three models for the description of language. IRE Transactions on Information Theory, 2(3):113–124.

Noam Chomsky. 1957. Syntactic structures. In Syntactic Structures. De Gruyter Mouton.

Noam Chomsky. 1963. Formal properties of grammars. Handbook of Mathematical Psychology, 2:328–418.

Noam Chomsky. 1965. Aspects of the Theory of Syntax. Multilingual Matters: MIT Press.

Noam Chomsky and Morris Halle. 1968. The Sound Pattern of English.

Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537.

Matt Crane. 2018. Questionable answers in question answering research: Reproducibility and variability of published results. Transactions of the Association for Computational Linguistics, 6:241–252.

Tiago de Freitas Pereira, Dominic Schmidli, Yu Linghu, Xinyi Zhang, Sébastien Marcel, and Manuel Günther. 2022. Eight years of face recognition research: Reproducibility, achievements and open issues. arXiv preprint arXiv:2208.04040.

Ali Fadel, Ibraheem Tuffaha, Mahmoud Al-Ayyoub, et al. 2019. Arabic text diacritization using deep neural networks. In 2019 2nd International Conference on Computer Applications & Information Security (ICCAIS), pages 1–7. IEEE.
Muhammad Farooq and Abdul Hafeez. 2020. COVID-ResNet: A deep learning framework for screening of COVID19 from radiographs. arXiv preprint arXiv:2003.14395.

Stefan L Frank, Padraic Monaghan, and Chara Tsoukala. 2019. Neural network models of language acquisition and processing. In Human Language: From Genes and Brain to Behavior, pages 277–293. MIT Press.

Perry Gibson and José Cano. 2022. Productive reproducible workflows for DNNs: A case study for industrial defect detection. arXiv preprint arXiv:2206.09359.

Yinpeng Guo, Liangyou Li, Xin Jiang, and Qun Liu. 2022. FreeTransfer-X: Safe and label-free cross-lingual transfer from off-the-shelf models. arXiv preprint arXiv:2206.06586.

Xiaodong He and Li Deng. 2018. Deep learning in natural language generation from images. In Deep Learning in Natural Language Processing, pages 289–307. Springer.

W von Humboldt. 1836. Über die Verschiedenheit des menschlichen Sprachbaues. Berlin: Königliche Akademie der Wissenschaften.

Fred Jelinek, B Merialdo, S Roukos, M Strauss, et al. 1990. Self-organized language modeling for speech recognition. In Readings in Speech Recognition.

Frederick Jelinek. 1976. Continuous speech recognition by statistical methods. Proceedings of the IEEE, 64(4):532–556.

Daniel Jurafsky and James H Martin. 2022. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition (draft of January 12, 2022).

Sayash Kapoor and Arvind Narayanan. 2022. Leakage and the reproducibility crisis in ML-based science. arXiv preprint arXiv:2207.07048.

Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810.

Stephen Laurence and Eric Margolis. 2001. The poverty of the stimulus argument. British Journal for the Philosophy of Science, 52(2).

Yann LeCun, Yoshua Bengio, Geoffrey Hinton, et al. 2015. Deep learning. Nature, 521(7553):436–444.

Christopher Manning and Hinrich Schutze. 1999. Foundations of Statistical Natural Language Processing. MIT Press.

Gary Marcus. 2018. Deep learning: A critical appraisal. arXiv preprint arXiv:1801.00631.

Andreĭ Andreevich Markov. 2006. An example of statistical investigation of the text Eugene Onegin concerning the connection of samples in chains. Science in Context, 19(4):591–600.

George A Miller and Noam Chomsky. 1963. Finitary models of language users. Handbook of Mathematical Psychology, vol. 2, ed. by R. Duncan Luce, Robert R. Bush, and Eugene Galanter, 419–491.

Andriy Mnih and Geoffrey E Hinton. 2008. A scalable hierarchical distributed language model. Advances in Neural Information Processing Systems, 21.

Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In International Workshop on Artificial Intelligence and Statistics, pages 246–252. PMLR.

Stuart Russell and Peter Norvig. 2021. Artificial Intelligence: A Modern Approach. Pearson.

Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A Rothkopf, and Kristian Kersting. 2022. Large pre-trained language models contain human-like biases of what is right and wrong to do. Nature Machine Intelligence, 4(3):258–268.

Claude Elwood Shannon. 1948. A mathematical theory of communication. The Bell System Technical Journal, 27(3):379–423.

Joshua H Siegle, Gregory J Hale, Jonathan P Newman, and Jakob Voigts. 2015. Neural ensemble communities: open-source approaches to hardware for large-scale electrophysiology. Current Opinion in Neurobiology, 32:53–59.

Marília Costa Rosendo Silva, Felipe Alves Siqueira, João Pedro Mantovani Tarrega, João Vitor Pataca Beinotti, Augusto Sousa Nunes, Miguel de Mattos Gardini, Vinícius Adolfo Pereira da Silva, Nádia Félix Felipe da Silva, and André Carlos Ponce de Leon Ferreira de Carvalho. 2022.
355
+ page_content=' No pattern, no recognition: a survey about reproducibility and dis- tortion issues of text clustering and topic modeling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
356
+ page_content=' arXiv preprint arXiv:2208.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
357
+ page_content='01712.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
358
+ page_content=' David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
359
+ page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
360
+ page_content=' Mastering the game of go with- out human knowledge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
361
+ page_content=' nature, 550(7676):354–359.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
362
+ page_content=' Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
363
+ page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
364
+ page_content=' Beyond the imitation game: Quantifying and extrapolating the capabilities of language models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
365
+ page_content=' arXiv preprint arXiv:2206.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
366
+ page_content='04615.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
367
+ page_content=' Stephen P Stich.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
368
+ page_content=' 1978.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
369
+ page_content=' Empiricism, innateness, and linguistic universals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
370
+ page_content=' Philosophical Studies: An In- ternational Journal for Philosophy in the Analytic Tradition, 33(3):273–286.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
371
+ page_content=' S Suresha, L Kidzi´nski, E Halilaj, GE Gold, and SL Delp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
372
+ page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
373
+ page_content=' Automated staging of knee os- teoarthritis severity using deep neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
374
+ page_content=' Os- teoarthritis and Cartilage, 26:S441.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
375
+ page_content=' David Towers, Matthew Forshaw, Amir Atapour- Abarghouei, and Andrew Stephen McGough.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
376
+ page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
377
+ page_content=' Long-term reproducibility for neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
378
+ page_content=' arXiv preprint arXiv:2207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
379
+ page_content='04821.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
380
+ page_content=' Gokhan Tur, Asli Celikyilmaz, Xiaodong He, Dilek Hakkani-Tür, and Li Deng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
381
+ page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
382
+ page_content=' Deep learning in conversational language understanding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
383
+ page_content=' In Deep Learning in Natural Language Processing, pages 23–48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
384
+ page_content=' Springer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
385
+ page_content=' Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
386
+ page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
387
+ page_content=' Taxonomy of risks posed by language models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
388
+ page_content=' In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 214–229.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
389
+ page_content=' Martijn Wieling, Josine Rawee, and Gertjan van Noord.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
390
+ page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
391
+ page_content=' Reproducibility in computational linguistics: are we willing to share?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
392
+ page_content=' Computational Linguistics, 44(4):641–649.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
393
+ page_content=' Frank F Xu, Uri Alon, Graham Neubig, and Vincent Jo- sua Hellendoorn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
394
+ page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
395
+ page_content=' A systematic evaluation of large language models of code.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
396
+ page_content=' In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, pages 1–10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
397
+ page_content=' Blaise Agüera y Arcas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
398
+ page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
399
+ page_content=' Do large language mod- els understand us?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
400
+ page_content=' Daedalus, 151(2):183–197.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
401
+ page_content=' Gianluca Zuin, Adriano Veloso, João Cândido Porti- nari, and Nivio Ziviani.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
402
+ page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
403
+ page_content=' Automatic tag recom- mendation for painting artworks using diachronic de- scriptions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
404
+ page_content=' In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1–8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
405
+ page_content=' IEEE.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1NE4T4oBgHgl3EQfzQ1E/content/2301.05272v1.pdf'}
1dE1T4oBgHgl3EQfRwNN/content/2301.03056v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1c0a57c53f4428541b8aedd63a7cf11362c5a648ba04a41438f55bd8555e5cbb
+size 797059
1dE1T4oBgHgl3EQfRwNN/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4db4d8b787735c7159ecb5604ae7bee3c3906921373d3bee0188e3844205178a
+size 464178
3NE1T4oBgHgl3EQfAQKK/content/tmp_files/2301.02837v1.pdf.txt ADDED
@@ -0,0 +1,875 @@
The 3D Structural Phenotype of the Glaucomatous Optic Nerve Head and its Relationship with the Severity of Visual Field Damage

Fabian A. Braeu1,2,3, Thanadet Chuangsuwanich1,3, Tin A. Tun4,5, Shamira A. Perera4,5, Rahat Husain4, Aiste Kadziauskiene6,7, Leopold Schmetterer4,5,8-12, Alexandre H. Thiéry13, George Barbastathis2,14, Tin Aung3,4,5, and Michaël J.A. Girard1,5,12

1. Ophthalmic Engineering & Innovation Laboratory, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
2. Singapore-MIT Alliance for Research and Technology, Singapore
3. Yong Loo Lin School of Medicine, National University of Singapore, Singapore
4. Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
5. Duke-NUS Graduate Medical School, Singapore
6. Clinic of Ears, Nose, Throat and Eye Diseases, Institute of Clinical Medicine, Faculty of Medicine, Vilnius University, Vilnius, Lithuania
7. Center of Eye Diseases, Vilnius University Hospital Santaros Klinikos, Vilnius, Lithuania
8. SERI-NTU Advanced Ocular Engineering (STANCE), Singapore, Singapore
9. School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore
10. Department of Clinical Pharmacology, Medical University of Vienna, Austria
11. Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
12. Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
13. Department of Statistics and Applied Probability, National University of Singapore, Singapore
14. Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA

Keywords: Geometric deep learning, glaucoma, artificial intelligence, optic nerve head, PointNet
Word count: 4,971 (Manuscript Text); 339 (Abstract)
Tables: 1
Figures: 4
Conflict of Interest: MJAG and AHT are the co-founders of the AI start-up company Abyss Processing Pte Ltd

Corresponding Author: Michaël J.A. Girard
Ophthalmic Engineering & Innovation Laboratory (OEIL)
Singapore Eye Research Institute (SERI)
The Academia, 20 College Road
Discovery Tower Level 6
Singapore 169856
https://www.ophthalmic.engineering
Abstract

Purpose: To describe the 3D structural changes in both connective and neural tissues of the optic nerve head (ONH) that occur concurrently at different stages of glaucoma using traditional and AI-driven approaches.

Design: Retrospective cross-sectional study.

Methods: We included 213 normal, 204 mild glaucoma (mean deviation [MD] ≥ -6.00 dB), 118 moderate glaucoma (MD of -6.01 to -12.00 dB), and 118 advanced glaucoma patients (MD < -12.00 dB). All subjects had their ONHs imaged in 3D with Spectralis optical coherence tomography. To describe the 3D structural phenotype of glaucoma as a function of severity, we used two different approaches: (1) we extracted 'human-defined' 3D structural parameters of the ONH (10 in total), including retinal nerve fiber layer (RNFL) thickness, minimum rim width, and lamina cribrosa (LC) shape and depth, at different stages of glaucoma; (2) we also employed a geometric deep learning method (i.e. PointNet) to identify the most important 3D structural features that differentiate ONHs from different glaucoma severity groups without any human input.

Results: We observed that the majority of ONH structural changes occurred in the early glaucoma stage, followed by a plateau effect in the later stages. Using PointNet, we also found that 3D ONH structural changes were present in both neural and connective tissues. Specifically, 57% (normal to mild glaucoma), 39% (mild to moderate glaucoma), and 53% (moderate to advanced glaucoma) of ONH landmarks that showed major structural changes were located in neural tissues, with the remainder located in connective tissues. In both approaches, we observed that structural changes were more prominent in the superior and inferior quadrants of the ONH, particularly in the RNFL, the prelamina, and the LC. As the severity of glaucoma increased, these changes became more diffuse (i.e. widespread), particularly in the LC.

Conclusions: In this study, we were able to uncover complex 3D structural changes of the ONH in both neural and connective tissues as a function of glaucoma severity. We hope to provide new insights into the complex pathophysiology of glaucoma that might help clinicians in their daily clinical care.
Introduction

Evaluation of structural changes of the optic nerve head (ONH) – the main site of damage in glaucoma – is a crucial step in diagnosing and monitoring glaucoma [1, 2]. The complex three-dimensional (3D) morphological changes occurring in glaucomatous ONHs can be captured and quantified by optical coherence tomography (OCT) – a fast, high-resolution, quantitative, and non-invasive 3D imaging modality [3].

In current medical practice, several investigations are conducted to assess neural tissue health. These tests involve both a functional (e.g., visual field testing) and a structural assessment of glaucomatous damage. The latter is typically achieved by measuring the thickness of the retinal nerve fiber layer (RNFL) via OCT [4-6]. Researchers have further investigated the association between other neural structural parameters and glaucomatous visual field damage, such as the thickness of the ganglion cell complex (GCC) [7, 8] and the Bruch's membrane opening minimum rim width (BMO-MRW) [9, 10].

However, recent research has indicated that the pathophysiology of glaucoma is multifaceted and cannot purely be characterized as damage to retinal ganglion cells: (1) Brooks et al. reported that the characteristic "glaucomatous cupping" of the ONH cannot solely be explained by neural tissue loss [11]; (2) Quigley et al. found that glaucomatous changes of the lamina cribrosa (LC) precede visual field damage [12]; and (3) Yang et al. suggested that ONH connective tissue deformations are the primary cause of retinal ganglion cell axonal injury [13]. These studies indicate that the pathophysiology of glaucoma should consider the involvement of the biomechanics, mechanobiology, remodeling, and potential mechanical breakdown of ONH connective tissues. Given this new understanding, researchers have begun to investigate the association between ONH connective tissue changes and glaucoma severity through connective tissue parameters extracted from OCT images of the ONH. Examples of such parameters include the LC depth (LCD) and the LC global shape index (LC-GSI) [14], the thickness of the peripapillary choroid [15], the scleral canal opening [16], and the peripapillary scleral angle representing the amount of bowing of the ONH [17]. However, no study has yet provided a comprehensive analysis of the 3D structural changes of both the connective and neural tissues of the ONH that occur concurrently at different stages of glaucoma.

Therefore, the aim of this study was to describe the 3D structural phenotype of glaucoma as a function of severity by: (1) extracting neural and connective tissue ONH parameters from segmented 3D OCT scans and investigating their differences between glaucoma severity groups; and (2) using 3D point clouds representing the complex structure of the ONH as input to a geometric deep learning technique (i.e. PointNet [18]) that allows us to identify the major 3D structural changes of the ONH with glaucoma severity. Overall, we hope that our work leads to a better understanding of the pathophysiology of glaucoma that might improve its diagnosis and prognosis.
Methods

Patient Recruitment

This retrospective study involved a total of 414 subjects with glaucoma and 213 controls without glaucoma from two different cohorts: (1) 541 subjects of Chinese ethnicity were recruited at the Singapore National Eye Centre (SNEC) as part of their standard clinical care; and (2) 112 subjects of European descent were recruited at the Vilnius University Hospital Santaros Klinikos as part of a prospective observational study. All subjects gave written informed consent. The study adhered to the tenets of the Declaration of Helsinki and was approved by the institutional review boards of the respective institutions (SingHealth Centralized Institutional Review Board, Singapore, and Vilnius Regional Biomedical Research Ethics Committee, Lithuania).

Standard Automated Perimetry

All subjects had their visual field (VF) assessed by standard automated perimetry (SAP; Swedish Interactive Threshold Algorithm standard 24-2 or 30-2 program; Humphrey Field Analyzer II-750i, Carl Zeiss Meditec). Subjects with a non-reliable VF examination, defined as a false-positive error rate greater than 15% [19] and a fixation loss greater than 33% [19, 20], were excluded from the study.

Definition of Glaucoma and Glaucoma Severity Groups

Glaucomatous eyes were defined as those with a vertical cup-disc ratio (VCDR) > 0.7 and/or neuroretinal rim narrowing with repeatable glaucomatous VF defects and non-occludable angles on gonioscopy, whereas non-glaucomatous (normal) eyes were those with an IOP < 21 mmHg and normal VF examinations. Subjects with corneal abnormalities that could potentially reduce the quality of the OCT scans, and those with ONH disorders other than glaucoma, were excluded from the study.

Based upon the mean deviation (MD) of the 24-2 or 30-2 VF, all glaucoma subjects were further split into three glaucoma severity groups [21]: (1) mild glaucoma (MD ≥ -6.00 dB); (2) moderate glaucoma (MD of -6.01 to -12.00 dB); and (3) advanced glaucoma (MD < -12.00 dB). Even though this classification has its limitations [22], it remains a standard and can be used as a good first indicator for staging functional damage. More information on the demographics of the four groups (i.e. normal, mild, moderate, and advanced) can be found in Table 1.
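As a minimal sketch, the MD cut-offs above map onto a simple staging rule; the helper `severity_group` below is illustrative only (it applies to glaucoma subjects, not controls) and is not part of the study's code.

```python
def severity_group(md_db: float) -> str:
    """Stage functional damage of a glaucoma subject from the visual-field
    mean deviation (MD, in dB), using the cut-offs quoted above."""
    if md_db >= -6.00:          # MD >= -6.00 dB -> mild
        return "mild"
    elif md_db >= -12.00:       # -12.00 dB <= MD < -6.00 dB -> moderate
        return "moderate"
    return "advanced"           # MD < -12.00 dB -> advanced

print(severity_group(-4.5))    # mild
print(severity_group(-8.0))    # moderate
print(severity_group(-15.2))   # advanced
```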
Optical Coherence Tomography Imaging

Each patient from both cohorts had their ONH imaged with the same spectral-domain OCT device (Spectralis, Heidelberg Engineering, Germany). All OCT scans (horizontal raster scans) covered an area of 15° x 10° centered on the ONH. The number of B-scans varied between 49 and 97 (distance between B-scans of approximately 35 to 70 µm), with 384 A-scans per B-scan (approximately 11.5 µm between A-scans) and 496 pixels per A-scan (axial resolution of 3.87 µm/pixel). Images were acquired using signal averaging, eye tracking, and the enhanced depth imaging modality of the Spectralis OCT device.

Describing the Structural Phenotype of Glaucoma as a Function of Glaucoma Severity

In the following sections, we introduce two different approaches to study the complex structural changes of the ONH as a function of glaucoma severity. In the first, we performed a comprehensive 3D structural analysis of the ONH using 'human-defined' 3D structural parameters of the ONH (10 parameters in total) describing the morphologies of both neural and connective tissues. In the second, we used a relatively recent geometric deep learning method (i.e. PointNet) to discover important 3D structural features differentiating ONHs from different glaucoma severity groups. An overview of both approaches is shown in Figure 1.

Approach 1 for Describing the Structural Phenotype of Glaucoma – ONH Parameters

For this approach, all ONH tissues were segmented in 3D (from the OCT scans), from which all ONH structural parameters were automatically extracted.
AI-based Segmentation of ONH Tissues. We automatically segmented all raw OCT volume scans of the ONH using REFLECTIVITY (Reflectivity, Abyss Processing Pte Ltd, Singapore) – a software that was developed from advances in AI-based ONH segmentation [23] (see Figure 1a, b). More specifically, we automatically labelled the following ONH tissue groups: (1) the retinal nerve fiber layer (RNFL) and the prelamina tissue (PLT); (2) the ganglion cell inner plexiform layer (GCL+IPL); (3) all other retinal layers (ORL); (4) the retinal pigment epithelium (RPE) with Bruch's membrane (BM) and the BM opening (BMO) points; (5) the choroid; (6) the OCT-visible part of the peripapillary sclera including the scleral flange; and (7) the OCT-visible part of the LC. In almost all OCT volume scans, the posterior boundaries of the sclera and LC were not visible and could therefore not be segmented.

Automated extraction of ONH parameters. Using the software REFLECTIVITY, we extracted the following parameters: (1) the average RNFL thickness (RNFLT) in each octant (i.e. temporal [T], superior-temporal [ST], superior [S], superior-nasal [SN], nasal [N], inferior-nasal [IN], inferior [I], and inferior-temporal [IT]), calculated at a distance of 1.5 times the BMO radius (BMOR) from the centre of BMO; (2) the average minimum rim width (MRW) in each octant, defined as the minimum distance from a BMO point to a point on the inner limiting membrane (ILM); (3) the average ganglion cell complex thickness (GCCT) in each octant, evaluated at the same location as the RNFLT; (4) the average choroidal thickness (ChT) in each octant, at the same distance as that used for the RNFLT; (5) the prelamina depth (PLD), defined as the distance from the BMO center to a point on the ILM (perpendicular to the BMO plane); (6) the minimum prelamina thickness (MPT); (7) the LC depth (LCD), defined as the distance from the BMO centre to a point on the anterior LC boundary (perpendicular to the BMO plane); (8) the LC global shape index (LC-GSI), which summarizes the shape of the anterior LC boundary into a single number [24]; (9) the peripapillary scleral angle (PPSA), representing the amount of scleral bowing and defined as the angle between two lines parallel to the anterior scleral boundary in the nasal-temporal plane; and (10) the BMO area, defined as the area of the best-fit ellipse to the BMO points. A visualization of the extracted ONH parameters is shown in Figure 1c.
Statistical analysis. All parameters were compared across all four groups (normal, mild, moderate, and advanced). All statistical analyses were performed using R (version 4.2.1) and RStudio (version 2022.07.1 for macOS). ONH parameters that were extracted in each octant were reported as mean ± standard deviation, and single-valued ONH parameters were presented as box plots. One-way ANOVA with a post-hoc Tukey HSD test was used for the comparisons. The p-value for significance was set at <0.05.
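The authors ran these tests in R; an equivalent sketch in Python (SciPy ≥ 1.11 for `tukey_hsd`) is shown below, with made-up, hypothetical thickness values: the group sizes mirror the study, but the means and standard deviations are invented for illustration only.

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(0)
# Hypothetical RNFL-thickness-like values (µm) for the four severity groups;
# sample sizes follow the study (213/204/118/118), the distributions do not.
normal   = rng.normal(100, 10, 213)
mild     = rng.normal(85, 10, 204)
moderate = rng.normal(78, 10, 118)
advanced = rng.normal(72, 10, 118)

# One-way ANOVA across the four groups
f_stat, p_value = f_oneway(normal, mild, moderate, advanced)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_value:.2g}")

# Post-hoc Tukey HSD for all pairwise comparisons
res = tukey_hsd(normal, mild, moderate, advanced)
print(res)  # table of pairwise mean differences, CIs, and p-values
```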
Approach 2 for Describing the Structural Phenotype of Glaucoma – PointNet

PointNet, a deep neural network from the group of geometric deep learning algorithms, can learn from complex 3D shapes, such as that of the ONH, if they are represented as 3D point clouds. In contrast to our first approach, which relied on 'human-defined' ONH parameters, PointNet allows us to identify important structural landmarks that can differentiate ONHs from the four different glaucoma severity groups, without previous inputs or guidance.
Representation of the ONH structure as a 3D point cloud. We described the structure of a given ONH as a 3D point cloud, which was then used as input to PointNet. To do so, we first identified the anterior boundaries of all tissue layers in the segmented OCT scan. Each anterior boundary voxel was then represented as a 3D point (see Figure 1d). The final point cloud consisted of about 20,000 points for each ONH (see Figure 1e). Additionally, for each point, we extracted the local tissue thickness (minimum distance between anterior and posterior boundary). In summary, we assigned four values to every point: its position in 3D space ([x, y, z]-coordinate) and its local tissue thickness (not applicable for the sclera and LC). To homogenize the data across all ONHs, the centre of BMO was set as the origin of the coordinate system [x=0, y=0, z=0] and the normal of the BMO plane (best-fit plane to the BMO points) was aligned with the axial direction of the scan. The interested reader is referred to our previous publication on geometric deep learning for glaucoma diagnosis [25].
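As a toy illustration of the boundary-voxel-to-point idea (this is not the authors' REFLECTIVITY/PointNet pipeline; the function name, the label layout, and the voxel sizes, taken loosely from the scan specifications above, are all assumptions):

```python
import numpy as np

def anterior_boundary_points(labels, tissue_id, voxel_size=(70.0, 11.5, 3.87)):
    """For each (B-scan, A-scan) column, emit one 3D point at the anterior-most
    voxel of `tissue_id`, plus the local tissue thickness along that column.
    `labels` has shape (n_bscans, n_ascans, n_depth); sizes are in µm."""
    mask = labels == tissue_id
    points = []
    for b in range(labels.shape[0]):
        for a in range(labels.shape[1]):
            col = np.flatnonzero(mask[b, a])
            if col.size == 0:
                continue  # tissue not visible in this column
            x, y = b * voxel_size[0], a * voxel_size[1]
            z = col[0] * voxel_size[2]                # anterior boundary depth
            thickness = col.size * voxel_size[2]      # anterior-to-posterior extent
            points.append((x, y, z, thickness))
    return np.asarray(points)

# Toy volume: a flat, 5-voxel-thick layer of tissue 1 starting at depth index 10
vol = np.zeros((4, 6, 50), dtype=int)
vol[:, :, 10:15] = 1
pts = anterior_boundary_points(vol, tissue_id=1)
print(pts.shape)   # (24, 4): one [x, y, z, thickness] point per A-scan column
print(pts[0])      # first point: x=0, y=0, z=38.7 µm, thickness=19.35 µm
```

A real pipeline would then re-center the cloud on the BMO centre and align it with the BMO-plane normal, as described above.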
+ Glaucoma severity classification. PointNet was specifically designed to process and
296
+ learn from 3D point clouds such as the one shown in Figure 1. We used the same architecture
297
+ as in the original publication [18], except that we implemented a max pooling layer of
298
+ dimension 256. To identify important 3D structural features of the ONH at different stages of
299
+ glaucoma, we trained three PointNet classification networks to differentiate between: (1)
300
+ normal and mild glaucoma subjects (normal-mild); (2) mild and moderate glaucoma subjects
301
+ (mild-moderate); and (3) moderate and advanced glaucoma subjects (moderate-advanced).
302
+ To assess the performance of the three binary classification networks, we split each
303
+ respective dataset (i.e. normal-mild, mild-moderate, and moderate-advanced) in training
304
+ (70%), validation (15%), and test (15%) sets. To improve performance and reduce overfitting,
305
+ we used data augmentation techniques such as random cropping, random rotations, random
306
+ rigid translations, random sampling (i.e. randomly picking a subset of points from the input
307
+ point cloud), oversampling to reduce data imbalance, and additive Gaussian noise where
308
+ applicable. A five-fold cross validation study was performed (using the train and validation
309
+ set) to tune hyperparameters and we reported the area under the receiver operating
310
+ characteristic curves (AUCs) of the model with the best performing hyperparameters as mean
311
+ ± standard deviation. All models were trained on a Nvidia RTX A5000 GPU card until optimum
312
+ performance was reached in the validation set.
313
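The augmentation pipeline described above can be sketched roughly as follows — the parameter values (sample size, shift range, noise level) are illustrative, not the study's, and only the transformations named in the text are shown:

```python
import numpy as np

def augment(points, rng, n_sample=2048, max_shift=0.1, noise_sd=0.005):
    """One augmentation pass of the kind described above: random
    subsampling of the input cloud, a random rotation about the axial
    (z) axis, a random rigid translation, and additive Gaussian noise."""
    # Random sampling: pick a subset of points from the input cloud.
    idx = rng.choice(len(points), size=n_sample,
                     replace=len(points) < n_sample)
    p = points[idx]
    # Random rotation about the z-axis.
    theta = rng.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    p = p @ R.T
    # Random rigid translation.
    p = p + rng.uniform(-max_shift, max_shift, size=3)
    # Additive Gaussian noise (jitter).
    return p + rng.normal(0, noise_sd, size=p.shape)

rng = np.random.default_rng(42)
cloud = rng.normal(size=(20000, 3))   # stand-in for a ~20,000-point ONH cloud
aug = augment(cloud, rng)
```

In training, a fresh pass like this would be applied to each point cloud at every epoch, so the network never sees exactly the same input twice.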

Identification of important 3D structural features of the ONH. The specific architecture of PointNet inherently allowed us to identify regions of the ONH important for the differentiation of the glaucoma severity groups by extracting all points that contributed to the final classification score – the so-called critical points. For each classification group (i.e. normal-mild, mild-moderate, and moderate-advanced), we extracted critical points from all ONHs of the respective test set (networks trained on the respective training set using tuned hyperparameters). Comparing the locations of these points between the three groups allowed us to draw conclusions about the characteristic 3D structural changes of the ONH at different stages of glaucoma.
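Because PointNet aggregates per-point features with a global max pooling, the points that attain the channel-wise maxima fully determine the global feature vector. A sketch of how such critical points can be read off, assuming access to the per-point feature matrix just before the pooling layer (the function name is ours):

```python
import numpy as np

def critical_point_indices(point_features):
    """Given PointNet's per-point feature matrix (n_points, n_features)
    just before global max pooling, return the indices of the points
    that attain the maximum in at least one feature channel — the
    critical points that alone determine the global feature vector."""
    return np.unique(point_features.argmax(axis=0))

rng = np.random.default_rng(1)
feats = rng.normal(size=(1000, 256))   # e.g. a max pooling dimension of 256
crit = critical_point_indices(feats)
```

With a pooling dimension of 256, at most 256 distinct points can be critical, which is why the density maps below highlight only the most salient ONH landmarks.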
Visualization of critical points. To better visualize the location of the resulting critical points, we first constructed an average ONH geometry (represented by the average anterior boundaries of each segmented tissue) for each of the three classification groups, i.e. normal-mild, mild-moderate, and moderate-advanced. For each group, we then projected the critical points (closest point projection) onto the corresponding anterior tissue boundary of the respective average ONH geometry and visualized them as 3D point cloud density maps. A density measure for each point was obtained by counting the neighbouring points within a 75 µm radius sphere. Since all critical points were projected onto an average ONH geometry, such a density map should highlight landmarks of the ONH that exhibit distinct 3D structural changes between the different stages of glaucoma (represented as a cluster of red points in the point cloud density maps).
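The density measure can be sketched as a neighbour count within a 75 µm sphere — a brute-force numpy version for illustration; for clouds of ~20,000 points a KD-tree (e.g. scipy.spatial.cKDTree) would be the practical choice:

```python
import numpy as np

def point_density(points, radius=75.0):
    """Density measure for the point-cloud density maps: for each point,
    count the neighbouring points within a `radius`-micron sphere
    (the point itself is excluded from its own count)."""
    # Pairwise Euclidean distances, O(n^2) — fine for small examples.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return (d <= radius).sum(axis=1) - 1   # subtract the self-match

pts = np.array([[0, 0, 0], [10, 0, 0], [50, 0, 0], [500, 0, 0]], float)
print(point_density(pts))   # -> [2 2 2 0]
```

The three clustered points each count the other two as neighbours, while the isolated point counts none — exactly the contrast the red clusters in the density maps encode.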
Results

Approach 1 – Statistical Analysis of ONH Parameters

We observed that the majority of ONH structural changes occurred in the early glaucoma stage (normal to mild). These changes were also the most substantial in magnitude. Specifically, we noted a decrease in average RNFLT (averaged over all sectors) from 112 ± 26 µm to 83 ± 29 µm (Figure 2a), a decrease in average MRW from 256 ± 60 µm to 169 ± 55 µm (Figure 2b), a decrease in average GCCT from 154 ± 26 µm to 124 ± 30 µm (Figure 2c), no change in average ChT (Figure 2d), an increase in PLD from 136 ± 195 µm to 288 ± 199 µm (Figure 2e), a decrease in MPT from 146 ± 116 µm to 63 ± 70 µm (Figure 2f), an increase in LCD from 410 ± 109 µm to 468 ± 132 µm (Figure 2g), a decrease in LC-GSI from -0.37 ± 0.42 to -0.61 ± 0.33 (Figure 2h), an increase in PPSA from 5.4 ± 4.6 degrees to 9.5 ± 6.2 degrees (Figure 2i), and an increase in BMOA from 2.15 ± 0.5 mm² to 2.28 ± 0.5 mm² (Figure 2j).

Following these substantial structural changes of the ONH in the early stage of glaucoma, most ONH parameters showed a plateau effect, with little change from mild to moderate glaucoma. Only average RNFLT, GCCT, and MRW showed a significant decrease, from 83 ± 29 to 71 ± 30 µm, 124 ± 30 to 111 ± 32 µm, and 169 ± 55 to 159 ± 56 µm, respectively.

In the later stages of glaucoma (moderate to advanced), we observed significant structural changes of the ONH, but they were much less pronounced in magnitude than those seen in the early stage. In detail, the average RNFLT decreased from 71 ± 30 µm to 50 ± 25 µm (Figure 2a), the average MRW decreased from 159 ± 56 µm to 126 ± 46 µm (Figure 2b), the average GCCT decreased from 111 ± 32 µm to 88 ± 27 µm (Figure 2c), the LCD increased from 459 ± 121 to 502 ± 147 µm (Figure 2g), and the BMOA decreased from 2.30 ± 0.58 mm² to 2.12 ± 0.42 mm² (Figure 2j). The ChT (Figure 2d), PLD (Figure 2e), MPT (Figure 2f), LC-GSI (Figure 2h), and PPSA (Figure 2i) showed no significant change.
Examining regional variations, we noted that structural changes of RNFLT, MRW, and GCCT were more pronounced (greater in magnitude) in the superior and inferior octants of the ONH. This was true throughout all stages of glaucoma. In these sectors, we observed that the decrease in MRW slowed as glaucoma severity increased. Specifically, in the early stage of glaucoma (normal to mild), MRW in the superior octant decreased from 295 ± 64 µm to 192 ± 58 µm, while in the later stage (moderate to advanced), the decrease was smaller, from 179 ± 58 µm to 133 ± 49 µm (Figure 2b). In contrast, RNFLT and GCCT decreased linearly as glaucoma severity increased. In the early stage of glaucoma (normal to mild), RNFLT and GCCT in the superior octant decreased from 163 ± 31 to 122 ± 34 µm and 200 ± 31 to 160 ± 34 µm, respectively, while in the later stage (moderate to advanced), the decrease was from 102 ± 35 to 61 ± 31 µm and 141 ± 35 to 99 ± 32 µm (Figures 2a, 2c). With the exception of the inferior octant of the ONH, we did not observe any significant changes in ChT with glaucoma severity (Figure 2d).
Approach 2 – Performance Assessment

Using PointNet, we were able to differentiate ONHs from different glaucoma severity groups. The normal-mild glaucoma classification showed the best performance (AUC: 0.94 ± 0.02), followed by the moderate-advanced (AUC: 0.80 ± 0.04) and mild-moderate glaucoma classifications (AUC: 0.68 ± 0.08).

Approach 2 – Changes of Important 3D Structural Features of the ONH with Glaucoma Severity

For each classification task (i.e. normal-mild, mild-moderate, and moderate-advanced), we pooled all critical points from all ONHs (test set), mapped them onto the corresponding average ONH geometry, and displayed them as a 3D point cloud density map for all ONH tissues (Figure 3) or separately for each ONH tissue (Figure 4).
In general, we observed that critical points were present in both neural (normal-mild: 57%, mild-moderate: 39%, moderate-advanced: 53%) and connective tissues (normal-mild: 43%, mild-moderate: 61%, moderate-advanced: 47%). More specifically, most of the critical points were located in the RNFL+PLT (normal-mild: 53%, mild-moderate: 37%, moderate-advanced: 47%), the sclera (normal-mild: 17%, mild-moderate: 15%, moderate-advanced: 11%), and the LC (normal-mild: 23%, mild-moderate: 43%, moderate-advanced: 31%). In contrast, we observed almost no critical points in the other tissue layers, such as the GCC+IPL, ORL, RPE, and choroid.

On a tissue level, we found that the critical points from the RNFL of all three classification tasks formed an hourglass pattern, with points mainly located in the superior and inferior quadrants. In addition, in the normal-mild glaucoma classification, critical points from the RNFL were mostly located around the neuro-retinal rim, whereas in the moderate-advanced glaucoma classification, these points moved outwards to the peripheral region of the ONH. Interestingly, we also found that in the normal-mild and mild-moderate classifications most of the critical points from the LC were located near the LC insertion zone in the superior (normal-mild) and superior and inferior quadrants (mild-moderate), whereas in the moderate-advanced classification, critical points were more spread out over the entire LC.

Discussion

In this study, we were able to describe the 3D structural phenotype of glaucoma as a function of severity using two separate approaches. In the first, we extracted ‘human-defined’ 3D structural parameters of the ONH and compared them across four groups: normal, mild, moderate, and advanced. In the second, we represented the complex structure of the ONH as a 3D point cloud and used PointNet to uncover the structural landmarks that were most affected by glaucoma severity without any human input. Overall, we found that structural features of both neural and connective tissues contributed to the structural phenotype of glaucoma, and that each of our proposed methods provided its own unique insights.

We found that after substantial structural changes of the ONH in the early stage of glaucoma (normal to mild), almost all ONH parameters reached a plateau, with less change in the later stages (mild to moderate and moderate to advanced). This is in good agreement with previous studies that investigated the structure-function relationship and reported considerable structural loss before any functional VF defects were detectable [26-28]. Some of these studies suggested a “tipping point” in the early stage of glaucoma (at about -3 dB MD) from which onwards even small structural changes were associated with a relatively large decrease in MD value [26, 28]. One should also keep in mind that MD values are usually reported on a logarithmic (non-linear) scale. For instance, a shift in MD value from 0 to -6 dB implies a much larger loss in visual sensitivity on a linear scale than a shift from -6 to -12 dB [29]. Therefore, the observed plateau effect might be a result of reporting MD values on a logarithmic scale. However, further research is needed to verify this hypothesis.

Furthermore, we found that critical points were present in both neural (normal-mild: 57%, mild-moderate: 39%, moderate-advanced: 53%) and connective tissues (normal-mild: 43%, mild-moderate: 61%, moderate-advanced: 47%) at all stages of glaucoma, indicating that the structural changes caused by glaucoma affect both types of tissue in the ONH. Our findings are in line with previous research suggesting that the pathophysiology of glaucoma is complex and cannot be characterized purely as damage to the neural tissue of the ONH (i.e. retinal ganglion cells) [11-13]. Despite these findings, current glaucoma tests focus on assessing neural tissue health, ignoring glaucomatous structural changes of the connective tissue in the ONH. In the future, the development of more comprehensive tests that consider structural changes in both neural and connective tissues could potentially improve the diagnosis and prognosis of glaucoma.

Additionally, we found that most of the critical points (normal-mild: 93%, mild-moderate: 95%, moderate-advanced: 89%) were concentrated in the RNFL+PLT, sclera, and LC. PointNet focuses only on the major structural changes of the optic nerve head, and since we limited the number of critical points to 256, only the ONH landmarks with significant 3D structural changes are highlighted in the point cloud density maps. Therefore, the near absence of critical points in the GCC+IPL, ORL, RPE, and choroid does not necessarily imply that these tissues do not exhibit any structural changes in glaucoma. Rather, our findings suggest that any structural changes in these tissues are likely to be smaller in magnitude than those observed in the RNFL, sclera, and LC.
In both approaches, we found that structural changes of neural tissues were more prominent in the inferior and superior quadrants of the ONH over all stages of glaucoma. This is in accordance with many previous studies (including our recent study on glaucoma diagnosis [25]) that reported significant structural changes of glaucomatous ONHs in these quadrants [30, 31]. In addition, Wang et al. reported a progressive nasalization of the central retinal vessel trunk (CRVT) with glaucoma severity [32]. One might argue that the location of some of the critical points from the RNFL coincides with the location of the CRVT and its branches, indicating changes in CRVT location with disease progression. However, further research is needed to confirm such speculations.

Furthermore, we found that the decline in MRW slowed, whereas RNFLT decreased linearly, as glaucoma severity increased. This suggests that neural tissue changes in the early stage of glaucoma (normal to mild) are more pronounced around the optic disc (i.e. MRW), in contrast to the later stages of glaucoma (mild to moderate and moderate to advanced), where such changes move to the periphery of the ONH (i.e. RNFLT). Interestingly, we found a similar trend in the distribution of critical points from the RNFL. In the early glaucoma group (normal-mild), critical points were mostly located around the neuro-retinal rim. These critical points (with their local tissue thickness) might act as a surrogate measurement for MRW. In the more severe glaucoma groups (i.e. mild-moderate and moderate-advanced), critical points from the RNFL moved to more peripheral regions of the ONH and thus closer to where RNFLT was measured. To date, there is no consensus on whether RNFLT or MRW is better correlated with VF damage (i.e. glaucoma severity). Some studies favored RNFLT [10, 33], whereas others reported better performance of MRW [30, 34]. In addition, Gmeiner et al. reported that, depending on the stage of glaucoma and the major site of glaucomatous damage (peripheral or central), RNFLT might be superior to MRW and vice versa, suggesting that morphological changes of the glaucomatous ONH are diverse and may depend on various factors [33]. Therefore, when assessing ONH structural changes, it might be important to analyze the entire region of the ONH (peripheral and central) with its complex 3D morphology, as was done with PointNet.
We found that a considerable number of critical points were extracted from the sclera at all stages of glaucoma, suggesting significant and progressive structural changes of the sclera with glaucoma severity. In addition, and in line with a previous study [17], we found that the PPSA, representative of the bending of the sclera in the nasal-temporal plane, is significantly larger in mild glaucoma compared to normal eyes; however, no significant differences were found between the later stages of the disease. Considering the presence of critical points from the sclera at all stages of glaucoma, one might speculate that a single parameter like the PPSA is not enough to capture the complex 3D structural changes of the sclera with glaucoma severity, and further research is needed to quantify such changes.

Furthermore, we found that most of the LC critical points were located in the region of the LC insertion zone at all stages of glaucoma. However, the major site of these critical points changed from the superior quadrant (normal-mild) to the superior and inferior quadrants (mild-moderate) to a more diffuse distribution over all quadrants (moderate-advanced). Previous studies reported morphological changes of the LC with glaucoma severity reflected by changes in LC depth [35], LC curvature [36], and LC-GSI [14]. In addition, local LC defects or alterations such as posterior movement of the LC insertion zones [37] and LC disinsertions [38] were observed in glaucomatous eyes. However, none of these studies reported structural changes of the LC insertion zone with glaucoma severity. Our findings suggest that assessing morphological changes of the glaucomatous LC, especially in the region of the LC insertion zone, could be useful in monitoring disease progression (in conjunction with other ONH parameters like RNFLT). However, further longitudinal studies are necessary to unravel the complex 3D structural changes of the LC with glaucoma severity.
Several limitations of this study warrant further discussion. First, although the overall sample size was fairly large, subjects were unevenly distributed across the glaucoma severity groups (normal: 213, mild: 204, moderate: 118, advanced: 118). In addition, the Caucasian subgroup had no healthy controls, which might introduce a bias in both the comparison of ONH parameters and the learning process of PointNet. Therefore, our findings might not be easily transferable to other populations. In the future, we want to investigate possible differences in structural changes of the ONH with glaucoma severity between different ethnic groups.

Second, we used MD values of the 24-2 or 30-2 VF to determine glaucoma severity; however, standard automated perimetry is subjective and sometimes underestimates disease severity [22]. Recent studies suggest chromatic pupillometry [39] or the electroretinogram [40] as objective ways to assess functional loss in glaucomatous eyes. However, these devices have their own limitations, and a future study will have to show whether our findings would change under a different staging system.

Third, the accuracy of the extracted ONH parameters and of the extracted point clouds in representing local structural features of the ONH depends on the performance of the segmentation algorithm. Even though the segmentation software used in this study (Reflectivity, Abyss Processing Pte Ltd, Singapore) was tested and validated on a large cohort of glaucomatous and non-glaucomatous ONHs at different stages of glaucoma, one should keep in mind that the choice of segmentation algorithm might have an impact on the results.

Fourth, although we found that many ONH parameters showed significant differences between glaucoma severity groups, the cross-sectional nature of our data limits causal inferences. As a result, our findings might differ from longitudinal studies that follow individual patients over a period of time. In the future, we aim to validate our findings by applying the approaches developed herein to a longitudinal dataset.

Fifth, the differentiation of ONHs from the mild and moderate glaucoma severity groups was the most challenging task and resulted in a rather small AUC of 0.68 ± 0.08 (PointNet). The moderate performance of PointNet might be due to the plateau effect that we observed after substantial structural changes in the early stage of glaucoma. In the future, we could consider the MD value as a continuous variable and predict its “true” value, instead of performing a binary classification, as this might improve performance.

In summary, we successfully described the 3D structural phenotype of glaucoma as a function of glaucoma severity using: (1) a “traditional” approach based on extracted ONH parameters and (2) a more recently introduced approach based on critical points extracted by PointNet. We showed that ONH structural changes are not limited to neural tissues but occur in both neural and connective tissues simultaneously. In addition, we identified a major site of 3D morphological change of the ONH that might be worth monitoring in the future – the region around the LC insertion zone. With this study, we hope to provide new insights into the complex pathophysiology of glaucoma that might help clinicians in their daily clinical care.

Acknowledgment

We acknowledge funding from (1) the donors of National Glaucoma Research, a program of the BrightFocus Foundation, for support of this research (G2021010S [MJAG]); (2) the SingHealth Duke-NUS Academic Medicine Research Grant (SRDUKAMR21A6 [MJAG]); (3) the “Retinal Analytics through Machine learning aiding Physics (RAMP)” project, supported by the National Research Foundation, Prime Minister’s Office, Singapore, under its Intra-Create Thematic Grant “Intersection Of Engineering And Health” (NRF2019-THE002-0006), awarded to the Singapore MIT Alliance for Research and Technology (SMART) Centre [MJAG/AT/GB]; and (4) the “Tackling & Reducing Glaucoma Blindness with Emerging Technologies (TARGET)” project, supported by the National Medical Research Council (NMRC), Singapore (MOH-OFLCG21jun-0003 [MJAG]).
References

1. Bussel, I.I., G. Wollstein, and J.S. Schuman, OCT for glaucoma diagnosis, screening and detection of glaucoma progression. British Journal of Ophthalmology, 2014. 98(Suppl 2): p. ii15-ii19.
2. Robin, T.A., et al., Performance of community-based glaucoma screening using Frequency Doubling Technology and Heidelberg Retinal Tomography. Ophthalmic Epidemiol, 2005. 12(3): p. 167-78.
3. Lavinsky, F., et al., The Future of Imaging in Detecting Glaucoma Progression. Ophthalmology, 2017. 124(12S): p. S76-S82.
4. Gonzalez-Hernandez, M., et al., Structure–function relationship depends on glaucoma severity. British Journal of Ophthalmology, 2009. 93(9): p. 1195.
5. Medeiros, F.A., et al., The Structure and Function Relationship in Glaucoma: Implications for Detection of Progression and Measurement of Rates of Change. Investigative Ophthalmology & Visual Science, 2012. 53(11): p. 6939-6946.
6. Hood, D.C., et al., Detecting glaucoma with only OCT: Implications for the clinic, research, screening, and AI development. Progress in Retinal and Eye Research, 2022. 90: p. 101052.
7. Kim, N.R., et al., Structure–Function Relationship and Diagnostic Value of Macular Ganglion Cell Complex Measurement Using Fourier-Domain OCT in Glaucoma. Investigative Ophthalmology & Visual Science, 2010. 51(9): p. 4646-4651.
8. Kim, Y.J., et al., Comparative study of macular ganglion cell complex thickness measured by spectral-domain optical coherence tomography in healthy eyes, eyes with preperimetric glaucoma, and eyes with early glaucoma. Japanese Journal of Ophthalmology, 2014. 58(3): p. 244-251.
9. Danthurebandara, V.M., et al., Enhanced Structure–Function Relationship in Glaucoma With an Anatomically and Geometrically Accurate Neuroretinal Rim Measurement. Investigative Ophthalmology & Visual Science, 2015. 56(1): p. 98-105.
10. Amini, N., et al., Structure-Function Relationships in Perimetric Glaucoma: Comparison of Minimum-Rim Width and Retinal Nerve Fiber Layer Parameters. Investigative Ophthalmology & Visual Science, 2017. 58(11): p. 4623-4631.
11. Brooks, D.E., et al., Functional and structural analysis of the visual system in the rhesus monkey model of optic nerve head ischemia. Invest Ophthalmol Vis Sci, 2004. 45(6): p. 1830-40.
12. Quigley, H.A., et al., Morphologic changes in the lamina cribrosa correlated with neural loss in open-angle glaucoma. Am J Ophthalmol, 1983. 95(5): p. 673-91.
13. Yang, H., et al., The connective tissue phenotype of glaucomatous cupping in the monkey eye - Clinical and research implications. Progress in Retinal and Eye Research, 2017. 59: p. 1-52.
14. Tan, N.Y.Q., et al., Changes in the Anterior Lamina Cribrosa Morphology with Glaucoma Severity. Scientific Reports, 2019. 9(1): p. 6612.
15. Vianna, J.R., et al., Serial Changes in Lamina Cribrosa Depth and Neuroretinal Parameters in Glaucoma: Impact of Choroidal Thickness. Ophthalmology, 2017. 124(9): p. 1392-1402.
16. Bellezza, A.J., et al., Deformation of the Lamina Cribrosa and Anterior Scleral Canal Wall in Early Experimental Glaucoma. Investigative Ophthalmology & Visual Science, 2003. 44(2): p. 623-637.
17. Wang, X., et al., Peripapillary sclera exhibits a v-shaped configuration that is more pronounced in glaucoma eyes. Br J Ophthalmol, 2022. 106(4): p. 491-496.
18. Charles, R.Q., et al., PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017.
19. Atalay, E., et al., Pattern of Visual Field Loss in Primary Angle-Closure Glaucoma Across Different Severity Levels. Ophthalmology, 2016. 123(9): p. 1957-64.
20. Keltner, J.L., et al., Classification of visual field abnormalities in the ocular hypertension treatment study. Arch Ophthalmol, 2003. 121(5): p. 643-50.
21. Mills, R.P., et al., Categorizing the Stage of Glaucoma From Pre-Diagnosis to End-Stage Disease. American Journal of Ophthalmology, 2006. 141(1): p. 24-30.
22. De Moraes, C.G., et al., Association of Macular Visual Field Measurements With Glaucoma Staging Systems. JAMA Ophthalmology, 2019. 137(2): p. 139-145.
23. Devalla, S.K., et al., Towards label-free 3D segmentation of optical coherence tomography images of the optic nerve head using deep learning. Biomed Opt Express, 2020. 11(11): p. 6356-6378.
24. Thakku, S.G., et al., A Global Shape Index to Characterize Anterior Lamina Cribrosa Morphology and Its Determinants in Healthy Indian Eyes. Investigative Ophthalmology & Visual Science, 2015. 56(6): p. 3604-3614.
25. Braeu, F.A., et al., Geometric Deep Learning to Identify the Critical 3D Structural Features of the Optic Nerve Head for Glaucoma Diagnosis. 2022. arXiv:2204.06931.
26. Park, K.-H., et al., Bruch's membrane opening-minimum rim width and visual field loss in glaucoma: a broken stick analysis. International Journal of Ophthalmology, 2018. 11(5): p. 828-834.
27. Jonas, J.B. and A.E. Gründler, Correlation between mean visual field loss and morphometric optic disk variables in the open-angle glaucomas. Am J Ophthalmol, 1997. 124(4): p. 488-97.
28. Wollstein, G., et al., Retinal nerve fibre layer and visual function loss in glaucoma: the tipping point. Br J Ophthalmol, 2012. 96(1): p. 47-52.
29. Liebmann, K., C.G. De Moraes, and J.M. Liebmann, Measuring Rates of Visual Field Progression in Linear Versus Nonlinear Scales: Implications for Understanding the Relationship Between Baseline Damage and Target Rates of Glaucoma Progression. J Glaucoma, 2017. 26(8): p. 721-725.
30. Chauhan, B.C., et al., Enhanced Detection of Open-angle Glaucoma with an Anatomically Accurate Optical Coherence Tomography–Derived Neuroretinal Rim Parameter. Ophthalmology, 2013. 120(3): p. 535-543.
31. Mwanza, J.-C., et al., Ability of Cirrus HD-OCT Optic Nerve Head Parameters to Discriminate Normal from Glaucomatous Eyes. Ophthalmology, 2011. 118(2): p. 241-248.e1.
32. Wang, M., et al., Relationship Between Central Retinal Vessel Trunk Location and Visual Field Loss in Glaucoma. American Journal of Ophthalmology, 2017. 176: p. 53-60.
33. Gmeiner, J.M.D., et al., Comparison of Bruch's Membrane Opening Minimum Rim Width and Peripapillary Retinal Nerve Fiber Layer Thickness in Early Glaucoma Assessment. Investigative Ophthalmology & Visual Science, 2016. 57(9): p. OCT575-OCT584.
34. Muth, D.R. and C.W. Hirneiß, Structure–Function Relationship Between Bruch's Membrane Opening–Based Optic Nerve Head Parameters and Visual Field Defects in Glaucoma. Investigative Ophthalmology & Visual Science, 2015. 56(5): p. 3320-3328.
35. Park, S.C., et al., Lamina cribrosa depth in different stages of glaucoma. Invest Ophthalmol Vis Sci, 2015. 56(3): p. 2059-64.
36. Lee, S.H., et al., Diagnostic Power of Lamina Cribrosa Depth and Curvature in Glaucoma. Investigative Ophthalmology & Visual Science, 2017. 58(2): p. 755-762.
37. Yang, H., et al., Posterior (outward) migration of the lamina cribrosa and early cupping in monkey experimental glaucoma. Investigative Ophthalmology & Visual Science, 2011. 52(10): p. 7109-7121.
38. Takayama, K., et al., Three-Dimensional Imaging of Lamina Cribrosa Defects in Glaucoma Using Swept-Source Optical Coherence Tomography. Investigative Ophthalmology & Visual Science, 2013. 54(7): p. 4798-4807.
39. Najjar, R.P., et al., Handheld chromatic pupillometry can accurately and rapidly reveal functional loss in glaucoma. British Journal of Ophthalmology, 2021: p. bjophthalmol-2021-319938.
40. Sarossy, M., et al., Prediction of glaucoma severity using parameters from the electroretinogram. Scientific Reports, 2021. 11(1): p. 23886.

Figures

Figure 1. Overview of the two approaches used to describe the 3D structural phenotype of glaucoma as a function of severity. Approach 1 was based on the comparison of well-established ONH parameters between different glaucoma severity groups (a-c). Approach 2 leverages geometric deep learning to identify important 3D landmarks of the ONH to differentiate ONHs at different stages of glaucoma. By examining the changes of these critical 3D structural features with glaucoma severity, we were able to draw conclusions about the complex 3D structural changes of the ONH taking place at different stages of glaucoma (a, b, d, and e).

Figure 2. Summary of the statistical analysis of automatically extracted ONH parameters. RNFLT, MRW, GCCT, and ChT are shown as sector plots (T: temporal, ST: superior-temporal, S: superior, SN: superior-nasal, N: nasal, NI: nasal-inferior, and I: inferior sector) with values for each group given as average ± standard deviation. Non-sectorial parameters are presented as boxplots. A significant difference between two groups (p<0.05) is indicated with a * (determined by post-hoc Tukey HSD tests).
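The group comparison underlying Figure 2 can be sketched as follows — a one-way ANOVA across the four severity groups on synthetic RNFLT-like samples (the numbers only mimic the reported group means and are not study data); in the study, significant ANOVAs were followed by post-hoc Tukey HSD tests for pairwise comparisons:

```python
import numpy as np
from scipy.stats import f_oneway

# Synthetic RNFLT-like samples (µm) for the four severity groups,
# drawn to mimic the reported group means — illustrative only.
rng = np.random.default_rng(0)
groups = {
    "normal":   rng.normal(112, 26, 213),
    "mild":     rng.normal(83, 29, 204),
    "moderate": rng.normal(71, 30, 118),
    "advanced": rng.normal(50, 25, 118),
}

# One-way ANOVA across the four groups; pairwise post-hoc comparisons
# (Tukey HSD, as in Figure 2) would follow a significant result.
stat, p = f_oneway(*groups.values())
print(f"F = {stat:.1f}, p = {p:.2e}")
```

With group means this far apart relative to their spreads, the ANOVA is overwhelmingly significant, mirroring the p<0.001 values reported for most parameters.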

Figure 3. Critical points resulting from the three classification tasks: normal-mild, mild-moderate, and moderate-advanced. From left to right column: 3D, en face (top), and sagittal (side) views. Surfaces represent the average anterior tissue boundaries for each respective dataset: RNFL+PLT (red), GCL+IPL (green), ORL (blue), RPE (yellow), choroid (purple), sclera (cyan), and LC (orange). Red-coloured critical points correspond to ONH regions with high importance for the differentiation of the respective glaucoma severity groups.
+
799
+
800
+
801
+ 29
802
+
803
+ 1
804
+
805
+ 2
806
+ Figure 4. En face (top) view layer by layer comparison (columns) of critical points at different stages of glaucoma severity (rows). Critical points
807
+ 3
808
+ are presented as point cloud density maps with colours indicating the number of neighbouring points within a sphere with a radius of 75 µm.
809
+
810
+ 4
811
+
812
+
813
+ 30
814
+ Tables
815
+ 5
816
+
817
+ 6
818
+ Table 1. Summary of glaucoma severity groups.
819
+ 7
820
+
821
+ 8
822
+
823
+ NORMAL
824
+ (N=213)
825
+ MILD
826
+ (N=204)
827
+ MODERATE
828
+ (N=118)
829
+ ADVANCED
830
+ (N=118)
831
+ P*
832
+ AGE, YEARS
833
+ 63.36 (6.99)
834
+ 66.9 (6.42)
835
+ 68.05 (7.11)
836
+ 68.52 (7.69)
837
+ <0.001
838
+ SEX, FEMALE
839
+ 126 (59.15)
840
+ 91 (44.61)
841
+ 49 (41.52)
842
+ 43 (36.44)
843
+ <0.001
844
+ RACE
845
+
846
+
847
+
848
+
849
+
850
+ CHINESE
851
+ 213
852
+ 178
853
+ 97
854
+ 53
855
+ <0.001
856
+ CAUCASIAN
857
+ 0
858
+ 26
859
+ 21
860
+ 65
861
+ MD, DB
862
+ -1.41 (2.11)
863
+ -3.35 (1.95)
864
+ -8.16 (2.35)
865
+ -18.64 (5.31)
866
+ <0.001
867
+ Data are in mean (standard deviation) or n (%) as appropriate.
868
+ 9
869
+ MD = mean deviation of the 24-2 or 30-2 visual field test.
870
+ 10
871
+ *Comparison between the four groups using Fisher’s exact test (for sex and race) and ANOVA
872
+ 11
873
+ (for age and MD).
874
+ 12
875
+
3NE1T4oBgHgl3EQfAQKK/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
49FKT4oBgHgl3EQf9y5B/content/tmp_files/2301.11955v1.pdf.txt ADDED
+ Statistical whitening of neural populations with gain-modulating interneurons
+ Lyndon R. Duong*1, David Lipshutz*2, David J. Heeger1, Dmitri B. Chklovskii2,3, Eero P. Simoncelli1,2
+
+ Abstract
+ Statistical whitening transformations play a fundamental role in many computational systems, and may also play an important role in biological sensory systems. Individual neurons appear to rapidly and reversibly alter their input-output gains, approximately normalizing the variance of their responses. Populations of neurons appear to regulate their joint responses, reducing correlations between neural activities. It is natural to see whitening as the objective that guides these behaviors, but the mechanism for such joint changes is unknown, and direct adjustment of synaptic interactions would seem to be both too slow and insufficiently reversible. Motivated by the extensive neuroscience literature on rapid gain modulation, we propose a recurrent network architecture in which joint whitening is achieved through modulation of gains within the circuit. Specifically, we derive an online statistical whitening algorithm that regulates the joint second-order statistics of a multi-dimensional input by adjusting the marginal variances of an overcomplete set of interneuron projections. The gains of these interneurons are adjusted individually, using only local signals, and feed back onto the primary neurons. The network converges to a state in which the responses of the primary neurons are whitened. We demonstrate through simulations that the behavior of the network is robust to poor conditioning or noise when the gains are sign-constrained, and can be generalized to achieve a form of local whitening in convolutional populations, such as those found throughout the visual or auditory system.
+
+ *Equal contribution. 1Center for Neural Science, New York University, New York, NY. 2Center for Computational Neuroscience, Flatiron Institute, New York, NY. 3Neuroscience Institute, New York University School of Medicine, New York, NY. Correspondence to: Lyndon R. Duong <[email protected]>, David Lipshutz <dlipshutz@flatironinstitute.org>.
+ Under review.
+ 1. Introduction
+ Statistical whitening transformations, in which multi-dimensional inputs are decorrelated and normalized to have unit variance, are common in statistical signal processing and machine learning systems. For example, they provide a common step in statistical factorization methods (Hyvärinen & Oja, 2000) and are often used as a preprocessing step for training deep networks (Krizhevsky, 2009). Empirical evidence shows that statistical whitening improves unsupervised feature learning (Coates et al., 2011). More recently, self-supervised learning methods have used statistical whitening or related decorrelation transformations to prevent representational collapse (Ermolov et al., 2021; Zbontar et al., 2021; Bardes et al., 2021; Hua et al., 2021). Whitening in neural networks is often performed in the offline setting. However, online methods are useful, especially when the inputs are from dynamic environments.
+ In early sensory systems, which receive inputs from dynamic environments, changes in sensory input statistics induce rapid changes in the input-output gains of single neurons, allowing cells to normalize their output variance (Fairhall et al., 2001; Nagel & Doupe, 2006). This is hypothesized to enable maximal information transmission (Barlow, 1961; Laughlin, 1981; Fairhall et al., 2001). At the population level, whitening and related adaptive decorrelation transformations have been reported in sensory areas such as the early visual cortex of cats (Benucci et al., 2013) and the olfactory bulb in zebrafish (Friedrich, 2013; Wanner & Friedrich, 2020) and mice (Giridhar et al., 2011; Gschwend et al., 2015). However, the mechanisms underlying such whitening behaviors are unknown, and would seem to require coordination among all pairs of neurons, as opposed to the single-neuron case, which relies only on gain rescaling.
+ Here, motivated by the large neuroscience literature on rapid gain modulation, we propose a novel recurrent network architecture for statistical whitening that exclusively relies on gain modulation. In particular, we introduce a novel objective for statistical whitening that is expressed solely in terms of the marginal variances of an overcomplete representation of the input signal. We derive a recurrent circuit to optimize the objective, and show that it corresponds to a network comprising primary neurons and an auxiliary population of interneurons with scalar gain modulation. Importantly, the
+ arXiv:2301.11955v1 [q-bio.NC] 27 Jan 2023
+
+ Figure 1. Schematic of a recurrent statistical whitening network with 2 primary neurons and 3 interneurons. Left: 2D scatter plot of the (non-Gaussian) network inputs x = (x1, x2), whose covariance is the ellipse. Center: Primary neurons, whose outputs are y = (y1, y2), receive external feedforward inputs, x, and recurrent feedback inputs from an auxiliary population of interneurons, −Σ_{i=1}^{3} g_i z_i w_i. Linear projection vectors {w1, w2, w3} ∈ R^2 encode non-negative feedforward synaptic weights connecting the primary neurons to interneuron i = 1, 2, 3 (symmetric weights are used for feedback connections). The weights are shown in the left and right panels with corresponding colors. Inset: The ith interneuron (e.g., here i = 2) receives input z_i = w_i^⊤ y, which is multiplied by its gain g_i to produce output g_i z_i. Its gain, g_i, is adjusted s.t. Δg_i ∝ z_i^2 − 1. The dark arrow indicates that the gain update operates on a slower time scale. Right: Scatter plot of the whitened network outputs y. Outputs have unit variance along all w_i's, which is equivalent to having identity covariance matrix, i.e., C_yy = I_N (black circle).
+ network operates online, and its responses converge to the classical ZCA whitening solution without supervision or backpropagation. To demonstrate potential applications of this framework, we show that gain modulation serves as an implicit gating mechanism, which facilitates fast context-dependent whitening. Further, we show how non-negative gain modulation provides a novel approach for dealing with ill-conditioned or noisy data. Finally, we relax the overcompleteness constraint in our objective and provide a method for local decorrelation of convolutional populations.
+ 2. A novel objective for ZCA whitening
+ Consider a neural network with N primary neurons. For each t = 1, 2, . . . , let x_t and y_t be N-dimensional vectors whose components respectively denote the inputs and outputs of the primary neurons at time t (Figure 1). Without loss of generality, we assume the inputs x_t are centered.
+ 2.1. Conventional objective
+ Statistical whitening aims to linearly transform inputs x_t so that the covariance of the outputs y_t is the identity, i.e.,
+     C_yy = ⟨y_t y_t^⊤⟩_t = I_N,                                          (1)
+ where ⟨·⟩_t denotes the expectation operator over t, and I_N denotes the N × N identity matrix (see Appendix A for a list of notation used in this work).
+ It is well known that whitening is not unique: any orthogonal rotation of a random vector with identity covariance matrix also has identity covariance matrix. There are several common choices to resolve this rotational ambiguity, each with their own advantages (Kessy et al., 2018). Here, we focus on the popular whitening transformation called Zero-phase Component Analysis (ZCA) whitening or Mahalanobis whitening, which is the whitening transformation that minimizes the mean-squared error between the inputs and the whitened outputs (alternatively, the one whose transformation matrix is symmetric). Mathematically, the ZCA-whitened outputs are the optimal solution to the minimization problem
+     min_{y_t} ⟨‖x_t − y_t‖_2^2⟩_t   s.t.   ⟨y_t y_t^⊤⟩_t = I_N,          (2)
+ where ‖·‖_2 denotes the Euclidean norm on R^N. Assuming the covariance of the inputs, C_xx := ⟨x_t x_t^⊤⟩_t, is positive definite, the unique solution to the optimization problem in Equation 2 is y_t = C_xx^{−1/2} x_t for t = 1, 2, . . . , where C_xx^{−1/2} is the inverse matrix square root of C_xx.
+ Equation 2 provides a starting point for deriving online ZCA whitening algorithms that can be implemented with recurrent neural networks that learn by updating their synaptic weights (Pehlevan & Chklovskii, 2015).
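For reference, the closed-form offline solution y_t = C_xx^{−1/2} x_t can be computed directly. The following is a minimal NumPy sketch (the function name and test data are ours, not from the paper):

```python
import numpy as np

def zca_whiten(X):
    """ZCA-whiten centered data X (shape: samples x N).

    Computes y_t = C_xx^{-1/2} x_t, where C_xx is the empirical input covariance.
    """
    C = X.T @ X / X.shape[0]                    # input covariance C_xx
    evals, evecs = np.linalg.eigh(C)            # eigendecomposition (C is symmetric)
    C_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T  # C_xx^{-1/2}
    return X @ C_inv_sqrt.T                     # whitened outputs

# Correlated example inputs.
rng = np.random.default_rng(0)
L = np.array([[2.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.0, 0.0, 0.5]])
X = rng.standard_normal((10000, 3)) @ L
Y = zca_whiten(X - X.mean(axis=0))

# Output covariance is the identity (up to numerical precision).
print(np.allclose(Y.T @ Y / Y.shape[0], np.eye(3)))  # True
```

Note that this offline transform requires the full eigendecomposition of C_xx; the point of the paper's Section 2.2 is to avoid exactly this, by working only with marginal variances.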
+ 2.2. A novel objective using marginal statistics
+ We formulate a novel objective for learning the ZCA whitening transform via gain modulation. Our innovation exploits the fact that a random vector has identity covariance matrix (i.e., Equation 1 holds) if and only if it has unit marginal
+ variance along all possible 1D projections (a form of tomography; see Related Work). We can derive a tighter statement that holds for a finite but overcomplete set of at least K ≥ K_N := N(N + 1)/2 distinct axes ('overcomplete' simply means that the number of axes exceeds the dimensionality of the input, i.e., K > N). Intuitively, this equivalence holds because an N × N symmetric matrix has K_N degrees of freedom, so the marginal variances along K ≥ K_N distinct axes are sufficient to constrain the N × N (symmetric) covariance matrix. We formalize this equivalence in the following proposition, whose proof is provided in Appendix B.
+ Proposition 2.1. Fix K ≥ K_N. Suppose w_1, . . . , w_K ∈ R^N are unit vectors¹ such that
+     span({w_1 w_1^⊤, . . . , w_K w_K^⊤}) = S^N,                          (3)
+ where S^N denotes the K_N-dimensional vector space of N × N symmetric matrices. Then Equation 1 holds if and only if the projection of y_t onto each unit vector w_1, . . . , w_K has unit variance, i.e.,
+     ⟨(w_i^⊤ y_t)^2⟩_t = 1   for   i = 1, . . . , K.                      (4)
+ Assuming Equation 3 holds, we can interpret the set of vectors {w_1, . . . , w_K} as a frame (i.e., an overcomplete basis; Casazza et al., 2013) in R^N such that the covariance of the outputs C_yy can be computed from the variances of the K-dimensional projection onto the set of frame vectors. Thus, we can replace the whitening constraint in Equation 2 with the equivalent marginal variance constraint to obtain the following objective:
+     min_{y_t} ⟨‖x_t − y_t‖_2^2⟩_t   s.t.   Equation 4 holds.             (5)
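The counting argument behind Proposition 2.1 can be checked numerically: for N = 2, the marginal variances along K_N = 3 suitable axes pin down the full symmetric covariance. A small self-contained check (the equi-angular frame and test covariance are our illustrative choices):

```python
import numpy as np

N = 2
K = N * (N + 1) // 2                      # K_N = 3
# An equi-angular frame in R^2: three unit vectors at 60-degree spacing.
angles = np.array([0.0, np.pi / 3, 2 * np.pi / 3])
W = np.stack([np.cos(angles), np.sin(angles)])  # shape (N, K)

# Flattened outer products w_i w_i^T span the 3-dim space of 2x2
# symmetric matrices, so the marginals v_i = w_i^T C w_i determine C.
M = np.stack([np.outer(W[:, i], W[:, i]).ravel() for i in range(K)])

C_true = np.array([[2.0, 0.7], [0.7, 1.0]])                   # some covariance
v = np.array([W[:, i] @ C_true @ W[:, i] for i in range(K)])  # marginal variances

# Recover the symmetric matrix with these marginals (min-norm solution
# of the underdetermined system is the symmetric one).
C_rec = np.linalg.lstsq(M, v, rcond=None)[0].reshape(N, N)
print(np.allclose(C_rec, C_true))  # True
```

In particular, if all K marginals equal 1, the only consistent symmetric covariance is the identity, which is exactly the content of the proposition.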
+ 3. A recurrent neural network with gain adaptation for ZCA whitening
+ In this section, we derive an online algorithm for solving the optimization problem in Equation 5 and map the algorithm onto a recurrent neural network with gain modulation. We first introduce Lagrange multipliers to enforce the constraints, which transforms the minimization problem into a minimax problem. We then solve the minimax problem by taking stochastic gradient steps.
+ Assume we have an overcomplete frame {w_1, . . . , w_K} in R^N satisfying Equation 3. We concatenate the frame vectors into an N × K matrix W := [w_1, . . . , w_K]. In our network, primary neurons project onto the layer of K interneurons with the synaptic weights representing matrix W. Then, the post-synaptic currents in interneurons at time t encode the K-dimensional vector z_t := W^⊤ y_t (Figure 1). We emphasize that the synaptic weight matrix W will remain fixed in our whitening algorithm.
+ ¹The unit-length assumption is without loss of generality and is imposed here for notational convenience.
+ 3.1. Enforcing the marginal variance constraints with scalar gains
+ We introduce Lagrange multipliers g_1, . . . , g_K ∈ R to enforce the K constraints in Equation 4. We concatenate the Lagrange multipliers into the K-dimensional vector g := [g_1, . . . , g_K]^⊤ ∈ R^K, and formulate the problem as a saddle point optimization,
+     max_g min_{y_t} ⟨ℓ(x_t, y_t, g)⟩_t,                                  (6)
+     where ℓ(x, y, g) := ‖x − y‖_2^2 + Σ_{i=1}^{K} g_i ((w_i^⊤ y)^2 − 1).
+ Here, we have interchanged the order of maximization over g and minimization over y_t, which is justified because ℓ(x_t, y_t, g) is convex in y_t and linear in g; see Appendix C. In our neural network implementation, g_i will correspond to the multiplicative gain associated with the ith interneuron, so that its output at time t is g_i z_{t,i} (Figure 1, Inset). From Equation 6, we see that the gain of the ith interneuron, g_i, enforces the marginal variance of y_t along the axis spanned by w_i to be unity. Importantly, the gains are not hyperparameters, but rather they are optimization variables which promote statistical whitening of {y_t}, preventing the neural outputs from trivially matching the inputs {x_t}.
+ 3.2. Deriving recurrent neural network update rules
+ To solve Equation 6 in the online setting, we assume there is a time-scale separation between 'fast' neural dynamics and 'slow' gain updates, so that at each time step the neural dynamics equilibrate before the gains are adjusted. This allows us to perform the inner minimization over {y_t} before the outer maximization over the gains. In biological neural networks, this is justifiable because a given neuron's activations (i.e., action potential firing) operate on a much more rapid time-scale than its intrinsic input-output gain, which is driven by slower processes such as changes in calcium ion concentration gradients (Ferguson & Cardin, 2020).
+ 3.2.1. FAST NEURAL ACTIVITY DYNAMICS
+ For each time step t = 1, 2, . . . , we minimize the objective ℓ(x_t, y_t, g) over y_t by recursively running gradient-descent steps to equilibrium:
+     y_t ← y_t − (γ/2) ∇_y ℓ(x_t, y_t, g)
+         = y_t + γ {x_t − W(g ◦ z_t) − y_t},                              (7)
+ where γ > 0 is a small constant, the circle '◦' denotes the Hadamard (element-wise) product, g ◦ z_t is a vector of K gain-modulated interneuron outputs, and we assume the primary cell outputs are initialized at zero.
+ We see from the right-hand side of Equation 7 that the 'fast' dynamics of the primary neurons are driven by three terms (inside the curly braces): i) constant feedforward external input, x_t; ii) recurrent gain-modulated feedback from interneurons, −W(g ◦ z_t); and iii) a leak term, −y_t. Because the neural activity dynamics are linear, we can analytically solve for their equilibrium (i.e., steady state), ȳ_t, by setting the update in Equation 7 to zero:
+     ȳ_t = (I_N + W diag(g) W^⊤)^{−1} x_t
+         = (I_N + Σ_{i=1}^{K} g_i w_i w_i^⊤)^{−1} x_t,                    (8)
+ where diag(g) denotes the K × K diagonal matrix whose (i, i)th entry is g_i, for i = 1, . . . , K. The equilibrium feedforward interneuron inputs are then given by
+     z̄_t = W^⊤ ȳ_t.                                                      (9)
+ The gain-modulated outputs of the K interneurons, g ◦ z_t, are then projected back onto the primary cells via symmetric weights, −W (Figure 1).
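Since the equilibrium in Equations 8 and 9 is available in closed form, it can be computed directly rather than by iterating Equation 7. A minimal sketch (the frame, gains, and input are arbitrary example values of ours):

```python
import numpy as np

def equilibrium(x, W, g):
    """Steady-state responses of Eqs. 8-9: solve (I + W diag(g) W^T) y = x."""
    N = W.shape[0]
    A = np.eye(N) + W @ np.diag(g) @ W.T
    y_bar = np.linalg.solve(A, x)   # equilibrium primary-neuron outputs (Eq. 8)
    z_bar = W.T @ y_bar             # equilibrium interneuron inputs (Eq. 9)
    return y_bar, z_bar

# Example: 2 primary neurons, 3 interneurons (equi-angular frame).
angles = np.array([0.0, np.pi / 3, 2 * np.pi / 3])
W = np.stack([np.cos(angles), np.sin(angles)])
g = np.array([0.5, -0.2, 0.1])
x = np.array([1.0, -2.0])
y_bar, z_bar = equilibrium(x, W, g)

# At the fixed point, the bracketed term of Eq. 7 vanishes.
print(np.allclose(x - W @ (g * z_bar) - y_bar, 0.0))  # True
```

Solving the N × N linear system is exactly the step that the biological circuit performs implicitly through its fast recurrent dynamics.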
+ 3.2.2. SLOW GAIN DYNAMICS
+ After the fast neural activities reach steady state, the interneuron gains are updated by taking a stochastic gradient-ascent step with respect to g:
+     g ← g + (η/2) ∇_g ℓ(x_t, ȳ_t, g)
+       = g + η (z̄_t^{◦2} − 1),                                           (10)
+ where η > 0 is the learning rate, the superscript '◦2' denotes the element-wise squaring operation (i.e., z̄_t^{◦2} = [z̄_{t,1}^2, . . . , z̄_{t,K}^2]^⊤), and 1 = [1, . . . , 1]^⊤ is the K-dimensional vector of ones². Remarkably, the update to the ith interneuron's gain, g_i (Equation 10), depends only on the online estimate of the variance of its equilibrium input, z̄_{t,i}^2, and its distance away from the target variance, 1. Networks such as these, which adapt using only signals local to each interneuron, are suitable candidates for hardware implementations using low-power neuromorphic chips (Pehlevan & Chklovskii, 2019). Thus, although statistical whitening inherently requires a joint transformation in response to joint statistics, our recurrent network solution operates solely using single-neuron gain changes in response to marginal statistics.
+ ²Appendix D generalizes the gain update to allow temporally weighted averaging of the variance over past samples.
+ 3.2.3. ONLINE UNSUPERVISED ALGORITHM
+ By combining Equations 7 – 10, we arrive at our online recurrent neural network algorithm for statistical whitening via gain modulation (Algorithm 1). We also provide batched and offline versions of the algorithm in Appendix E.
+ Algorithm 1 Online ZCA whitening via gain modulation
+  1: Input: Centered inputs x_1, x_2, · · · ∈ R^N
+  2: Initialize: W ∈ R^{N×K}; g ∈ R^K; η, γ > 0
+  3: for t = 1, 2, . . . do
+  4:   y_t ← 0
+  5:   {Run y_t and z_t dynamics to equilibrium}
+  6:   while not converged do
+  7:     z_t ← W^⊤ y_t
+  8:     y_t ← y_t + γ {x_t − W(g ◦ z_t) − y_t}
+  9:   end while
+ 10:   g ← g + η (z_t^{◦2} − 1)   {Update gains}
+ 11: end for
+ There are a few points worth noting about this network:
+ • The weights W remain fixed in Algorithm 1. Rather, the gains g adapt to statistically whiten the outputs. This allows the whitening to be easily adjusted and reversed, by simply returning the gains to their default states.
+ • While the objective is effectively in the form of an auto-encoding loss function involving an ℓ2 reconstruction term (Eq. 6), the recurrent network never explicitly reconstructs its inputs.
+ • Since all recurrent dynamics are linear, it is possible to bypass the inner loop representing the fast dynamics of the primary cells (lines 6 – 9 of Algorithm 1) by directly computing the equilibrium responses ȳ_t and z̄_t (Eqs. 8, 9).
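Putting the pieces together, Algorithm 1 can be sketched in NumPy, using the closed-form equilibrium of Eqs. 8-9 in place of the inner loop (per the third bullet above). This is a minimal sketch; the frame, learning-rate schedule, and synthetic data are our illustrative choices, not from the paper:

```python
import numpy as np

def whiten_online(X, W, eta=0.02, n_epochs=30):
    """Online ZCA whitening via gain modulation (Algorithm 1), with the
    inner loop replaced by the closed-form equilibrium of Eqs. 8-9."""
    N, K = W.shape
    g = np.zeros(K)                              # interneuron gains
    for epoch in range(n_epochs):
        lr = eta / (1.0 + epoch)                 # decaying step size (our choice)
        for x in X:
            A = np.eye(N) + W @ np.diag(g) @ W.T
            y = np.linalg.solve(A, x)            # Eq. 8: equilibrium y_t
            z = W.T @ y                          # Eq. 9: interneuron inputs
            g = g + lr * (z ** 2 - 1.0)          # Eq. 10: local gain update
    return g

# Correlated, centered 2D inputs.
rng = np.random.default_rng(1)
L = np.array([[1.2, 0.0], [0.4, 0.8]])
X = rng.standard_normal((2000, 2)) @ L.T

angles = np.array([0.0, np.pi / 3, 2 * np.pi / 3])  # K_N = 3 frame
W = np.stack([np.cos(angles), np.sin(angles)])
g = whiten_online(X, W)

# With the learned gains, outputs are approximately whitened.
A = np.eye(2) + W @ np.diag(g) @ W.T
Y = np.linalg.solve(A, X.T).T
C_yy = Y.T @ Y / Y.shape[0]
print(np.round(C_yy, 2))   # close to the 2x2 identity matrix
```

Note that only the scalar gains change during learning; the synaptic weights W stay fixed throughout, as the first bullet emphasizes.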
+ 4. Numerical experiments and applications
+ We provide different applications of our recurrent ZCA whitening network via gain modulation. In particular, we emphasize that gain adaptation is distinct from, while also complementary to, synaptic weight learning. We therefore side-step the goal of learning the frame W, and assume it is known. This allows us to decouple and analyze the general properties of our proposed gain modulation framework independently from the choice of frame.
+ 4.1. Gain modulation: a new solution to ZCA whitening
+ We first demonstrate that our algorithm succeeds in yielding statistically whitened outputs. We simulated a network with interneuron weights, W, as illustrated in Figure 1 (N=2,
+ Figure 2. Network from Figure 1 (with corresponding colors; N=2, K=K_N=3, η=2E-3) whitening to two randomly generated statistical contexts online (10K steps each). Top: Marginal variances (log scale) measured by interneurons approach 1 over time. Middle: Dynamics of interneuron gains, which are applied to z_i before feeding back onto the primary cells. Dashed lines are optimal gains (Appendix F). Bottom: Whitening error over time.
+ K=K_N=3). Figure 2 shows network adaptation to inputs from two contexts with randomly generated underlying input covariances C_xx (10K gain update steps each). As update steps progress, all marginal variances converge to unity, as expected from the objective (top panel). To achieve ZCA whitening at equilibrium, I_N + Σ_{i=1}^{K} g_i w_i w_i^⊤ = C_xx^{1/2} (Equation 8). When the number of interneurons satisfies K=K_N, the optimal gains to achieve ZCA whitening can be solved analytically (see Appendix F for details). These are displayed as dashed lines in the middle panel. We found that the network successfully adapted to the two random statistical contexts, and converged to the optimal set of gains to achieve whitened y_t (Figure 2). Accordingly, the whitening error, as measured by the Frobenius norm between C_yy and I_N, approached zero (bottom panel). Thus, with each interneuron monitoring its respective marginal input variance, z_i^2, and re-scaling its input-output gain to modulate feedback onto the primary neurons, the network succeeded in adapting to each context and yielded whitened outputs.
+ 4.2. Rate of convergence depends on frame W
+ Thus far, we have assumed the frame, W, was fixed and known (e.g., optimized through pre-training or long time-scale development). This distinguishes our method from existing ZCA whitening methods, which typically operate by estimating the eigenvectors of the data. By contrast, our network obviates learning the principal axes of the data altogether, and instead uses a statistical sampling approach along a fixed set of measurement axes.
+ If the number of interneurons K=K_N, their gains will descend the gradient of the objective (Equation 10), and by Proposition 2.1, the outputs will become whitened. We were interested in how effectively the network whitened randomly sampled inputs with fixed input covariance, depending on its initialization. Figure 3 summarizes an empirical convergence test of 100 networks where N = 2 with three different kinds of frame W ∈ R^{N×K_N}: i) with i.i.d. Gaussian entries ('Random'); ii) through an optimization procedure that finds a frame whose columns have minimum mutual coherence and cover the ambient space ('Optimized'); and iii) a frame whose first N columns were the eigenvectors of the data and the remaining K_N − N columns were random Gaussian entries ('Spectral'). For clarity, we have removed the effects of sampling stochasticity by running the offline version of our network, which assumes having direct access to the input covariance (Appendix E); the online version was qualitatively similar.
+ Figure 3. Convergence depends on qualitative structure of W. Networks each had N=2, K=K_N=3, η=5E-3. Shaded error regions are standard errors over the 100 repeats.
+ The Spectral frame defines a bound on achievable performance, converging much faster than the Random and Optimized frames. This is because the interneuron axes were aligned with the input's principal axes, and a simple gain scaling along those directions is the optimal whitening solution. Interestingly, we found that networks with optimized weights systematically converged faster than randomly-initialized frames. These results indicate that the choice of frame does in fact play an important role in the effectiveness of our algorithm. Namely, increased coverage of the space by the frame vectors facilitates whitening with our gain re-scaling mechanism. The random sampling approach has little hope of scaling to high dimensional inputs, and the green line in Figure 3 shows that one would benefit from aligning the frame vectors to the principal axes of the inputs.
+ 4.3. Implicit gating via gain modulation
+ Motivated by the findings in Figure 3, we wished to demonstrate a way in which our adaptive gain modulation network could complement or augment a network in which context-dependent weights have already been learned. We performed an experiment involving a network with 'pre-trained' W (N=6, K=K_N=21) whitening inputs from
+ Figure 4. Gains can act as an implicit gating mechanism. Top: Whitening error over time with a network (N=6; K_N=21; η=1E-3) adapting to 2 alternating statistical contexts A and B, with different input covariances for 10K steps each. W was initialized as a Spectral frame, with the first 2N columns set to be the eigenvectors of covariances of contexts A and B, respectively. Bottom: Gains can be seen to act as switches for context, gating the spectral components to optimally whiten each context.
+ two alternating statistical contexts, A and B, for 10K steps each. The frame was constructed such that the first and second N columns were the eigenvectors of context A and B's covariance, respectively, and the remaining K − 2N columns' elements were random i.i.d. Gaussian. Figure 4 (top panel) shows that the network adaptively whitens the inputs from each successive context. Surprisingly, closer inspection of the K interneurons' gains over time (bottom panel) showed that they approximately served to 'select' the frame vectors corresponding to the eigenvectors of each respective condition (as indicated by the blue/red intensity on the figure). Our gain modulation framework thus serves as an effective means of gating context-dependent information without an explicit context signal.
4.4. Normalizing ill-conditioned data

When inputs are low-rank, C_xx is ill-conditioned (Figure 5A), and whitening can amplify directions of small variance that are due to noise. In this section, we show how our gain-modulating network can be simply modified to handle these types of inputs. To prevent amplification of inputs below a certain threshold, we can replace the unit marginal variance equality constraints with upper bound constraints:

⟨(w_i⊤ y_t)²⟩_t ≤ 1 for i = 1, …, K. (11)

Our modified network objective then becomes

min_{y_t} ⟨∥x_t − y_t∥²_2⟩_t s.t. Equation 11 holds. (12)
Figure 5. Two networks (N=2, K=3, η=0.02) whitening ill-conditioned inputs. A: Outputs without whitening. 2D scatterplot of a non-Gaussian density whose underlying signal lies close to a latent 1D axis. The signal magnitude along that axis is denoted by the colors. The covariance matrix is depicted as a black ellipse. Gray dashed lines are axes spanned by W (here chosen to be an equi-angular frame). B: ZCA whitening boosts small-amplitude noise lying along the uninformative direction. C: Modulating gains according to Eq. 14 rescales the data without amplifying noise. D: Gains updated with Eq. 10 (solid) vs. Eq. 14 (dashed).
Intuitively, if the projected variance along a given direction is already less than or equal to unity, then it will not affect the overall loss. To enforce the upper bound constraints, we introduce gains as Lagrange multipliers as before, but restrict the domain of g to be the non-negative orthant R^K_+, resulting in non-negative optimal gains:

max_{g ∈ R^K_+} min_{y_t} ⟨ℓ(x_t, y_t, g)⟩_t, (13)

where ℓ(x, y, g) is defined as in Equation 6. At each time step t, we optimize Equation 13 by first taking gradient-descent steps with respect to y_t, resulting in the same neural dynamics (Equation 7) and equilibrium solution (Equation 8) as before. After the neural activities equilibrate, we take a projected gradient-ascent step with respect to g:

g ← ⌊g + η(z̄_t◦2 − 1)⌋ (14)

where ⌊·⌋ denotes the element-wise half-wave rectification operation that projects its inputs onto the non-negative orthant R^K_+, i.e., ⌊v⌋ := [max(v_1, 0), …, max(v_K, 0)]⊤.
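The equilibrium response (Equation 8) together with the two gain updates can be sketched in a few lines of NumPy. This is a minimal illustration, not reference code from the paper; the function name and default step size are our own choices.

```python
import numpy as np

def whiten_step(x, W, g, eta=0.02, rectify=True):
    """One online step: equilibrate primary neurons (Eq. 8), then take a
    (projected) gradient-ascent step on the gains (Eq. 10 / Eq. 14)."""
    N = W.shape[0]
    M = np.eye(N) + W @ np.diag(g) @ W.T
    y = np.linalg.solve(M, x)          # equilibrium primary responses
    z = W.T @ y                        # interneuron projections
    g = g + eta * (z**2 - 1.0)         # unconstrained update (Eq. 10)
    if rectify:
        g = np.maximum(g, 0.0)         # half-wave rectification (Eq. 14)
    return y, g
```

With `rectify=True`, a direction whose projected variance sits below unity simply drives its gain toward the boundary at zero, where the projection holds it, rather than toward a negative value.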
We simulated a network with gains updated either without constraint (Equation 10) or with rectification (Equation 14), and observed that these two models converged to two different solutions (Figure 5B, C). When
[Figure 4 plot residue removed; recoverable content: per-interneuron gain traces (interneuron index) vs. step, colored by Context A/B.]
[Figure 5 plot residue removed; recoverable content: panels A–D with axes y1, y2; panel D plots g_i (solid) and ⌊g_i⌋ (dashed) vs. step.]
g_i was not constrained to be non-negative, the network achieved global whitening, as before. By contrast, the gains constrained to be non-negative converged to different values altogether, with one of them converging to zero rather than becoming negative. Unsurprisingly, the whitening error for the network with the non-negative gain constraint converged to a non-zero value. Thus, with a non-negative constraint, the network failed to fully whiten y, but in doing so, it did not amplify the noise. In Appendix G we show additional cases that provide further geometric intuition on differences between ZCA whitening and non-negative gain-constrained ZCA whitening with our network.
4.5. Gain modulation enables local spatial decorrelation

The requirement of K_N interneurons to ensure a statistically white output becomes prohibitively costly for high-dimensional inputs, due to the number of interneurons scaling as O(N²). This led us to ask: how many interneurons are needed in practice? For natural sensory inputs such as images, it is well known that inter-pixel correlation is highly structured, decaying as a function of distance. Using a Gaussian random walk, we simulated gaze fixation and micro-saccadic eye movements, drawing 12×12 patch samples from a natural image (Figure 6A; Hateren & Schaaf, 1998). We did this for different randomly selected regions of the image (colors). The content of each region is quite different, but the inter-pixel correlation within each context fell rapidly with distance (Figure 6B).

We relaxed the O(N²) marginal variance constraint to instead target whitening of spatially local neighborhoods of primary neurons with image patch inputs. That is, the frame W spanned K < K_N axes in R^N, but was constructed such that overlapping neighborhoods of 4×4 primary neurons were decorrelated, each by a population of interneurons that was 'overcomplete' with respect to that neighborhood (see Appendix H for frame construction details). Importantly, taking into account convolutional structure dramatically reduces the interneuron complexity from O(N²) to O(N) (Appendix H). This frame is still overcomplete (K > N), but because K < K_N, we no longer guarantee at equilibrium that C_yy = I_N.

After running this local whitening network on the inputs drawn from the red context, we found that (Figure 6C): i) inter-pixel correlations drop within the region specified by the local neighborhood; and ii) surprisingly, correlations at longer range are dramatically reduced. Accordingly, the covariance eigenspectrum of the locally whitened outputs was significantly flatter compared to the inputs (Figure 6D, left vs. right columns). We also provide a 1D example in Appendix H. We remark that this empirical result is not at all obvious: it is not clear a priori that whitening individual overlapping neighborhoods of neurons should produce a more globally whitened output covariance. Indeed, studying whether and when a globally whitened solution is possible from whitening of overlapping spatial neighborhoods is an interesting problem that is worth pursuing.
5. Related work

5.1. Biologically plausible whitening networks

Biological circuits operate in the online setting and, due to physical constraints, learn exclusively using local signals. Therefore, to plausibly model neural computation, a neural network model must operate in the online setting (i.e., on streaming data) and use local learning rules. There are a few existing normative models of statistical whitening and related transformations; however, these models use synaptic plasticity mechanisms (i.e., changing W) to adapt to changing input statistics (Pehlevan & Chklovskii, 2015; Pehlevan et al., 2017; Chapochnikov et al., 2021; Lipshutz et al., 2022). Adaptation of neural population responses to changes in sensory input statistics occurs rapidly, on the order of seconds (Benucci et al., 2013; Wanner & Friedrich, 2020), so it could potentially be accounted for by short-term synaptic plasticity, which operates on the timescale of tens of milliseconds to minutes (Zucker et al., 2002), but not by long-term synaptic plasticity, which operates on the timescale of minutes or longer (Martin et al., 2000). Here, we explore the alternative hypothesis that modulation of neural gains, which operates on the order of tens of milliseconds to minutes (Fairhall et al., 2001), facilitates rapid adaptation of neural populations to changing input statistics.
5.2. Tomography and "sliced" density measurements

Our leveraging of 1D projections to compute the ZCA whitening transform is reminiscent of approaches taken in the field of tomography. Geometrically, our method represents an ellipsoid (i.e., the N-dimensional covariance matrix) using noisy 1D projections of the ellipsoid onto axes spanned by frame vectors (i.e., estimates of the marginal variances). This is a special case of reconstruction problems that have been studied in geometric tomography (Karl et al., 1994; Gardner, 1995). An important distinction between tomographic reconstruction and our solution to ZCA whitening is that we are not using the 1D projections to reconstruct the multi-dimensional inputs; instead, we are utilizing the univariate measurements to transform the ellipsoid into a new shape (a hyper-sphere, in the case of whitening).

In optimal transport, "sliced" methods offer a way to measure otherwise intractable p-Wasserstein distances in high dimensions (Bonneel et al., 2015), thereby enabling their use in optimization loss functions. Sliced methods compute Wasserstein distance by repeatedly taking series of 1D projections of two densities, then computing the expectation
Figure 6. Local spatial whitening. A) Large grayscale image from which 12×12 image patch samples are drawn. Colors represent random-walk sampling from regions of the image corresponding to contexts with different underlying statistics. Six samples from each context are shown below. B) Without whitening, mean pairwise output pixel correlations decay rapidly with spatial distance in each context, suggesting that local whitening may be effective. C) Pairwise output pixel correlation of patches from the red context before (gray) and after global (black dots) vs. convolutional whitening with overlapping 4×4 neighborhoods (red). Shaded regions represent standard deviations. D) Top: Expected correlation matrices of all flattened patches of the red context before whitening, and after global/local ZCA whitening. Correlation and not covariance matrices are displayed here to facilitate comparison; all panels use the same color scale. Bottom: Corresponding covariance eigenspectra.
over all 1D Wasserstein distances, for which there exists an analytic solution. Notably, the 2-Wasserstein distance between a 1D zero-mean Gaussian with variance σ² and a standard normal (i.e., white) density is

W_2(N(0, σ²); N(0, 1)) = |σ − 1|.

Comparing this with the rule by which we update each interneuron gain, g_i ← g_i + η((w_i⊤ ȳ_t)² − 1) (Equation 10), reveals striking similarity between our recurrent neural network and methods that optimize sliced Wasserstein distances. However, distinguishing characteristics of our approach include: 1) minimizing distance between univariate variances rather than standard deviations; 2) the directions along which we compute slices (columns of W) are fixed, whereas sliced methods typically compute a new set of random projections at each optimization step; 3) most importantly, our network operates online, and minimizes sliced variance distances without backpropagation.
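As a tiny numerical illustration of the parallel (our own sketch; the function names are hypothetical): both quantities below vanish exactly at σ = 1 and share the same sign structure, but the gain rule matches variances σ² whereas the sliced 2-Wasserstein distance compares standard deviations σ.

```python
def w2_gaussian_1d(sigma):
    # Closed-form 2-Wasserstein distance between N(0, sigma^2) and N(0, 1)
    return abs(sigma - 1.0)

def expected_gain_step(sigma, eta=0.1):
    # Expected per-step gain change under our rule (Eq. 10): eta * (sigma^2 - 1)
    return eta * (sigma**2 - 1.0)
```

When σ > 1 the gain grows (contracting the output along that slice), and when σ < 1 it shrinks, mirroring the direction in which a sliced transport method would move the density.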
6. Discussion

We have derived a novel family of recurrent models for whitening, which use gain modulation to transform the joint second-order statistics of their inputs based on marginal variance measurements. We showed that, given sufficiently many marginal measurements along unique axes, the network will produce ZCA whitened outputs. In particular, our objective (Equation 5) provides an elegant way to think about the classical problem of statistical whitening, and draws connections to old concepts in tomography and transport theory. The framework developed here is flexible, with several generalizations or extensions that we omitted due to space limitations. For example, by replacing the unity marginal variance constraint with a set of target variances differing from 1, the network can be used to transform (i.e., transport) its input density to one matching the corresponding (non-white) covariance.

Modulating feature gains has proven effective in adapting pre-trained neural networks to novel inputs with out-of-training-distribution statistics (Ballé et al., 2020; Duong et al., 2022; Mohan et al., 2021). In fact, adaptive gain modulation is an old concept in neuroscience which we believe would be of importance to the broader machine learning community. In real neural networks, several computational processes operate concurrently at different time-scales. Examples include synaptic weights encoding long-term information, while faster processes like gain modulation facilitate rapid adaptation to different contexts. Indeed, the demonstrations in this study were largely agnostic to the exact structure of the weights W, and instead focused on the computational role of adaptive gain modulation itself. We showed how gains can adaptively decorrelate a network's outputs without modifying its pre-trained weights in an online setting. Specifically, we showed that gain modulation: 1) enables fast switching between pre-learned context-dependent weight regimes; 2) can be used in conjunction with properly-aligned interneuron projection weights to handle ill-conditioned inputs; and 3) reduces
long-range dependencies by modifying local signals.

Feature whitening and decorrelation has become an important objective constraint in self-supervised contrastive learning methods to help prevent representational collapse (Bardes et al., 2021; Zbontar et al., 2021; Ermolov et al., 2021). We believe that the networks developed in this study, motivated by extensive neuroscience research on rapid gain modulation, provide an effective whitening solution for these methods, particularly in regimes which prioritize streaming data and networks designed for low-power-consumption hardware.
References

Ballé, J., Chou, P. A., Minnen, D., Singh, S., Johnston, N., Agustsson, E., Hwang, S. J., and Toderici, G. Nonlinear transform coding. arXiv:2007.03034 [cs, eess, math], 2020.

Bardes, A., Ponce, J., and LeCun, Y. VICReg: Variance-invariance-covariance regularization for self-supervised learning. arXiv preprint arXiv:2105.04906, 2021.

Barlow, H. B. Possible principles underlying the transformations of sensory messages. In Sensory Communication, pp. 216–234. The MIT Press, 1961.

Benucci, A., Saleem, A. B., and Carandini, M. Adaptation maintains population homeostasis in primary visual cortex. Nature Neuroscience, 16(6):724–729, 2013.

Bonneel, N., Rabin, J., Peyré, G., and Pfister, H. Sliced and Radon Wasserstein barycenters of measures. Journal of Mathematical Imaging and Vision, 51(1):22–45, January 2015.

Boyd, S. and Vandenberghe, L. Convex Optimization. Cambridge University Press, 2004.

Casazza, P. G., Kutyniok, G., and Philipp, F. Introduction to finite frame theory. In Casazza, P. G. and Kutyniok, G. (eds.), Finite Frames, pp. 1–53. Birkhäuser Boston, Boston, 2013.

Chapochnikov, N. M., Pehlevan, C., and Chklovskii, D. B. Normative and mechanistic model of an adaptive circuit for efficient encoding and feature extraction. bioRxiv, 2021.

Coates, A., Ng, A., and Lee, H. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 215–223. JMLR Workshop and Conference Proceedings, 2011.

Duong, L. R., Li, B., Chen, C., and Han, J. Multi-rate adaptive transform coding for video compression. arXiv:2210.14308 [eess.IV], October 2022.

Ermolov, A., Siarohin, A., Sangineto, E., and Sebe, N. Whitening for self-supervised representation learning. In International Conference on Machine Learning, pp. 3015–3024. PMLR, 2021.

Fairhall, A. L., Lewen, G. D., and Bialek, W. Efficiency and ambiguity in an adaptive neural code. Nature, 412:787–792, 2001.

Ferguson, K. A. and Cardin, J. A. Mechanisms underlying gain modulation in the cortex. Nature Reviews Neuroscience, 21(2):80–92, 2020.

Friedrich, R. W. Neuronal computations in the olfactory system of zebrafish. Annual Review of Neuroscience, 36:383–402, 2013.

Gardner, R. J. Geometric Tomography, volume 58. Cambridge University Press, Cambridge, 1995.

Giridhar, S., Doiron, B., and Urban, N. N. Timescale-dependent shaping of correlation by olfactory bulb lateral inhibition. Proceedings of the National Academy of Sciences, 108(14):5843–5848, 2011.

Gschwend, O., Abraham, N. M., Lagier, S., Begnaud, F., Rodriguez, I., and Carleton, A. Neuronal pattern separation in the olfactory bulb improves odor discrimination learning. Nature Neuroscience, 18(10):1474–1482, 2015.

Hateren, J. H. v. and Schaaf, A. v. d. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings: Biological Sciences, 265(1394):359–366, March 1998.

Hua, T., Wang, W., Xue, Z., Ren, S., Wang, Y., and Zhao, H. On feature decorrelation in self-supervised learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9598–9608, 2021.

Hyvärinen, A. and Oja, E. Independent component analysis: algorithms and applications. Neural Networks, 13(4-5):411–430, 2000.

Karl, W. C., Verghese, G. C., and Willsky, A. S. Reconstructing ellipsoids from projections. CVGIP: Graphical Models and Image Processing, 56(2):124–139, 1994.

Kessy, A., Lewin, A., and Strimmer, K. Optimal whitening and decorrelation. The American Statistician, 72(4):309–314, 2018.

Krizhevsky, A. Learning multiple layers of features from tiny images. Master's thesis, University of Toronto, 2009.

Laughlin, S. A simple coding procedure enhances a neuron's information capacity. Zeitschrift für Naturforschung C, Journal of Biosciences, pp. 910–912, 1981.

Lipshutz, D., Pehlevan, C., and Chklovskii, D. B. Interneurons accelerate learning dynamics in recurrent neural networks for statistical adaptation. arXiv preprint arXiv:2209.10634, 2022.

Martin, S., Grimwood, P. D., and Morris, R. G. Synaptic plasticity and memory: an evaluation of the hypothesis. Annual Review of Neuroscience, 23(1):649–711, 2000.

Mohan, S., Vincent, J. L., Manzorro, R., Crozier, P. A., Simoncelli, E. P., and Fernandez-Granda, C. Adaptive denoising via GainTuning. arXiv:2107.12815 [cs.CV], July 2021.

Nagel, K. I. and Doupe, A. J. Temporal processing and adaptation in the songbird auditory forebrain. Neuron, 51(6):845–859, September 2006.

Pehlevan, C. and Chklovskii, D. B. A normative theory of adaptive dimensionality reduction in neural networks. Advances in Neural Information Processing Systems, 28, 2015.

Pehlevan, C. and Chklovskii, D. B. Neuroscience-inspired online unsupervised learning algorithms: artificial neural networks. IEEE Signal Processing Magazine, 36(6):88–96, November 2019.

Pehlevan, C., Sengupta, A. M., and Chklovskii, D. B. Why do similarity matching objectives lead to Hebbian/anti-Hebbian networks? Neural Computation, 30(1):84–124, 2017.

Wanner, A. A. and Friedrich, R. W. Whitening of odor representations by the wiring diagram of the olfactory bulb. Nature Neuroscience, 23(3):433–442, 2020.

Zbontar, J., Jing, L., Misra, I., LeCun, Y., and Deny, S. Barlow Twins: Self-supervised learning via redundancy reduction. In International Conference on Machine Learning, pp. 12310–12320. PMLR, 2021.

Zucker, R. S., Regehr, W. G., et al. Short-term synaptic plasticity. Annual Review of Physiology, 64(1):355–405, 2002.
A. Notation

For N ≥ 2, let K_N := N(N + 1)/2. Let R^N denote N-dimensional Euclidean space equipped with the Euclidean norm, denoted ∥·∥_2. Let R^N_+ denote the non-negative orthant in R^N. Given K ≥ 2, let R^{N×K} denote the set of N × K real-valued matrices and S^N denote the set of N × N symmetric matrices.

Matrices are denoted using bold uppercase letters (e.g., M) and vectors are denoted using bold lowercase letters (e.g., v). Given a matrix M, M_ij denotes the entry of M located at the ith row and jth column. Let 1 = [1, …, 1]⊤ denote the N-dimensional vector of ones. Let I_N denote the N × N identity matrix.

Given vectors v, w ∈ R^N, define their Hadamard product by v ∘ w := (v_1 w_1, …, v_N w_N) ∈ R^N. Define v◦2 := (v_1², …, v_N²) ∈ R^N. Define diag(v) to be the N × N diagonal matrix whose (i, i)th entry is equal to v_i, for i = 1, …, N. Let ⟨·⟩_t denote expectation over t = 1, 2, ….

The diag(·) operator, similar to numpy.diag() or MATLAB's diag(), can either: 1) map a vector in R^K to the diagonal of a K × K zero matrix; or 2) map the diagonal entries of a K × K matrix to a vector in R^K. The specific operation being used should be clear from context.
B. Proof of Proposition 2.1

Proof of Proposition 2.1. Suppose Equation 1 holds. Then, for i = 1, …, K,

⟨(w_i⊤ y_t)²⟩_t = ⟨w_i⊤ y_t y_t⊤ w_i⟩_t = w_i⊤ w_i = 1.

Therefore, Equation 4 holds.

Now suppose Equation 4 holds. Let v ∈ R^N be an arbitrary unit vector. Then vv⊤ ∈ S^N and by Equation 3, there exist g_1, …, g_K ∈ R such that

vv⊤ = g_1 w_1 w_1⊤ + ⋯ + g_K w_K w_K⊤. (15)

We have

v⊤⟨y_t y_t⊤⟩_t v = Tr(vv⊤⟨y_t y_t⊤⟩_t) = Σ_{i=1}^K g_i Tr(w_i w_i⊤⟨y_t y_t⊤⟩_t) = Σ_{i=1}^K g_i Tr(w_i w_i⊤) = Tr(vv⊤) = 1. (16)

The first equality is a property of the trace operator. The second and fourth equalities follow from Equation 15 and the linearity of the trace operator. The third equality follows from Equation 4. The final equality holds because v is a unit vector. Since Equation 16 holds for every unit vector v ∈ R^N, Equation 1 holds.
C. Saddle point property

We recall the following minmax property for a function that satisfies the saddle point property (Boyd & Vandenberghe, 2004, section 5.4).

Theorem C.1. Let V ⊆ R^n, W ⊆ R^m and f : V × W → R. Suppose f satisfies the saddle point property; that is, there exists (a*, b*) ∈ V × W such that

f(a*, b) ≤ f(a*, b*) ≤ f(a, b*), for all (a, b) ∈ V × W.

Then

min_{a∈V} max_{b∈W} f(a, b) = max_{b∈W} min_{a∈V} f(a, b) = f(a*, b*).
D. Weighted average update rule for g_i

The update for g in Equation 10 can be generalized to allow for a weighted average over past samples. In particular, the general update is given by

g ← g + η ( (1/Z) Σ_{s=1}^t γ^{t−s} z_s◦2 − 1 ),

where γ ∈ [0, 1] determines the decay rate and Z := 1 + γ + ⋯ + γ^{t−1} is a normalizing factor.
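A direct NumPy transcription of this weighted update (an illustrative sketch; the function name is ours). Setting γ = 0 recovers the single-sample rule of Equation 10, and γ = 1 gives a uniform average over all past samples.

```python
import numpy as np

def weighted_gain_update(g, z_history, eta=1e-3, gamma=0.9):
    """Gain update using a geometrically decaying average of past squared
    interneuron responses (Appendix D). z_history holds z_s for s = 1..t."""
    t = len(z_history)
    w = gamma ** np.arange(t - 1, -1, -1.0)   # weights gamma^{t-s}, s = 1..t
    Z = w.sum()                               # Z = 1 + gamma + ... + gamma^{t-1}
    avg = (w[:, None] * np.asarray(z_history) ** 2).sum(axis=0) / Z
    return g + eta * (avg - 1.0)
```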
+ where γ ∈ [0, 1] determines the decay rate and Z := 1 + γ + · · · + γt−1 is a normalizing factor.
1149
+ E. Batched and offline algorithms for whitening with RNNs via gain modulation
1150
+ In addition to the fully-online algorithm provided in the main text (Algorithm 1), we also provide two variants below. In
1151
+ many applications, streaming inputs arrive in batches rather than one at a time (e.g. video streaming frames). Similarly
1152
+ for conventional offline stochastic gradient descent training, data is sampled in batches. Algorithm 2 would be one way to
1153
+ accomplish this in our framework, where the main difference between the fully online version is taking the mean across
1154
+ samples in the batch to yield average gain update ∆g term. Furthermore, in the fully offline setting when the covariance of
1155
+ the inputs, Cxx is known, Algorithm 3 presents a way to whiten the covariance directly.
1156
+ Algorithm 2 Batched ZCA whitening
1157
+ 1: Input: Data matrix X ∈ RN×T (assumed centered)
1158
+ 2: Initialize: W ∈ RN×K; g ∈ RK; η; batch size B
1159
+ 3: while not converged do
1160
+ 4:
1161
+ XB ← sample batch(X, B){N × B}
1162
+ 5:
1163
+ Yb ← [IN + W diag (g) W⊤]−1XB
1164
+ 6:
1165
+ Zb ← W⊤Yb
1166
+ 7:
1167
+ ∆g ← Z◦2
1168
+ B − 1 {Subtract 1 from all entries}
1169
+ 8:
1170
+ g ← g + η mean(∆g, axis=1)
1171
+ 9: end while
1172
+ Algorithm 3 Offline ZCA whitening
1173
+ 1: Input: Input covariance Cxx
1174
+ 2: Initialize: W ∈ RN×K; g ∈ RK; η
1175
+ 3: while not converged do
1176
+ 4:
1177
+ M ← [IN + W diag (g) W⊤]−1
1178
+ 5:
1179
+ Cyy ← MCxxM⊤
1180
+ 6:
1181
+ ∆g ← diag
1182
+
1183
+ W⊤CyyW
1184
+
1185
+ − 1
1186
+ 7:
1187
+ g ← g + η∆g
1188
+ 8: end while
1189
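Algorithm 3 can be sketched directly in NumPy. This is a minimal illustration rather than reference code from the paper: the equi-angular frame (three unit vectors at 0°, 60°, 120° for N = 2), the input covariance, the step size, and the iteration count below are our own choices.

```python
import numpy as np

def offline_zca(C_xx, W, eta=0.02, n_steps=20000):
    """Algorithm 3: adapt gains g so that the equilibrium map
    M = (I_N + W diag(g) W^T)^{-1} whitens C_xx."""
    N, K = W.shape
    g = np.zeros(K)
    C_yy = C_xx
    for _ in range(n_steps):
        M = np.linalg.inv(np.eye(N) + W @ np.diag(g) @ W.T)
        C_yy = M @ C_xx @ M.T                     # output covariance
        g = g + eta * (np.diag(W.T @ C_yy @ W) - 1.0)
    return g, C_yy

# N = 2 with K = N(N+1)/2 = 3 unit frame vectors at 0, 60, 120 degrees
theta = np.array([0.0, np.pi / 3, 2 * np.pi / 3])
W = np.vstack([np.cos(theta), np.sin(theta)])
C_xx = np.array([[2.0, 0.5], [0.5, 1.0]])
g, C_yy = offline_zca(C_xx, W)                    # C_yy approaches I_2
```

With this many marginal constraints (K = K_N), the fixed point satisfies I_N + W diag(g) W⊤ = C_xx^{1/2}, so the output covariance converges to the identity.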
F. Frame factorizations of symmetric matrices

F.1. Analytic solution for the optimal gains

Recall that the optimal solution of the ZCA objective in Equation 5 is given by y_t = C_xx^{−1/2} x_t for t = 1, 2, …. In our neural circuit with interneurons and gain control, the output of the primary neurons at equilibrium is (given in Equation 8, but repeated here for clarity)

ȳ_t = [I_N + W diag(g) W⊤]^{−1} x_t.

Therefore, the circuit performs ZCA whitening when the gains g satisfy the relation

I_N + W diag(g) W⊤ = C_xx^{1/2}. (17)

When K is exactly N(N + 1)/2, we can explicitly solve for the optimal gains ḡ (derived in the next subsection):

ḡ = [(W⊤W)◦2]^{−1} [w_1⊤ C_xx^{1/2} w_1 − 1, …, w_K⊤ C_xx^{1/2} w_K − 1]⊤. (18)
F.2. Deriving optimal gains

We find it useful to first demonstrate that any matrix C ∈ S^N, where S^N is the space of symmetric N × N matrices, can be factorized as

W diag(g) W⊤ = C (19)

where W ∈ R^{N×K} is some fixed, arbitrary frame with K ≥ N(N+1)/2 (i.e., a representation that is O(N²) overcomplete), and g ∈ R^K is a variable vector encoding information about C. We multiply both sides of Equation 19 from the left and right by W⊤ and W, respectively, then take the diagonal³ of the resultant matrices,

diag(W⊤W diag(g) W⊤W) = diag(W⊤CW). (20)

Finally, employing a simple matrix identity involving the diag(·) operator yields

(W⊤W)◦2 g = diag(W⊤CW), (21)
⟹ g = [(W⊤W)◦2]^{−1} diag(W⊤CW), (22)

where (·)◦2 denotes element-wise squaring. Thus, any N × N symmetric matrix can be encoded as a vector, g, with respect to an arbitrary fixed frame, W, by solving a standard linear system of K equations of the form Ag = b. Importantly, when K = N(N + 1)/2 and the columns of W are not collinear, the matrix on the LHS, (W⊤W)◦2 ∈ S^K_{++}, is invertible, and the vector g is unique (Appendix B).

Without loss of generality, assume that the columns of W are unit-norm (otherwise, we can always normalize them by absorbing their lengths into the elements of g). Furthermore, assume without loss of generality that C ∈ S^N_{++}, the set of all symmetric positive definite matrices (e.g., covariance, precision, PSD square roots, etc.). When C is a covariance matrix, diag(W⊤CW) can be interpreted as a vector of the projected variances of C along each axis spanned by W. Therefore, Equation 21 states that the vector g is linearly related to the vector of projected variances via the element-wise squared frame Gramian, (W⊤W)◦2.

³Similar to commonly-used matrix libraries, the diag(·) operator here is overloaded and can map a vector to a matrix or vice versa. See Appendix A for details.
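As a sanity check of Equation 22, the following NumPy snippet (our own illustration; the sizes and random seed are arbitrary) solves the linear system for g and verifies that the factorization in Equation 19 recovers C. For a generic random frame with K = N(N+1)/2 non-collinear columns, the rank-one outer products span S^N, so the recovery is exact up to numerical precision.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3
K = N * (N + 1) // 2                        # K = 6 frame vectors
W = rng.standard_normal((N, K))
W /= np.linalg.norm(W, axis=0)              # unit-norm frame columns

A = rng.standard_normal((N, N))
C = A @ A.T + np.eye(N)                     # an arbitrary symmetric (PD) matrix

# Equation 22: g = ((W^T W)^{o2})^{-1} diag(W^T C W)
g = np.linalg.solve((W.T @ W) ** 2, np.diag(W.T @ C @ W))

# Equation 19: W diag(g) W^T reconstructs C
err = np.abs(W @ np.diag(g) @ W.T - C).max()
```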
+ frame Gramian, (W⊤W)◦2.
1271
+ G. Adaptation with inequality constraint
1272
+ In general, the modified objective with rectified gains (Equation 14) does not statistically whiten the inputs x1, x2, . . . ,
1273
+ but rather adapts the non-negative gains g1, . . . , gK to ensure that the variances of the outputs y1, y2, . . . in the directions
1274
+ spanned by the frame vectors {w1, . . . , wK} are bounded above by unity (Figure 7). This one-sided normalization
1275
+ carries interesting implications for how and when the circuit statistically whitens its outputs, which can be compared with
1276
+ experimental observations. For instance, the circuit performs ZCA whitening if and only if there are non-negative gains such
1277
+ that Equation 17 holds (see, e.g., the top right example in Figure 7), which corresponds to cases such that the matrix Cxx^{1/2} is
+ an element of the following cone (with its vertex translated by IN):
+ { IN + Σ_{i=1}^{K} gi wi wi⊤ : g ∈ R^K_+ }.
+ On the other hand, if the variance of an input projection is less than unity — i.e., wi⊤ Cxx wi ≤ 1 for some i — then the
+ corresponding gain gi remains zero. When this is true for all i = 1, . . . , K, the gains all remain zero and the circuit output
+ is equal to its input (see, e.g., the bottom middle example of Figure 7).
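The all-zero-gain case can be illustrated numerically (a hedged sketch; the circuit's objective and dynamics are not reproduced here, only the projected-variance condition):

```python
import numpy as np

# Frame of K = 3 unit vectors for an N = 2 dimensional input
angles = np.array([0.0, np.pi / 3, 2 * np.pi / 3])
W = np.stack([np.cos(angles), np.sin(angles)])  # shape (N, K)

# Input covariance whose projected variances are all below unity
Cxx = 0.5 * np.eye(2)
proj_var = np.array([w @ Cxx @ w for w in W.T])

# Schematic one-sided rule: a gain only rises when its projected
# variance exceeds one (this rectification is illustrative only)
gains = np.maximum(proj_var - 1.0, 0.0)
```

With every projected variance at or below one, all gains stay at zero and the output simply equals the input.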
1294
+ Figure 7. Geometric intuition of whitening with/without inequality constraint. Whitening efficacy using non-negative gains depends on W
1295
+ and Cxx. For N = 2 and K = 3, examples of covariance matrices Cyy (red ellipses) corresponding to optimal solutions y of objective
1296
+ 12, for varying input covariance matrices Cxx (black ellipses) and frames W (spanning axes denoted by gray lines). Unit circles, which
1297
+ correspond to the identity matrix target covariance, are shown with dashed lines. Each row corresponds to a different frame W and each
1298
+ column corresponds to a different input covariance Cxx.
1299
+ Statistical whitening of neural populations with gain-modulating interneurons
1305
+ H. Whitening spatially local neighborhoods
1306
+ H.1. Spatially local whitening in 1D
1307
+ For an N-dimensional input, we consider a network that whitens spatially local neighborhoods of size M < N. To this end,
1308
+ we can construct N filters of the form
+ wi = ei,   i = 1, . . . , N,
+ and M(N − (M + 1)/2) filters of the form
+ w = (ei + ej)/√2,   i, j = 1, . . . , N,   1 ≤ |i − j| ≤ M.
+ The total number of filters is (M + 1)(N − M/2), so for fixed M the number of filters scales linearly in N rather than
+ quadratically.
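The 1D filter bank above can be built explicitly (a short sketch; `local_filters` is our own helper name, not the paper's):

```python
import numpy as np

def local_filters(N, M):
    """Build the 1D local-whitening filter bank: N one-hot filters e_i
    plus one filter (e_i + e_j)/sqrt(2) for each pair with 1 <= |i-j| <= M."""
    I = np.eye(N)
    filters = [I[i] for i in range(N)]
    for i in range(N):
        for j in range(i + 1, min(i + M + 1, N)):
            filters.append((I[i] + I[j]) / np.sqrt(2))
    return np.array(filters)

F = local_filters(N=10, M=3)
```

For N = 10 and M = 3 this yields (M + 1)(N − M/2) = 34 unit-norm filters, and for fixed M the count grows linearly with N.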
1323
+ We simulated a network comprising N = 10 primary neurons, and a convolutional weight matrix connecting each interneuron
1324
+ to spatial neighborhoods of three primary neurons. Given input data with covariance Cxx illustrated in Figure 8A (left
1325
+ panel), this modified network succeeded in statistically whitening local neighborhoods of 3 primary neurons (right
1326
+ panel). Notably, the eigenspectrum (Figure 8B) after local whitening is much closer to being equalized. Furthermore, while
1327
+ the global whitening solution produced a flat spectrum as expected, the local whitening network did not amplify the axis
1328
+ with very low-magnitude eigenvalues (Figure 8B right panel).
1329
+ Figure 8. Statistically adapting local neighborhoods of neurons. A) ˆCxx denotes a correlation matrix; correlation matrices are shown here for display
+ purposes only, to facilitate comparisons. 10-dimensional input correlation matrix (left); 10-dimensional output correlation matrix
+ after global whitening (middle); and output correlation matrix after statistically whitening local neighborhoods of size 3 (right). The output
1332
+ correlation matrix of the locally adapted circuit has block-identity structure along the diagonal. B) Corresponding eigenspectra of
1333
+ covariance matrices of unwhitened (left), global whitened (middle), and locally whitened (right) network outputs. The black dashed line
1334
+ denotes unity.
1335
+ H.2. Filter bank construction in 2D
1336
+ Here, we describe one way of constructing a set of convolutional weights for overlapping spatial neighborhoods (e.g. image
1337
+ patches) of neurons. Given an n × m input and overlapping neighborhoods of size h × w to be statistically whitened, the
1338
+ samples are therefore matrices X ∈ Rn×m. In this case, filters w ∈ R1×n×m can be indexed by pairs of pixels that are in
1339
+ the same patch:
1340
+ ((i, j), (k, ℓ)),   1 ≤ i ≤ n,   1 ≤ j ≤ m,   0 ≤ |i − k| ≤ h,   0 ≤ |j − ℓ| ≤ w.
1345
1394
+ We can then construct the filters as,
1395
+ w(i,j),(k,ℓ)(X) = xi,j if (i, j) = (k, ℓ);   (xi,j + xk,ℓ)/√2 if (i, j) ̸= (k, ℓ).
1403
+ In this case there are
1404
+ nm + wh[(n − w)(m − h) + (n − w)(h + 1)/2 + (m − h)(w + 1)/2 + (h + 1)(w + 1)/2]
1413
+ such filters, so the number of filters required scales linearly with nm rather than quadratically.
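The linear-in-nm scaling can be sanity-checked by brute-force enumeration (a sketch using our own helper name and the window convention 0 ≤ |i − k| ≤ h, 0 ≤ |j − ℓ| ≤ w from above):

```python
from itertools import product

def count_pair_filters(n, m, h, w):
    """Count unordered pixel pairs ((i,j),(k,l)) with |i-k| <= h and
    |j-l| <= w, including the nm singleton pairs with (i,j) = (k,l)."""
    count = 0
    for (i, j) in product(range(n), range(m)):
        for (k, l) in product(range(n), range(m)):
            if (k, l) < (i, j):
                continue  # count each unordered pair exactly once
            if abs(i - k) <= h and abs(j - l) <= w:
                count += 1
    return count

counts = {n: count_pair_filters(n, 8, 2, 2) for n in (10, 20, 30)}
```

For fixed m, h, w each admissible offset (di, dj) contributes (n − |di|)(m − |dj|) pairs, so the count is exactly affine in n: successive differences are constant, i.e. linear rather than quadratic growth.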
1414
+
49FKT4oBgHgl3EQf9y5B/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
5dE1T4oBgHgl3EQfTAM3/content/2301.03072v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5eaafe35fd89d1fed70c4753698879dc770296433bed03e869eba275e618266d
3
+ size 771780
5dE1T4oBgHgl3EQfTAM3/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:484b5bb7fa40be474d7efbd915a9ec9d03688c85737257dfcdc3288e08b25e42
3
+ size 2490413
7NE2T4oBgHgl3EQfkwf2/content/2301.03983v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:378f6f78520dca51dea404d649f4100e7004f2fe822c765af7618de04c9e19a2
3
+ size 1627498
7NE2T4oBgHgl3EQfkwf2/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2873cf8e571ea3e7bb5e9171322cdb36d676ba6b4aa805e80826c8b8528f60e9
3
+ size 61279
8NE4T4oBgHgl3EQfdAy1/content/tmp_files/2301.05088v1.pdf.txt ADDED
@@ -0,0 +1,1260 @@
 
 
 
 
1
+ Extreme mass ratio inspirals in galaxies with dark matter halos
2
+ Ning Dai,1, ∗ Yungui Gong,1, † Yang Zhao,1, ‡ and Tong Jiang1, §
3
+ 1School of Physics, Huazhong University of Science and Technology,
4
+ 1037 LuoYu Rd, Wuhan, Hubei 430074, China
5
+ Using the analytic, static and spherically symmetric metric for a Schwarzschild
6
+ black hole immersed in dark matter (DM) halos with Hernquist type density distri-
7
+ bution, we derive analytic formulae for the orbital period and orbital precession, the
8
+ evolutions of the semi-latus rectum and the eccentricity for eccentric EMRIs with the
9
+ environment of DM halos. We show how orbital precessions are decreased and even
10
+ reverse the direction if the density of DM halo is large enough. The presence of local
11
+ DM halos slows down the decrease of the semi-latus rectum and the eccentricity.
12
+ Comparing the number of orbital cycles with and without DM halos over one-year
13
+ evolution before the merger, we find that DM halos with the compactness as small
14
+ as 10−4 can be detected. By calculating the mismatch between GW waveforms with
15
+ and without DM halos, we show that we can use GWs from EMRIs in the environ-
16
+ ments of galaxies to test the existence of DM halos and detect the compactness as
17
+ small as 10−5.
18
+ I.
19
+ INTRODUCTION
20
+ The first detection of gravitational waves (GWs) from the merger of black hole (BH)
21
+ binary by the LIGO Scientific Collaboration and the Virgo Collaboration in 2015 [1, 2]
22
+ opened a new window for probing gravitational physics and fundamental physics. Since then,
23
+ tens of confirmed GW events have been detected by the ground-based GW observatories [3–
24
+ 6]. The ground-based GW observatories are only sensitive to GWs in the frequency range
25
+ of 10 − 103 Hz.
26
+ The space-based GW observatories such as LISA [7], TianQin [8] and
27
+ Taiji [9, 10] will usher a new era in GW astronomy due to their unprecedented accuracy
28
+ and their sensitive range of mHz [11–14]. One particularly interesting target of space-based
29
30
+ † Corresponding author. [email protected]
31
32
33
+ arXiv:2301.05088v1 [gr-qc] 12 Jan 2023
34
+
35
36
+ GW detectors is a stellar-mass compact object (SCO) inspiralling onto a massive black hole
37
+ (MBH), the extreme mass ratio inspirals (EMRIs) [15]. There are 105 − 106 GW cycles in
38
+ the detector band when the SCO inspirals deep inside the strong field region of the MBH,
39
+ and rich information about the spacetime geometry around the MBH is encoded in GW
40
+ waveforms. Therefore, the observations of GWs emitted from EMRIs present us a good
41
+ opportunity for the study of astrophysics, gravity in the strong and nonlinear regions and
42
+ the nature of BHs [15–20].
43
+ Although the property of DM is still a mystery in physics, there is a lot of indirect
+ evidence for the existence of DM in the Universe [21–30]. DM may cluster
45
+ at the center of galaxies and around BHs [31–34], and affect the dynamics of binaries and
46
+ hence GWs emitted from them. Since EMRIs are believed to reside in stellar clusters and
47
+ the center of galaxies, DM may affect the dynamics of EMRIs and the observations of
48
+ GWs from EMRIs. In particular, EMRIs in DM environments may be used to understand the
49
+ astrophysical environment surrounding EMRIs and probably confirm the existence of DM
50
+ and uncover the nature of DM [35–49].
51
+ In the studies of DM effects discussed above, Newtonian approaches to the problems were
52
+ applied and the gravitational effects of DM on the dynamical evolution of EMRIs were mod-
53
+ eled at Newtonian level. In Ref. [50], the authors generalized Einstein clusters [51, 52] to
54
+ include horizons, solved Einstein’s equations sourced by DM halo of Hernquist type density
55
+ distribution [34] with a MBH at its center and obtained analytical formulae for the metric
56
+ of galaxies harboring MBHs. Exact solutions for the geometry of a MBH immersed in DM
57
+ halos with different density distributions were then derived [53, 54]. With the fully relativis-
58
+ tic formalism, it was found that the leading order correction to the ringdown stage induced
59
+ by the external matter and fluxes by orbiting particles is a gravitational redshift, and the
60
+ difference between the number of GW cycles accumulated by EMRIs with and without DM
61
+ halos over one year before the innermost stable circular orbit can reach about 500 [50]. In
62
+ galaxies harboring MBHs, tidal forces and geodesic deviation depend on the masses of the
63
+ DM halos and the typical length scales of the galaxies [55]. Due to the gravitational pull
64
+ of DM halos, the apsidal precession of the geodesic orbits for EMRIs is strongly affected
65
+ and even prograde-to-retrograde drift can occur [56]. In prograde-to-retrograde orbital
+ alterations, GWs show transient frequency phenomena around a critical non-precessing turning
67
+ point [56]. A fully relativistic formalism to study GWs from EMRIs in static, spherically
68
+
69
70
+ symmetric spacetimes describing a MBH immersed in generic astrophysical environments
71
+ was established in Ref. [57] and it was shown how the astrophysical environment changes
72
+ GW generation and propagation.
73
+ The above discussions are based on circular motions or eccentric cases without GW
74
+ reaction. In this paper, we study eccentric orbital motions and GWs of EMRIs in galaxies
75
+ with DM environments. The paper is organized as follows. A review of the spacetime of
76
+ galaxies harboring MBHs is given first, then we discuss the geodesic motions of EMRIs in
77
+ the spacetime in Section II. In Section III, we use the “Numerical Kludge” method [58–
78
+ 60] to calculate GWs from eccentric EMRIs in galaxies with DM environments. To assess
79
+ the capability of detecting DM halos with LISA, we calculate the mismatch between GWs
80
+ from EMRIs with and without DM halos along with their signal-to-noise ratios (SNRs) in
81
+ Section III. We draw conclusions in Section IV. In this paper we use the units G = c = 1.
82
+ II.
83
+ THE MOTIONS OF BINARIES IN THE ENVIRONMENTS OF GALAXIES
84
+ Following [50], we use the Hernquist-type density distribution [34] to describe the profiles
85
+ observed in the bulges and elliptical galaxies
86
+ ρH = Mr0 / [2πr(r + r0)³],   (1)
90
+ where M is the total mass of the DM halo, and r0 is the typical lengthscale of a galaxy. The
91
+ energy-momentum tensor of a galaxy harboring a MBH with the mass MBH is assumed to
92
+ be an anisotropic fluid
93
+ T^µ_ν = diag(−ρDM, 0, Pt, Pt),   (2)
96
+ where the density profile for a MBH residing at the center of the distribution (1) is
97
+ 4πρDM = m′/r² = 2M(r0 + 2MBH)(1 − 2MBH/r) / [r(r + r0)³],   (3)
102
+ the mass function m(r) is
103
+ m(r) = MBH + [Mr²/(r0 + r)²](1 − 2MBH/r)²,   (4)
112
+ and the tangential pressure Pt is
113
+ 2Pt = m(r)ρDM / [r − 2m(r)].   (5)
116
+
117
118
+ Obviously, in the absence of the MBH, the density profile (3) reduces to Eq. (1). At large
119
+ distance, r ≫ MBH, the density profile ρDM becomes the Hernquist-type distribution (1) for
120
+ large galaxies with r0 ≫ MBH, ρDM ∼ (M/r0)2/(Mr), so the DM density ρDM is smaller
121
+ if the compactness M/r0 is smaller with fixed M or if M is larger with fixed compactness
122
+ M/r0. Using the following ansatz for the static, spherically symmetric spacetime [50],
123
+ ds² = −f(r)dt² + dr²/[1 − 2m(r)/r] + r²(dθ² + sin²θ dφ²),   (6)
127
+ and solving Einstein equations, we get [50]
128
+ f(r) = (1 − 2MBH/r) e^Υ,
+ Υ = −π√(M/ξ) + 2√(M/ξ) arctan[(r + r0 − M)/√(Mξ)],
+ ξ = 2r0 − M + 4MBH.   (7)
147
+ The geometry (6) describes a BH spacetime with a horizon at r = 2MBH and a curvature
148
+ singularity at r = 0, the matter density vanishes at the horizon and the ADM mass of the
149
+ spacetime is M + MBH. In the absence of DM halo, M = 0, the spacetime (6) reduces to
150
+ Schwarzschild BH with mass MBH. In galaxies, the compactness M/r0 can be as large as
151
+ 10−4 [32]. In general astrophysical environments the compactness M/r0 is usually small.
152
+ Expanding the function f(r) in Eq. (7) about M/r0 = 0 to the second order we get
153
+ f(r) ≃ (1 − 2MBH/r)[1 − 2M/r0 + 4M²/(3r0²) + 2Mr/r0² + O(r0⁻³)]
+ = (1 − 2MBH/r)(1 + α + rβ),   (8)
+ where α = −2M/r0 + 4M²/(3r0²) and β = 2M/r0².
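The second-order expansion can be checked against the exact metric function numerically (a sketch in units G = c = MBH = 1; the specific halo parameters are illustrative):

```python
import numpy as np

M_BH = 1.0           # central BH mass (geometric units)
M, r0 = 100.0, 1e5   # halo mass and length scale, compactness M/r0 = 1e-3

def f_exact(r):
    # Eq. (7): f(r) = (1 - 2 MBH / r) exp(Upsilon)
    xi = 2 * r0 - M + 4 * M_BH
    ups = -np.pi * np.sqrt(M / xi) + 2 * np.sqrt(M / xi) * np.arctan(
        (r + r0 - M) / np.sqrt(M * xi))
    return (1 - 2 * M_BH / r) * np.exp(ups)

def f_expanded(r):
    # Eq. (8): second-order expansion in M/r0
    alpha = -2 * M / r0 + 4 * M**2 / (3 * r0**2)
    beta = 2 * M / r0**2
    return (1 - 2 * M_BH / r) * (1 + alpha + beta * r)

r = 50.0
```

For this compactness the two expressions agree to better than one part in 10⁵ at r well inside the halo; note also that f_exact → 1 as r → ∞, consistent with asymptotic flatness.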
179
+ Now we consider a MBH in the center of a DM halo and a SCO moving on geodesics
180
+ around the MBH in the equatorial plane (θ = π/2). The geodesic equation is
181
+ duµ/dτ = (1/2) uα uβ ∂µ gαβ,   (9)
185
+ where uα = drα/dτ, τ is the proper time and rα = (t, r, θ, φ). Because the spacetime is
186
+ static and spherically symmetric, from the geodesic equation (9) we obtain two conserved
187
+ quantities u0 = −E/µ and uφ = L/µ,
188
+ u0 = −E/µ = −√(1 + 2ε),   (10)
192
+ uφ = L/µ = h,
193
+ (11)
194
+
195
196
+ where E and L represent the orbital energy and angular momentum of the system, respec-
197
+ tively, and the reduced mass µ is approximately equal to the mass of the SCO. The radial
198
+ equation of motion is
199
+ 1 + (dr/dτ)²[1 − 2m(r)/r]⁻¹ + h²/r² = (1 + 2ε)/f.   (12)
211
+ For convenience, we introduce the orbital elements, the semi-latus rectum p and the
212
+ eccentricity e, to parameterize the orbital motion,
213
+ r = p/(1 + e cos χ),   (13)
217
+ where χ is a parameter. Rewriting the variables h and ε in terms of p and e, we obtain
218
+ h² = [pRs(1 + α) + p³β(1 − e²)⁻¹] / {2(1 + α)[1 − (Rs/2p)(3 + e²)] + pβ[1 − 2Rs/p]},   (14)
+ ε = {−(Rs/2p)(1 − e²)[1 − 2Rs/p] + αj + α²g + βk} / (2{[1 − (Rs/2p)(3 + e²)](1 + α) + pβ[1 − 2Rs/p]}),   (15)
+ where Rs = 2MBH,
+ j = −[1 − 2Rs/p] + (Rs/2p)[1 − 4Rs/p](1 − e²),
+ g = −[1 − 2Rs/p] − (Rs²/p²)(1 − e²),
+ k = −[p(3 + e²)/(2(1 − e²))][1 − 2Rs/p] − 2Rs²/p.
284
+ In terms of χ, Eqs. (10) and (11) become
285
+
286
+ dχ =
287
+ �1
288
+ 2
289
+ Rs
290
+ p (1 + α) + 1
291
+ 2pβ(1 − e2)−1
292
+ � 1
293
+ 2 �1
294
+ 2
295
+ Rs
296
+ p
297
+
298
+ 1 − Rs
299
+ p (3 + e cos χ)
300
+
301
+ + α A + 2α2 A + β B
302
+ �− 1
303
+ 2
304
+ J1,
305
+ (16)
306
+ dt
307
+ dχ =
308
+ p
309
+ (1 + e cos χ)2
310
+ ��
311
+ 1 − (1 + e)Rs
312
+ p
313
+ � �
314
+ 1 − (1 − e)Rs
315
+ p
316
+
317
+ + C
318
+ � 1
319
+ 2
320
+ ×
321
+
322
+ 1 − Rs
323
+ p (1 + e cos χ)
324
+ �−1 �1
325
+ 2
326
+ Rs
327
+ p
328
+
329
+ 1 − Rs
330
+ p (3 + e cos χ) + αA + 2��2A + βB
331
+ ��− 1
332
+ 2
333
+ J2,
334
+ (17)
335
+
336
+ 6
337
+ where
338
+ A = Rs
339
+ p
340
+
341
+ 1 − Rs
342
+ p (3 + e cos χ)
343
+
344
+ ,
345
+ B =
346
+ p
347
+ 2(1 − e2)(1 + e cos χ)
348
+
349
+ 2
350
+
351
+ 1 − Rs
352
+ p
353
+
354
+ +
355
+
356
+ 1 − 4Rs
357
+ p
358
+
359
+ �Rs
360
+ p
361
+ �2
362
+ (1 − e2)(1 + e cos χ) − Rs
363
+ p e2(1 + cos2 χ)
364
+ ��
365
+ ,
366
+ C = α
367
+
368
+ 1 − 1
369
+ 2(3 + e2)Rs
370
+ p
371
+
372
+ + 1
373
+ 2pβ
374
+
375
+ 1 − 2Rs
376
+ r
377
+
378
+ − (αj + α2g + βk),
379
+ J1 =
380
+
381
+ 1 + α +
382
+ βp
383
+ 1 + e cos χ
384
+ � 1
385
+ 2 �
386
+ 1 − 2Mp/(1 + e cos χ)
387
+ a + p/(1 + e cos χ)2
388
+
389
+ 1 − Rs
390
+ p (1 + e cos χ)
391
+ � �− 1
392
+ 2
393
+ ,
394
+ J2 =
395
+
396
+ 1 + α +
397
+ βp
398
+ 1 + e cos χ
399
+ �− 1
400
+ 2 �
401
+ 1 − 2Mp/(1 + e cos χ)
402
+ a + p/(1 + e cos χ)2
403
+
404
+ 1 − Rs
405
+ p (1 + e cos χ)
406
+ � �− 1
407
+ 2
408
+ .
409
+ Eqs. (16) and (17) can be integrated to obtain φ(χ) and t(χ). Taking different compact-
410
+ ness and mass for the DM halo, using Cartesian coordinate (x, y) = (r cos φ, r sin φ) in the
411
+ equatorial plane, we show the orbits of EMRIs in galaxies with and without DM in Fig.
412
+ 1. Due to the gravitational drag of DM halos, the orbits with DM halos are different from
413
+ those without DM. From Fig. 1, we see that for the same value of M, the effect of DM
414
+ halos on the orbital precession is larger if the compactness of the DM halo M/r0 is bigger.
415
+ DM halos decrease the orbital precessions, and can even reverse the direction of precession
416
+ if the density of DM halo ρDM is large enough. The result of retrograde precessions of the
417
+ orbital motion in the spacetime (6) is consistent with that found in [56], and the anomalous
418
+ precessions of binaries in DM environments were also found in [48, 61, 62].
419
+ To probe DM halos and study their impact on the orbits of EMRIs, we calculate the time
420
+ T and the orbital precession ∆φ over one cycle when the orbital parameter χ increases by
+ 2π,
+ T = ∫₀²π (dt/dχ) dχ,   (18)
+ ∆φ = ∫₀²π (dφ/dχ) dχ − 2π.   (19)
434
+ Expanding Eqs. (16) and (17) about Rs/p = 0 to the second order and substituting the
435
+
436
+ FIG. 1.
486
+ The orbits of EMRIs in galaxies with and without DM halos. The mass of MBHs is set
487
+ as MBH = 106M⊙, the eccentricity e = 0.6, and the semi-latus rectum p = 20Rs. We take the
488
+ compactness M/r0 as 10−2 and 10−3, and the total mass M as 102MBH and 103MBH. The red
489
+ dashed lines show the trajectories with DM and the blue solid lines show the orbits without DM.
490
+ The arrows represent the directions of orbital precessions.
491
+
492
+ results into Eqs. (18) and (19), we get
494
+ T = 2π√(2p³/Rs) (1 − e²)^{−3/2} {1 + (3/2)(1 − e²)(Rs/p) + (3/2)(1 − e²)[1 + (5/4)(1 − e²)^{1/2}](Rs/p)²
+ + M/r0 + 5M²/(6r0²) + [Mp/(r0²(1 − e²))](e² − 11/2) − 3Mp²/[Rs r0²(1 − e²)]},   (20)
+ ∆φ = 3πRs/p + (3π/8)(18 + e²)(Rs/p)² − √(1 − e²)(Mp/r0²)[3 + (1 + e² + 2Rs/p)/(1 − e²)^{1/2}].   (21)
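These closed-form expressions can be evaluated directly (a sketch based on the expressions as reconstructed above, in units G = c = MBH = 1; the DM term only subtracts from the prograde precession):

```python
import numpy as np

Rs = 2.0  # Schwarzschild radius of the MBH in geometric units

def period(p, e):
    # Leading (Keplerian) piece of Eq. (20); DM corrections omitted here
    return 2 * np.pi * np.sqrt(2 * p**3 / Rs) / (1 - e**2) ** 1.5

def precession(p, e, M=0.0, r0=np.inf):
    # Eq. (21): Schwarzschild terms minus the DM-halo term
    schw = 3 * np.pi * Rs / p + (3 * np.pi / 8) * (18 + e**2) * (Rs / p) ** 2
    if not np.isfinite(r0):
        return schw
    halo = np.sqrt(1 - e**2) * (M * p / r0**2) * (
        3 + (1 + e**2 + 2 * Rs / p) / np.sqrt(1 - e**2))
    return schw - halo

p, e = 40.0, 0.6
```

As a consistency check, the leading-order period reduces to Kepler's law for the semi-major axis a = p/(1 − e²), and any positive halo density lowers ∆φ.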
552
+ The terms with M in the above Eqs. (20) and (21) come from DM halos. In the absence of
553
+ DM, M = 0, the above results (20) and (21) recover those for EMRIs with the central MBH
554
+ being a Schwarzschild BH. The dominant contribution to the period T in Eq. (20) is the first
555
+ term, so T becomes larger as the semi-latus rectum p increases. However, there are positive
556
+ and negative contributions from the local DM halos: the local DM halos may slow down the
+ increase of T as p increases, because of the negative contribution of the last term in Eq. (20),
+ while the presence of DM halos helps the increase of T with p if that last negative contribution
+ is negligible. From Eq. (21), it is easy to understand that the presence of a DM halo decreases
+ the orbital precession and even retrogrades the orbital precession if the local density of DM
+ halos ρDM ∼ M/r0² is large enough so that the third term dominates over the first two terms.
563
+ As the orbit becomes larger, i.e., the semi-latus rectum p increases, the orbital precession
564
+ decreases and the prograde precession decreases faster in the presence of DM halos because
565
+ the third term due to DM halos in Eq. (21) becomes bigger. With DM halos, the prograde-
+ to-retrograde precession transition happens at some critical value of p and then the prograde
567
+ precessions change to retrograde precessions as p increases further; afterwards, the retrograde
568
+ precessions increase as p increases. Choosing different values for the compactness M/r0 and
569
+ the total mass of DM halos M and using Eqs. (20) and (21), we plot the results of the period
570
+ T and the orbital precession ∆φ versus the semi-latus rectum p in Fig. 2. As expected,
571
+ the orbital period T increases with p; the prograde precessions decrease with p and DM
572
+ halos help the decrease. For the case of r0 = 102M and M = 102MBH, the periapsis shifts
573
+ change from prograde precessions to retrograde precessions at p = 60Rs and the retrograde
574
+ precession increases with p when p ≳ 60Rs.
575
+ From the above discussions, we see that the orbital motions of EMRIs are influenced by
576
+ DM halos, and we expect that the effects of local DM halos will leave imprints on GWs so
577
+ that we can probe local DM halos through the observations of GWs emitted from EMRIs.
578
+
579
+ FIG. 2.
625
+ The results of orbital period and precession for EMRIs in galaxies with and without DM.
626
+ The mass of central MBHs is set as MBH = 106M⊙ and the eccentricity e = 0.6. We take the
627
+ compactness M/r0 as 10−2 and 10−3, and the total mass M as 102MBH, 103MBH and M = 0. The
628
+ insets show the evolution in a short time period.
629
+ III.
630
+ GWS OF EMRIS IN THE ENVIRONMENTS OF GALAXIES
631
+ Using the above results for the orbital motions of EMRIs, we get the leading order energy
632
+ and angular momentum fluxes
633
+ ⟨dE/dt⟩_GW ≃ (32/5)(µ/MBH)²(MBH/p)⁵(1 − e²)^{3/2}[1 + (73/24)e² + (37/96)e⁴][1 − 6M/r0],   (22)
+ ⟨dL/dt⟩_GW ≃ (32/5)(µ/MBH)² MBH (MBH/p)^{7/2}(1 − e²)^{3/2}[1 + (7/8)e²][1 − 5M/r0].   (23)
680
+ The last factors 1 − 6M/r0 and 1 − 5M/r0 are the corrections from DM halos around the
681
+ MBH. Note that the effects of environmental DM halos on the losses of energy and angular
682
+ momentum only depend on the compactness M/r0 and the energy and angular momentum
683
+ fluxes become smaller if the compactness is larger. In the absence of local DM halos, M = 0,
684
+ Eqs. (22) and (23) recover the standard results for eccentric binaries [63, 64]. Applying the
685
+ energy and angular momentum balance equations
686
+ ⟨dE/dt⟩_GW = −⟨dE/dt⟩_orbit,   (24)
+ ⟨dL/dt⟩_GW = −⟨dL/dt⟩_orbit,   (25)
710
+ we get the leading order evolution of the orbital parameters p(t) and e(t) due to the emission
711
+ of GWs,
712
+ dp/dt = −(64/5)(µ/MBH)(MBH/p)³(1 − e²)^{3/2}[1 + (7/8)e²][1 − 5M/r0],   (26)
+ de/dt = −(304/15)(e/p)(µ/MBH)(MBH/p)³(1 − e²)^{3/2}[1 + (121/304)e²][1 − 5M/r0].   (27)
752
+ Since the right sides of Eqs. (26) and (27) are negative, both the semi-latus rectum p and
753
+ the eccentricity decrease with time due to the radiation of GWs. The presence of local DM
754
+ halos slows down the decrease of p and e: the bigger the compactness M/r0 is, the slower
755
+ the semi-latus rectum p(t) and the eccentricity decrease. In Fig. 3, we show the evolution
756
+ of the orbital parameters p(t) and e(t) due to the emission of GWs. Comparing with the
757
+ astrophysical environments without DM, it takes more time for EMRIs with DM halos to
758
+ evolve from p = 20Rs to p = 3Rs. The larger the compactness M/r0 is, the more time it
759
+ takes. The presence of DM halos also slows down the decrease rate of the eccentricity and
760
+ the final eccentricity is a bit larger with larger compactness.
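The slower decay with a halo can be reproduced by directly integrating Eqs. (26) and (27) (a minimal forward-Euler sketch; masses, step size and compactness are illustrative):

```python
import numpy as np

M_BH, mu, Rs = 1.0, 1e-5, 2.0  # geometric units; mass ratio mu/MBH = 1e-5

def derivs(p, e, c):
    # Eqs. (26)-(27); c = M / r0 is the halo compactness
    common = (mu / M_BH) * (M_BH / p) ** 3 * (1 - e**2) ** 1.5 * (1 - 5 * c)
    dp = -(64 / 5) * common * (1 + (7 / 8) * e**2)
    de = -(304 / 15) * (e / p) * common * (1 + (121 / 304) * e**2)
    return dp, de

def evolve(c, p=20 * Rs, e=0.6, dt=1e4, steps=20000):
    for _ in range(steps):
        dp, de = derivs(p, e, c)
        p, e = p + dp * dt, e + de * dt
    return p, e

p_vac, e_vac = evolve(0.0)    # no halo
p_dm, e_dm = evolve(1e-2)     # compactness M / r0 = 1e-2
```

Because the factor (1 − 5M/r0) uniformly rescales both fluxes, the halo run always lags the vacuum run in the decay of p and e.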
761
+ FIG. 3.
796
+ The evolution of the orbital parameters p and e from the initial p = 20Rs to p = (3+e)Rs.
797
+ The mass of central MBHs is chosen as MBH = 106M⊙, the mass of the SCO is µ = 10M⊙ and the
798
+ initial eccentricity is chosen as e0 = 0.2, 0.6. We consider two different values for the compactness
799
+ of the DM halo, M/r0 = 10−2 and 10−3. The solid lines correspond to the cases without DM.
800
+ As discussed above, the effects of DM halos will be manifested in GW waveforms. The
801
+ quadrupole formula of GWs is
802
+ hjk = (2/dL) ¨Ijk,   (28)
808
+ where dL is the luminosity distance between the detector and the source and Ijk is the
809
+ quadrupole moment of EMRIs. The tensor modes h+ and h× in the transverse-traceless
810
+ gauge are given by
811
+ h+ = (1/2)(e_X^j e_X^k − e_Y^j e_Y^k) hjk,   (29)
+ h× = (1/2)(e_X^j e_Y^k + e_Y^j e_X^k) hjk,   (30)
833
+ where eX and eY are the orthonormal vectors in the plane that is perpendicular to the
834
+ direction from the detector to the GW source. Plugging the results for the orbital evolution
835
+ obtained above into Eq. (28), we numerically calculate the time-domain GW waveforms.
836
+ The time-domain plus-mode GW waveforms for EMRIs with and without DM halos are
837
+ shown in Fig. 4. From Fig. 4, we see that initially the difference between GW waveforms
838
+ with and without DM halos is negligible. One year later, the two waveforms for EMRIs with
839
+ and without DM halos are quite different.
840
+ In order to quantify the impact of DM halo environments on the dephasing of GW
841
+ waveforms, we calculate the number of orbital cycles accumulated from time ti to tf [65–67]
842
+ N(t) = ∫_{ti}^{tf} φ̇(t) dt.   (31)
847
+ Over one-year evolution before the merger, the numbers of orbital cycles for EMRIs with
848
+ and without DM halos are NDM and N0 respectively. In Fig 5, we show the difference ∆N =
849
+ NDM − N0 between the number of orbital cycles with and without DM halos accumulated
850
+ over one year before the merger. Following [68], we choose ∆N ∼ 1 rad as the threshold for
851
+ a detectable dephasing. The results show that we can detect the compactness as small as
852
+ ≲ 10−4. The results also show that eccentric orbits can help detect DM halos with smaller
853
+ compactness.
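The bookkeeping behind Eq. (31) can be sketched as follows. The frequency track and the small "drag" factor below are invented numbers chosen only to illustrate how a dephasing is accumulated and compared with the 1 rad threshold; they are not outputs of the orbital evolution above.

```python
import numpy as np

# Two slightly different toy phase-rate tracks over ~ one year (seconds).
t = np.linspace(0.0, 3.15e7, 200_000)
phidot_vac = 1e-3 * (1.0 - t / 3.2e7) ** (-0.375)   # vacuum track (rad/s)
phidot_dm = phidot_vac * (1.0 - 3e-5)               # hypothetical halo-modified track

def accumulated_phase(phidot):
    # N = integral of phidot dt (radians, as in Eq. (31)), trapezoidal rule.
    return float(np.sum(0.5 * (phidot[1:] + phidot[:-1]) * np.diff(t)))

N_vac = accumulated_phase(phidot_vac)
N_dm = accumulated_phase(phidot_dm)
dN = abs(N_dm - N_vac)
detectable = dN > 1.0   # threshold of Ref. [68]
```

With these toy numbers the tracks accumulate ~5 × 10⁴ rad, so even a fractional rate difference of a few × 10⁻⁵ crosses the 1 rad detectability threshold.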
854
+ To distinguish the waveforms more accurately, we calculate the mismatch between GW
+ signals emitted from EMRIs with and without DM halos. Given two signals h1(t) and h2(t),
+ the inner product (h1|h2) is defined as
+ (h1|h2) = 2 ∫₀^∞ [h̃1(f) h̃2*(f) + h̃2(f) h̃1*(f)] / Sh(f) df,   (32)
+ where h̃(f) is the Fourier transform of the time-domain signal h(t), h̃* denotes the
+ complex conjugate of h̃, and the SNR for the signal h is √(h|h). For LISA, the one-sided
872
+ FIG. 4. The time-domain plus-mode GW waveforms for EMRIs with and without DM halos.
+ The mass of the central MBH is MBH = 10⁶ M⊙, the mass of the SCO is µ = 10 M⊙, the total
+ mass of the DM halo is M = 10² MBH, the inclination angle is ι = π/6, the luminosity distance
+ is dL = 1 Gpc, the initial longitude of pericenter is ω0 = 0 and the initial eccentricity is
+ e0 = 0.6 at p0 = 20Rs. M = 0 corresponds to the case without DM halos. The left panels
+ show the initial waveforms. The right panels show the waveforms after one year. The top
+ panels are for M/r0 = 10⁻² and the bottom panels are for M/r0 = 10⁻³.
934
+ noise power spectral density is [69]
+ Sh(f) = Sx/L² + [2Sa (1 + cos²(2πfL/c)) / ((2πf)⁴ L²)] [1 + (4 × 10⁻⁴ Hz)/f],   (33)
+ where √Sa = 3 × 10⁻¹⁵ m s⁻²/√Hz is the acceleration noise, √Sx = 1.5 × 10⁻¹¹ m/√Hz
+ is the displacement noise and L = 2.5 × 10⁶ km is the arm length of LISA [7]. The overlap
+ between two GW signals is quantified as [60]
+ O(h̃1, h̃2) = (h̃1|h̃2) / √((h̃1|h̃1)(h̃2|h̃2)),   (34)
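Eq. (33) translates directly into code. The sketch below uses the quoted noise levels and arm length; it is a convenience function for plots and inner products, not an official LISA sensitivity model.

```python
import numpy as np

C_LIGHT = 299_792_458.0    # speed of light, m/s
L_ARM = 2.5e9              # LISA arm length, m (2.5 x 10^6 km)
S_ACC = (3e-15) ** 2       # Sa: acceleration noise PSD, (m s^-2)^2 / Hz
S_DISP = (1.5e-11) ** 2    # Sx: displacement noise PSD, m^2 / Hz

def lisa_sh(f):
    """One-sided noise PSD Sh(f) of Eq. (33); f in Hz (scalar or array)."""
    f = np.asarray(f, dtype=float)
    acc = (2.0 * S_ACC * (1.0 + np.cos(2.0 * np.pi * f * L_ARM / C_LIGHT) ** 2)
           / ((2.0 * np.pi * f) ** 4 * L_ARM ** 2))
    return S_DISP / L_ARM ** 2 + acc * (1.0 + 4e-4 / f)
```

As expected, the curve rises steeply below a few mHz, where the reddened acceleration noise dominates, and flattens toward the displacement-noise floor at higher frequencies.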
954
+ FIG. 5. The difference ∆N between the numbers of orbital cycles with and without DM halos
+ over the one-year evolution before the merger, for different compactness of the halo M/r0 and
+ initial eccentricities e0 = 0, 0.2, 0.4, 0.6. The initial eccentricity e0 is chosen at p0 = 20Rs.
+ The mass of the central MBH is MBH = 10⁶ M⊙ and the mass of the SCO is µ = 10 M⊙. The
+ mass of the DM halo is M = 10² MBH. The black dashed line corresponds to ∆N = 1 rad.
984
+ and the mismatch between two signals is defined as
+ Mismatch = 1 − Omax(h̃1, h̃2),   (35)
+ where the maximum is evaluated with respect to time and phase shifts. The mismatch is
+ zero if the two signals are identical. Two signals are considered experimentally distinguishable
+ if their mismatch is larger than d/(2 SNR²), where d = 13 is the number of intrinsic parameters
+ of the GW source [70–72]. Considering EMRIs with masses (10⁶ + 10) M⊙ at dL = 1 Gpc and an
+ integration time of one year before the coalescence, we calculate the mismatch between GW
+ waveforms with and without DM halos; the results for LISA are shown in Fig. 6. The SNR is
+ about 32 for the GW signals from the EMRIs considered above. The initial eccentricity e0 is
+ chosen at p0 = 20Rs. As shown in Fig. 6, the larger the compactness M/r0 of the DM halo, the
+ bigger the mismatch between GW waveforms with and without DM halos, so more compact
+ DM halos can be detected more easily with LISA. Again, eccentric orbits help detect smaller
+ compactness. Therefore, we can use GWs from EMRIs in the environments of galaxies to test
+ the existence of DM halos and detect a compactness of the halos M/r0 as small as 10⁻⁵.
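The maximization behind Eqs. (32)–(35) can be illustrated on toy signals. The sketch below assumes a flat (white) noise PSD, maximizes over time shifts with the correlation theorem, and absorbs an overall sign with the absolute value; a realistic calculation would weight by Sh(f) as in Eq. (32) and maximize over phase with the analytic signal. The windowed sinusoids are invented test signals, not EMRI waveforms.

```python
import numpy as np

def overlap_max(h1, h2):
    # Overlap of Eq. (34) for a flat PSD, maximized over circular time shifts.
    n = len(h1)
    H1, H2 = np.fft.rfft(h1), np.fft.rfft(h2)
    corr = np.fft.irfft(H1 * np.conj(H2), n)   # cross-correlation at all lags
    num = np.abs(corr).max()                   # abs() absorbs an overall sign
    den = np.sqrt(np.fft.irfft(H1 * np.conj(H1), n)[0]
                  * np.fft.irfft(H2 * np.conj(H2), n)[0])
    return num / den

# Two windowed sinusoids with a small frequency offset (toy signals).
t = np.linspace(0.0, 100.0, 4096)
w = np.exp(-0.5 * ((t - 50.0) / 15.0) ** 2)
h_a = w * np.sin(2.0 * np.pi * 0.5 * t)
h_b = w * np.sin(2.0 * np.pi * 0.502 * t)

mm_same = 1.0 - overlap_max(h_a, h_a)      # identical signals: mismatch ~ 0
mm_diff = 1.0 - overlap_max(h_a, h_b)      # detuned signals: mismatch > 0
threshold = 13.0 / (2.0 * 32.0 ** 2)       # d/(2 SNR^2) with d = 13, SNR = 32
```

The mismatch of a signal with itself vanishes to machine precision, while even a 0.4% frequency offset gives a mismatch of order the d/(2 SNR²) ≈ 0.0064 threshold used above.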
1000
+ FIG. 6. The mismatch between GW waveforms with and without DM halos for different
+ compactness M/r0 and initial eccentricities e0 = 0.2, 0.6. The black dashed line corresponds
+ to the threshold d/(2 SNR²) ≈ 0.0072.
1020
+ IV. CONCLUSIONS AND DISCUSSIONS
+ Using the analytic, static and spherically symmetric metric for a Schwarzschild black hole
+ immersed in a DM halo with a Hernquist-type density distribution, we derive analytic formulae
+ for the orbital period and orbital precession for eccentric EMRIs in the environment of
+ DM halos. The results show that the presence of a DM halo decreases the orbital precession,
+ and even retrogrades it if the local density of the DM halo, ρDM ∼ M/r0², is large enough.
+ As the orbit becomes larger, the orbital precession decreases, and the prograde precession
+ decreases faster in the presence of DM halos. With DM halos, the prograde-to-retrograde
+ precession transition happens at some critical value of p; the prograde precessions change to
+ retrograde precessions as p increases further, and afterwards the retrograde precessions
+ increase as p increases.
+ Taking the energy and angular momentum fluxes of GWs into consideration, we derive
+ analytic formulae for the evolutions of the semi-latus rectum and the eccentricity. The
+ presence of a local DM halo slows down the decrease of the semi-latus rectum and the
+ eccentricity. Comparing the numbers of orbital cycles with and without DM halos over the
+ one-year evolution before the merger, we find that DM halos with a compactness as small as
+ 10⁻⁴ can be detected. By calculating the mismatch between GW waveforms with and without
+ DM halos, we show that we can use GWs from EMRIs in the environments of galaxies to
+ test the existence of DM halos and detect a compactness as small as 10⁻⁵. We also find
+ that eccentric orbits can help detect DM halos with smaller compactness.
+ Binaries in the environments of galaxies are also affected by the dynamical friction of the
+ surrounding medium [73–77] and the accretion of the medium [46, 78, 79]. It is necessary
+ to consider the effects of dynamical friction and accretion when the medium is dense. To
+ distinguish the effects of DM halos from other media (e.g. accretion disks) or modified
+ gravity on GWs, further study is needed [43, 68, 80–82].
+ ACKNOWLEDGMENTS
+ The computing work in this paper is supported by the Public Service Platform of High
+ Performance Computing by the Network and Computing Center of HUST. This research is
+ supported in part by the National Key Research and Development Program of China under
+ Grant No. 2020YFC2201504.
1055
+ [1] B. P. Abbott et al. (LIGO Scientific, Virgo), Observation of Gravitational Waves from a Binary Black Hole Merger, Phys. Rev. Lett. 116, 061102 (2016), arXiv:1602.03837 [gr-qc].
+ [2] B. P. Abbott et al. (LIGO Scientific, Virgo), GW150914: The Advanced LIGO Detectors in the Era of First Discoveries, Phys. Rev. Lett. 116, 131103 (2016), arXiv:1602.03838 [gr-qc].
+ [3] B. P. Abbott et al. (LIGO Scientific, Virgo), GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo during the First and Second Observing Runs, Phys. Rev. X 9, 031040 (2019), arXiv:1811.12907 [astro-ph.HE].
+ [4] R. Abbott et al. (LIGO Scientific, Virgo), GWTC-2: Compact Binary Coalescences Observed by LIGO and Virgo During the First Half of the Third Observing Run, Phys. Rev. X 11, 021053 (2021), arXiv:2010.14527 [gr-qc].
+ [5] R. Abbott et al. (LIGO Scientific, VIRGO), GWTC-2.1: Deep Extended Catalog of Compact Binary Coalescences Observed by LIGO and Virgo During the First Half of the Third Observing Run, arXiv:2108.01045 [gr-qc].
+ [6] R. Abbott et al. (LIGO Scientific, VIRGO, KAGRA), GWTC-3: Compact Binary Coalescences Observed by LIGO and Virgo During the Second Part of the Third Observing Run, arXiv:2111.03606 [gr-qc].
+ [7] P. Amaro-Seoane et al. (LISA), Laser Interferometer Space Antenna, arXiv:1702.00786 [astro-ph.IM].
+ [8] J. Luo et al. (TianQin), TianQin: a space-borne gravitational wave detector, Class. Quant. Grav. 33, 035010 (2016), arXiv:1512.02076 [astro-ph.IM].
+ [9] W.-R. Hu and Y.-L. Wu, The Taiji Program in Space for gravitational wave physics and the nature of gravity, Natl. Sci. Rev. 4, 685 (2017).
+ [10] Y. Gong, J. Luo, and B. Wang, Concepts and status of Chinese space gravitational wave detection projects, Nature Astron. 5, 881 (2021), arXiv:2109.07442 [astro-ph.IM].
+ [11] V. Baibhav et al., Probing the nature of black holes: Deep in the mHz gravitational-wave sky, Exper. Astron. 51, 1385 (2021), arXiv:1908.11390 [astro-ph.HE].
+ [12] P. Amaro-Seoane et al., Astrophysics with the Laser Interferometer Space Antenna, arXiv:2203.06016 [gr-qc].
+ [13] K. G. Arun et al. (LISA), New horizons for fundamental physics with LISA, Living Rev. Rel. 25, 4 (2022), arXiv:2205.01597 [gr-qc].
+ [14] N. Karnesis et al., The Laser Interferometer Space Antenna mission in Greece White Paper, arXiv:2209.04358 [gr-qc].
+ [15] S. Babak, J. Gair, A. Sesana, E. Barausse, C. F. Sopuerta, C. P. L. Berry, E. Berti, P. Amaro-Seoane, A. Petiteau, and A. Klein, Science with the space-based interferometer LISA. V: Extreme mass-ratio inspirals, Phys. Rev. D 95, 103012 (2017), arXiv:1703.09722 [gr-qc].
+ [16] P. Amaro-Seoane, J. R. Gair, M. Freitag, M. Coleman Miller, I. Mandel, C. J. Cutler, and S. Babak, Astrophysics, detection and science applications of intermediate- and extreme mass-ratio inspirals, Class. Quant. Grav. 24, R113 (2007), arXiv:astro-ph/0703495.
+ [17] C. P. L. Berry, S. A. Hughes, C. F. Sopuerta, A. J. K. Chua, A. Heffernan, K. Holley-Bockelmann, D. P. Mihaylov, M. C. Miller, and A. Sesana, The unique potential of extreme mass-ratio inspirals for gravitational-wave astronomy, arXiv:1903.03686 [astro-ph.HE].
+ [18] P. A. Seoane et al., The effect of mission duration on LISA science objectives, Gen. Rel. Grav. 54, 3 (2022), arXiv:2107.09665 [astro-ph.IM].
+ [19] D. Laghi, N. Tamanini, W. Del Pozzo, A. Sesana, J. Gair, S. Babak, and D. Izquierdo-Villalba, Gravitational-wave cosmology with extreme mass-ratio inspirals, Mon. Not. Roy. Astron. Soc. 508, 4512 (2021), arXiv:2102.01708 [astro-ph.CO].
1105
+ [20] S. McGee, A. Sesana, and A. Vecchio, Linking gravitational waves and X-ray phenomena with joint LISA and Athena observations, Nature Astron. 4, 26 (2020), arXiv:1811.00050 [astro-ph.HE].
+ [21] S. van den Bergh, The Early history of dark matter, Publ. Astron. Soc. Pac. 111, 657 (1999), arXiv:astro-ph/9904251.
+ [22] V. C. Rubin and W. K. Ford, Jr., Rotation of the Andromeda Nebula from a Spectroscopic Survey of Emission Regions, Astrophys. J. 159, 379 (1970).
+ [23] V. C. Rubin, N. Thonnard, and W. K. Ford, Jr., Rotational properties of 21 SC galaxies with a large range of luminosities and radii, from NGC 4605 (R = 4 kpc) to UGC 2885 (R = 122 kpc), Astrophys. J. 238, 471 (1980).
+ [24] K. G. Begeman, A. H. Broeils, and R. H. Sanders, Extended rotation curves of spiral galaxies: Dark haloes and modified dynamics, Mon. Not. Roy. Astron. Soc. 249, 523 (1991).
+ [25] M. Persic, P. Salucci, and F. Stel, The Universal rotation curve of spiral galaxies: 1. The Dark matter connection, Mon. Not. Roy. Astron. Soc. 281, 27 (1996), arXiv:astro-ph/9506004.
+ [26] E. Corbelli and P. Salucci, The Extended Rotation Curve and the Dark Matter Halo of M33, Mon. Not. Roy. Astron. Soc. 311, 441 (2000), arXiv:astro-ph/9909252.
+ [27] L. A. Moustakas et al., Strong gravitational lensing probes of the particle nature of dark matter, arXiv:0902.3219 [astro-ph.CO].
+ [28] R. Massey, T. Kitching, and J. Richard, The dark matter of gravitational lensing, Rept. Prog. Phys. 73, 086901 (2010), arXiv:1001.1739 [astro-ph.CO].
+ [29] J. Ellis and K. A. Olive, Supersymmetric Dark Matter Candidates, arXiv:1001.3651 [astro-ph.CO].
+ [30] A. Challinor, CMB anisotropy science: a review, IAU Symp. 288, 42 (2013), arXiv:1210.6008 [astro-ph.CO].
+ [31] L. Sadeghian, F. Ferrer, and C. M. Will, Dark matter distributions around massive black holes: A general relativistic analysis, Phys. Rev. D 88, 063522 (2013), arXiv:1305.2619 [astro-ph.GA].
+ [32] J. F. Navarro, C. S. Frenk, and S. D. M. White, A Universal density profile from hierarchical clustering, Astrophys. J. 490, 493 (1997), arXiv:astro-ph/9611107.
+ [33] P. Gondolo and J. Silk, Dark matter annihilation at the galactic center, Phys. Rev. Lett. 83, 1719 (1999), arXiv:astro-ph/9906391.
+ [34] L. Hernquist, An Analytical Model for Spherical Galaxies and Bulges, Astrophys. J. 356, 359 (1990).
+ [35] N. Yunes, B. Kocsis, A. Loeb, and Z. Haiman, Imprint of Accretion Disk-Induced Migration on Gravitational Waves from Extreme Mass Ratio Inspirals, Phys. Rev. Lett. 107, 171103 (2011), arXiv:1103.4609 [astro-ph.CO].
+ [36] B. Kocsis, N. Yunes, and A. Loeb, Observable Signatures of EMRI Black Hole Binaries Embedded in Thin Accretion Disks, Phys. Rev. D 84, 024032 (2011), arXiv:1104.2322 [astro-ph.GA].
+ [37] K. Eda, Y. Itoh, S. Kuroyanagi, and J. Silk, New Probe of Dark-Matter Properties: Gravitational Waves from an Intermediate-Mass Black Hole Embedded in a Dark-Matter Minispike, Phys. Rev. Lett. 110, 221101 (2013), arXiv:1301.5971 [gr-qc].
+ [38] C. F. B. Macedo, P. Pani, V. Cardoso, and L. C. B. Crispino, Into the lair: gravitational-wave signatures of dark matter, Astrophys. J. 774, 48 (2013), arXiv:1302.2646 [gr-qc].
+ [39] K. Eda, Y. Itoh, S. Kuroyanagi, and J. Silk, Gravitational waves as a probe of dark matter minispikes, Phys. Rev. D 91, 044045 (2015), arXiv:1408.3534 [gr-qc].
+ [40] E. Barausse, V. Cardoso, and P. Pani, Can environmental effects spoil precision gravitational-wave astrophysics?, Phys. Rev. D 89, 104059 (2014), arXiv:1404.7149 [gr-qc].
+ [41] L. Barack et al., Black holes, gravitational waves and fundamental physics: a roadmap, Class. Quant. Grav. 36, 143001 (2019), arXiv:1806.05195 [gr-qc].
+ [42] O. A. Hannuksela, K. W. K. Wong, R. Brito, E. Berti, and T. G. F. Li, Probing the existence of ultralight bosons with a single gravitational-wave measurement, Nature Astron. 3, 447 (2019), arXiv:1804.09659 [astro-ph.HE].
+ [43] V. Cardoso and A. Maselli, Constraints on the astrophysical environment of binaries with gravitational-wave observations, Astron. Astrophys. 644, A147 (2020), arXiv:1909.05870 [astro-ph.HE].
+ [44] X.-J. Yue and Z. Cao, Dark matter minispike: A significant enhancement of eccentricity for intermediate-mass-ratio inspirals, Phys. Rev. D 100, 043013 (2019), arXiv:1908.10241 [astro-ph.HE].
+ [45] L. Annulli, V. Cardoso, and R. Vicente, Stirred and shaken: Dynamical behavior of boson stars and dark matter cores, Phys. Lett. B 811, 135944 (2020), arXiv:2007.03700 [astro-ph.HE].
1167
+ [46] A. Derdzinski, D. D’Orazio, P. Duffell, Z. Haiman, and A. MacFadyen, Evolution of gas disc–embedded intermediate mass ratio inspirals in the LISA band, Mon. Not. Roy. Astron. Soc. 501, 3540 (2021), arXiv:2005.11333 [astro-ph.HE].
+ [47] L. Zwick, P. R. Capelo, and L. Mayer, Priorities in gravitational waveform modelling for future space-borne detectors: vacuum accuracy or environment?, arXiv:2209.04060 [gr-qc].
+ [48] N. Dai, Y. Gong, T. Jiang, and D. Liang, Intermediate mass-ratio inspirals with dark matter minispikes, Phys. Rev. D 106, 064003 (2022), arXiv:2111.13514 [gr-qc].
+ [49] A. Coogan, G. Bertone, D. Gaggero, B. J. Kavanagh, and D. A. Nichols, Measuring the dark matter environments of black hole binaries with gravitational waves, Phys. Rev. D 105, 043009 (2022), arXiv:2108.04154 [gr-qc].
+ [50] V. Cardoso, K. Destounis, F. Duque, R. P. Macedo, and A. Maselli, Black holes in galaxies: Environmental impact on gravitational-wave generation and propagation, Phys. Rev. D 105, L061501 (2022), arXiv:2109.00005 [gr-qc].
+ [51] A. Einstein, On a stationary system with spherical symmetry consisting of many gravitating masses, Annals Math. 40, 922 (1939).
+ [52] A. Geralico, F. Pompi, and R. Ruffini, On Einstein clusters, Int. J. Mod. Phys. Conf. Ser. 12, 146 (2012).
+ [53] R. A. Konoplya and A. Zhidenko, Solutions of the Einstein Equations for a Black Hole Surrounded by a Galactic Halo, Astrophys. J. 933, 166 (2022), arXiv:2202.02205 [gr-qc].
+ [54] K. Jusufi, Black holes surrounded by Einstein clusters as models of dark matter fluid, arXiv:2202.00010 [gr-qc].
+ [55] J. Liu, S. Chen, and J. Jing, Tidal effects of a dark matter halo around a galactic black hole, Chin. Phys. C 46, 105104 (2022), arXiv:2203.14039 [gr-qc].
+ [56] K. Destounis, A. Kulathingal, K. D. Kokkotas, and G. O. Papadopoulos, Gravitational-wave imprints of compact and galactic-scale environments in extreme-mass-ratio binaries, arXiv:2210.09357 [gr-qc].
+ [57] V. Cardoso, K. Destounis, F. Duque, R. Panosso Macedo, and A. Maselli, Gravitational Waves from Extreme-Mass-Ratio Systems in Astrophysical Environments, Phys. Rev. Lett. 129, 241103 (2022), arXiv:2210.01133 [gr-qc].
+ [58] J. R. Gair and K. Glampedakis, Improved approximate inspirals of test-bodies into Kerr black holes, Phys. Rev. D 73, 064037 (2006), arXiv:gr-qc/0510129.
+ [59] J. R. Gair, D. J. Kennefick, and S. L. Larson, Semi-relativistic approximation to gravitational radiation from encounters with black holes, Phys. Rev. D 72, 084009 (2005), [Erratum: Phys. Rev. D 74, 109901 (2006)], arXiv:gr-qc/0508049.
1205
+ [60] S. Babak, H. Fang, J. R. Gair, K. Glampedakis, and S. A. Hughes, ’Kludge’ gravitational waveforms for a test-body orbiting a Kerr black hole, Phys. Rev. D 75, 024005 (2007), [Erratum: Phys. Rev. D 77, 049902 (2008)], arXiv:gr-qc/0607007.
+ [61] T. Igata and Y. Takamori, Periapsis shifts in dark matter distribution with a dense core, Phys. Rev. D 105, 124029 (2022), arXiv:2202.03114 [gr-qc].
+ [62] T. Igata, T. Harada, H. Saida, and Y. Takamori, Periapsis shifts in dark matter distribution around a black hole, arXiv:2202.00202 [gr-qc].
+ [63] P. Peters and J. Mathews, Gravitational radiation from point masses in a Keplerian orbit, Phys. Rev. 131, 435 (1963).
+ [64] P. Peters, Gravitational Radiation and the Motion of Two Point Masses, Phys. Rev. 136, B1224 (1964).
+ [65] E. Berti, A. Buonanno, and C. M. Will, Estimating spinning binary parameters and testing alternative theories of gravity with LISA, Phys. Rev. D 71, 084025 (2005), arXiv:gr-qc/0411129.
+ [66] B. J. Kavanagh, D. A. Nichols, G. Bertone, and D. Gaggero, Detecting dark matter around black holes with gravitational waves: Effects of dark-matter dynamics on the gravitational waveform, Phys. Rev. D 102, 083006 (2020), arXiv:2002.12811 [gr-qc].
+ [67] S. Barsanti, N. Franchini, L. Gualtieri, A. Maselli, and T. P. Sotiriou, Extreme mass-ratio inspirals as probes of scalar fields: Eccentric equatorial orbits around Kerr black holes, Phys. Rev. D 106, 044029 (2022), arXiv:2203.05003 [gr-qc].
+ [68] A. Maselli, N. Franchini, L. Gualtieri, and T. P. Sotiriou, Detecting scalar fields with Extreme Mass Ratio Inspirals, Phys. Rev. Lett. 125, 141101 (2020), arXiv:2004.11895 [gr-qc].
+ [69] T. Robson, N. J. Cornish, and C. Liu, The construction and use of LISA sensitivity curves, Class. Quant. Grav. 36, 105011 (2019), arXiv:1803.01944 [astro-ph.HE].
+ [70] E. E. Flanagan and S. A. Hughes, Measuring gravitational waves from binary black hole coalescences: 2. The Waves’ information and its extraction, with and without templates, Phys. Rev. D 57, 4566 (1998), arXiv:gr-qc/9710129.
+ [71] L. Lindblom, B. J. Owen, and D. A. Brown, Model Waveform Accuracy Standards for Gravitational Wave Data Analysis, Phys. Rev. D 78, 124020 (2008), arXiv:0809.3844 [gr-qc].
+ [72] A. Buonanno, Y.-b. Chen, and M. Vallisneri, Detection template families for gravitational waves from the final stages of binary–black-hole inspirals: Nonspinning case, Phys. Rev. D 67, 024016 (2003), [Erratum: Phys. Rev. D 74, 029903 (2006)], arXiv:gr-qc/0205122.
+ [73] S. Chandrasekhar, Dynamical Friction. I. General Considerations: the Coefficient of Dynamical Friction, Astrophys. J. 97, 255 (1943).
+ [74] E. C. Ostriker, Dynamical friction in a gaseous medium, Astrophys. J. 513, 252 (1999), arXiv:astro-ph/9810324.
+ [75] H. Kim and W.-T. Kim, Dynamical Friction of a Circular-Orbit Perturber in a Gaseous Medium, Astrophys. J. 665, 432 (2007), arXiv:0705.0084 [astro-ph].
+ [76] D. Traykova, K. Clough, T. Helfer, E. Berti, P. G. Ferreira, and L. Hui, Dynamical friction from scalar dark matter in the relativistic regime, Phys. Rev. D 104, 103014 (2021), arXiv:2106.08280 [gr-qc].
+ [77] R. Vicente and V. Cardoso, Dynamical friction of black holes in ultralight dark matter, Phys. Rev. D 105, 083008 (2022), arXiv:2201.08854 [gr-qc].
+ [78] H. Bondi and F. Hoyle, On the mechanism of accretion by stars, Mon. Not. Roy. Astron. Soc. 104, 273 (1944).
+ [79] R. G. Edgar, A Review of Bondi-Hoyle-Lyttleton accretion, New Astron. Rev. 48, 843 (2004), arXiv:astro-ph/0406166.
+ [80] N. Becker and L. Sagunski, Comparing Accretion Disks and Dark Matter Spikes in Intermediate Mass Ratio Inspirals, (2022), arXiv:2211.05145 [gr-qc].
+ [81] C. Zhang and Y. Gong, Detecting electric charge with extreme mass ratio inspirals, Phys. Rev. D 105, 124046 (2022), arXiv:2204.08881 [gr-qc].
+ [82] V. Cardoso, G. Castro, and A. Maselli, Gravitational waves in massive gravity theories: waveforms, fluxes and constraints from extreme-mass-ratio mergers, Phys. Rev. Lett. 121, 251103 (2018), arXiv:1809.00673 [gr-qc].
+
8NE4T4oBgHgl3EQfdAy1/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
8tAyT4oBgHgl3EQfc_f6/content/tmp_files/2301.00296v1.pdf.txt ADDED
@@ -0,0 +1,579 @@
+ Local Einstein relation for fractals
+ L. Padilla, J. L. Iguain
+ Instituto de Investigaciones Físicas de Mar del Plata (IFIMAR) and Departamento de
+ Física FCEyN, Universidad Nacional de Mar del Plata, Deán Funes 3350, 7600 Mar del
+ Plata, Argentina
+ E-mail: [email protected]
+ Abstract. We study single random walks and the electrical resistance for fractals
+ obtained as the limit of a sequence of periodic structures. In the long-scale regime,
+ power laws describe both the mean-square displacement of a random walk as a function
+ of time and the electrical resistance as a function of length. We show that the
+ corresponding power-law exponents satisfy the Einstein relation. For shorter scales,
+ where these exponents depend on length, we find how the Einstein relation can be
+ generalized to hold locally. All these findings are derived analytically and confirmed
+ by numerical simulations.
+ Keywords: Fractals, Power-law behaviours, Einstein relation.
+ arXiv:2301.00296v1 [cond-mat.stat-mech] 31 Dec 2022
+ 1. Introduction
+ Fractals are characterized by quantities that exhibit power-law behaviour in space or
+ time. More precisely, as scale invariance occurs for integer powers of a characteristic
+ length, pure power laws are modulated by logarithmic periodic functions, which describe
+ the departures from the main trend at intermediate scales. These modulations have
+ been the object of recent interest, and considerable effort has been devoted toward
+ understanding the relation between log-periodicity and discrete-scale invariance [1–13].
+ For a given fractal and some related observables which show (modulated) power-law
+ behaviours, a problem of interest is to determine whether or not the exponents
+ associated with these quantities are independent. Sometimes we can expect a relation
+ as a consequence of underlying physical laws. This is, for example, the case of the
+ mass m, the electric resistance R and the mean-square displacement (MSD) ∆r² for
+ a single random walker. On a fractal, the first two grow with length l as m(l) ∼ l^df
+ and R(l) ∼ l^ζ, while the last one grows with time t as ∆r²(t) ∼ t^(2/dw). The exponents
+ df, ζ and dw are known as the fractal, resistance and walk exponents, respectively, and
+ these power-law behaviours hold for scales large enough to ensure self-similarity. In a
+ d-dimensional Euclidean space, the diffusion coefficient D and the conductivity σ are
+ related by the Einstein equation [14]
+ σ = (e²ρ/kBT) D.   (1)
+ Here, D = lim_{t→∞} ∆r²(t)/2t, ρ and e are the density and charge of the mobile particles,
+ T is the temperature and kB is the Boltzmann constant. Equation (1) is one of the forms
+ of the fluctuation-dissipation theorem, and can be used, together with simple scaling
+ heuristic arguments, to argue that the fractal, walk and resistance exponents satisfy
+ the Einstein relation [14]
+ df = dw − ζ.   (2)
+ This property has been shown to hold asymptotically for some finitely ramified
+ fractals [15, 16], and has been used to analyze the periodicity of the oscillations in
+ dynamic observables in the first attempts to understand log-periodic modulation [17].
+ The Einstein relation was also investigated for random walks on weighted graphs [18] and,
+ more recently, for karst network structures [19].
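As a concrete check of Eq. (2), one can plug in the standard exponents of the two-dimensional Sierpinski gasket, a finitely ramified fractal of the type considered in [15, 16]. The numerical values below are the usual literature ones, quoted here as an assumption for illustration.

```python
import math

# Standard exponents for the 2D Sierpinski gasket (literature values,
# assumed here for illustration).
df = math.log(3.0) / math.log(2.0)          # fractal (mass) exponent
dw = math.log(5.0) / math.log(2.0)          # walk exponent
zeta = math.log(5.0 / 3.0) / math.log(2.0)  # resistance exponent

# Eq. (2): df = dw - zeta, so this gap should vanish.
einstein_gap = df - (dw - zeta)
```

The gap vanishes identically because ln 5 − ln(5/3) = ln 3, i.e. Eq. (2) holds exactly for this fractal.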
56
+ A deterministic fractal can be obtained as the limit of a sequence of periodic
+ structures. In this procedure, the period increases at every step as L^n (n = 0, 1, 2, ...),
+ where L is a basic characteristic length scale. Self-similarity is manifested in power-law
+ behaviours, which occur for long enough scales. However, this does not always hold
+ for shorter lengths. Thus, the local slopes of the observables as a function of time or
+ length, in log-log scales, are variable quantities, which approach constant values only
+ asymptotically.
+ In this work we argue that the local fractal, walk and resistance exponents are
+ related through an equation that generalizes (2). This generalization is obtained
+ analytically, following the steady-state method for the calculation of the effective
+ diffusion coefficients for periodic substrates [20]. To further strengthen our findings we
+ perform numerical simulations for two models of fractals, which confirm the theoretical
+ predictions.
+ The paper is organized as follows. In Sec. 2 we relate the diffusion coefficient and the
+ unit-cell resistance for a periodic structure. In Sec. 3 we derive the Einstein relation for
+ self-similar systems. In Sec. 4 we generalize this relation for scale-dependent exponents.
+ In Sec. 5 we confirm the generalized relation by numerical simulations performed on
+ models of asymptotically self-similar substrates. Finally, we give our conclusions in Sec. 6.
+ 2. Periodic systems
+ In this section we address the problem of the diffusion coefficient for a periodic substrate.
+ We follow the steady-state method developed in reference [20]. We start by introducing
+ the periodic substrate with unit cell of linear dimension l, schematized in figure 1, where
+ the points represent sites and the arrows represent hopping rates. On this structure,
+ a mobile particle can jump between connected sites according to the hopping rates k
+ (for the sake of clarity only a few sites and arrows are highlighted). We focus on a
+ steady state of non-interacting particles flowing with a constant current density j.
86
+ Figure 1. Two nearest-neighbor cells f and f + 1, for a periodic substrate with linear
+ size period l. The points represent sites, which can be occupied by mobile particles.
+ The arrows represent hopping rates between pairs of sites. For clarity, only a few sites
+ and hopping rates were highlighted. n(f)r corresponds to the number of particles in the
+ internal site r of cell f.
137
As shown in [20], this steady state consists of a set of microscopic currents distributed with the same periodicity as the substrate. In figure 1, two nearest-neighbor (NN) unit cells are depicted schematically where, for example, n_s^{(f)} represents the number of particles in site s (internal index) of cell f.
Local Einstein relation for fractals

Because of the mentioned periodicity, we get that, for a given pair of connected sites with internal indices r and s,

i_{rs}^{(f)} = i_{rs}^{(f+1)},   (3)
where i_{rs}^{(f)} is the current from site s to site r in cell f. In addition, as the hopping rates do not depend on the cell either, but only on the internal indices, the last equation can be rewritten as

k_{sr}(n_s^{(f)} − n_r^{(f)}) = k_{sr}(n_s^{(f+1)} − n_r^{(f+1)}),   (4)

or

n_s^{(f+1)} − n_s^{(f)} = n_r^{(f+1)} − n_r^{(f)}.   (5)
Therefore, in the steady state, the difference in the occupation number between a given site and the equivalent site in a NN cell is the same for all sites.

The relation of the steady-state problem to the diffusion coefficient D is provided by Fick’s law,

j = −D ∆n / l²,   (6)

which is valid for distances larger than l. Here ∆n corresponds to the particle-number difference between NN cells. Note that D also determines the mean-square displacement ∆²x of a single random walker on the same structure, which behaves as

∆²x(t) = 2Dt,   (7)

for times long enough that ∆x ≫ l.
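As a quick sanity check of equation (7), a discrete-time random walk on a homogeneous one-dimensional lattice can be simulated directly. This is a sketch with our own choices of lattice spacing, step rule and sample sizes (none of them taken from the paper); with unit spacing and a ±1 step per unit time, D = 1/2 and the MSD should grow as 2Dt = t:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D homogeneous lattice: each walker steps +1 or -1 with probability 1/2
# per unit time, so D = 1/2 and eq. (7) gives MSD(t) = 2*D*t = t.
walkers, T = 20000, 200
steps = rng.choice([-1, 1], size=(walkers, T))
x = np.cumsum(steps, axis=1)                  # trajectories, shape (walkers, T)
msd = (x.astype(float) ** 2).mean(axis=0)     # ensemble-averaged MSD at each t
print(msd[-1] / T)                            # close to 1
```

The ratio msd[-1]/T fluctuates around 1 within statistical error, consistent with normal diffusion on the homogeneous substrate.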
Figure 2. Schematics of the equivalence between Fick’s law (left) and Ohm’s law (right). In the mapping, particles have unitary charge, while the other quantities are related as V = n and R = 1/k.
Transforming the steady-state problem into an equivalent electrical problem is straightforward. Indeed, for particles of unitary electric charge, a mapping between Fick’s law and Ohm’s law results from identifying particle number with electrostatic potential (V_a = n_a) and hopping rate with conductance (k = 1/R). In figure 2 we represent this mapping for every pair of connected sites. Following this analogy, we see
that in the electric problem, the potential difference for a pair of equivalent sites in NN cells takes the constant value

∆V = n_r^{(f+1)} − n_r^{(f)},   (8)

and that the difference between particle populations,

∆n = Σ_{r=1}^{M} (n_r^{(f+1)} − n_r^{(f)}) = M ∆V,   (9)

is proportional to the potential difference ∆V, where the constant of proportionality M corresponds to the number of sites per unit cell.
Thus, according to equation (6), we can conclude that, given a periodic substrate with unit cell of linear dimension l and M sites, the diffusion coefficient and the potential difference between two equivalent sites in NN cells are connected through the relation

D = −j l² / (M ∆V),   (10)

where j is the steady-state current density.
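For a one-dimensional periodic chain, equation (10) can be checked directly, since the bonds of a unit cell act as resistors in series under the mapping of figure 2. The following sketch (the function name and the unit lattice spacing are our choices) recovers D = k for a homogeneous chain:

```python
def diffusion_coefficient_1d(rates):
    """Effective D for a 1-D periodic chain via eq. (10), D = -j l^2/(M dV),
    with unit current j = 1 and lattice constant 1, so l = M = len(rates)."""
    M = len(rates)
    j = 1.0
    # Bonds in series: Ohm's law with R_b = 1/k_b gives an occupation drop
    # of j/k_b across each bond, so between equivalent sites of NN cells
    dV = -sum(j / k for k in rates)   # negative: n decreases along the flow
    return -j * M**2 / (M * dV)

# Homogeneous chain with rate k = 1/4 on every bond recovers D = 1/4
print(diffusion_coefficient_1d([0.25] * 8))   # 0.25
```

For inhomogeneous rates this reduces to the harmonic mean of the bond rates, as expected for resistors in series.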
3. Self-similar substrates

Deterministic fractals are usually built by a recursive procedure that results in a sequence of structures called generations. A generation consists of a periodic array of sites connected by bonds. The process begins with a basic periodic structure (zeroth generation). At every step the unit cell is scaled by a factor L, and the building rules ensure that self-similarity is obtained after a large number of iterations.

Following equation (10), the diffusion coefficient D_p for generation p and the potential difference ∆V_p between two equivalent points in NN unit cells are related as

D_p = −j L^{2p} / (M_p ∆V_p),   (11)
where M_p is the number of sites in the unit cell and L_p = L^p is its linear dimension. Then, for two consecutive generations p and p + 1, through which the same steady-state current flows, we obtain

D_p / D_{p+1} = L^{−2} (M_{p+1}/M_p) (∆V_{p+1}/∆V_p).   (12)

Now, since for a fractal the number of sites in a box of linear dimension l behaves as m(l) ∼ l^{df} (i.e., df is the fractal dimension defined through box-counting), M_{p+1}/M_p = (L_{p+1}/L_p)^{df} = L^{df}, and the last equation can be rewritten as

D_p / D_{p+1} = L^{df−2} ∆V_{p+1}/∆V_p.   (13)
As previously shown [7,8], a perfect diffusive self-similar structure corresponds to a ratio D_p/D_{p+1} which does not depend on p, i.e.,

D_p / D_{p+1} = 1 + λ,   (14)

with λ a positive constant. In this model, the mean-square displacement for a single random walker behaves as

∆²x(t) = f(t) t^{2ν}.   (15)

The modulation f(t) is a log-periodic function, f(tτ) = f(t), and both ν and τ can be calculated analytically in terms of L and λ:

ν = 1 / (2 + log(1 + λ)/log(L)),   (16)

τ = L^{1/ν}.   (17)
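Equations (16) and (17) are straightforward to evaluate numerically; a minimal sketch (function names are ours):

```python
import math

def walk_exponent(L, lam):
    """nu from eq. (16): 1 / (2 + log(1 + lam)/log(L))."""
    return 1.0 / (2.0 + math.log(1.0 + lam) / math.log(L))

def log_period(L, lam):
    """tau from eq. (17): L**(1/nu), which equals L**2 * (1 + lam)."""
    return L ** (1.0 / walk_exponent(L, lam))

print(walk_exponent(2, 2))   # 1/(2 + log 3 / log 2), about 0.279
print(log_period(2, 2))      # 2**2 * 3 = 12
```

Note that τ = L^{1/ν} = L² (1 + λ), so the log-period of the modulation is fixed once L and λ are chosen.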
The important partial conclusion in the context of this work is that, according to the above discussion, a perfect diffusive self-similar structure implies a power-law behaviour for the resistance as a function of length. Indeed, equations (13) and (14) lead to

∆V_{p+1}/∆V_p = L^{1/ν−df},   (18)

where we have used 1 + λ = L^{1/ν−2}, from equation (16). Thus, for a perfect diffusive self-similar fractal the potential difference, which corresponds to the steady-state current, scales with length l as

∆V ∼ l^ζ,   (19)

where the exponent ζ is given by

ζ = 1/ν − df,   (20)

which is the Einstein relation (2), with dw = 1/ν.
4. Local exponents

We consider now a generic substrate for which diffusive self-similarity is reached only asymptotically. Let us assume a ratio between consecutive diffusion coefficients that depends on the generation p, as

D_p / D_{p+1} = 1 + λ_p,   (21)

where {λ_p : p = 1, 2, ...} is a sequence of non-negative real numbers with lim_{p→∞} λ_p = λ. Because of this limit, at long enough times a single random walk on this substrate will show a MSD behaviour as in equation (15), and, as pointed out before, for large enough lengths the potential difference will behave as in equation (19), with ν and ζ given by equations (16) and (20).
In this section we focus on local exponents, which correspond to the slopes in log-log scales at finite length or time. As shown, for example, in [8], on a substrate on which the diffusion coefficients for generations p and p + 1 satisfy equation (21), the MSD for a single random walker behaves as

∆²x(t) ∼ t^{2ν_p},   for   L_p ≲ ∆x ≲ L_{p+1},   (22)

with the local exponent ν_p given by

ν_p = 1 / (2 + log(1 + λ_p)/log(L)).   (23)

Then, after rearranging this equation as 1 + λ_p = L^{1/ν_p−2}, which corresponds to the left-hand side of equation (13), we obtain

∆V_{p+1}/∆V_p = L^{1/ν_p−df}.   (24)

Thus, we expect that the potential difference scales with length l as

∆V(l) ∼ l^{ζ_p},   for   L_p ≲ l ≲ L_{p+1},   (25)

and that the local exponents satisfy the relation

ζ_p = 1/ν_p − df.   (26)

Therefore, local slopes in log-log scales for the resistance as a function of length and for the MSD of a single random walker as a function of time are related at all scales through equation (26), which generalizes the Einstein relation.
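The chain (21) → (23) → (26) can be condensed into a few lines; a sketch assuming df is known independently (function names are ours):

```python
import math

def local_walk_exponent(L, lam_p):
    """nu_p from eq. (23): 1 / (2 + log(1 + lam_p)/log(L))."""
    return 1.0 / (2.0 + math.log(1.0 + lam_p) / math.log(L))

def local_resistance_exponent(L, lam_p, df):
    """zeta_p from eq. (26): 1/nu_p - df."""
    return 1.0 / local_walk_exponent(L, lam_p) - df

# lam_p = 0 is the homogeneous limit: nu_p = 1/2 and zeta_p = 2 - df
print(local_resistance_exponent(3, 0.0, 2.0))   # 0.0
```

With a constant λ_p = λ this reduces to the asymptotic relation (20).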
5. Numerical simulations

We study numerically the steady state that corresponds to a unitary current on two models for which diffusive self-similarity appears asymptotically. At finite lengths, the local random-walk exponent ν_p is not constant; thus, we expect the resistance exponent ζ_p to vary as well, related to the former through equation (26).

The first model is a substrate built on a square lattice. A random walk consists of a particle hopping among NN sites. If two sites are connected by a bond, the hopping rate is k = 1/4; if they are not connected, the hopping rate is k = 0. A fractal is obtained by deleting some bonds. The characteristic scale factor is L = 3, and the unit cells for the first, second and third generations are depicted schematically in figure 3. For every generation the unit cell can be separated from the rest by cutting four bonds. As shown in a previous work, the mass on this structure shows a power-law behaviour with df = 2. However, the random-walk exponent ν_p grows with time and approaches a value ν < 1/2 when t → ∞ [8].
We have run numerical simulations on the unit cell of the sixth generation, to reach the steady state in which a unitary current flows between the left and right extremes. In figure 4 we plot with symbols the potential differences for lengths x = 3^i (i = 0, 1, ..., 6), which are the unit-cell linear sizes for generations zero to six. In the same figure, we plot a line using the relation (26) and the numerical values for ν_p, which are the outcomes of random-walk simulations reported in reference [8]. Notice that both data sets fall on the same curve, which confirms the relation (26).
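The kind of steady-state computation described here reduces, under the mapping of section 2, to solving a Kirchhoff (graph-Laplacian) linear system for the site occupations. A minimal sketch for an arbitrary network; the function name and the toy two-bond example are ours, not the paper's substrate:

```python
import numpy as np

def steady_state_potentials(edges, n_sites, source, sink):
    """Site 'potentials' (occupation numbers) for unit current injected at
    `source` and extracted at `sink`: build the conductance (Laplacian)
    matrix G from the hopping rates, fix n[sink] = 0 and solve G n = I."""
    G = np.zeros((n_sites, n_sites))
    for a, b, k in edges:                 # each edge (a, b) with rate k
        G[a, a] += k
        G[b, b] += k
        G[a, b] -= k
        G[b, a] -= k
    I = np.zeros(n_sites)
    I[source], I[sink] = 1.0, -1.0
    keep = [i for i in range(n_sites) if i != sink]   # ground the sink
    n = np.zeros(n_sites)
    n[keep] = np.linalg.solve(G[np.ix_(keep, keep)], I[keep])
    return n

# Two bonds in series with rates 1 and 1/2 (resistances 1 and 2):
n = steady_state_potentials([(0, 1, 1.0), (1, 2, 0.5)], 3, 0, 2)
print(n[0] - n[2])   # total drop 3.0, the series resistance
```

Measuring n at the equivalent sites of NN cells then gives the ∆V entering equations (10) and (11).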
Figure 3. Substrate in two dimensions, which results in scale-dependent walk and resistance exponents. The schematics correspond to the unit cells for the first, second and third generations. The segments represent bonds between sites.
The second model is a generalization of the one-dimensional self-similar model introduced in [7]. We start with a single random walk on a one-dimensional lattice, with a hopping rate k0 between any pair of NN sites. This homogeneous case corresponds to generation zero. We introduce a natural number L to build the other generations.

In the first generation, we reset to k1 < k0 the hopping rate for every pair of sites j and j + 1 with mod(j, L) = 0. The other hopping rates remain as in the zeroth generation. In the second generation, we reset to k2 < k1 the hopping rate for every pair of sites j and j + 1 with mod(j, L²) = 0. The other hopping rates remain as in the first generation. This recursion follows indefinitely, in such a way that generation n is obtained from generation n − 1 after resetting to kn < kn−1 the hopping rate for every pair of sites j and j + 1 with mod(j, L^n) = 0. In figure 5 we show a schematic for L = 5.
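The construction above can be sketched as follows, labelling each bond by its left site j and taking one unit cell of generation n to span j = 1, …, L^n (these conventions and the function name are our choices):

```python
def bond_rates(n, L, k):
    """Hopping rate of each bond (j, j+1), j = 1 ... L**n, for generation n.

    k = [k0, k1, ..., kn]; bond j gets k[m], with m the largest exponent
    (capped at n) such that L**m divides j, as in the construction above."""
    rates = []
    for j in range(1, L ** n + 1):
        m = 0
        while m < n and j % (L ** (m + 1)) == 0:
            m += 1
        rates.append(k[m])
    return rates

# Generation 2, L = 5: bonds at j = 5, 10, 15, 20 get k1; j = 25 gets k2
r = bond_rates(2, 5, [1.0, 0.5, 0.25])
print(r.count(1.0), r.count(0.5), r.count(0.25))   # 20 4 1
```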
Figure 4. Potential difference as a function of length for a unitary current flowing through the unit cell of the sixth-generation substrate in figure 3. The symbols correspond to simulations of the steady state. The line was plotted with the exponents ζ_p from equation (26) and the values of ν_p which result from random-walk numerical simulations.
Figure 5. Schematics of the one-dimensional random-walk model. We begin with a homogeneous lattice and a hopping rate k0 between nearest-neighbor sites. Then, hopping rates are reset to kn for transitions between sites j and j + 1, for every j such that mod(j, L^n) = 0 and for n = 1, 2, .... In this example, L = 5.
If we ask for perfect self-similarity for diffusion, i.e. equation (14), the hopping rates are found iteratively as in reference [7]. For the more general case of equation (21), the sequence of hopping rates is given by

1/k_i = 1/k_{i−1} + (L^i λ_{i−1} / k_0) Π_{j=0}^{i−2} (1 + λ_j),   for i = 1, 2, 3, ...   (27)
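A sketch of the recursion (27); the function name is ours, and lams holds λ0, λ1, …:

```python
def hopping_rates(k0, L, lams):
    """k0, k1, ..., kn from the recursion (27):
    1/k_i = 1/k_{i-1} + (L**i * lam_{i-1}/k0) * prod_{j=0}^{i-2}(1 + lam_j)."""
    inv = [1.0 / k0]
    prod = 1.0                      # running product of (1 + lam_j), j <= i-2
    for i, lam in enumerate(lams, start=1):
        inv.append(inv[-1] + (L ** i) * lam / k0 * prod)
        prod *= 1.0 + lam
    return [1.0 / v for v in inv]

# k0 = 1, L = 2, lam_0 = lam_1 = 1: 1/k1 = 1 + 2 = 3, then 1/k2 = 3 + 8 = 11
print(hopping_rates(1.0, 2, [1.0, 1.0]))
```

A constant sequence lams = [λ, λ, …] gives the perfectly self-similar rates of equation (14).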
We test the validity of the relation (26) among the local exponents for a family of substrates given by

λ_p = λ (1 − 2^{−p/5}).   (28)

At short enough lengths these substrates are nearly homogeneous (λ_p ≈ 0 for p ≪ 5), while, at the other extreme, self-similarity for diffusion is reached for lengths much larger than L^5. The local random-walk exponent (23) decreases with length and approaches asymptotically ν in equation (16). Thus, the variation of ν_p in space increases with λ and, because of equation (26), the same should occur with the variation of ζ_p. This is an interesting model, because the variation of the exponents with length can be adjusted through the parameter λ.
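For this family the local exponents follow directly from equations (28), (23) and (26); a sketch assuming df = 1 for the one-dimensional substrate (our choice of λ and L for illustration):

```python
import math

lam, L, df = 2.0, 2.0, 1.0        # df = 1 for the one-dimensional substrate
for p in range(0, 26, 5):
    lam_p = lam * (1.0 - 2.0 ** (-p / 5.0))                   # eq. (28)
    nu_p = 1.0 / (2.0 + math.log(1.0 + lam_p) / math.log(L))  # eq. (23)
    zeta_p = 1.0 / nu_p - df                                  # eq. (26)
    print(p, round(nu_p, 4), round(zeta_p, 4))
# p = 0 gives the homogeneous values nu = 1/2, zeta = 1; as p grows the
# exponents approach the asymptotic ones of eqs. (16) and (20).
```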
Figure 6. Potential difference as a function of length for a unitary current on the one-dimensional model with λ_p = λ(1 − 2^{−p/5}) and L = 2. (Main) Symbols correspond to data obtained with numerical simulations on a tenth-generation substrate. Lines were drawn using the values of the theoretical exponents. From bottom to top, λ = 1 (red), λ = 2 (green), λ = 4 (violet), λ = 5 (blue). (Inset) More detailed structure for λ = 2.
We have run numerical simulations for the steady state that corresponds to a unitary current flowing on this model, with L = 2 and λ = 1, 2, 4, 5. All substrates were built up to generation 10. In the main panel of figure 6 we plot with symbols the potential difference as a function of the length x, for x = 2^j (j = 0, 1, ..., 9). The lines correspond to the exponents ζ_p obtained from equations (26) and (23). Note the excellent agreement between theory and simulations. The inset in the same figure shows the substructure of ∆V for λ = 2.
6. Conclusions

We have first studied the connection between single random walks and the steady-state potential difference for substrates with spatial periodicity. Then, by considering a sequence of periodic systems, a common procedure for deterministic fractal construction, we find that the length-dependent fractal, walk and resistance exponents, for the substrate obtained in the infinite limit of this sequence, satisfy, at every length scale, the relation (26). This can be considered as a local version of the Einstein relation (2). We have tested our predictions numerically for two models. The first model is a fractal in two dimensions, while the second is a fractal in one dimension. Both models lead to length-dependent exponents at intermediate scales. The excellent agreement between the outcomes of these simulations and the theoretical predictions supports the validity of the mentioned relation among exponents, not only in the asymptotic self-similar limit but also locally, at all length scales.
Acknowledgments

We are grateful to H. O. Mártin for useful discussions. This research was supported by the Universidad Nacional de Mar del Plata, 15/E1040, and the Consejo Nacional de Investigaciones Científicas y Técnicas, PIP1748/21.
References

[1] Peter J. Grabner and Wolfgang Woess. Functional iterations and periodic oscillations for simple random walk on the Sierpiński graph. Stochastic Processes and their Applications, 69(1):127–138, 1997.
[2] L. Acedo and S. B. Yuste. Territory covered by N random walkers on fractal media: The Sierpinski gasket and the percolation aggregate. Phys. Rev. E, 63:011105, Dec 2000.
[3] M. A. Bab, G. Fabricius, and E. V. Albano. On the occurrence of oscillatory modulations in the power law behavior of dynamic and kinetic processes in fractals. EPL (Europhysics Letters), 81(1):10003, 2008.
[4] M. A. Bab, G. Fabricius, and Ezequiel V. Albano. Revisiting random walks in fractal media: On the occurrence of time discrete scale invariance. The Journal of Chemical Physics, 128(4), 2008.
[5] Alberto L. Maltz, Gabriel Fabricius, Marisa A. Bab, and Ezequiel V. Albano. Random walks in fractal media: a theoretical evaluation of the periodicity of the oscillations in dynamic observables. Journal of Physics A: Mathematical and Theoretical, 41(49):495004, 2008.
[6] Sebastian Weber, Joseph Klafter, and Alexander Blumen. Random walks on Sierpinski gaskets of different dimensions. Phys. Rev. E, 82:051129, Nov 2010.
[7] L. Padilla, H. O. Mártin, and J. L. Iguain. Log-periodic modulation in one-dimensional random walks. EPL (Europhysics Letters), 85(2):20008, January 2009.
[8] L. Padilla, H. O. Mártin, and J. L. Iguain. Log-periodic oscillations for diffusion on self-similar finitely ramified structures. Phys. Rev. E, 82:011124, Jul 2010.
[9] L. Padilla, H. Mártin, and J. Iguain. Anomalous diffusion with log-periodic modulation in a selected time interval. Physical Review E, 83(2):2–5, Feb 2011.
[10] Daniel ben-Avraham and Shlomo Havlin. Diffusion and Reactions in Fractals and Disordered Systems. Cambridge University Press, 2000.
[11] Bernhard Krön and Elmar Teufl. Asymptotics of the transition probabilities of the simple random walk on self-similar graphs. Trans. Amer. Math. Soc., 356:393–414, 2004.
[12] L. Padilla, H. O. Mártin, and J. L. Iguain. Anisotropic anomalous diffusion modulated by log-periodic oscillations. Physical Review E, 86(1):011106, Jul 2012.
[13] M. A. Frechero, L. Padilla, H. O. Mártin, and J. L. Iguain. Intermediate-range structure in ion-conducting tellurite glasses. EPL, 103(3):36002, 2013.
[14] Amin Bunde and Shlomo Havlin (Eds.). Fractals and Disordered Systems. Springer, 1996.
[15] J. A. Given and B. B. Mandelbrot. Diffusion on fractal lattices and the fractal Einstein relation. Journal of Physics A: Mathematical and General, 16(15):L565, Oct 1983.
[16] Astrid Franz, Christian Schulzky, and Karl Heinz Hoffmann. The Einstein relation for finitely ramified Sierpinski carpets. Nonlinearity, 14(5):1411, Aug 2001.
[17] Alberto L. Maltz, Gabriel Fabricius, Marisa A. Bab, and Ezequiel V. Albano. Random walks in fractal media: a theoretical evaluation of the periodicity of the oscillations in dynamic observables. Journal of Physics A: Mathematical and Theoretical, 41(49):495004, Oct 2008.
[18] András Telcs. The Einstein Relation for Random Walks on Graphs. Journal of Statistical Physics, 122(4):617–645, 2006.
[19] Martin Hendrick and Philippe Renard. Fractal dimension, walk dimension and conductivity exponent of karst networks around Tulum. Frontiers in Physics, 4, 2016.
[20] C. M. Aldao, J. L. Iguain, and H. O. Mártin. Diffusion of tagged particle in an exclusion process. Surf. Sci., 366:483–490, Apr 1996.
8tAyT4oBgHgl3EQfc_f6/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,310 @@
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf,len=309
2
+ page_content='Local Einstein relation for fractals L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
3
+ page_content=' Padilla, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
4
+ page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
5
+ page_content=' Iguain Instituto de Investigaciones F´ısicas de Mar del Plata (IFIMAR) and Departamento de F´ısica FCEyN, Universidad Nacional de Mar del Plata, De´an Funes 3350, 7600 Mar del Plata, Argentina E-mail: iguain@mdp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
6
+ page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
7
+ page_content='ar Abstract.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
8
+ page_content=' We study single random walks and the electrical resistance for fractals obtained as the limit of a sequence of periodic structures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
9
+ page_content=' In the long-scale regime, power laws describe both the mean-square displacement of a random walk as a function of time and the electrical resistance as a function of length.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
10
+ page_content=' We show that the corresponding power-law exponents satisfy the Einstein relation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
11
+ page_content=' For shorter scales, where these exponents depend on length, we find how the Einstein relation can be generalized to hold locally.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
12
+ page_content=' All these findings were analytically derived and confirmed by numerical simulations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
13
+ page_content=' Keywords: Fractals, Power-law behaviours, Einstein relation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
14
+ page_content=' arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
15
+ page_content='00296v1 [cond-mat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
16
+ page_content='stat-mech] 31 Dec 2022 Local Einstein relation for fractals 2 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
17
+ page_content=' Introduction Fractals are characterized by quantities that exhibit power-law behaviour in space or time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
18
+ page_content=' More precisely, as scale invariance occurs for integer powers of a characteristic length, pure power laws are modulated by logarithmic periodic functions, that describe the departures from the main trend at intermediate scales.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
19
+ page_content=' These modulations have been the object of recent interest and considerable effort has been devoted toward understanding the relation between log-periodicity and discrete-scale invariance [1–13].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
20
+ page_content=' For a given fractal and some related observables, which show (modulated) power- law behaviours, a problem of interest is to determine whether or not the exponents associated with these quantities are independent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
21
+ page_content=' Sometimes we can expect a relation as a consequence of underlying physical laws.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
22
+ page_content=' This is, for example, the case of the mass m, the electric resistance R and the mean-square-displacement (MSD) ∆r2 for a single random walker.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
23
+ page_content=' On a fractal, the first two grow with length l as m(l) ∼ ldf and R(l) ∼ lζ, while the last one grows with time t as ∆r2(t) ∼ t2/dw.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
24
+ page_content=' The exponents df, ζ and dw are known as the fractal, resistance and walk exponents, respectively, and these power-law behaviours hold for scales large enough to ensure self-similarity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
25
+ page_content=' In an d-dimensional euclidean space, the diffusion coefficient D and conductivity σ are related by the Einstein equation [14] σ = e2ρ kBT D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
26
+ page_content=' (1) Here, D = limt→∞ ∆r2(t)/2t, ρ and e are the density and charge of mobile particles, T is the temperature and kB is the Boltzmann constant.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
27
+ page_content=' Equation (1) is one of the forms of the fluctuation-dissipation theorem, and can be used together with simple scaling heuristic arguments, to argue that the fractal, walk, and resistance exponents satisfy the Einstein relation [14] df = dw − ζ, (2) This property has been shown to hold asymptotically for some finitely ramified fractals [15, 16];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
28
+ page_content=' which has been used to analyze the periodicity of the oscillations in dynamic observables, in the first attempts to understand log-periodic modulation [17].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
29
+ page_content=' Einstein relation was also investigated for random walks on weighted graphs [18], and, more recently, for karst networks structures [19].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
30
+ page_content=' A deterministic fractal can be obtained as the limit of a sequence of periodic structures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
31
+ page_content=' In this procedure, the period increases at every step as Ln (n = 0, 1, 2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
32
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
33
+ page_content='), where L is a basic characteristic length scale.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
34
+ page_content=' Self-similarity is manifested in power-law behaviours, which occur for long enough scales.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
35
+ page_content=' However, this does not always hold for shorter lengths.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
36
+ page_content=' Thus, the local slopes of the observables as a function of time or length, in log-log scales, are variable quantities, which approach constant values only asymptotically.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
37
+ page_content=' In this work we argue that the local fractal, walk, and resistance exponents are related through an equation that generalizes (2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
38
+ page_content=' This generalization is obtained Local Einstein relation for fractals 3 analytically, following the steady-state method for the calculation of the effective diffusion coefficients for periodic substrates [20].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
39
+ page_content=' To further strengthen our findings we perform numerical simulations for two models of fractals;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
40
+ page_content=' which confirm the theoretical predictions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
41
+ page_content=' The paper is organized as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
42
+ page_content=' In Sec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
43
+ page_content=' 2 we relate the diffusion coefficient and the unit cell resistance for a periodic structure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
44
+ page_content=' In Sec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
45
+ page_content=' 3 we derive the Einstein relation for self-similar systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
46
+ page_content=' In Sec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
47
+ page_content=' 4 we generalize this relation for scale-dependent exponents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
48
+ page_content=' In Sec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
49
+ page_content=' 5 we confirm the generalized relation by numerical simulations performed on models of asymptotic self-similar substrates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
50
+ page_content=' Finally, we give our conclusions in Sec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
51
+ page_content=' 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
52
2. Periodic systems

In this section we address the problem of the diffusion coefficient for a periodic substrate. We follow the steady-state method developed in reference [20]. We start by introducing the periodic substrate with unit cell of linear dimension l, schematized in figure 1, where the points represent sites and the arrows represent hopping rates. On this structure, a mobile particle can jump between connected sites according to the hopping rates k (for the sake of clarity, only a few sites and arrows are highlighted). We focus on a steady state of non-interacting particles flowing with a constant current density j.
58
Figure 1. Two nearest-neighbor cells f and f+1, for a periodic substrate with linear size period l. The points represent sites, which can be occupied by mobile particles. The arrows represent hopping rates between pairs of sites. For clarity, only a few sites and hopping rates are highlighted. n^(f)_r corresponds to the number of particles in the internal site r of cell f.

As shown in [20], this steady state consists of a set of microscopic currents distributed with the same periodicity as the substrate.
64
In figure 1 two nearest-neighbor (NN) unit cells are depicted schematically where, for example, n^(f)_s represents the number of particles in site s (internal index) of cell f. Because of the mentioned periodicity, we get that for a given pair of connected sites with internal indices r and s,

i^(f)_rs = i^(f+1)_rs,   (3)

where i^(f)_rs is the current from site s to site r in cell f. In addition, as hopping rates do not depend on the cell either, but only on the internal indices, the last equation can be rewritten as

k_sr (n^(f)_s − n^(f)_r) = k_sr (n^(f+1)_s − n^(f+1)_r),   (4)

or

n^(f+1)_s − n^(f)_s = n^(f+1)_r − n^(f)_r.   (5)

Therefore, in the steady state, the difference in the occupation number for a given site and the equivalent site in a NN cell is the same for all sites.
68
The relation of the steady-state problem with the diffusion coefficient D is provided by Fick's law

j = −D ∆n / l^2,   (6)

which is valid for distances larger than l. Here ∆n corresponds to the particle-number difference for NN cells. Note that D also determines the mean-square displacement ∆^2x of a single random walker on the same structure, which behaves as

∆^2x(t) = 2Dt,   (7)

for times long enough that ∆x ≫ l.
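Equation (7) can be checked numerically; the following sketch (my own illustration, not the paper's code) evolves the occupation probability of a walker on a homogeneous one-dimensional lattice and compares the mean-square displacement with 2Dt, where on such a lattice D equals the hopping rate k:

```python
import numpy as np

def msd_evolution(k=0.25, dt=0.1, steps=200, size=401):
    """Euler step of the master equation for a single walker on a
    homogeneous 1D lattice; returns the MSD after each time step."""
    x = np.arange(size) - size // 2
    p = np.zeros(size)
    p[size // 2] = 1.0          # walker starts at the origin
    msd = []
    for _ in range(steps):
        # hop to each nearest neighbor with rate k
        p = p + k * dt * (np.roll(p, 1) + np.roll(p, -1) - 2 * p)
        msd.append((x**2 * p).sum())
    return np.array(msd)

msd = msd_evolution()
t = 0.1 * np.arange(1, 201)
# For the homogeneous lattice D = k, so the MSD should equal 2*k*t
print(np.allclose(msd, 2 * 0.25 * t))   # True
```

The lattice is kept wide enough that the probability never reaches the boundary, so the variance grows by exactly 2k dt per step.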
72
Figure 2. Schematics of the equivalence between Fick's law (left), i = (n_b − n_a) k, and Ohm's law (right), i = (V_a − V_b)/R. In the mapping, particles have unitary charge, while the other quantities are related as V = n and R = 1/k.
75
Transforming the steady-state problem into an equivalent electrical problem is straightforward. Indeed, for particles of unitary electric charge, a mapping between Fick's law and Ohm's law results from identifying particle number with electrostatic potential (V_a = n_a) and hopping rate with conductance (k = 1/R). In figure 2 we represent this mapping for every pair of connected sites. Following this analogy, we see that in the electric problem the potential difference for a pair of equivalent sites in NN cells takes the constant value

∆V = n^(f+1)_r − n^(f)_r,   (8)

and that the difference between particle populations,

∆n = Σ_{r=1}^{M} (n^(f+1)_r − n^(f)_r) = M ∆V,   (9)

is proportional to the potential difference ∆V, where the constant of proportionality M corresponds to the number of sites per unit cell. Thus, according to equation (6), we can conclude that, given a periodic substrate with a unit cell of linear dimension l and M sites, the diffusion coefficient and the potential difference between two equivalent sites in NN cells are connected through the relation

D = −j l^2 / (M ∆V),   (10)

where j is the steady-state current density.
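For a one-dimensional periodic chain, equation (10) can be evaluated directly, since the mapped electric problem reduces to resistors in series. A minimal sketch (the helper below is my own, with lattice spacing 1 so that l = M equals the number of bonds per cell):

```python
def diffusion_coefficient_1d(rates):
    """rates: hopping rates of the bonds inside one unit cell of a 1D
    chain (lattice spacing 1, so l = M = len(rates))."""
    l = len(rates)            # linear size of the unit cell
    M = l                     # sites per unit cell in a 1D chain
    # Ohm's law: resistances 1/k in series give the drop per cell
    # for unit current, which plays the role of |Delta V| in eq. (10)
    dV = sum(1.0 / k for k in rates)
    return l**2 / (M * dV)    # magnitude of equation (10)

# Homogeneous cell: the construction must recover D = k
print(diffusion_coefficient_1d([0.25, 0.25]))   # 0.25
```

For a homogeneous cell this recovers D = k, consistent with equation (7) for the free walker.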
80
3. Self-similar substrates

Deterministic fractals are usually built by a recursive procedure that results in a sequence of structures called generations. A generation consists of a periodic array of sites connected by bonds. The process begins with a basic periodic structure (the zeroth generation). At every step the unit cell is scaled by a factor L, and the building rules ensure that self-similarity is obtained after a large number of iterations.
85
Following equation (10), the diffusion coefficient D_p for generation p and the potential difference ∆V_p between two equivalent points in NN unit cells are related as

D_p = −j L_p^2 / (M_p ∆V_p),   (11)

where M_p is the number of sites in the unit cell, and L_p is its linear dimension. Then, for two consecutive generations p and p+1, through which the same steady-state current flows, we obtain

D_p / D_{p+1} = L^{−2} (M_{p+1}/M_p) (∆V_{p+1}/∆V_p).   (12)

Now, since for a fractal the number of sites in a box with linear dimension l behaves as m(l) ~ l^{d_f} (i.e., d_f is the fractal dimension defined through box-counting), M_{p+1}/M_p = (L_{p+1}/L_p)^{d_f} = L^{d_f}, and the last equation can be rewritten as

D_p / D_{p+1} = L^{d_f − 2} ∆V_{p+1}/∆V_p.   (13)

As previously shown [7, 8], a perfect diffusive self-similar structure corresponds to a ratio D_p/D_{p+1} which does not depend on p, i.e.,

D_p / D_{p+1} = 1 + λ,   (14)

with λ a positive constant.
92
In this model, the mean-square displacement for a single random walker behaves as

∆^2x(t) = f(t) t^{2ν}.   (15)

The modulation f(t) is a log-periodic function, f(tτ) = f(t), and both ν and τ can be analytically calculated in terms of L and λ:

ν = 1 / (2 + log(1+λ)/log(L)),   (16)

τ = L^{1/ν}.   (17)

The important partial conclusion in the context of this work is that, according to the above discussion, a perfect diffusive self-similar structure implies a power-law behaviour for the resistance as a function of length.
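Equations (16) and (17) are straightforward to evaluate; a small sketch (function names are mine):

```python
import math

def walk_exponent(L, lam):
    """Equation (16): anomalous-diffusion exponent nu from L and lambda."""
    return 1.0 / (2.0 + math.log(1.0 + lam) / math.log(L))

def log_period(L, lam):
    """Equation (17): log-periodicity factor tau = L**(1/nu)."""
    return L ** (1.0 / walk_exponent(L, lam))

# With L = 2 and lambda = 1: 1/nu = 2 + log(2)/log(2) = 3
print(walk_exponent(2, 1))   # 0.333... (nu = 1/3)
print(log_period(2, 1))      # 8.0 (tau = 2**3)
```

Note that λ > 0 always gives ν < 1/2, i.e. subdiffusion, and λ = 0 recovers normal diffusion.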
94
Indeed, equations (13) and (14) lead to

∆V_{p+1}/∆V_p = L^{1/ν − d_f},   (18)

where we have used 1 + λ = L^{1/ν − 2}, from equation (16). Thus, for a perfect diffusive self-similar fractal the potential difference, which corresponds to the steady-state current, scales with length l as

∆V ~ l^ζ,   (19)

where the exponent ζ is given by

ζ = 1/ν − d_f,   (20)

which is the Einstein relation (2), with d_w = 1/ν.
97
4. Local exponents

We consider now a generic substrate for which diffusive self-similarity is reached only asymptotically. Let us assume a ratio between consecutive diffusion coefficients that depends on the generation p, as

D_p / D_{p+1} = 1 + λ_p,   (21)

where {λ_p : p = 1, 2, ...} is a sequence of non-negative real numbers, with lim_{p→∞} λ_p = λ. Because of this limit, at long enough times a single random walk on this substrate will show a MSD behaviour as in equation (15), and, as pointed out before, for large enough lengths the potential difference will behave as in equation (19), with ν and ζ given by equations (16) and (20).
105
In this section we focus on local exponents, which correspond to the slopes in log-log scales for finite length or time. As shown for example in [8], on a substrate on which diffusion coefficients for generations p and p+1 satisfy equation (21), the MSD for a single random walker behaves as

∆^2x(t) ~ t^{2ν_p},  for L_p ≲ ∆x ≲ L_{p+1},   (22)

with the local exponent ν_p given by

ν_p = 1 / (2 + log(1 + λ_p)/log(L)).   (23)

Then, after rearranging this equation as 1 + λ_p = L^{1/ν_p − 2}, which corresponds to the left-hand side of equation (13), we obtain

∆V_{p+1}/∆V_p = L^{1/ν_p − d_f}.   (24)

Thus, we expect that the potential difference scales with length l as

∆V(l) ~ l^{ζ_p},  for L_p ≲ l ≲ L_{p+1},   (25)

and that the local exponents satisfy the relation

ζ_p = 1/ν_p − d_f.   (26)

Therefore, the local slopes in log-log scales for the resistance as a function of length and for the MSD of a single random walker as a function of time are related at all scales through equation (26), which generalizes the Einstein relation.
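The local exponents of equations (23) and (26) are easy to tabulate for any given sequence λ_p; the sketch below (function names and the example sequence are mine) shows how ν_p decreases and ζ_p correspondingly increases as λ_p grows toward its limit:

```python
import math

def local_walk_exponent(lam, L):
    """Equation (23): local walk exponent nu_p for one value lambda_p."""
    return 1.0 / (2.0 + math.log(1.0 + lam) / math.log(L))

def local_resistance_exponent(lam, L, df):
    """Equation (26): local resistance exponent zeta_p = 1/nu_p - df."""
    return 1.0 / local_walk_exponent(lam, L) - df

# An increasing sequence lambda_p -> 1 on a substrate with df = 2, L = 3
lams = [1.0 - 2.0 ** (-p) for p in range(1, 8)]
nus = [local_walk_exponent(lam, 3) for lam in lams]
zetas = [local_resistance_exponent(lam, 3, 2.0) for lam in lams]
# nu_p decreases toward its asymptotic value; zeta_p increases with it
print(all(b < a for a, b in zip(nus, nus[1:])))      # True
print(all(b > a for a, b in zip(zetas, zetas[1:])))  # True
```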
110
5. Numerical simulations

We study numerically the steady state that corresponds to a unitary current on two models for which diffusive self-similarity appears asymptotically. At finite lengths, the local random-walk exponent ν_p is not constant. Thus, we expect a similarly varying resistance exponent ζ_p, related to the former through equation (26).
114
The first model is a substrate built on a square lattice. A random walk consists of a particle hopping among NN sites. If the sites are connected by a bond, the hopping rate is k = 1/4; if they are not connected, the hopping rate is k = 0. A fractal is obtained by deleting some bonds. The characteristic scale factor is L = 3, and the unit cells for the first, second and third generations are depicted schematically in figure 3. For every generation the unit cell can be separated from the rest by cutting four bonds. As shown in a previous work, the mass on this structure shows a power-law behaviour with d_f = 2. However, the random-walk exponent ν_p grows with time and approaches a value ν < 1/2 when t → ∞ [8].
123
We have run numerical simulations on the unit cell of the sixth generation, to reach the steady state in which a unitary current flows between the left and right extremes. In figure 4 we plot with symbols the potential differences for lengths x = 3^i (i = 0, 1, ..., 6), which are the unit-cell linear sizes for generations zero to six. In the same figure, we plot a line using the relation (26) and the numerical values for ν_p, which are the outcomes of random-walk simulations reported in reference [8]. Notice that both data sets fall on the same curve, which confirms the relation (26).
129
Figure 3. Substrate in two dimensions, which results in scale-dependent walk and resistance exponents. The schematics correspond to the unit cells for the first, second and third generations. The segments represent bonds between sites.
133
The second model is a generalization of the one-dimensional self-similar model introduced in [7]. We start with a single random walk on a one-dimensional lattice, with a hopping rate k_0 between any pair of NN sites. This homogeneous case corresponds to generation zero. We introduce a natural number L to build the other generations. In the first generation, we reset to k_1 < k_0 the hopping rate for every pair of sites j and j+1 with mod(j, L) = 0. The other hopping rates remain as in the zeroth generation. In the second generation, we reset to k_2 < k_1 the hopping rate for every pair of sites j and j+1 with mod(j, L^2) = 0. The other hopping rates remain as in the first generation. This recursion follows indefinitely, in such a way that generation n is obtained from generation n−1 after resetting to k_n < k_{n−1} the hopping rate for every pair of sites j and j+1 with mod(j, L^n) = 0. In figure 5 we show a schematic for L = 5.
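The construction above can be sketched as follows (the helper is mine, not the authors' code): the bond between sites j and j+1 receives the rate of the highest generation, up to the one built, whose period L^n divides j.

```python
def bond_rate(j, ks, L):
    """Rate of the bond between sites j and j+1 on a generation-n
    substrate, with ks = [k0, k1, ..., kn] and k0 > k1 > ... > kn."""
    m = 0
    # climb to the largest m with mod(j, L**m) = 0
    while m + 1 < len(ks) and j % L ** (m + 1) == 0:
        m += 1
    return ks[m]

ks = [1.0, 0.5, 0.25]          # k0 > k1 > k2: a generation-2 substrate
rates = [bond_rate(j, ks, 5) for j in range(30)]   # L = 5, as in figure 5
print(rates[1], rates[5], rates[25])   # 1.0 0.5 0.25
```

Every fifth bond is slowed to k_1 and every twenty-fifth to k_2, reproducing the hierarchy of figure 5.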
143
Figure 4. Potential difference as a function of length for a unitary current flowing through the unit cell of the sixth-generation substrate in figure 3. The symbols correspond to simulations of the steady state. The line was plotted with the exponents ζ_p from equation (26) and the values of ν_p which result from random-walk numerical simulations.
147
Figure 5. Schematics of the one-dimensional random-walk model. We begin with a homogeneous lattice, with a hopping rate k_0 between nearest-neighbor sites. Then, hopping rates are reset to k_n for transitions between sites j and j+1 for every j such that mod(j, L^n) = 0, for n = 1, 2, .... In this example, L = 5.
153
If we ask for perfect self-similarity for diffusion, i.e. equation (14), the hopping rates are found iteratively as in reference [7]. For the more general case of equation (21), the sequence of hopping rates is given by

1/k_i = 1/k_{i−1} + (L^i λ_{i−1} / k_0) ∏_{j=0}^{i−2} (1 + λ_j),  for i = 1, 2, 3, ...   (27)

We test the validity of the relation (26) among the local exponents for a family of substrates given by

λ_p = λ (1 − 2^{−p/5}).   (28)

At short enough lengths these substrates are nearly homogeneous (λ_p ≈ 0 for p ≪ 5), while, on the other extreme, self-similarity for diffusion is reached for lengths much larger than L^5. The local random-walk exponent (23) decreases with length and approaches asymptotically ν in equation (16). Thus, the variation of ν_p in space increases with λ and, because of equation (26), the same should occur with the variation of ζ_p. This is an interesting model, because the variation of the exponents with length can be adjusted through the parameter λ.
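The hopping-rate recursion of equation (27), applied to the family of equation (28), can be sketched as follows (helper names and the zero-based indexing convention for the λ sequence are mine):

```python
def hopping_rates(lams, L, k0=1.0):
    """Equation (27): returns [k0, k1, ..., kN] for N = len(lams),
    where lams is the sequence of lambda values, zero-indexed."""
    inv = [1.0 / k0]
    prod = 1.0                       # running product of (1 + lambda_j)
    for i in range(1, len(lams) + 1):
        inv.append(inv[-1] + (L ** i) * lams[i - 1] * prod / k0)
        prod *= 1.0 + lams[i - 1]
    return [1.0 / v for v in inv]

lam = 2.0
lams = [lam * (1.0 - 2.0 ** (-p / 5.0)) for p in range(1, 11)]  # eq. (28)
ks = hopping_rates(lams, L=2)
# each generation must slow the marked bonds down: k_i < k_{i-1}
print(all(b < a for a, b in zip(ks, ks[1:])))   # True
```

Since every λ_p in the family is positive, the inverse rates grow monotonically and the resulting substrate is strictly hierarchical.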
164
Figure 6. Potential difference as a function of length for a unitary current on the one-dimensional model with λ_p = λ (1 − 2^{−p/5}), and L = 2. (Main) Symbols correspond to data obtained with numerical simulations on a tenth-generation substrate. Lines were drawn using the values of the theoretical exponents. From bottom to top: λ = 1 (red), λ = 2 (green), λ = 4 (violet), λ = 5 (blue). (Inset) More detailed structure for λ = 2.
171
We have run numerical simulations for the steady state that corresponds to a unitary current flowing on this model, with L = 2 and λ = 1, 2, 4, 5. All substrates were built until generation 10. In figure 6 (main) we plot with symbols the potential difference as a function of the length x, for x = 2^j (j = 0, 1, ..., 9). The lines correspond to the exponents ζ_p obtained from equations (26) and (23). Note the excellent agreement between theory and simulations. The inset in the same figure shows the substructure of ∆V for λ = 2.
179
+ page_content=' 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
180
+ page_content=' Conclusions We have studied first the connection between single random walks and steady-state potential difference for substrates with spatial periodicity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
181
+ page_content=' Then, by considering a sequence of periodic systems, a common procedure for deterministic fractal construction, we find that the length dependent fractal, walk and resistance exponents, for the Local Einstein relation for fractals 11 substrate obtained in the infinite limit of this sequence, satisfy, at every length scale, the relation (26).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
182
+ page_content=' This can be considered as a local version of the Einstein relation (2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
183
+ page_content=' We have tested our predictions numerically for two models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
184
+ page_content=' The first model is a fractal in two dimensions, while the second is a fractal in one dimension.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
185
+ page_content=' Both models lead to length-dependent exponents at intermediate scales.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
186
+ page_content=' The excellent agreement between the outcomes of these simulations and the theoretical predictions supports the validity of the mentioned relation among exponents, not only in the asymptotic self-similar limit but also locally, for all length scales.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
187
+ page_content=' Acknowledgments We are grateful to H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
188
+ page_content=' O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
189
+ page_content=' M´artin for useful discussions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
190
+ page_content=' This research was supported by the Universidad Nacional de Mar del Plata, 15/E1040, and the Consejo Nacional de Investigaciones Cient´ıficas y T´ecnicas, PIP1748/21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
191
+ page_content=' References [1] Peter J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
192
+ page_content=' Grabner and Wolfgang Woess.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
193
+ page_content=' Functional iterations and periodic oscillations for simple random walk on the sierpiń' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
194
+ page_content='ski graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
195
+ page_content=' Stochastic Processes and their Applications, 69(1):127 – 138, 1997.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
196
+ page_content=' [2] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
197
+ page_content=' Acedo and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
198
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
199
+ page_content=' Yuste.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
200
+ page_content=' Territory covered by n random walkers on fractal media: The sierpinski gasket and the percolation aggregate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
201
+ page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
202
+ page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
203
+ page_content=' E, 63:011105, Dec 2000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
204
+ page_content=' [3] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
205
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
206
+ page_content=' Bab, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
207
+ page_content=' Fabricius, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
208
+ page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
209
+ page_content=' Albano.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
210
+ page_content=' On the occurrence of oscillatory modulations in the power law behavior of dynamic and kinetic processes in fractals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
211
+ page_content=' EPL (Europhysics Letters), 81(1):10003, 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
212
+ page_content=' [4] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
213
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
214
+ page_content=' Bab, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
215
+ page_content=' Fabricius, and Ezequiel V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
216
+ page_content=' Albano.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
217
+ page_content=' Revisiting random walks in fractal media: On the occurrence of time discrete scale invariance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
218
+ page_content=' The Journal of Chemical Physics, 128(4):–, 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
219
+ page_content=' [5] Alberto L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
220
+ page_content=' Maltz, Gabriel Fabricius, Marisa A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
221
+ page_content=' Bab, and Ezequiel V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
222
+ page_content=' Albano.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
223
+ page_content=' Random walks in fractal media: a theoretical evaluation of the periodicity of the oscillations in dynamic observables.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
224
+ page_content=' Journal of Physics A: Mathematical and Theoretical, 41(49):495004, 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
225
+ page_content=' [6] Sebastian Weber, Joseph Klafter, and Alexander Blumen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
226
+ page_content=' Random walks on sierpinski gaskets of different dimensions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
227
+ page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
228
+ page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
229
+ page_content=' E, 82:051129, Nov 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
230
+ page_content=' [7] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
231
+ page_content=' Padilla, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
232
+ page_content=' O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
233
+ page_content=' M´artin, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
234
+ page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
235
+ page_content=' Iguain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
236
+ page_content=' Log-periodic modulation in one-dimensional random walks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
237
+ page_content=' EPL (Europhysics Letters), 85(2):20008, January 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
238
+ page_content=' [8] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
239
+ page_content=' Padilla, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
240
+ page_content=' O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
241
+ page_content=' M´artin, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
242
+ page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
243
+ page_content=' Iguain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
244
+ page_content=' Log-periodic oscillations for diffusion on self-similar finitely ramified structures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
245
+ page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
246
+ page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
247
+ page_content=' E, 82:011124, Jul 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
248
+ page_content=' [9] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
249
+ page_content=' Padilla, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
250
+ page_content=' M´artin, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
251
+ page_content=' Iguain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
252
+ page_content=' Anomalous diffusion with log-periodic modulation in a selected time interval.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
253
+ page_content=' Physical Review E, 83(2):2–5, feb 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
254
+ page_content=' [10] Daniel ben Avraham and Shlomo Havlin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
255
+ page_content=' Diffusion and Reactions in Fractals and Disordered Systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
256
+ page_content=' Cambridge University Press, 2000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
257
+ page_content=' [11] Bernhard Kr¨on and Elmar Teufl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
258
+ page_content=' Asymptotics of the transition probabilities of the simple random walk on self-similar graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
259
+ page_content=' Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
260
+ page_content=' Amer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
261
+ page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
262
+ page_content=' Soc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
263
+ page_content=', 356:393–414, 2004.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
264
+ page_content=' [12] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
265
+ page_content=' Padilla, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
266
+ page_content=' O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
267
+ page_content=' M´artin, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
268
+ page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
269
+ page_content=' Iguain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
270
+ page_content=' Anisotropic anomalous diffusion modulated by log- periodic oscillations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
271
+ page_content=' Physical Review E, 86(1):011106, jul 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
272
+ page_content=' [13] Frechero, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
273
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
274
+ page_content=', Padilla, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
275
+ page_content=', M´artin, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
276
+ page_content=' O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
277
+ page_content=', and Iguain, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
278
+ page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
279
+ page_content=' Intermediate-range structure in ion-conducting tellurite glasses.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
280
+ page_content=' EPL, 103(3):36002, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
281
+ page_content=' [14] Amin Bunde and Shlomo Havlin (Eds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
282
+ page_content=').' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
283
+ page_content=' Fractals and Disordered Systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
284
+ page_content=' Springer, 1996.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
285
+ page_content=' [15] J A Given and B B Mandelbrot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
286
+ page_content=' Diffusion on fractal lattices and the fractal einstein relation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
287
+ page_content=' Journal of Physics A: Mathematical and General, 16(15):L565, oct 1983.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
288
+ page_content=' [16] Astrid Franz, Christian Schulzky, and Karl Heinz Hoffmann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
289
+ page_content=' The Einstein relation for finitely ramified Sierpinski carpets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
290
+ page_content=' Nonlinearity, 14(5):1411, aug 2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
291
+ page_content=' [17] Alberto L Maltz, Gabriel Fabricius, Marisa A Bab, and Ezequiel V Albano.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
292
+ page_content=' Random walks in fractal media: a theoretical evaluation of the periodicity of the oscillations in dynamic observables.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
293
+ page_content=' Journal of Physics A: Mathematical and Theoretical, 41(49):495004, oct 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
294
+ page_content=' [18] Andr´as Telcs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
295
+ page_content=' The Einstein Relation for Random Walks on Graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
296
+ page_content=' Journal of Statistical Physics, 122(4):617–645, 2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
297
+ page_content=' [19] Martin Hendrick and Philippe Renard.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
298
+ page_content=' Fractal dimension, walk dimension and conductivity exponent of karst networks around tulum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
299
+ page_content=' Frontiers in Physics, 4, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
300
+ page_content=' [20] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
301
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
302
+ page_content=' Aldao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
303
+ page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
304
+ page_content=' Iguain, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
305
+ page_content=' O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
306
+ page_content=' M´artin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
307
+ page_content=' Diffusion of tagged particle in an exclusion process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
308
+ page_content=' Surf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
309
+ page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
310
+ page_content=', 366:483–490, Apr 1996.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8tAyT4oBgHgl3EQfc_f6/content/2301.00296v1.pdf'}
9NAzT4oBgHgl3EQfFPpp/content/tmp_files/2301.01007v1.pdf.txt ADDED
@@ -0,0 +1,2688 @@
1
+ A Bertrand duopoly game with differentiated products reconsidered
2
+ Xiaoliang Lia and Bo Li∗b
3
+ aSchool of Digital Economics, Dongguan City University, Dongguan 523419, China
4
+ bSchool of Finance, Anhui University of Finance and Economics, Bengbu 233030, China
5
+ Abstract
6
+ In this paper, we explore a dynamic Bertrand duopoly game with differentiated products, where
7
+ firms are boundedly rational and consumers are assumed to possess an underlying CES utility function.
8
+ We mainly focus on two distinct degrees of product substitutability. Several tools based on symbolic
9
+ computations such as the triangular decomposition method and the PCAD method are employed in the
10
+ analytical investigation of the model. The uniqueness of the non-vanishing equilibrium is proved and
11
+ rigorous conditions for the local stability of this equilibrium are established for the first time.
12
+ Most
13
+ importantly, we find that increasing the substitutability degree or decreasing the product differentiation
14
+ has an effect of destabilization for our Bertrand model, which is in contrast with the relative conclusions
15
+ for the Cournot models. This finding could be conducive to the revelation of the essential difference
16
+ between dynamic Cournot and Bertrand oligopolies with differentiated goods.
17
+ In the special case of
18
+ identical marginal costs, we derive that lower degrees of product differentiation mean lower prices, higher
19
+ supplies, lower profits, and lower social welfare. Furthermore, complex dynamics such as periodic orbits
20
+ and chaos are reported through our numerical simulations.
21
+ Keywords: Bertrand duopoly; differentiated product; symbolic computation; local stability
22
+ 1
23
+ Introduction
24
+ It is well known that Cournot [12] developed the first formal theory of oligopoly, which is a market supplied
25
+ by only a few firms. In Cournot’s framework, firms are supposed to make decisions on their quantities of
26
+ outputs and have perfect information on their rivals’ strategic behavior. In the strand of Cournot oligopoly
27
+ models, the market demand function is usually supposed to be linear for simplicity by many economists
28
+ (e.g., Fisher [16], McManus and Quandt[29]). In the real world, however, a non-linear demand is more
29
+ likely to exist. Puu [33] investigated a Cournot duopoly game under an isoelastic market demand, where
30
+ the price is simply the reciprocal of the total supply. Afterward, fruitful contributions including [2, 4, 7,
31
+ 9, 10, 13, 20, 21, 22, 24, 28, 31], were made in the literature on Cournot games. Related to our study,
32
+ Zhang and Zhang [39] considered a Cournot game in which each firm produces multiple products and sells
33
+ them in multiple markets. They obtained sufficient and necessary conditions for the local stability of the
34
+ Cournot-Nash equilibria.
35
+ Several decades later after Cournot’s seminal work, Bertrand [6] proposed a different framework to
36
+ describe oligopolistic competition, where prices rather than quantities are the strategic variables of the
37
+ competitors. Singh and Vives [35] analyzed the duality of prices and quantities, and found that Cournot
38
+ (Bertrand) competition with substitutes is the dual of Bertrand (Cournot) competition with complements.
39
+ L´opez and Naylor [26] compared Cournot and Bertrand equilibria in a downstream differentiated duopoly,
40
+ and proved that the classic conclusion that profits under Cournot equilibrium exceed those under Bertrand
41
+ competition could be reversible in the case of imperfect substitutes. Zhang et al. [40] considered a Bertrand
42
+ model formulated under a linear inverse demand, and obtained the existence and stability of the equilibrium.
43
+ Different from [40], Fanti et al. [15] developed a model with sound microeconomic foundations that deter-
44
+ mine the demand for differentiated products, and showed that synchronized dynamics and intermittency
45
+ phenomena may appear. Naimzada and Tramontana [32] also considered a Cournot-Bertrand duopoly model
46
+ with product differentiation and emphasized the role of best response dynamics and an adaptive adjustment
47
+ mechanism for stability. Brianzoni et al. [8] assumed quadratic costs in the study of the Bertrand duopoly
48
+ ∗Corresponding author: [email protected]
49
+ 1
50
+ arXiv:2301.01007v1 [econ.TH] 3 Jan 2023
51
+
52
game with horizontal product differentiation and discovered synchronized dynamics. Moreover, Ma and Guo [27] studied the impacts of information on the dynamical Bertrand game. They showed that, for a triopoly, there exists a fixed point independent of the amount of information, and that, for a duopoly, the stable region of the adjustment parameter increases with the amount of information.
56
In all the aforementioned Bertrand games, the inverse demand function is supposed to be linear. Instead, Gori and Sodini [17] explored the local and global dynamics of a Bertrand duopoly with a nonlinear demand and horizontal product differentiation. Furthermore, Ahmed et al. [3] proposed a dynamic Bertrand duopoly game with differentiated products, where firms are boundedly rational and consumers are assumed to possess an underlying CES utility function. They only employed numerical simulations to investigate the dynamic behavior of their model because the closed form of the equilibrium is extremely difficult to compute. They observed that the Nash equilibrium loses its stability through a period-doubling bifurcation as the speed of adjustment increases. Motivated by [3], Agliari et al. [1] investigated a Cournot duopoly game with differentiated goods. We should mention that Agliari et al. [1] used the same CES utility function as [3] to derive the demand function of the market. They discovered that a low degree of product substitutability or a higher degree of product differentiation may destabilize the Cournot game. This finding is in accordance with that of Fanti and Gori [14], where the authors introduced a Cournot duopoly with a linear demand and heterogeneous players to study the influence of product differentiation on stability, and found that a higher degree of product differentiation may destabilize the market equilibrium.
71
In this paper, we re-study the Bertrand duopoly game of Ahmed et al. [3] using several tools based on symbolic computations, such as the triangular decomposition method (see, e.g., [23]) and the PCAD method (see, e.g., [11]). It is worth noting that the results of symbolic computations are exact, and thus can provide theoretical foundations for the systematic analysis of economic models. We analytically investigate the local stability and bifurcations of the model. Using these tools, the uniqueness of the non-vanishing equilibrium is proved and the rigorous conditions for the local stability of this equilibrium are obtained for the first time. In the special case that the two companies have identical marginal costs, we prove that the model can lose its stability only through a period-doubling bifurcation. The most important finding is that increasing the substitutability degree or decreasing the product differentiation has an effect of destabilizing the unique non-vanishing equilibrium. A possible explanation is that a decrease in product differentiation may result in an increase in market competition intensity and even a price war, which could lead to the destabilization of the equilibrium. It should be noted that our finding is in contrast with the related conclusions by Agliari et al. [1] and by Fanti and Gori [14]. This contradiction contributes to the literature on the connection between Cournot and Bertrand oligopolies and may help reveal the essential difference between them. In the special case of identical marginal costs, we derive the fact that lower degrees of product differentiation can lead to lower prices, higher supplies, lower profits, and lower social welfare. This fact is in line with our economic intuition. Complex dynamics such as periodic orbits and chaos can be observed through our numerical simulations, which also confirm that an increase in the substitutability degree leads to the emergence of instability in the considered model. Furthermore, we discover the existence of a Neimark-Sacker bifurcation directly on the equilibrium, which is a new finding that had not been discovered by Ahmed et al. [3].
93
The rest of this paper is structured as follows. In Section 2, we revisit the construction of the Bertrand duopoly game investigated in our study. We analytically explore the stability and bifurcations of this model for two different substitutability degrees, namely α = 1/2 and α = 1/3, in Sections 3 and 4, respectively. The influence of the substitutability degree on the local stability of the equilibrium and related comparative statics are discussed in Section 5. Numerical simulations are provided in Section 6. Concluding remarks are given in Section 7.
99
2 Model

In our study, we consider a market where two firms compete with each other and produce differentiated goods. The prices and quantities of the two goods are denoted by pi and qi, respectively, with i = 1, 2. Furthermore, it is assumed that the market possesses a continuum of identical consumers with a CES utility function of the form
105
U(q1, q2) = q1^α + q2^α,
110
where α (0 < α < 1) is called the substitutability degree between the products. Consumers choose their consumption by maximizing the utility subject to the budget constraint

p1 q1 + p2 q2 = 1.

Consequently, we have the following demand functions (the reader can refer to [3] for the proof):

q1 = p2^β / [p1 (p1^β + p2^β)],    q2 = p1^β / [p2 (p1^β + p2^β)],

where β = α/(1 − α). Thus, the inverse demands of the two goods are

p1 = q1^(α−1) / (q1^α + q2^α),    p2 = q2^(α−1) / (q1^α + q2^α).    (1)

Accordingly, a decrease in α would make the products less substitutable or more differentiated. In particular, if α = 0, the inverse demands become p1 = 1/(2 q1) and p2 = 1/(2 q2), which means that the two goods are completely independent. If α = 1, we obtain the inverse demand p1 = p2 = 1/(q1 + q2), which is the same as the famous isoelastic demand function introduced by Puu [33]. In this case, the prices of the two goods are equal. That is to say, the two commodities are regarded as indistinguishable or identical by consumers.
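These relationships can be checked numerically. The sketch below is an illustration (not taken from the paper): with arbitrarily chosen prices p1 = 2, p2 = 3 and α = 1/2, it verifies that the demands exhaust the unit budget and that the inverse demands (1) recover the original prices.

```python
import math

alpha = 0.5
beta = alpha / (1 - alpha)  # beta = 1 for alpha = 1/2

def demands(p1, p2):
    # q_i = p_{-i}^beta / (p_i (p1^beta + p2^beta))
    denom = p1**beta + p2**beta
    return p2**beta / (p1 * denom), p1**beta / (p2 * denom)

def inverse_demands(q1, q2):
    # p_i = q_i^(alpha-1) / (q1^alpha + q2^alpha), equation (1)
    denom = q1**alpha + q2**alpha
    return q1**(alpha - 1) / denom, q2**(alpha - 1) / denom

p1, p2 = 2.0, 3.0                       # illustrative prices (assumption)
q1, q2 = demands(p1, p2)
assert abs(p1*q1 + p2*q2 - 1) < 1e-12   # budget constraint p1 q1 + p2 q2 = 1
r1, r2 = inverse_demands(q1, q2)
assert abs(r1 - p1) < 1e-9 and abs(r2 - p2) < 1e-9  # (1) inverts the demands
```

The same check passes for any positive prices, since the demand system is derived from utility maximization under the unit budget.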
158
The cost functions are assumed to be linear, i.e.,

C1(q1) = c1 q1,    C2(q2) = c2 q2,

where c1 > 0 and c2 > 0. Then the profit of firm i (i = 1, 2) should be
162
Πi(pi, p−i) = pi qi − ci qi = (pi − ci) p−i^β / [pi (pi^β + p−i^β)],    (2)

where p−i denotes the price of the commodity produced by the rival. Furthermore, the gradient adjustment mechanism is formulated as

pi(t + 1) = pi(t) + ki ∂Πi(t)/∂pi(t),

where ki > 0 controls the adjustment speed of firm i. It is known that

∂Πi/∂pi = [−β p−i^β pi^(1+β) + (p−i^(2β) + (1 + β) p−i^β pi^β) ci] / [pi^2 (pi^β + p−i^β)^2].
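The closed-form derivative above can be sanity-checked against a finite difference of the profit function (2). The values β = 1, c1 = 1/2 and the evaluation point below are illustrative choices for this sketch, not from the paper.

```python
# Compare the closed-form marginal profit of firm 1 with a central finite
# difference of Pi_1 itself (illustrative parameter values).
beta, c1 = 1.0, 0.5

def profit1(p1, p2):
    # Pi_1 = (p1 - c1) p2^beta / (p1 (p1^beta + p2^beta)), equation (2)
    return (p1 - c1) * p2**beta / (p1 * (p1**beta + p2**beta))

def dprofit1(p1, p2):
    # stated closed form of dPi_1/dp1
    num = (-beta * p2**beta * p1**(1 + beta)
           + (p2**(2*beta) + (1 + beta) * p2**beta * p1**beta) * c1)
    return num / (p1**2 * (p1**beta + p2**beta)**2)

p1, p2, h = 2.0, 3.0, 1e-6
fd = (profit1(p1 + h, p2) - profit1(p1 - h, p2)) / (2 * h)
assert abs(dprofit1(p1, p2) - fd) < 1e-8  # closed form matches finite difference
```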
199
In short, the model can be described as the following iteration map:

p1(t + 1) = p1(t) + k1 [−β p2^β(t) p1^(1+β)(t) + (p2^(2β)(t) + (1 + β) p2^β(t) p1^β(t)) c1] / [p1^2(t) (p1^β(t) + p2^β(t))^2],
p2(t + 1) = p2(t) + k2 [−β p1^β(t) p2^(1+β)(t) + (p1^(2β)(t) + (1 + β) p1^β(t) p2^β(t)) c2] / [p2^2(t) (p2^β(t) + p1^β(t))^2].    (3)

This game was first explored by Ahmed et al. [3] only through numerical simulations because no analytical expressions of the Nash equilibria are available. In this paper, we reconsider this game using methods based on symbolic computations and explore the influence of the substitutability degree on the local stability of the equilibrium. One can see that for general β, it is impossible to analyze the equilibrium point of map (3), because the system will have an exponential parameter. For such systems with exponential parameters, existing analytical tools are quite limited. Therefore, similar to [3], our study mainly focuses on two specific cases, namely β = 1 and β = 1/2, which correspond to α = 1/2 and α = 1/3, respectively.
268
+
269
+ 3
270
+ α = 1/2
271
+ If α = 1/2, then β = 1. Hence, map (3) becomes
272
+
273
+
274
+
275
+
276
+
277
+
278
+
279
+
280
+
281
+ p1(t + 1) = p1(t) + k1
282
+ −2 p2(t)p2
283
+ 1(t) +
284
+
285
+ p2
286
+ 2(t) + 2 p2(t)p1(t)
287
+
288
+ c1
289
+ p2
290
+ 1(t) (p1(t) + p2(t))2
291
+ ,
292
+ p2(t + 1) = p2(t) + k2
293
+ −2 p1(t)p2
294
+ 2(t) +
295
+
296
+ p2
297
+ 1(t) + 2 p1(t)p2(t)
298
+
299
+ c2
300
+ p2
301
+ 2(t) (p2(t) + p1(t))2
302
+ .
303
+ (4)
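A direct simulation illustrates the behavior of this map. The parameter values below (c1 = c2 = 1/3, k1 = k2 = 0.1, and the starting point) are illustrative choices for this sketch; with them, the orbit converges numerically to (1, 1), which matches the equilibrium (3c, 3c) derived in Section 3.1.

```python
# Iterate map (4) with small adjustment speeds and identical marginal costs
# (illustrative parameter values).
c1 = c2 = 1/3
k1 = k2 = 0.1

def step(p1, p2):
    g1 = (-p2 * p1**2 + (p2**2 + 2*p2*p1) * c1) / (p1**2 * (p1 + p2)**2)
    g2 = (-p1 * p2**2 + (p1**2 + 2*p1*p2) * c2) / (p2**2 * (p2 + p1)**2)
    return p1 + k1 * g1, p2 + k2 * g2

p1, p2 = 0.8, 1.2           # arbitrary positive starting prices (assumption)
for _ in range(2000):
    p1, p2 = step(p1, p2)
assert abs(p1 - 1) < 1e-8 and abs(p2 - 1) < 1e-8  # converges to (3c, 3c) = (1, 1)
```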
304
From an economic point of view, it is important to identify the number of non-vanishing equilibria (p1, p2) with p1 > 0 and p2 > 0. In order to compute the equilibrium, we set p1(t + 1) = p1(t) = p1 and p2(t + 1) = p2(t) = p2. Then the following equations of the equilibrium are acquired:

−p2 p1^2 + (p2^2 + 2 p2 p1) c1 = 0,
−p1 p2^2 + (p1^2 + 2 p1 p2) c2 = 0.    (5)
323
The triangular decomposition method, which can be viewed as an extension of the Gaussian elimination method, permits us to analyze the equilibria of non-linear economic models. Both the method of triangular decomposition and the method of Gaussian elimination can transform a system into triangular forms. However, the triangular decomposition method is feasible for polynomial systems, while the Gaussian elimination method is just for linear systems. Refer to [5, 19, 23, 36, 37] for more information on triangular decomposition. Specifically, using the triangular decomposition method, we can decompose the solutions of system (5) into the zeros of the following two triangular polynomial sets:

T11 = [p1, p2],
T12 = [p1^3 − 4 c1 p1^2 + (4 c1^2 − 2 c1 c2) p1 + 3 c1^2 c2,  c1 p2 − p1^2 + 2 c1 p1].    (6)
342
The zero of T11 corresponds to the origin (0, 0). Moreover, the non-vanishing equilibria can be computed from T12. The first polynomial p1^3 − 4 c1 p1^2 + (4 c1^2 − 2 c1 c2) p1 + 3 c1^2 c2 of T12 is univariate in p1, and the second polynomial c1 p2 − p1^2 + 2 c1 p1 of T12 has degree 1 with respect to p2. Consequently, if we solve p1 from the first polynomial, then we can substitute the solution of p1 into the second polynomial and easily obtain p2. As the first polynomial of T12 has degree 3 with respect to p1, we know that there are at most 3 positive real solutions. Their analytical expressions exist but are quite complicated, though.
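The back-substitution procedure just described can be sketched numerically. The values c1 = 1, c2 = 1/2 and the root bracket below are illustrative assumptions; the code solves the univariate cubic of T12 by bisection, recovers p2 from the linear polynomial, and checks that the pair satisfies both equilibrium equations (5).

```python
# Solve the cubic of T12 for p1, back-substitute for p2, and verify (5)
# (illustrative parameter values c1 = 1, c2 = 1/2).
c1, c2 = 1.0, 0.5

def cubic(p1):
    return p1**3 - 4*c1*p1**2 + (4*c1**2 - 2*c1*c2)*p1 + 3*c1**2*c2

# Economically valid roots need p2 = (p1^2 - 2 c1 p1)/c1 > 0, i.e. p1 > 2 c1,
# and the cubic changes sign once on the bracket [2, 4] for these values.
lo, hi = 2.0, 4.0
for _ in range(200):
    mid = (lo + hi) / 2
    if cubic(lo) * cubic(mid) <= 0:
        hi = mid
    else:
        lo = mid
p1 = (lo + hi) / 2
p2 = (p1**2 - 2*c1*p1) / c1   # from c1 p2 - p1^2 + 2 c1 p1 = 0
assert p1 > 0 and p2 > 0
assert abs(-p2*p1**2 + (p2**2 + 2*p2*p1)*c1) < 1e-9   # first equation of (5)
assert abs(-p1*p2**2 + (p1**2 + 2*p1*p2)*c2) < 1e-9   # second equation of (5)
```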
353
It is not an easy task to identify the exact number of positive real solutions if the analytical solutions of T12 are complicated. However, the first author of this paper and his co-worker [25] proposed an algebraic algorithm to systematically identify multiplicities of equilibria in semi-algebraic economies without obtaining the closed-form solutions. We summarize the computational results for map (4) in Proposition 1. Interested readers can refer to Section 3 of [25] for additional details of the algorithm.

Proposition 1. Let α = 1/2. The iteration map (4) possesses one unique equilibrium (p1, p2) with p1 > 0 and p2 > 0.
360
To explore the local stability of the equilibrium, the following Jacobian matrix plays an important role:

J = [ J11  J12
      J21  J22 ],

where

J11 = [p1^6 + 3 p1^5 p2 + 3 p1^4 p2^2 + (p2^3 + 2 k1 p2) p1^3 − 6 k1 p2 p1^2 c1 − 6 k1 p2^2 p1 c1 − 2 c1 k1 p2^3] / [p1^3 (p1 + p2)^3],
J12 = k1 (2 c1 − p1 + p2) / (p1 + p2)^3,
J21 = k2 (2 c2 + p1 − p2) / (p1 + p2)^3,
J22 = [p2^6 + 3 p1 p2^5 + 3 p1^2 p2^4 + (p1^3 + 2 k2 p1) p2^3 − 6 k2 p1 p2^2 c2 − 6 k2 p1^2 p2 c2 − 2 c2 k2 p1^3] / [p2^3 (p1 + p2)^3].
411
Then the characteristic polynomial of J is

CP(λ) = λ^2 − Tr(J) λ + Det(J),

where Tr(J) = J11 + J22 and Det(J) = J11 J22 − J12 J21 are the trace and the determinant of J, respectively. According to the Jury criterion [18], the conditions for the local stability include:

1. CD1^J ≡ CP(1) = 1 − Tr(J) + Det(J) > 0,
2. CD2^J ≡ CP(−1) = 1 + Tr(J) + Det(J) > 0,
3. CD3^J ≡ 1 − Det(J) > 0.

Remark 1. Furthermore, it is known that the discrete dynamic system may undergo a fold, period-doubling, or Neimark-Sacker bifurcation when the equilibrium loses its stability at CD1^J = 0, CD2^J = 0, or CD3^J = 0, respectively.
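The three Jury conditions can be packaged in a small helper; the numeric 2x2 matrix used below is a made-up example for illustration only.

```python
# Evaluate the three Jury stability conditions for a 2x2 Jacobian.
def jury_conditions(j11, j12, j21, j22):
    tr = j11 + j22
    det = j11 * j22 - j12 * j21
    cd1 = 1 - tr + det   # CP(1)  > 0: rules out a fold bifurcation
    cd2 = 1 + tr + det   # CP(-1) > 0: rules out a period-doubling bifurcation
    cd3 = 1 - det        # Det(J) < 1: rules out a Neimark-Sacker bifurcation
    return cd1, cd2, cd3

# Illustrative matrix (assumption): all three conditions hold, so an
# equilibrium with this Jacobian would be locally stable.
cd1, cd2, cd3 = jury_conditions(0.5, 0.1, 0.2, 0.4)
assert cd1 > 0 and cd2 > 0 and cd3 > 0
```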
427
3.1 The special case of c1 = c2

If we set c1 = c2 = c in (5), then the triangular decomposition method permits us to transform the equilibrium equations (5) into the following three triangular sets:

T21 = [p1, p2],
T22 = [p1 − 3 c, p2 − 3 c],
T23 = [p1^2 − c p1 − c^2,  p2 + p1 − c].
435
The zero of T21 is simply (0, 0). From T23, we obtain two zeros¹

((1/2 + √5/2) c, (1/2 − √5/2) c),    ((1/2 − √5/2) c, (1/2 + √5/2) c),

which are useless as the component (1/2 − √5/2) c is negative. Therefore, the only non-vanishing equilibrium is (3 c, 3 c), which can be obtained from T22.
478
Theorem 1. Let α = 1/2 and c1 = c2 = c. The unique non-vanishing equilibrium (3 c, 3 c) is locally stable if

c^2 > [2 k1 + 2 k2 + √(4 k1^2 − 7 k1 k2 + 4 k2^2)] / 216.

The system may undergo a period-doubling bifurcation when

c^2 = [2 k1 + 2 k2 + √(4 k1^2 − 7 k1 k2 + 4 k2^2)] / 216.

Furthermore, there exist no other bifurcations of the equilibrium.
496
Proof. Substituting p1 = 3 c and p2 = 3 c into J, we obtain the Jacobian matrix at (3 c, 3 c) to be

J(3 c, 3 c) = [ (27 c^2 − k1)/(27 c^2)    k1/(216 c^2)
               k2/(216 c^2)              (27 c^2 − k2)/(27 c^2) ].

Consequently,

Tr(J) = (54 c^2 − k1 − k2) / (27 c^2),
Det(J) = (5184 c^4 − 192 c^2 k1 − 192 c^2 k2 + 7 k1 k2) / (5184 c^4).

¹These zeros can also be obtained from T12 in (6) by setting c1 = c2 = c.
519
One can verify that the first condition for the local stability is always fulfilled since k1, k2, c > 0 and

CD1^J ≡ 1 − Tr(J) + Det(J) = 5 k1 k2 / (3888 c^4).

The second condition is

CD2^J ≡ 1 + Tr(J) + Det(J) = [15552 c^4 + (−288 k1 − 288 k2) c^2 + 5 k1 k2] / (3888 c^4) > 0,

which means that

15552 c^4 + (−288 k1 − 288 k2) c^2 + 5 k1 k2 > 0,

i.e.,

c^2 > [2 k1 + 2 k2 + √(4 k1^2 − 7 k1 k2 + 4 k2^2)] / 216   or   c^2 < [2 k1 + 2 k2 − √(4 k1^2 − 7 k1 k2 + 4 k2^2)] / 216.

The third condition is

CD3^J ≡ 1 − Det(J) = [(144 k1 + 144 k2) c^2 − 5 k1 k2] / (3888 c^4) > 0,

which implies that

(144 k1 + 144 k2) c^2 − 5 k1 k2 > 0,

i.e.,

c^2 > 5 k1 k2 / [144 (k1 + k2)].

Furthermore, it can be proved that

[2 k1 + 2 k2 − √(4 k1^2 − 7 k1 k2 + 4 k2^2)] / 216 < 5 k1 k2 / [144 (k1 + k2)] < [2 k1 + 2 k2 + √(4 k1^2 − 7 k1 k2 + 4 k2^2)] / 216.

Accordingly, the equilibrium is locally stable if

c^2 > [2 k1 + 2 k2 + √(4 k1^2 − 7 k1 k2 + 4 k2^2)] / 216.

The rest of the proof follows immediately from Remark 1.
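The threshold in Theorem 1 can be cross-checked against the spectral radius of J(3 c, 3 c). The values k1 = k2 = 1 and the two test points c = 0.2 and c = 0.1 below are illustrative choices: one lies on each side of the threshold.

```python
import math

# Cross-check Theorem 1: the spectral radius of J(3c, 3c) is below 1
# exactly when c^2 exceeds the stated threshold (illustrative k1 = k2 = 1).
k1 = k2 = 1.0
threshold = (2*k1 + 2*k2 + math.sqrt(4*k1**2 - 7*k1*k2 + 4*k2**2)) / 216

def spectral_radius(c):
    j11 = (27*c**2 - k1) / (27*c**2)
    j12 = k1 / (216*c**2)
    j21 = k2 / (216*c**2)
    j22 = (27*c**2 - k2) / (27*c**2)
    tr, det = j11 + j22, j11*j22 - j12*j21
    disc = tr*tr - 4*det
    if disc >= 0:                       # real eigenvalues
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return math.sqrt(det)               # complex pair: modulus is sqrt(Det)

assert 0.2**2 > threshold and spectral_radius(0.2) < 1   # stable side
assert 0.1**2 < threshold and spectral_radius(0.1) > 1   # unstable side
```

In the unstable case the dominant eigenvalue is real and below −1, consistent with the period-doubling route reported in the theorem.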
580
Figure 1 depicts two 2-dimensional cross-sections of the stability region reported in Theorem 1. It is observed that an increase in the marginal cost c or a decrease in the adjustment speeds k1, k2 has an effect of stabilizing the unique non-vanishing equilibrium.

(a) k2 = 1/10    (b) c = 1/3

Figure 1: The 2-dimensional cross-sections of the stability region of the considered model with α = 1/2 and c1 = c2 = c. The curves of CD2^J = 0 and CD3^J = 0 are marked in blue and green, respectively.
591
3.2 The general case

If c1 ≠ c2, then the analytical expression of the unique non-vanishing equilibrium would be quite complicated. Thus, the proof of Theorem 1 cannot work, since it is impossible to substitute the analytical expression of the equilibrium into the Jacobian matrix and obtain a neat result. Concerning the bifurcation analysis, we need to determine the conditions on the parameters such that CD1^J = 0, CD2^J = 0, and CD3^J = 0 are satisfied at the non-vanishing equilibrium. For this purpose, the following notation is required.
601
Definition 1. Let

A = Σ_{i=0}^{m} ai x^i,    B = Σ_{j=0}^{l} bj x^j

be two univariate polynomials in x with coefficients ai, bj, and am, bl ≠ 0. The determinant of the (m + l) × (m + l) matrix

| am  am−1  ···  a0                  |
|     am   am−1  ···  a0             |   (l rows of a-coefficients)
|          ⋱          ⋱              |
| bl  bl−1  ···  b0                  |
|     bl   bl−1  ···  b0             |   (m rows of b-coefficients)
|          ⋱          ⋱              |

is called the Sylvester resultant (or simply resultant) of A and B with respect to x, and denoted by res(A, B, x).
647
The following lemma reveals the main property of the resultant, which can also be found in [30].

Lemma 1. Let A and B be two univariate polynomials in x. There exist two polynomials F and G in x such that

F A + G B = res(A, B, x).

Furthermore, A and B have common zeros in the field of complex numbers if and only if res(A, B, x) = 0.
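Definition 1 and Lemma 1 can be illustrated directly: the sketch below (an illustration, not the paper's implementation) builds the Sylvester matrix from coefficient lists and evaluates its determinant exactly with rational arithmetic.

```python
from fractions import Fraction

def det(m):
    # Exact determinant by cofactor expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    total = Fraction(0)
    for j, a in enumerate(m[0]):
        if a:
            minor = [row[:j] + row[j+1:] for row in m[1:]]
            total += (-1)**j * a * det(minor)
    return total

def resultant(A, B):
    # A, B: coefficient lists, highest degree first (Definition 1).
    m, l = len(A) - 1, len(B) - 1
    n = m + l
    rows = []
    for i in range(l):   # l shifted copies of A's coefficients
        rows.append([Fraction(0)]*i + [Fraction(a) for a in A] + [Fraction(0)]*(n - m - i - 1))
    for i in range(m):   # m shifted copies of B's coefficients
        rows.append([Fraction(0)]*i + [Fraction(b) for b in B] + [Fraction(0)]*(n - l - i - 1))
    return det(rows)

# x^2 - 1 and x - 1 share the root x = 1, so the resultant vanishes;
# x^2 - 1 and x - 2 have no common root, so it does not (Lemma 1).
assert resultant([1, 0, -1], [1, -1]) == 0
assert resultant([1, 0, -1], [1, -2]) == 3
```

Cofactor expansion is exponential in the matrix size, so for the large resultants appearing below a computer algebra system would be used in practice; the point here is only the construction itself.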
652
For a triangular set T = [T1(x), T2(x, y)] and a polynomial H(x, y), we define

res(H, T) ≡ res(res(H, T2, y), T1(x), x).

By Lemma 1, if T1 = 0 and T2 = 0 (or simply T = 0), then H = 0 implies res(H, T) = 0, which means res(H, T) = 0 is a necessary condition for H = 0. Consequently, the following proposition is acquired. It should be emphasized that Proposition 2 only reports the results for the case of k1 = k2 because the conditions for k1 ≠ k2 are too long to list in this paper due to space limitations. However, readers can see that the idea of the proof also works for k1 ≠ k2 and can derive the complete conditions themselves.
660
Proposition 2. Let α = 1/2 and k1 = k2 = k. The system may undergo a period-doubling bifurcation when R1 = 0 and a Neimark-Sacker bifurcation when R2 = 0, where R1 and R2 are given in the Appendix.

Proof. It should be noted that the resultant is feasible only for polynomials. For CD1^J, we consider its numerator Num(CD1^J). Then one can obtain that

res(Num(CD1^J), T12) = 81 k^6 c1^18 c2^6 (c1 + c2) (32 c1^2 + 61 c1 c2 + 32 c2^2).

Since c1 > 0, c2 > 0, and k > 0, it is impossible that res(Num(CD1^J), T12) = 0 or CD1^J = 0 provided that T12 = 0. Hence, the equilibrium cannot lose its stability through a fold bifurcation. Furthermore, we have

res(Num(CD2^J), T12) = −729 c1^32 c2^8 (c1 + c2) R1,
res(Num(CD3^J), T12) = 729 k^3 c1^32 c2^8 (c1 + c2) R2,

which will vanish only if R1 = 0 and R2 = 0, respectively. Consequently, the system may undergo a period-doubling bifurcation when R1 = 0 and a Neimark-Sacker bifurcation when R2 = 0.
693
By Proposition 1, there exists only one equilibrium (p1, p2) with p1 > 0 and p2 > 0, although its analytical expression is complicated. To explore the local stability, we need to determine the signs of CD1^J, CD2^J, and CD3^J at this equilibrium without using its closed form. It should be noted that CD1^J, CD2^J, and CD3^J are rational functions. Suppose that

CDi^J = Num(CDi^J) / Den(CDi^J),

where Num(·) and Den(·) denote the numerator and the denominator, respectively. Then the sign of CDi^J is the same as that of Num(CDi^J) · Den(CDi^J) if Den(CDi^J) ≠ 0. One could compute that

res(Num(CD1^J) · Den(CD1^J), T12) = −1594323 k^6 c1^50 c2^17 (c1 + c2)^6 (32 c1^2 + 61 c1 c2 + 32 c2^2),
res(Num(CD2^J) · Den(CD2^J), T12) = 129140163 c1^70 c2^22 (c1 + c2)^6 R1,

and

res(Num(CD3^J) · Den(CD3^J), T12) = −129140163 k^3 c1^70 c2^22 (c1 + c2)^6 R2.

We should emphasize that the sign of res(Num(CDi^J) · Den(CDi^J), T12) may not be the same as that of Num(CDi^J) · Den(CDi^J) or CDi^J. However, it is known that res(Num(CDi^J) · Den(CDi^J), T12) involves only the parameters, and its zeros divide the parameter space into several regions. In each region, the sign of CDi^J is invariant. Consequently, we just need to select one sample point from each region and identify the sign of CDi^J at the selected sample point. The selection of sample points might be extremely complicated in general and could be automated using, e.g., the PCAD method [11].

In Table 1, we list all the selected sample points and the corresponding information on whether the non-vanishing equilibrium is stable, i.e., whether CD1^J > 0, CD2^J > 0, and CD3^J > 0 are simultaneously satisfied. Moreover, Table 1 displays the signs of R1 and R2 at these sample points. One can observe that the equilibrium is stable if R1 > 0 and R2 > 0, and vice versa. It should be mentioned that the calculations involved in Table 1 are exact and rigorous. That is, the computational results provide theoretical foundations for a systematic analysis of the local stability. Therefore, we acquire the following theorem.

Theorem 2. If k1 = k2 = k, the unique non-vanishing equilibrium (p1, p2) with p1 > 0 and p2 > 0 is locally stable if R1 > 0 and R2 > 0, where R1 and R2 can be found in the Appendix.
758
Table 1: Selected Sample Points in {(c1, c2, k) | c1 > 0, c2 > 0, k > 0} for α = 1/2

(c1, c2, k)    stable  R1  R2      (c1, c2, k)    stable  R1  R2
(1, 1/4, 1)    yes     +   +       (1, 5/16, 1)   yes     +   +
(1, 1/4, 7)    no      −   +       (1, 5/16, 10)  no      −   +
(1, 1/4, 29)   no      −   −       (1, 5/16, 30)  no      −   −
(1, 1/4, 51)   no      +   −       (1, 5/16, 51)  no      +   −
(1, 1/2, 1)    yes     +   +       (1, 7/8, 1)    yes     +   +
(1, 1/2, 18)   no      −   +       (1, 7/8, 38)   no      −   +
(1, 1/2, 35)   no      −   −       (1, 7/8, 51)   no      −   −
(1, 1/2, 53)   no      +   −       (1, 7/8, 65)   no      +   −
(1, 9/8, 1)    yes     +   +       (1, 2, 1)      yes     +   +
(1, 9/8, 49)   no      −   +       (1, 2, 70)     no      −   +
(1, 9/8, 66)   no      −   −       (1, 2, 140)    no      −   −
(1, 9/8, 83)   no      +   −       (1, 2, 209)    no      +   −
(1, 3, 1)      yes     +   +       (1, 4, 1)      yes     +   +
(1, 3, 91)     no      −   +       (1, 4, 112)    no      −   +
(1, 3, 272)    no      −   −       (1, 4, 462)    no      −   −
(1, 3, 453)    no      +   −       (1, 4, 811)    no      +   −
897
4 α = 1/3

If α = 1/3, then β = 1/2. We have the iteration map

p1(t + 1) = p1(t) + k1 [−p1(t) √(p1(t) p2(t)) + (2 p2(t) + 3 √(p1(t) p2(t))) c1] / [2 p1^2(t) (√p1(t) + √p2(t))^2],
p2(t + 1) = p2(t) + k2 [−p2(t) √(p1(t) p2(t)) + (2 p1(t) + 3 √(p1(t) p2(t))) c2] / [2 p2^2(t) (√p1(t) + √p2(t))^2].    (7)
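As with map (4), a direct simulation illustrates the dynamics. The parameter values below (c1 = c2 = 0.2, k1 = k2 = 0.1, and the starting point) are illustrative assumptions; with them, the orbit converges numerically to (1, 1), which matches the equilibrium (5c, 5c) obtained in Section 4.1.

```python
import math

# Iterate map (7) with small adjustment speeds and identical marginal costs
# (illustrative parameter values).
c1 = c2 = 0.2
k1 = k2 = 0.1

def step(p1, p2):
    s = math.sqrt(p1 * p2)
    d = (math.sqrt(p1) + math.sqrt(p2))**2
    g1 = (-p1 * s + (2*p2 + 3*s) * c1) / (2 * p1**2 * d)
    g2 = (-p2 * s + (2*p1 + 3*s) * c2) / (2 * p2**2 * d)
    return p1 + k1 * g1, p2 + k2 * g2

p1, p2 = 0.9, 1.1           # arbitrary positive starting prices (assumption)
for _ in range(4000):
    p1, p2 = step(p1, p2)
assert abs(p1 - 1) < 1e-8 and abs(p2 - 1) < 1e-8  # converges to (5c, 5c) = (1, 1)
```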
956
By setting p1(t + 1) = p1(t) = p1 and p2(t + 1) = p2(t) = p2, one can obtain the equations of the equilibrium

−p1 √(p1 p2) + (2 p2 + 3 √(p1 p2)) c1 = 0,
−p2 √(p1 p2) + (2 p1 + 3 √(p1 p2)) c2 = 0.

Denote √p1 = x and √p2 = y. The above equations become

−x^3 y + (2 y^2 + 3 x y) c1 = 0,
−y^3 x + (2 x^2 + 3 x y) c2 = 0.    (8)

Using the triangular decomposition method, we decompose the solutions of system (8) into the zeros of the following two triangular sets:

T31 = [x, y],
T32 = [x^8 − 9 c1 x^6 + 27 c1^2 x^4 + (−27 c1^3 − 12 c1^2 c2) x^2 + 20 c1^3 c2,  2 c1 y − x^3 + 3 c1 x].

Evidently, T31 corresponds to the origin (0, 0). Therefore, the identification of the number of non-vanishing equilibria can be transformed into the determination of the number of real solutions of the following semi-algebraic system:

x^8 − 9 c1 x^6 + 27 c1^2 x^4 + (−27 c1^3 − 12 c1^2 c2) x^2 + 20 c1^3 c2 = 0,
2 c1 y − x^3 + 3 c1 x = 0,
x > 0, y > 0.

Using the algebraic approach by Li and Wang [25], we know that the above system has one unique real solution for any parameter values c1, c2 > 0, which implies the following proposition.
997
Proposition 3. Let α = 1/3. The iteration map (7) possesses one unique equilibrium (p1, p2) with p1 > 0 and p2 > 0.
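The semi-algebraic system above can be illustrated numerically. The values c1 = 1, c2 = 2 and the root bracket below are illustrative assumptions; the code locates the positive root of the degree-8 polynomial of T32 with y > 0, back-substitutes, and checks both equilibrium equations (8).

```python
import math

# Locate the economically valid solution of the semi-algebraic system
# (illustrative parameter values c1 = 1, c2 = 2).
c1, c2 = 1.0, 2.0

def poly(x):
    return (x**8 - 9*c1*x**6 + 27*c1**2*x**4
            + (-27*c1**3 - 12*c1**2*c2)*x**2 + 20*c1**3*c2)

# y = (x^3 - 3 c1 x)/(2 c1) > 0 requires x^2 > 3 c1, so bracket the root there.
lo, hi = math.sqrt(3*c1), 10.0
for _ in range(200):
    mid = (lo + hi) / 2
    if poly(lo) * poly(mid) <= 0:
        hi = mid
    else:
        lo = mid
x = (lo + hi) / 2
y = (x**3 - 3*c1*x) / (2*c1)
assert x > 0 and y > 0
assert abs(-x**3*y + (2*y**2 + 3*x*y)*c1) < 1e-6   # first equation of (8)
assert abs(-y**3*x + (2*x**2 + 3*x*y)*c2) < 1e-6   # second equation of (8)
```

The recovered equilibrium prices are then (p1, p2) = (x^2, y^2).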
999
To investigate the local stability of the equilibrium, we consider the Jacobian matrix

M = [ M11  M12
      M21  M22 ],

where

M11 = [12 p1^(9/2) √p2 + 4 p1^(7/2) p2^(3/2) − 15 c1 k1 p1^(3/2) √p2 − 8 c1 k1 p2^(3/2) √p1 + 3 k1 p1^(5/2) √p2 + 4 p1^5 + 12 p1^4 p2 − 21 c1 k1 p1 p2 + k1 p1^2 p2] / [4 p1^(7/2) (√p1 + √p2)^3],
M12 = k1 [√p2 p1^(3/2) − p1^2 + c1 √p2 √p1 + 3 p1 c1] / [4 p1^2 (√p1 + √p2)^3 √p2],
M21 = k2 [√p1 p2^(3/2) + c2 √p2 √p1 + 3 p2 c2 − p2^2] / [4 p2^2 (√p1 + √p2)^3 √p1],
M22 = [4 p1^(3/2) p2^(7/2) + 12 p2^(9/2) √p1 − 8 c2 k2 p1^(3/2) √p2 − 15 c2 k2 p2^(3/2) √p1 + 3 k2 p2^(5/2) √p1 + 12 p1 p2^4 + 4 p2^5 − 21 c2 k2 p1 p2 + k2 p1 p2^2] / [4 p2^(7/2) (√p1 + √p2)^3].

As in Section 3, we denote

CD1^M ≡ 1 − Tr(M) + Det(M),
CD2^M ≡ 1 + Tr(M) + Det(M),
CD3^M ≡ 1 − Det(M).
1109
4.1 The special case of c1 = c2

If we set c1 = c2 = c, then the triangular decomposition method permits us to transform the equilibrium equations (8) into the following triangular sets:

T41 = [x, y],
T42 = [x^2 − c, y + x],
T43 = [x^2 − 5 c, y − x],
T44 = [x^4 − 3 c x^2 + 4 c^2,  2 c y − x^3 + 3 c x].

Obviously, the zeros of T41 and T42 are economically uninteresting. Moreover, all the roots of x^4 − 3 c x^2 + 4 c^2 of T44, i.e.,

√(6 c + 2 √7 c i)/2,  −√(6 c + 2 √7 c i)/2,  √(6 c − 2 √7 c i)/2,  −√(6 c − 2 √7 c i)/2,

are imaginary and also not of our concern. There exists only one non-vanishing equilibrium (p1, p2) = (5 c, 5 c), which corresponds to the branch T43.

Substituting p1 = 5 c and p2 = 5 c into M, we obtain the Jacobian matrix at the equilibrium (5 c, 5 c) to be

M(5 c, 5 c) = [ (500 c^2 − 3 k1)/(500 c^2)    k1/(1000 c^2)
               k2/(1000 c^2)                 (500 c^2 − 3 k2)/(500 c^2) ].

Hence,

Tr(M) = (1000 c^2 − 3 k1 − 3 k2) / (500 c^2),
Det(M) = (200000 c^4 − 1200 c^2 k1 − 1200 c^2 k2 + 7 k1 k2) / (200000 c^4).
1167
Theorem 3. Let α = 1/3 and c1 = c2 = c. The unique non-vanishing equilibrium (5 c, 5 c) is locally stable if

c^2 > [3 k1 + 3 k2 + √(9 k1^2 − 17 k1 k2 + 9 k2^2)] / 2000.

The system may undergo a period-doubling bifurcation when

c^2 = [3 k1 + 3 k2 + √(9 k1^2 − 17 k1 k2 + 9 k2^2)] / 2000.

Furthermore, there exist no other bifurcations of the equilibrium.

Proof. The first condition for the local stability is always fulfilled since

CD1^M ≡ 1 − Tr(M) + Det(M) = 7 k1 k2 / (200000 c^4).

The second condition should be

CD2^M ≡ 1 + Tr(M) + Det(M) = [800000 c^4 + (−2400 k1 − 2400 k2) c^2 + 7 k1 k2] / (200000 c^4) > 0,

which implies that

800000 c^4 + (−2400 k1 − 2400 k2) c^2 + 7 k1 k2 > 0,

i.e.,

c^2 > [3 k1 + 3 k2 + √(9 k1^2 − 17 k1 k2 + 9 k2^2)] / 2000   or   c^2 < [3 k1 + 3 k2 − √(9 k1^2 − 17 k1 k2 + 9 k2^2)] / 2000.

The third condition should be

CD3^M ≡ 1 − Det(M) = [(1200 k1 + 1200 k2) c^2 − 7 k1 k2] / (200000 c^4) > 0,

from which we have

(1200 k1 + 1200 k2) c^2 − 7 k1 k2 > 0,

i.e.,

c^2 > 7 k1 k2 / [1200 (k1 + k2)].

It can be proved that

[3 k1 + 3 k2 − √(9 k1^2 − 17 k1 k2 + 9 k2^2)] / 2000 < 7 k1 k2 / [1200 (k1 + k2)] < [3 k1 + 3 k2 + √(9 k1^2 − 17 k1 k2 + 9 k2^2)] / 2000.

Therefore, the equilibrium is locally stable if

c^2 > [3 k1 + 3 k2 + √(9 k1^2 − 17 k1 k2 + 9 k2^2)] / 2000.

The rest of the proof follows from Remark 1.
1249
In Figure 2, we show two 2-dimensional cross-sections of the stability region reported in Theorem 3. One can see that the equilibrium may lose its stability if the adjustment speeds k1, k2 are large enough or the marginal cost c is small enough.

(a) k2 = 1/10    (b) c = 1/10

Figure 2: The 2-dimensional cross-sections of the stability region of the considered model with α = 1/3 and c1 = c2 = c. The curves of CD2^M = 0 and CD3^M = 0 are marked in blue and green, respectively.
1258
4.2 The general case

As in Section 3.2, we set k1 = k2 = k. We should mention that the method employed in this section also works for the case of k1 ≠ k2. However, the conditions for k1 ≠ k2 are tedious and not reported in this section due to space limitations. Interested readers can use our method to compute the complete conditions themselves. The case of c1 = c2 has been explored in Section 4.1, hence we suppose that c1 ≠ c2 in what follows. The bifurcations are analyzed in the following proposition.
Proposition 4. Let α = 1/3, k1 = k2 = k and c1 ≠ c2. The iteration map (7) may undergo a period-doubling bifurcation when R3 = 0 and a Neimark-Sacker bifurcation when R4 = 0, where R3 and R4 are given in Appendix.

Proof. Computing the resultant of Num(C^DM_1) with respect to T32, one obtains

res(Num(C^DM_1), T32) = 879609302220800000 k^16 c1^51 c2^11 (c1 − c2)^2 (2187 c1^2 − 4031 c1 c2 + 2187 c2^2)^2.

It is evident that

2187 c1^2 − 4031 c1 c2 + 2187 c2^2 = 2187 (c1 − c2)^2 + 343 c1 c2 > 0.

Therefore, res(Num(C^DM_1), T32) ≠ 0, which means that C^DM_1 ≠ 0 at the unique non-vanishing equilibrium. Hence, there exist no fold bifurcations in map (7). Furthermore, we have

res(Num(C^DJ_2), T32) = 99035203142830421991929937920000000 c1^101 c2^13 (c1 − c2)^2 R3^2,

and

res(Num(C^DJ_3), T32) = 99035203142830421991929937920000000 k^8 c1^101 c2^13 (c1 − c2)^10 R4^2.

Consequently, a period-doubling bifurcation may occur when R3 = 0, while a Neimark-Sacker bifurcation may take place when R4 = 0.
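The proof eliminates T32 via resultants. For readers unfamiliar with the mechanics, here is a minimal, self-contained sketch of a resultant computation through the Sylvester matrix; the toy polynomials are ours and are not the paper's Num(C^DJ_i), which are far too large to reproduce here.

```python
from fractions import Fraction

def sylvester_resultant(f, g):
    """Resultant of two univariate polynomials given as coefficient
    lists [a_n, ..., a_0] (highest degree first), computed as the
    determinant of the Sylvester matrix."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    rows = []
    for i in range(n):  # n shifted copies of f
        rows.append([Fraction(0)] * i + [Fraction(c) for c in f]
                    + [Fraction(0)] * (n - 1 - i))
    for i in range(m):  # m shifted copies of g
        rows.append([Fraction(0)] * i + [Fraction(c) for c in g]
                    + [Fraction(0)] * (m - 1 - i))
    # Exact Gaussian elimination over the rationals for the determinant
    det = Fraction(1)
    for col in range(size):
        pivot = next((r for r in range(col, size) if rows[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            rows[col], rows[pivot] = rows[pivot], rows[col]
            det = -det
        det *= rows[col][col]
        for r in range(col + 1, size):
            factor = rows[r][col] / rows[col][col]
            for c in range(col, size):
                rows[r][c] -= factor * rows[col][c]
    return det

# The resultant vanishes iff the polynomials share a root:
# x^2 - 1 and x - 1 share x = 1, while x^2 + 1 and x - 1 do not.
assert sylvester_resultant([1, 0, -1], [1, -1]) == 0
assert sylvester_resultant([1, 0, 1], [1, -1]) == 2
```

This is the same vanishing criterion used in the proof: a resultant that is provably nonzero certifies that the two polynomials have no common zero at the equilibrium.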
To investigate the local stability, we need to consider Num(C^DJ_i) · Den(C^DJ_i) and compute its resultant with respect to T32. Then it is obtained that

res(Num(C^DJ_1) · Den(C^DJ_1), T32) = 5708990770823839524233143877797980545530986496 · 10^20 · k^16 c1^156 c2^36 (c1 − c2)^12 (2187 c1^2 − 4031 c1 c2 + 2187 c2^2)^2,

res(Num(C^DJ_2) · Den(C^DJ_2), T32) = 6582018229284824168619876730229402019930943462534319453394436096 · 10^24 · c1^218 c2^42 (c1 − c2)^10 R3^2,

res(Num(C^DJ_3) · Den(C^DJ_3), T32) = 6582018229284824168619876730229402019930943462534319453394436096 · 10^24 · k^8 c1^218 c2^42 (c1 − c2)^10 R4^2.

These res(Num(C^DJ_i) · Den(C^DJ_i), T32) involve only the parameters, and their zeros divide the parameter set {(c1, c2, k) | c1 > 0, c2 > 0, k > 0} into several regions. In each region, the signs of C^DM_1, C^DM_2, and C^DM_3 are fixed and can be identified by checking at a selected sample point. In Table 2, we list the 40 selected sample points and the signs of R3, R4 at these sample points. Moreover, Table 2 provides the information on whether the non-vanishing equilibrium is stable, i.e., whether the stability conditions C^DM_1 > 0, C^DM_2 > 0 and C^DM_3 > 0 are satisfied simultaneously. Interested readers may check the correctness of Table 2 themselves. Based on a series of computations, we acquire the following theorem.
Theorem 4. Let k1 = k2 = k and c1 ≠ c2. The unique non-vanishing equilibrium of map (7) is locally stable if one of the following conditions is satisfied:

1. R3 > 0, R4 > 0;
2. R3 < 0, R4 > 0 and A1 > 0, A2 < 0, A3 > 0,

where R3, R4, A1, A2, and A3 can be found in Appendix.

Remark 2. From Table 2, we see that the equilibrium is stable if R3 > 0 and R4 > 0. Hence, R3 > 0, R4 > 0 is a sufficient condition for the local stability. However, this condition is not necessary. For example, at the first sample point (1, 1/4, 1/512) listed in Table 2, the equilibrium is locally stable, but one can verify that R3 < 0 and R4 > 0 at this point. Thus, the second condition of Theorem 4 is needed.

The necessity of the second condition can also be illustrated by Figure 4 (b, d, f), where the regions defined by the first and second conditions are marked in light grey and dark grey, respectively. By economic intuition, we know that for a fixed value of the marginal cost c2, a decrease in the adjustment speed k would be beneficial to the local stability of the equilibrium. That is to say, the dark grey regions defined by the second condition would be more likely to be included in the stability regions.

It is noted that A1, A2, and A3 involved in the second condition are contained in the so-called generalized discriminant list and can be picked out by repeated trials. Concerning the generalized discriminant list, the reader may refer to [38] for more details. The polynomials A1, A2, and A3 are needed here since the condition that R3 < 0 and R4 > 0 is not a sufficient condition for the local stability. For example, the model is stable at (1, 1/4, 1/512), where R3 < 0 and R4 > 0. But, the model is unstable at (1, 1/4, 34), where R3 < 0 and R4 > 0 are also satisfied. Consequently, additional polynomials are needed to constrict the region defined by R3 < 0 and R4 > 0 such that the complete stability conditions can be acquired.
Table 2: Selected Sample Points in {(c1, c2, k) | c1 > 0, c2 > 0, k > 0} for α = 1/3

(c1, c2, k)       stable  R3  R4 | (c1, c2, k)       stable  R3  R4
(1, 1/4, 1/512)   yes     −   +  | (1, 3/8, 1/128)   yes     −   +
(1, 1/4, 1)       yes     +   +  | (1, 3/8, 1)       yes     +   +
(1, 1/4, 34)      no      −   +  | (1, 3/8, 64)      no      −   +
(1, 1/4, 153)     no      −   −  | (1, 3/8, 175)     no      −   −
(1, 1/4, 273)     no      +   −  | (1, 3/8, 287)     no      +   −
(1, 5/8, 1/32)    yes     −   +  | (1, 7/8, 1/128)   yes     −   +
(1, 5/8, 1)       yes     +   +  | (1, 7/8, 1)       yes     +   +
(1, 5/8, 145)     no      −   +  | (1, 7/8, 244)     no      −   +
(1, 5/8, 231)     no      −   −  | (1, 7/8, 302)     no      −   −
(1, 5/8, 317)     no      +   −  | (1, 7/8, 361)     no      +   −
(1, 5/4, 1/32)    yes     −   +  | (1, 3/2, 1/16)    yes     −   +
(1, 5/4, 1)       yes     +   +  | (1, 3/2, 1)       yes     +   +
(1, 5/4, 335)     no      −   +  | (1, 3/2, 362)     no      −   +
(1, 5/4, 436)     no      −   −  | (1, 3/2, 544)     no      −   −
(1, 5/4, 538)     no      +   −  | (1, 3/2, 726)     no      +   −
(1, 2, 1/16)      yes     −   +  | (1, 3, 1/16)      yes     −   +
(1, 2, 1)         yes     +   +  | (1, 3, 1)         yes     +   +
(1, 2, 403)       no      −   +  | (1, 3, 471)       no      −   +
(1, 2, 804)       no      −   −  | (1, 3, 1503)      no      −   −
(1, 2, 1205)      no      +   −  | (1, 3, 2536)      no      +   −
5 Influence of the Substitutability Degree

Firstly, we analyze the influence of the substitutability degree α on the size of the stability region of the equilibrium. We start by considering the special case of c1 = c2.

Proposition 5. Let c1 = c2. The stability region for α = 1/2 is a proper subset of that for α = 1/3.

Proof. Recall Theorems 1 and 3. We need to prove that

[2 k1 + 2 k2 + sqrt(4 k1^2 − 7 k1 k2 + 4 k2^2)] / 216 > [3 k1 + 3 k2 + sqrt(9 k1^2 − 17 k1 k2 + 9 k2^2)] / 2000,

which is equivalent to

([2 k1 + 2 k2 + sqrt(4 k1^2 − 7 k1 k2 + 4 k2^2)] / 216)^2 − ([3 k1 + 3 k2 + sqrt(9 k1^2 − 17 k1 k2 + 9 k2^2)] / 2000)^2 > 0.

The left-hand side of the above inequality can be simplified into

−(4374 k1 + 4374 k2) sqrt(9 k1^2 − 17 k1 k2 + 9 k2^2) / 2916000000 + (250000 k1 + 250000 k2) sqrt(4 k1^2 − 7 k1 k2 + 4 k2^2) / 2916000000 + 243439 k1^2 / 1458000000 + 61771 k1 k2 / 2916000000 + 243439 k2^2 / 1458000000.

It is easy to check that

(4374 k1 + 4374 k2) sqrt(9 k1^2 − 17 k1 k2 + 9 k2^2) / 2916000000 < (250000 k1 + 250000 k2) sqrt(4 k1^2 − 7 k1 k2 + 4 k2^2) / 2916000000,

which completes the proof.
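The strict inequality between the two thresholds can also be spot-checked numerically; a minimal sketch (our verification aid), using the two bounds exactly as stated in Theorems 1 and 3:

```python
import math
import random

def bound_half(k1, k2):
    # Stability threshold for c^2 when alpha = 1/2 (Theorem 1)
    return (2 * k1 + 2 * k2
            + math.sqrt(4 * k1**2 - 7 * k1 * k2 + 4 * k2**2)) / 216

def bound_third(k1, k2):
    # Stability threshold for c^2 when alpha = 1/3 (Theorem 3)
    return (3 * k1 + 3 * k2
            + math.sqrt(9 * k1**2 - 17 * k1 * k2 + 9 * k2**2)) / 2000

random.seed(7)
for _ in range(10000):
    k1, k2 = random.uniform(1e-3, 100), random.uniform(1e-3, 100)
    # Both discriminants are positive definite quadratic forms
    # (their discriminants 49 - 64 and 289 - 324 are negative) ...
    assert 4 * k1**2 - 7 * k1 * k2 + 4 * k2**2 > 0
    assert 9 * k1**2 - 17 * k1 * k2 + 9 * k2**2 > 0
    # ... and the alpha = 1/2 threshold always dominates,
    # as Proposition 5 claims.
    assert bound_half(k1, k2) > bound_third(k1, k2)
print("ok")
```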
If c1 ≠ c2, however, the conclusion of the above proposition would be incorrect. For example, if we assume k1 = k2 = k and take (c1, c2, k) = (261/65536, 1/2, 79/1024), then

R1 = 588713082686404258452596575293972215811486125608829 / 6129982163463555433433388108601236734474956488734408704 > 0,
R2 = 108130364702270905134254005155560019343 / 340282366920938463463374607431768211456 > 0.

Hence, (261/65536, 1/2, 79/1024) is in the stability region of the model for α = 1/2. But, at the same parameter point, namely (c1, c2, k) = (261/65536, 1/2, 79/1024), we have

R3 = −791461358900213183480020700044263844445257635142615074110540187 / 26328072917139296674479506920917608079723773850137277813577744384 < 0,
R4 = 526438846625624761986017962528229497389068363385599391 / 374144419156711147060143317175368453031918731001856 > 0,

and

A1 = 44864955 / 4294967296 > 0,
A2 = −842240947483983714275440267 / 81129638414606681695789005144064 < 0,
A3 = −63936547182666560163845458457577 / 649037107316853453566312041152512 < 0.

This means that the stability conditions of Theorem 4 for α = 1/3 are not satisfied.

On the other hand, one can also find some points where the model is stable for α = 1/3 but unstable for α = 1/2. For example, at (c1, c2, k) = (3/8, 1/2, 827/64), we know

R3 = 40079185741889580295152003015 / 288230376151711744 > 0,
R4 = 29339436396656781 / 17179869184 > 0.

Therefore, (3/8, 1/2, 827/64) is in the stability region for α = 1/3. However,

R1 = −24200272602071108539 / 17592186044416 < 0,
R2 = −96467864887 / 67108864 < 0.

That is to say, (3/8, 1/2, 827/64) is an unstable parameter point for α = 1/2.
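The sign claims for R1 and R2 at these two sample points can be reproduced with exact rational arithmetic. The sketch below evaluates R1 and R2 as transcribed by us from the Appendix, with each term encoded as (coefficient, i, j, m) for c1^i c2^j k^m; any slip would therefore be in our transcription rather than in the paper.

```python
from fractions import Fraction as F

# Terms (coefficient, i, j, m) of c1^i * c2^j * k^m, from Appendix.
R1_TERMS = [
    (15552,10,6,0),(62208,9,7,0),(93312,8,8,0),(62208,7,9,0),(15552,6,10,0),
    (73728,11,3,1),(327168,10,4,1),(576576,9,5,1),(541440,8,6,1),(436608,7,7,1),
    (541440,6,8,1),(576576,5,9,1),(327168,4,10,1),(73728,3,11,1),
    (32768,11,1,2),(94208,10,2,2),(284160,9,3,2),(1163712,8,4,2),(2855520,7,5,2),
    (3825168,6,6,2),(2855520,5,7,2),(1163712,4,8,2),(284160,3,9,2),
    (94208,2,10,2),(32768,1,11,2),
    (77824,9,1,3),(359936,8,2,3),(644608,7,3,3),(610976,6,4,3),(494368,5,5,3),
    (610976,4,6,3),(644608,3,7,3),(359936,2,8,3),(77824,1,9,3),
    (-4096,8,0,4),(-12288,7,1,4),(4544,6,2,4),(70360,5,3,4),(114600,4,4,4),
    (70360,3,5,4),(4544,2,6,4),(-12288,1,7,4),(-4096,0,8,4),
    (-1024,5,1,5),(-3232,4,2,5),(-4488,3,3,5),(-3232,2,4,5),(-1024,1,5,5),
    (32,3,1,6),(61,2,2,6),(32,1,3,6),
]
R2_TERMS = [
    (1152,8,2,0),(5832,7,3,0),(12960,6,4,0),(16560,5,5,0),(12960,4,6,0),
    (5832,3,7,0),(1152,2,8,0),
    (1024,8,0,1),(3584,7,1,1),(5920,6,2,1),(6224,5,3,1),(5836,4,4,1),
    (6224,3,5,1),(5920,2,6,1),(3584,1,7,1),(1024,0,8,1),
    (512,5,1,2),(1616,4,2,2),(2244,3,3,2),(1616,2,4,2),(512,1,5,2),
    (-32,3,1,3),(-61,2,2,3),(-32,1,3,3),
]

def evaluate(terms, c1, c2, k):
    # Exact evaluation over the rationals
    return sum(F(a) * c1**i * c2**j * k**m for a, i, j, m in terms)

# Point claimed stable for alpha = 1/2: R1 > 0 and R2 > 0.
p = (F(261, 65536), F(1, 2), F(79, 1024))
assert evaluate(R1_TERMS, *p) > 0 and evaluate(R2_TERMS, *p) > 0

# Point claimed unstable for alpha = 1/2: R1 < 0 and R2 < 0.
q = (F(3, 8), F(1, 2), F(827, 64))
assert evaluate(R1_TERMS, *q) < 0 and evaluate(R2_TERMS, *q) < 0
```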
Figure 3 depicts the 2-dimensional cross-sections of the stability regions for α = 1/2 and α = 1/3. For comparison purposes, we place the cross-sections for α = 1/2 on the left and those for α = 1/3 on the right. We set k1 = k2 = k and choose three different values of the parameter k, i.e., k = 1/2, 1, 10, to observe the effect of variation of k on the size of the stability regions. The curves of R1 = 0 and R3 = 0 are marked in blue; the curves of R2 = 0 and R4 = 0 are marked in green; the curves of A1 = 0, A2 = 0 and A3 = 0 are marked in red. The stability regions are colored in light grey. From Figure 3, we find that the stability region would shrink if the firms react or adjust their outputs faster, both for α = 1/2 and α = 1/3. Similarly, in Figure 4, we assume that k1 and k2 are identical and choose three different values of c1, i.e., c1 = 1/2, 1, 10. The regions of R1 > 0, R2 > 0 and those of R3 > 0, R4 > 0 are colored in light grey, while the regions defined by R3 < 0, R4 > 0, A1 > 0, A2 < 0, A3 > 0 are colored in dark grey. From Figure 4, we observe that increasing the marginal cost c1 of the first firm could result in the enlargement of the stability region for α = 1/2 and α = 1/3.

As aforementioned, in the case of c1 ≠ c2 and k1 = k2, it cannot be proved that the stability region for α = 1/3 covers that for α = 1/2. From Figures 3 and 4, however, it seems that the stability region for α = 1/3 is larger than that for α = 1/2. Consequently, for the Bertrand duopoly model considered in this paper, we may conclude that increasing the substitutability degree α has an effect of destabilizing the unique non-vanishing equilibrium in some sense. In other words, product differentiation might make the considered model more stable, which is an important finding from an economic point of view. Shy [34] discussed the traditional view on the degree of product differentiation, i.e., a decrease in product differentiation may result in an increase in market competition intensity and even a price war among the involved firms. A possible explanation for our finding is that a price war might destabilize the equilibrium of the Bertrand game with differentiated goods. It should be noted that our conclusion is in contrast with the one by Agliari et al. [1]. Specifically, Agliari et al. [1] investigated a Cournot duopoly model with differentiated products and employed the same CES utility function and the same linear cost functions as in our study. However, they discovered that a higher degree of product differentiation or a lower degree of substitutability leads to the destabilization of their model. This contradiction may help reveal the essential difference between the Bertrand and Cournot oligopolies with differentiated goods.
From an economic point of view, the effects on economic variables such as prices and profits of changing the substitutability degree are interesting. In the sequel, we focus on the comparative statics in the special case of identical marginal costs. Let c1 = c2 = c. According to (3), the equilibrium satisfies

−p2^β p1^(1+β) β + p2^(2β) c + p1^β p2^β (1 + β) c = 0,
−p1^β p2^(1+β) β + p1^(2β) c + p1^β p2^β (1 + β) c = 0.    (9)

Hence,

−p2^β p1^(1+β) β + p2^(2β) c = −p1^β p2^(1+β) β + p1^(2β) c,

which implies that

(p1^(2β) − p2^(2β)) c = (p2 − p1) p1^β p2^β β.

Without loss of generality, we suppose that p1 ≥ p2. Since c > 0 and β > 0, we know (p1^(2β) − p2^(2β)) c ≥ 0 and (p2 − p1) p1^β p2^β β ≤ 0, which implies p1 = p2. Plugging p1 = p2 into the first equation of (9), one can solve p1 = p2 = c(2 + β)/β. Therefore, at the equilibrium, q1 = q2 = β / (2 c (2 + β)). As β = α/(1 − α), we obtain

∂pi/∂α = −2 c/α^2 < 0,   ∂qi/∂α = 1/((−2 + α)^2 c) > 0.

According to (2), the profits of the two firms would be

Π1 = Π2 = (c(2 + β)/β − c) · β/(2 c (2 + β)) = 1/(2 + β) = 1 + 1/(α − 2).

Hence, for i = 1, 2,

∂Πi/∂α = −1/(α − 2)^2 < 0.

Recalling the inverse demands (1), for a point (q1*, q2*) on the indifference curve, we define the consumer surplus of the first product to be

CS1 = ∫_0^{q1*} [q1^(α−1) / (q1^α + q2*^α)] dq1 = (1/α) ∫_0^{q1*} d(q1^α + q2*^α) / (q1^α + q2*^α) = (1/α) ln(1 + (q1*/q2*)^α).

In the case of c1 = c2, the outputs of the two products are equal at the equilibrium. Therefore, we have that CS1 = CS2 = (1/α) ln 2. Accordingly, the social welfare is

W = CS1 + CS2 + Π1 + Π2 = (2/α) ln 2 + 2/(α − 2) + 2.

Then it is known that

∂W/∂α = −(2 ln 2)/α^2 − 2/(α − 2)^2 < 0.

To summarize, in the special case of identical marginal costs, an increase in the substitutability degree α leads to a stable equilibrium with lower prices, higher supplies, lower profits, and lower welfare. In other words, the degree of product differentiation is positively related to the prices of the goods, the profits of the involved companies, and the social welfare, which is consistent with our economic intuition.
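These monotonicity claims can be spot-checked with the closed forms just derived (p = c(2 + β)/β, q = β/(2c(2 + β)), Π = 1/(2 + β), and W = (2/α) ln 2 + 2Π with β = α/(1 − α)); a minimal sketch:

```python
import math

def equilibrium(alpha, c):
    # Closed-form equilibrium quantities in the identical-cost case
    beta = alpha / (1 - alpha)
    p = c * (2 + beta) / beta            # equilibrium price
    q = beta / (2 * c * (2 + beta))      # equilibrium quantity
    profit = 1 / (2 + beta)              # per-firm profit
    welfare = 2 * math.log(2) / alpha + 2 * profit  # CS1 + CS2 + 2 * profit
    return p, q, profit, welfare

c = 0.2
alphas = [0.1 + 0.01 * t for t in range(81)]   # alpha ranging over (0, 1)
vals = [equilibrium(a, c) for a in alphas]
for (p0, q0, pr0, w0), (p1, q1, pr1, w1) in zip(vals, vals[1:]):
    assert p1 < p0    # prices fall as alpha rises
    assert q1 > q0    # quantities rise
    assert pr1 < pr0  # profits fall
    assert w1 < w0    # welfare falls
print("ok")
```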
(a) α = 1/2, k = 1/2  (b) α = 1/3, k = 1/2  (c) α = 1/2, k = 1  (d) α = 1/3, k = 1  (e) α = 1/2, k = 10  (f) α = 1/3, k = 10

Figure 3: The 2-dimensional cross-sections of the stability regions for α = 1/2 and α = 1/3 if we set k1 = k2 = k and fix k = 1/2, 1, 10. The curves of R1 = 0 and R3 = 0 are marked in blue; the curves of R2 = 0 and R4 = 0 are marked in green; the curves of A1 = 0, A2 = 0 and A3 = 0 are marked in red. The stability regions are colored in light grey.

(a) α = 1/2, c1 = 1/2  (b) α = 1/3, c1 = 1/2  (c) α = 1/2, c1 = 1  (d) α = 1/3, c1 = 1  (e) α = 1/2, c1 = 10  (f) α = 1/3, c1 = 10

Figure 4: The 2-dimensional cross-sections of the stability regions for α = 1/2 and α = 1/3 if we set k1 = k2 = k and fix c1 = 1/2, 1, 10. The curves of R1 = 0 and R3 = 0 are marked in blue; the curves of R2 = 0 and R4 = 0 are marked in green; the curves of A1 = 0, A2 = 0 and A3 = 0 are marked in red. The regions of R1 > 0, R2 > 0 and those of R3 > 0, R4 > 0 are colored in light grey, while the regions defined by R3 < 0, R4 > 0, A1 > 0, A2 < 0, A3 > 0 are colored in dark grey.
6 Numerical Simulations

This section provides numerical simulations to illustrate the complex dynamics of the considered Bertrand duopoly model. The first purpose of our simulations is to confirm the main conclusion of Section 5 that increasing the substitutability degree α could destabilize the unique non-vanishing equilibrium. In Figure 5, we depict the 1-dimensional bifurcation diagrams with respect to α, where we fix the other parameters k1 = k2 = 1, c1 = c2 = 0.2 and set the initial point to be (0.56, 1.06). The bifurcation diagrams against p1 and p2 are given in Figure 5 (a, c) and (b, d), respectively. It is observed that complex dynamics appear when α becomes large enough. Specifically, there exists one unique stable equilibrium at first, then a stable 2-cycle orbit, and finally a chaotic set as α varies from 0.1 up to 0.7. To show the transition clearly, the 1-dimensional bifurcation diagrams are enlarged for α ∈ (0.55, 0.6) in (c, d). One can see that, when α = 0.553372, a branching point occurs and the unique fixed point bifurcates into a 2-cycle orbit, which, however, is not a period-doubling bifurcation point. This 2-cycle orbit loses its stability through a Neimark-Sacker bifurcation rather than a period-doubling bifurcation at α = 0.577570.

More details can be found in Figure 6, where we plot the phase portraits for k1 = k2 = 1 and c1 = c2 = 0.2 with the initial point (0.56, 1.06). From Figure 6 (a), we observe that, after the occurrence of a Neimark-Sacker bifurcation, the 2-cycle orbit (P21(0.464194, 0.607384) and P22(0.607384, 0.464194)) becomes unstable and bifurcates into two invariant closed orbits when α = 0.58; the unique equilibrium E1(0.492557, 0.492557) goes to E1new(0.489655, 0.489655) when α = 0.58. Furthermore, all points on the diagonal line x = y converge to E1new. The two invariant closed orbits marked in blue are stable, and points converge to them from inside and outside. Figure 6 (b) depicts the phase portrait when α = 0.59 and the other parameters are set to be the same as in (a). From (b), one can discover chaotic attractors with symmetry. The above observations show that an increase in the substitutability degree α leads to the emergence of instability, complex dynamics, and even chaos in the considered model.

(a) against p1  (b) against p2  (c) against p1, enlarged for α ∈ (0.55, 0.6)  (d) against p2, enlarged for α ∈ (0.55, 0.6)

Figure 5: The 1-dimensional bifurcation diagrams with respect to α if we fix k1 = k2 = 1, c1 = c2 = 0.2 and set the initial point to be (0.56, 1.06).
(a) α = 0.58  (b) α = 0.59

Figure 6: Phase portraits for k1 = k2 = 1 and c1 = c2 = 0.2 with the initial point (0.56, 1.06).
To illustrate the influence of other parameters, several 2-dimensional bifurcation diagrams are computed and displayed in the sequel. Figure 7 depicts the 2-dimensional bifurcation diagram of map (4) (α = 1/2) with respect to k1 and k2 if we fix c1 = 0.3, c2 = 0.4 and set the initial point to be (0.5, 0.8). We detect periodic orbits with distinct orders and mark the corresponding parameter points in different colors in Figure 7. It should be mentioned that the parameter points where there exist periodic orbits with orders more than 25 are marked in light yellow as well. Two different routes from the unique stable equilibrium to complex dynamics can be observed. For example, if we fix k2 = 7.5 and change the value of k1 from 0.0 to 10.0, the dynamics of the system start from one unique stable equilibrium (the dark blue region), then transition to a stable 2-cycle orbit (the light blue region) and finally to invariant closed orbits as well as chaos (the light yellow region). This is similar to the route displayed in Figure 5, where the stable 2-cycle loses its stability through a Neimark-Sacker bifurcation. The other route can be discovered, e.g., if we fix k2 = 2.5 and keep k1 as a free parameter. Then it is observed that the unique stable equilibrium loses its stability through a cascade of period-doubling bifurcations.

In Figure 8, we plot the 2-dimensional bifurcation diagram of map (7) (α = 1/3) with respect to k1 and k2 if fixing c1 = 0.1, c2 = 0.15 and setting the initial point to be (0.6, 0.9). Similar to Figure 7, the aforementioned two routes from local stability to complex dynamics can also be observed in Figure 8.

The 2-dimensional bifurcation diagrams with respect to c1 and c2 for α = 1/2 and α = 1/3 are displayed in Figures 9 and 10, respectively. One can see that complicated dynamic phenomena take place if one of the cost parameters c1, c2 is small enough. Similarly, we find the above two routes to chaotic behavior, i.e., through a cascade of period-doubling bifurcations and through a Neimark-Sacker bifurcation on a 2-cycle orbit, which have already been discovered by Ahmed et al. [3]. However, from Figure 9, we also find the existence of a Neimark-Sacker bifurcation directly on the unique equilibrium, which is a new result that has not been observed by Ahmed et al. [3] yet. Specifically, Figure 9 shows that, if we fix c1 = 0.9 and decrease the value of c2 from 1.0 to 0.0, the dynamics of the system directly transition from the unique stable equilibrium (the dark blue region) to invariant closed orbits (the light yellow region). In this case, the behavior of the market suddenly changes from an ordered state to a disordered state at some critical point, which can hardly be learned by even rational players.

7 Concluding Remarks

In this paper, we investigated the local stability, bifurcations, and comparative statics of a dynamic Bertrand duopoly game with differentiated products. This duopoly is assumed to possess two boundedly rational players adopting a gradient adjustment mechanism and a continuum of identical consumers with a CES utility function. Moreover, the cost functions are supposed to be linear. It should be mentioned that the nonlinearity of the resulting demand function derived from the underlying utility permits us to extend the applications of Bertrand games to more realistic economies, compared to the widely used Bertrand models with linear demands.

The considered game was first explored by Ahmed et al. [3], where only numerical simulations are
Figure 7: The 2-dimensional bifurcation diagram of map (4) (α = 1/2) with respect to k1 and k2 if we fix c1 = 0.3, c2 = 0.4 and set the initial point to be (0.5, 0.8).

Figure 8: The 2-dimensional bifurcation diagram of map (7) (α = 1/3) with respect to k1 and k2 if we fix c1 = 0.1, c2 = 0.15 and set the initial point to be (0.6, 0.9).
+
2020
+ 10.00
2021
+ 25
2022
+ 24
2023
+ 23
2024
+ 22
2025
+ 21
2026
+ 20
2027
+ 7.50
2028
+ 19
2029
+ 18
2030
+ 17
2031
+ 16
2032
+ 15
2033
+ 14
2034
+ 5.00
2035
+ 13
2036
+ 12
2037
+ 11
2038
+ 10
2039
+ 8
2040
+ 2.50
2041
+ 6
2042
+ 5
2043
+ 3
2044
+ 2
2045
+ 0.00
2046
+ 0.00
2047
+ 2.50
2048
+ 5.00
2049
+ 7.50
2050
+ 10.00
2051
+ k16.00
2052
+ 25
2053
+ 24
2054
+ 23
2055
+ 22
2056
+ 21
2057
+ 20
2058
+ 4.50
2059
+ 19
2060
+ 18
2061
+ 17
2062
+ 16
2063
+ 15
2064
+ 14
2065
+ 3.00
2066
+ 13
2067
+ 12
2068
+ 11
2069
+ 10
2070
+ 9
2071
+ 1.50
2072
+ 6
2073
+ 5
2074
+ 3
2075
+ 2
2076
+ 0.00
2077
+ 0.00
2078
+ 1.50
2079
+ 3.00
2080
+ 4.50
2081
+ 6.00
2082
+ k1Figure 9: The 2-dimensional bifurcation diagram of map (4) (α = 1/2) with respect to c1 and c2 if we fix
2083
+ k1 = 6, k2 = 12 and set the initial point to be (0.5, 0.8).
2084
+ Figure 10: The 2-dimensional bifurcation diagram of map (7) (α = 1/3) with respect to c1 and c2 if we fix
2085
+ k1 = 0.3, k2 = 0.6 and set initial point to be (0.6, 0.9).
2086
employed to investigate the dynamic behavior and it was observed that the Nash equilibrium loses its
stability through a period-doubling bifurcation as the speed of adjustment increases. In our study, however, we re-investigated this game using several tools based on symbolic computations, such as the triangular decomposition method (refer to, e.g., [23]) and the PCAD method (refer to, e.g., [11]). The results of symbolic computations are exact, and thus provide theoretical foundations for the systematic analysis of economic models.

For simplicity, our work mainly focused on two specific degrees of product substitutability, namely α = 1/2 and α = 1/3. In both cases, we proved the uniqueness of the non-vanishing equilibrium using the algebraic approach of detecting the multiplicity of equilibria proposed by the first author and his co-worker [25]. We introduced several tools based on symbolic computations and used them to obtain the rigorous conditions for the local stability of the unique non-vanishing equilibrium for the first time. In the special case that the two firms have identical marginal costs, we proved that the model can lose its stability only through a period-doubling bifurcation. From an economic point of view, the most interesting finding was that an increase in the substitutability degree or a decrease in the product differentiation leads to the destabilization of the Bertrand model. This is because a price war, which might destabilize the equilibrium, can take place if the substitutability degree is large enough. We should mention that our finding is in contrast with that by Agliari et al. [1] and that by Fanti and Gori [14]. This contradiction contributes to the literature on the connection between Cournot and Bertrand oligopolies and may help reveal the essential difference between them. Moreover, we conducted the comparative statics in the special case of identical marginal costs. The resulting conclusion was that lower degrees of product differentiation mean lower prices, higher supplies, lower profits, and lower social welfare, which is consistent with our economic intuition.

Numerical simulations were provided in the end, through which complex dynamics such as periodic orbits and chaos can be observed. The simulations confirmed that an increase in the substitutability degree α leads to the emergence of instability, complex dynamics, and even chaos in the considered model. Two-dimensional bifurcation diagrams were also provided to show different possible routes to chaotic behavior, e.g., through a cascade of period-doubling bifurcations and through a Neimark-Sacker bifurcation on a 2-cycle orbit. Furthermore, we discovered the existence of a Neimark-Sacker bifurcation directly on the equilibrium, which is a new finding and has not yet been discovered by Ahmed et al. [3].
+ Appendix
2185
+ R1 = 15552 c_1^10 c_2^6 + 62208 c_1^9 c_2^7 + 93312 c_1^8 c_2^8 + 62208 c_1^7 c_2^9 + 15552 c_1^6 c_2^10
+ + 73728 c_1^11 c_2^3 k + 327168 c_1^10 c_2^4 k + 576576 c_1^9 c_2^5 k + 541440 c_1^8 c_2^6 k + 436608 c_1^7 c_2^7 k + 541440 c_1^6 c_2^8 k + 576576 c_1^5 c_2^9 k + 327168 c_1^4 c_2^10 k + 73728 c_1^3 c_2^11 k
+ + 32768 c_1^11 c_2 k^2 + 94208 c_1^10 c_2^2 k^2 + 284160 c_1^9 c_2^3 k^2 + 1163712 c_1^8 c_2^4 k^2 + 2855520 c_1^7 c_2^5 k^2 + 3825168 c_1^6 c_2^6 k^2 + 2855520 c_1^5 c_2^7 k^2 + 1163712 c_1^4 c_2^8 k^2 + 284160 c_1^3 c_2^9 k^2 + 94208 c_1^2 c_2^10 k^2 + 32768 c_1 c_2^11 k^2
+ + 77824 c_1^9 c_2 k^3 + 359936 c_1^8 c_2^2 k^3 + 644608 c_1^7 c_2^3 k^3 + 610976 c_1^6 c_2^4 k^3 + 494368 c_1^5 c_2^5 k^3 + 610976 c_1^4 c_2^6 k^3 + 644608 c_1^3 c_2^7 k^3 + 359936 c_1^2 c_2^8 k^3 + 77824 c_1 c_2^9 k^3
+ − 4096 c_1^8 k^4 − 12288 c_1^7 c_2 k^4 + 4544 c_1^6 c_2^2 k^4 + 70360 c_1^5 c_2^3 k^4 + 114600 c_1^4 c_2^4 k^4 + 70360 c_1^3 c_2^5 k^4 + 4544 c_1^2 c_2^6 k^4 − 12288 c_1 c_2^7 k^4 − 4096 c_2^8 k^4
+ − 1024 c_1^5 c_2 k^5 − 3232 c_1^4 c_2^2 k^5 − 4488 c_1^3 c_2^3 k^5 − 3232 c_1^2 c_2^4 k^5 − 1024 c_1 c_2^5 k^5
+ + 32 c_1^3 c_2 k^6 + 61 c_1^2 c_2^2 k^6 + 32 c_1 c_2^3 k^6,
+ R2 = 1152 c_1^8 c_2^2 + 5832 c_1^7 c_2^3 + 12960 c_1^6 c_2^4 + 16560 c_1^5 c_2^5 + 12960 c_1^4 c_2^6 + 5832 c_1^3 c_2^7 + 1152 c_1^2 c_2^8
+ + 1024 c_1^8 k + 3584 c_1^7 c_2 k + 5920 c_1^6 c_2^2 k + 6224 c_1^5 c_2^3 k + 5836 c_1^4 c_2^4 k + 6224 c_1^3 c_2^5 k + 5920 c_1^2 c_2^6 k + 3584 c_1 c_2^7 k + 1024 c_2^8 k
+ + 512 c_1^5 c_2 k^2 + 1616 c_1^4 c_2^2 k^2 + 2244 c_1^3 c_2^3 k^2 + 1616 c_1^2 c_2^4 k^2 + 512 c_1 c_2^5 k^2
+ − 32 c_1^3 c_2 k^3 − 61 c_1^2 c_2^2 k^3 − 32 c_1 c_2^3 k^3,
+ R3 = −209715200000 c_1^12 c_2^8 + 838860800000 c_1^11 c_2^9 − 1258291200000 c_1^10 c_2^10 + 838860800000 c_1^9 c_2^11 − 209715200000 c_1^8 c_2^12
+ + 1160950579200 c_1^13 c_2^5 k − 5170397184000 c_1^12 c_2^6 k + 9284105011200 c_1^11 c_2^7 k − 9178054656000 c_1^10 c_2^8 k + 7806792499200 c_1^9 c_2^9 k − 9178054656000 c_1^8 c_2^10 k + 9284105011200 c_1^7 c_2^11 k − 5170397184000 c_1^6 c_2^12 k + 1160950579200 c_1^5 c_2^13 k
+ + 626913312768 c_1^13 c_2^3 k^2 − 1827529703424 c_1^12 c_2^4 k^2 + 6377496477696 c_1^11 c_2^5 k^2 − 24562717922304 c_1^10 c_2^6 k^2 + 56911413825536 c_1^9 c_2^7 k^2 − 74841436780544 c_1^8 c_2^8 k^2 + 56911413825536 c_1^7 c_2^9 k^2 − 24562717922304 c_1^6 c_2^10 k^2 + 6377496477696 c_1^5 c_2^11 k^2 − 1827529703424 c_1^4 c_2^12 k^2 + 626913312768 c_1^3 c_2^13 k^2
+ − 117546246144 c_1^12 c_2^2 k^3 + 2268751389696 c_1^11 c_2^3 k^3 − 8446241806848 c_1^10 c_2^4 k^3 + 13848228389376 c_1^9 c_2^5 k^3 − 12871123435008 c_1^8 c_2^6 k^3 + 10570707526656 c_1^7 c_2^7 k^3 − 12871123435008 c_1^6 c_2^8 k^3 + 13848228389376 c_1^5 c_2^9 k^3 − 8446241806848 c_1^4 c_2^10 k^3 + 2268751389696 c_1^3 c_2^11 k^3 − 117546246144 c_1^2 c_2^12 k^3
+ + 7346640384 c_1^11 c_2 k^4 + 23872802112 c_1^10 c_2^2 k^4 − 79144786368 c_1^9 c_2^3 k^4 − 389232360000 c_1^8 c_2^4 k^4 + 1762366805056 c_1^7 c_2^5 k^4 − 2639431381760 c_1^6 c_2^6 k^4 + 1762366805056 c_1^5 c_2^7 k^4 − 389232360000 c_1^4 c_2^8 k^4 − 79144786368 c_1^3 c_2^9 k^4 + 23872802112 c_1^2 c_2^10 k^4 + 7346640384 c_1 c_2^11 k^4
+ − 153055008 c_1^10 k^5 + 444048480 c_1^9 c_2 k^5 − 2281361760 c_1^8 c_2^2 k^5 − 6359031360 c_1^7 c_2^3 k^5 + 33853070112 c_1^6 c_2^4 k^5 − 51945109632 c_1^5 c_2^5 k^5 + 33853070112 c_1^4 c_2^6 k^5 − 6359031360 c_1^3 c_2^7 k^5 − 2281361760 c_1^2 c_2^8 k^5 + 444048480 c_1 c_2^9 k^5 − 153055008 c_2^10 k^5
+ + 36636624 c_1^7 c_2 k^6 − 65578896 c_1^6 c_2^2 k^6 + 239834412 c_1^5 c_2^3 k^6 − 377249916 c_1^4 c_2^4 k^6 + 239834412 c_1^3 c_2^5 k^6 − 65578896 c_1^2 c_2^6 k^6 + 36636624 c_1 c_2^7 k^6
+ − 669222 c_1^5 c_2 k^7 + 1023534 c_1^4 c_2^2 k^7 − 951468 c_1^3 c_2^3 k^7 + 1023534 c_1^2 c_2^4 k^7 − 669222 c_1 c_2^5 k^7
+ + 2187 c_1^3 c_2 k^8 − 4031 c_1^2 c_2^2 k^8 + 2187 c_1 c_2^3 k^8,
+ R4 = 17714700 c_1^10 c_2^2 − 84798900 c_1^9 c_2^3 + 166819500 c_1^8 c_2^4 − 187523100 c_1^7 c_2^5 + 175575600 c_1^6 c_2^6 − 187523100 c_1^5 c_2^7 + 166819500 c_1^4 c_2^8 − 84798900 c_1^3 c_2^9 + 17714700 c_1^2 c_2^10
+ + 19131876 c_1^10 k − 55506060 c_1^9 c_2 k + 70441812 c_1^8 c_2^2 k − 70683840 c_1^7 c_2^3 k + 106503012 c_1^6 c_2^4 k − 136123200 c_1^5 c_2^5 k + 106503012 c_1^4 c_2^6 k − 70683840 c_1^3 c_2^7 k + 70441812 c_1^2 c_2^8 k − 55506060 c_1 c_2^9 k + 19131876 c_2^10 k
+ − 9159156 c_1^7 c_2 k^2 + 23480604 c_1^6 c_2^2 k^2 − 24625107 c_1^5 c_2^3 k^2 + 19286271 c_1^4 c_2^4 k^2 − 24625107 c_1^3 c_2^5 k^2 + 23480604 c_1^2 c_2^6 k^2 − 9159156 c_1 c_2^7 k^2
+ + 334611 c_1^5 c_2 k^3 − 511767 c_1^4 c_2^2 k^3 + 475734 c_1^3 c_2^3 k^3 − 511767 c_1^2 c_2^4 k^3 + 334611 c_1 c_2^5 k^3
+ − 2187 c_1^3 c_2 k^4 + 4031 c_1^2 c_2^2 k^4 − 2187 c_1 c_2^3 k^4,
+ A1 = 243 c_1^2 + 352 c_1 c_2 − 9 k,
+ A2 = −8000 c_1^5 c_2^3 + 19683 c_1^6 k − 17496 c_1^5 c_2 k + 3024 c_1^4 c_2^2 k + 1728 c_1^3 c_2^3 k − 2187 c_1^4 k^2 + 3564 c_1^3 c_2 k^2 − 432 c_1^2 c_2^2 k^2 + 81 c_1^2 k^3 + 36 c_1 c_2 k^3 − k^4,
+ A3 = 12754584 c_1^7 − 12171384 c_1^6 c_2 + 3708504 c_1^5 c_2^2 + 84096 c_1^4 c_2^3 + 2519424 c_1^3 c_2^4 − 72171 c_1^5 k − 3576744 c_1^4 c_2 k + 5126856 c_1^3 c_2^2 k − 629856 c_1^2 c_2^3 k − 25272 c_1^3 k^2 + 98966 c_1^2 c_2 k^2 + 52488 c_1 c_2^2 k^2 + 387 c_1 k^3 − 1458 c_2 k^3.
2585
+ Acknowledgements
2586
+ The authors wish to thank Dr. Li Su for the beneficial discussions. The authors are grateful to the anonymous
2587
+ referees for their helpful comments.
2588
+ This work has been supported by Philosophy and Social Science Foundation of Guangdong (Grant No.
2589
+ GD21CLJ01), Natural Science Foundation of Anhui Province (Grant No. 2008085QA09), University Natural
2590
+ Science Research Project of Anhui Province (Grant No. KJ2021A0482), Major Research and Cultivation
2591
+ Project of Dongguan City University (Grant No. 2021YZDYB04Z).
2592
+ References
2593
+ [1] A. Agliari, A. Naimzada, and N. Pecora. Nonlinear dynamics of a Cournot duopoly game with differ-
2594
+ entiated products. Applied Mathematics and Computation, 281:1–15, 2016.
2595
+ [2] E. Ahmed, H. Agiza, and S. Hassan. On modifications of Puu’s dynamical duopoly. Chaos, Solitons &
2596
+ Fractals, 11(7):1025–1028, 2000.
2597
+ [3] E. Ahmed, A. Elsadany, and T. Puu. On Bertrand duopoly game with differentiated goods. Applied
2598
+ Mathematics and Computation, 251:169–179, 2015.
2599
+ [4] S. Askar. The rise of complex phenomena in Cournot duopoly games due to demand functions without
2600
+ inflection points. Communications in Nonlinear Science and Numerical Simulation, 19(6):1918–1925,
2601
+ 2014.
2602
2604
+ [5] P. Aubry and M. Moreno Maza. Triangular sets for solving polynomial systems: a comparative imple-
2605
+ mentation of four methods. Journal of Symbolic Computation, 28(1–2):125–154, 1999.
2606
+ [6] J. Bertrand. Review of ‘Théorie Mathématique de la Richesse Sociale’ and ‘Recherches sur les Principes
2607
+ Mathématiques de la Richesse’. Journal des Savants, pages 499–508, 1883.
2608
+ [7] G. I. Bischi, A. Naimzada, and L. Sbragia. Oligopoly games with local monopolistic approximation.
2609
+ Journal of Economic Behavior & Organization, 62(3):371–388, 2007.
2610
+ [8] S. Brianzoni, L. Gori, and E. Michetti. Dynamics of a Bertrand duopoly with differentiated products
2611
+ and nonlinear costs: analysis, comparisons and new evidences. Chaos, Solitons & Fractals, 79:191–203,
2612
+ 2015.
2613
+ [9] J. S. Cánovas and M. Muñoz-Guillermo. On the dynamics of Kopel’s Cournot duopoly model. Applied
2614
+ Mathematics and Computation, 330:292–306, 2018.
2615
+ [10] F. Cavalli, A. Naimzada, and F. Tramontana. Nonlinear dynamics and global analysis of a heterogeneous
2616
+ Cournot duopoly with a local monopolistic approach versus a gradient rule with endogenous reactivity.
2617
+ Communications in Nonlinear Science and Numerical Simulation, 23(1-3):245–262, 2015.
2618
+ [11] G. E. Collins and H. Hong.
2619
+ Partial cylindrical algebraic decomposition for quantifier elimination.
2620
+ Journal of Symbolic Computation, 12(3):299–328, 1991.
2621
+ [12] A. A. Cournot. Recherches sur les Principes Math´ematiques de la Th´eorie des Richesses. L. Hachette,
2622
+ Paris, 1838.
2623
+ [13] A. A. Elsadany. Dynamics of a Cournot duopoly game with bounded rationality based on relative profit
2624
+ maximization. Applied Mathematics and Computation, 294:253–263, 2017.
2625
+ [14] L. Fanti and L. Gori. The dynamics of a differentiated duopoly with quantity competition. Economic
2626
+ Modelling, 29(2):421–427, 2012.
2627
+ [15] L. Fanti, L. Gori, C. Mammana, and E. Michetti. The dynamics of a Bertrand duopoly with differ-
2628
+ entiated products: synchronization, intermittency and global dynamics. Chaos, Solitons & Fractals,
2629
+ 52:73–86, 2013.
2630
+ [16] F. M. Fisher. The stability of the Cournot oligopoly solution: the effects of speeds of adjustment and
2631
+ increasing marginal costs. The Review of Economic Studies, 28(2):125, 1961.
2632
+ [17] L. Gori and M. Sodini. Price competition in a nonlinear differentiated duopoly. Chaos, Solitons &
2633
+ Fractals, 104:557–567, 2017.
2634
+ [18] E. Jury, L. Stark, and V. Krishnan. Inners and stability of dynamic systems. IEEE Transactions on
2635
+ Systems, Man, and Cybernetics, (10):724–725, 1976.
2636
+ [19] M. Kalkbrener. A generalized Euclidean algorithm for computing triangular representations of algebraic
2637
+ varieties. Journal of Symbolic Computation, 15(2):143–167, 1993.
2638
+ [20] M. Kopel. Simple and complex adjustment dynamics in Cournot duopoly models. Chaos, Solitons &
2639
+ Fractals, 7(12):2031–2048, 1996.
2640
+ [21] B. Li, Q. He, and R. Chen. Neimark-Sacker bifurcation and the generate cases of Kopel oligopoly model
2641
+ with different adjustment speed. Advances in Difference Equations, 2020(1):1–18, 2020.
2642
+ [22] B. Li, H. Liang, L. Shi, and Q. He. Complex dynamics of Kopel model with nonsymmetric response
2643
+ between oligopolists. Chaos, Solitons & Fractals, 156:111860, 2022.
2644
+ [23] X. Li, C. Mou, and D. Wang. Decomposing polynomial sets into simple sets over finite fields: the
2645
+ zero-dimensional case. Computers & Mathematics with Applications, 60(11):2983–2997, 2010.
2646
+ [24] X. Li and L. Su. A heterogeneous duopoly game under an isoelastic demand and diseconomies of scale.
2647
+ Fractal and Fractional, 6(8):459, 2022.
2648
2650
+ [25] X. Li and D. Wang. Computing equilibria of semi-algebraic economies using triangular decomposition
2651
+ and real solution classification. Journal of Mathematical Economics, 54:48–58, 2014.
2652
+ [26] M. C. López and R. A. Naylor. The Cournot–Bertrand profit differential: a reversal result in a differ-
2653
+ entiated duopoly with wage bargaining. European Economic Review, 48(3):681–696, 2004.
2654
+ [27] J. Ma and Z. Guo. The influence of information on the stability of a dynamic Bertrand game. Com-
2655
+ munications in Nonlinear Science and Numerical Simulation, 30(1-3):32–44, 2016.
2656
+ [28] A. Matsumoto, Y. Nonaka, and F. Szidarovszky. Nonlinear dynamics and adjunct profits in two bound-
2657
+ edly rational models of monopoly. Communications in Nonlinear Science and Numerical Simulation,
2658
+ 116:106868, 2022.
2659
+ [29] M. McManus and R. E. Quandt. Comments on the stability of the Cournot oligopoly model. The
2660
+ Review of Economic Studies, 28(2):136–139, 1961.
2661
+ [30] B. Mishra. Algorithmic Algebra. Springer-Verlag, New York, 1993.
2662
+ [31] A. Naimzada and F. Tramontana.
2663
+ Controlling chaos through local knowledge.
2664
+ Chaos, Solitons &
2665
+ Fractals, 42(4):2439–2449, 2009.
2666
+ [32] A. Naimzada and F. Tramontana.
2667
+ Dynamic properties of a Cournot–Bertrand duopoly game with
2668
+ differentiated products. Economic Modelling, 29(4):1436–1439, 2012.
2669
+ [33] T. Puu. Chaos in duopoly pricing. Chaos, Solitons & Fractals, 1(6):573–581, 1991.
2670
+ [34] O. Shy. Industrial Organization: Theory and Applications. MIT Press, Cambridge, 1995.
2671
+ [35] N. Singh and X. Vives. Price and Quantity Competition in a Differentiated Duopoly. The RAND
2672
+ Journal of Economics, 15(4):546–554, 1984.
2673
+ [36] D. Wang.
2674
+ Computing triangular systems and regular systems.
2675
+ Journal of Symbolic Computation,
2676
+ 30(2):221–236, 2000.
2677
+ [37] W.-T. Wu.
2678
+ Basic principles of mechanical theorem proving in elementary geometries.
2679
+ Journal of
2680
+ Automated Reasoning, 2(3):221–252, 1986.
2681
+ [38] L. Yang, X. Hou, and B. Xia. A complete algorithm for automated discovering of a class of inequality-
2682
+ type theorems. Science in China Series F, 44:33–49, 2001.
2683
+ [39] A. Zhang and Y. Zhang. Stability of a Cournot-Nash equilibrium: the multiproduct case. Journal of
2684
+ Mathematical Economics, 26(4):441–462, 1996.
2685
+ [40] J. Zhang, Q. Da, and Y. Wang. The dynamics of Bertrand model with bounded rationality. Chaos,
2686
+ Solitons & Fractals, 39(5):2048–2055, 2009.
2687
9NAzT4oBgHgl3EQfFPpp/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
9dAzT4oBgHgl3EQfSfu4/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c0b925c0d7bf4dd07524bb098a5c92474b8c95ec73230f8ee1638ae13b1a428b
3
+ size 83261
9tE5T4oBgHgl3EQfRQ5h/content/2301.05519v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9152d60b66f7bc20af5b3d81c40da41794dedf27c5c121c494b5e07af978ebe6
3
+ size 4610933
ANFLT4oBgHgl3EQfEi_Y/content/tmp_files/2301.11984v1.pdf.txt ADDED
@@ -0,0 +1,1549 @@
1
+ arXiv:2301.11984v1 [eess.SY] 27 Jan 2023
2
3
+ Dual Control of Exploration and Exploitation for
4
+ Self-Optimisation Control in Uncertain
5
+ Environments
6
+ Zhongguo Li, Member, IEEE, Wen-Hua Chen, Fellow, IEEE, Jun Yang, Fellow, IEEE
7
+ Yunda Yan, Member, IEEE
8
+ Abstract—This paper develops a dual control framework for
9
+ exploration and exploitation (DCEE) to solve a self-optimisation
10
+ problem in unknown and uncertain environment. In general,
11
+ there is a fundamental conflict between tracking an unknown
12
+ optimal operational condition and parameter identification. Dif-
13
+ ferent from existing adaptive control methods, the proposed
14
+ DCEE does not need to introduce additional perturbation signals,
15
+ since it naturally embraces an exploration effect to actively
16
+ probe the uncertain environment to reduce belief uncertainty. An
17
+ ensemble based multi-estimator approach is developed to learn
18
+ the environmental parameters and in the meanwhile quantify the
19
+ estimation uncertainty in real time. The control action is devised
20
+ with dual effects, which not only minimises the tracking error
21
+ between the current state and the believed unknown optimal
22
+ operational condition but also reduces belief uncertainty by
23
+ actively exploring the environment. Formal properties of the
24
+ proposed DCEE framework like convergence are established. A
25
+ numerical example is used to validate the effectiveness of the
26
+ proposed DCEE. Simulation results for maximum power point
27
+ tracking are provided to further demonstrate the potential of
28
+ this new framework in real world applications.
29
+ Index Terms—Dual control, self-optimisation control, active
30
+ learning, exploration and exploitation, adaptation and control.
31
+ I. INTRODUCTION
32
+ Traditionally, adaptive control algorithms are mostly de-
33
+ signed for either regulation problems with known setpoints or
34
+ tracking problems with known reference trajectories. In many
35
+ applications, setpoints or references are usually dependent
36
+ on unknown or changing environment parameters, and thus
37
+ cannot be pre-specified in advance. Operating a system at
38
+ optimal condition is strongly desirable for best profit, pro-
39
+ ductivity or efficiency, but it can be particularly challenging
40
+ in an unknown or changing environment due to the presence
41
+ of uncertainties, disturbances and noises. Typical examples
42
+ include anti-lock braking systems to maintain maximal friction
43
+ under various unknown road surfaces and vehicle conditions
44
+ This work was supported by the UK Engineering and Physical Sciences
45
+ Research Council (EPSRC) Established Career Fellowship “Goal-Oriented
46
+ Control Systems: Disturbance, Uncertainty and Constraints” under the grant
47
+ number EP/T005734/1.
48
+ Z. Li is with Department of Computer Science, University College London,
49
+ London, WC1E 6BT, U.K. (email: [email protected]).
50
+ W.-H. Chen and J. Yang are with Department of Aeronautical and Automo-
51
+ tive Engineering, Loughborough University, Loughborough, LE11 3TU, U.K.
52
53
+ Y. Yan is with School of Engineering and Sustainable Development, De Montfort University, Leicester, LE1 9BH, U.K. (email:
72
73
+ [1], maximum power point tracking to continuously deliver the
74
+ highest possible power to the load in presence of variations in
75
+ environments [2], [3].
76
+ As a classic control problem with a wide range of applica-
77
+ tions, early solution for static optimal operation can be traced
78
+ as far back as 1922 [4], [5]. It was popular in 1950s and 1960s,
79
+ and regained significant attention since 2000s due to a solid
80
+ theoretical foundation established for the stability and per-
81
+ formance in [6], [7]. Several approaches have been proposed
82
+ under different names including self-optimisation control [8],
83
+ extremum seeking control [6], [9] and hill-climbing systems
84
+ [10]. The goal of self-optimisation control is to keep the
85
+ system operating at a setpoint that optimises a performance
86
+ function dependent upon unknown or changing environment
87
+ parameters, despite uncertainties, disturbances and noises.
88
+ Since the optimal operation is unknown and possibly changes
89
+ during the operation, a control system must be able to adapt
90
+ to unknown or changing environments, for example, by means
91
+ of learning, adaptation and action through limited interactions
92
+ between the system and its operational environment. Then,
93
+ the control system devises possible strategies to track the
94
+ estimated setpoints or references based on its perceived en-
95
+ vironment knowledge and the level of confidence.
96
+ Generally speaking, there are dual objectives in a self-
97
+ optimisation control problem in an unknown and uncertain
98
+ environment: parameter identification and optimality tracking.
99
+ Quite often, the dual objectives are conflicting in the sense
100
+ that new observations do not provide sufficient information
101
+ for identifying the unknown parameters when the system state
102
+ settles to some local optimal solutions. This phenomenon
103
+ widely exists in adaptive extremum seeking when an extreme
104
+ searching algorithm converges to its local optimal solution, the
105
+ identifiability will naturally loss due to the lack of persistent
106
+ excitation (PE). As a trade-off, dither perturbations are intro-
107
+ duced on purpose to sustain the identifiability, but such dithers
108
+ inevitably deteriorate the tracking performance. Various ap-
109
+ proaches have been proposed to design the dither signals, e.g.,
110
+ sinusoidal perturbations [6], [11], stochastic perturbations [12],
111
+ [13] and decaying perturbations [14]. However, they are usu-
112
+ ally pre-specified, and thereby cannot make online adjustments
113
+ according to real-time inference performance. In other words,
114
+ active learning cannot be embedded, that is, actively generate
115
+ data for the purpose of learning.
116
+ This paper proposes a new approach to self-optimisation
117
+ control by embedding active learning from a new perspective:
118
+
119
+ 2
120
+ dual control of exploration and exploitation (DCEE). DCEE
121
+ was originally proposed in [15] for autonomous search of
122
+ sources of atmospheric release where the source location
123
+ and other environmental factors are unknown. To realise
124
+ autonomous search, it proposes each move of the robotic
125
+ agent shall have dual effects: driving the agent towards the
126
+ believed location of the source (exploitation) and probing the
127
+ environment to reduce the level of uncertainty of the current
128
+ belief (exploration). An optimal autonomous search strategy is
129
+ realised by optimally trading-off these two effects. We argue
130
+ in this paper that DCEE is actually applicable to a much wider
131
+ range of systems that operate in an unknown or uncertain
132
+ environment without well-defined control specifications (e.g.
133
+ the reward or cost functions are unknown). We present a new
134
+ self-optimisation control framework by extending DCEE from
135
+ a specific autonomous search application to a general design
136
+ approach for achieving or maintaining optimal operation in an
137
+ unknown environment.
138
+ The contribution of this paper is of twofold. On one side,
139
+ for self-optimisation control problems, we propose a new
140
+ and systematic framework which is able to actively probe
141
+ the environment to reduce the level of uncertainty through
142
+ active learning. There is no need to artificially introduce
143
+ perturbation as in the current extremum seeking control. It
144
+ also provides an optimal transition from any initial operation
145
+ condition to acquire the unknown optimal operation condition
146
+ in terms of a reformulated objective conditional upon current
147
+ knowledge and future predicted information. By formulating
148
+ the self-optimisation control in this framework, it enables to
149
+ establish proven properties by getting access to a wide range of
150
+ theoretic tools in control theory such as parameter adaptation
151
+ and optimal control. On the other side, we generalise and
152
+ extend the DCEE concept from a specific application, where
153
+ specific system dynamics, reward function and properties are
154
+ considered, to a general control system problem. A systematic
155
+ design procedure for general descriptions of the system and
156
+ control objectives is presented. We show that DCEE provides a
157
+ powerful and promising framework to design control systems
158
+ operating in an uncertain environment, which is an important
159
+ feature of autonomous systems.
160
+ Compared
161
+ with
162
+ all
163
+ the
164
+ existing
165
+ schemes
166
+ for
167
+ self-
168
+ optimisation control, our approach is most related to the work
169
+ where the model based approach is adopted and the uncertainty
170
+ of the objective or system dynamics are parameterised by
171
+ uncertain parameters [6], [8], [9], [16], [17]. There are three
172
+ main features in the new DCEE based self-optimisation control
173
+ framework, detailed as follows.
174
+ 1) It is developed through an optimal control approach,
175
+ which is able to achieve best transition from any admis-
176
+ sible initial operation condition to the optimal operation
177
+ condition in terms of a reformulated objective.
178
+ 2) It embeds an active learning effect allowing the system
179
+ to actively explore the unknown environment to reduce
180
+ the level of uncertainty. Instead of using computationally
181
+ expensive particle filters in information-driven methods,
182
+ this paper develops an efficient multi-estimator based en-
183
+ semble approach to quantify the estimation uncertainty
184
+ online, based on which the controller effectively trades
185
+ off between exploration and exploitation to balance the
186
+ dual objectives of identification and tracking.
187
+ 3) Different from all the existing schemes where probing
188
+ effect is artificially introduced or inserted (usually by
189
+ means of dithers and perturbations), the probing effect
190
+ naturally occurs depending on the confidence of the es-
191
+ timation by assembling the outcomes of these individual
192
+ estimators.
193
+ The rest of this paper is organised as follows. In Sec-
194
+ tion II, we formulate the self-optimisation control problem and
195
+ demonstrate the dual effects embedded in the new formulation.
196
+ In Section III, an active learning based ensemble approach is
197
+ developed for unknown environment acquisition and then a
198
+ dual controller for exploration and exploitation is designed
199
+ to achieve optimal trade-off between parameter identification
200
+ and optimality tracking for a special single integrator system.
201
+ Section IV extends DCEE to general linear systems and formal
202
+ properties of the proposed self-optimisation control method are
203
+ established. Section V demonstrates the effectiveness of the
204
+ proposed algorithm using a numerical example. Section VI
205
+ formulates maximum power point tracking (MPPT) problem
206
+ as a self-optimisation control problem and compares the pro-
207
+ posed algorithm with other existing approaches. Section VII
208
+ concludes this paper.
209
+ II. PROBLEM STATEMENT
210
+ In this section, we elaborate the dual effects embedded
211
+ in the reformulated self-optimisation control problem. Then,
212
+ an ensemble active learning based approach is introduced to
213
+ realise efficient parameter adaptation and assess the estimation
214
+ performance.
215
+ A. Dual Control Reformulation
216
+ Consider a reward function for a system operating in an
217
+ unknown environment
218
+ J(θ∗, y) = φT(y)θ∗
219
+ (1)
220
+ where θ∗ ⊂ Rm is unknown, depending on the operational
221
+ environment, y ∈ Rq is the system output, and φ(y) ∈ Rm
222
+ is the basis function of the reward function. In other words,
223
+ the reward function is parameterised by unknown θ∗. Without
224
+ loss of generality, it is assumed the the optimal condition is
225
+ achieved at the maximum of J. A self-optimisation control
226
+ is designed to automatically drive the system to the unknown
227
+ operational condition, maintain there despite disturbances and
228
+ automatically adjust the optimal operation condition accord-
229
+ ingly when the operational environment changes.
230
+ The system dynamics under concern are described by
231
+ x(k + 1) = Ax(k) + Bu(k)
232
+ y(k) = Cx(k)
233
+ (2)
234
+ where x(k) ∈ Rn, u(k) ∈ Rp and y(k) ∈ Rq are system state,
235
+ control input and output, respectively, and A ∈ Rn×n, B ∈
236
+ Rn×p, C ∈ Rq×n are constant matrices. Suppose that at each
237
+ time, the system output and the reward J(k) can be measured
238
+ or derived subject to measurement noise v(k). We have
239
+ z(k) = [x(k); y(k); J(k) + v(k)]
240
+ (3)
241
+
242
+ 3
243
+ and the information state is denoted as
244
+ Ik = [u(k − 1); z(k)]
245
+ (4)
246
+ All the measurement up to the current time k is given by
247
+ Ik = [I0, I1, . . . , Ik]
248
+ (5)
249
+ with I0 = [z(0)].
250
+ There are two ways to formulate this problem using the dual
251
+ control for exploration and exploitation (DCEE) concept. The
252
+ first approach is similar to extremum seeking control [9], [16]
253
+ aiming to select the control such that the reward function is
254
+ maximised with all the information up to now including the
255
+ prior and all the measurements
256
+ max
257
+ u(k)∈Rp Eθ,Ik+1|k{J(θ, y(k + 1|k))|Ik+1|k}
258
+ (6)
259
+ subject to the system dynamics (2), where Ik+1|k
260
+ =
261
+ [Ik, Ik+1|k] with Ik+1|k = [u(k), z(k + 1|k)]. z(k + 1|k)
262
+ consists of the predicted output y(k + 1|k) and the predicted
263
+ reward function under the control u(k).
264
Another approach is to drive the system output to the unknown optimal condition directly, which is closer to classic self-optimisation control [8]. Since the optimal operational condition is unknown, the best one can do is to drive the system to the best estimate of the optimal operational condition given all the information available so far. This can be formulated as

min_{u(k) ∈ R^p}  E { (y(k + 1|k) − r*)^T (y(k + 1|k) − r*) | I^{k+1|k} }     (7)

where r* = l(θ*) denotes the predicted optimal operational condition conditional upon I^{k+1|k}, which is a function of the environment parameter θ*. In the realm of self-optimisation control, it is often required that the mapping l(θ) is a smooth function of θ and that r* = l(θ*) is the unique optimum of the objective function [6].
279
These two problems have been solved previously in autonomous search [15], [18]. The research question is how to extend these results from this specific application to general self-optimisation control problems. In this paper, we focus on the latter formulation in (7), which is related to the operational condition determined by unknown environment parameters.

Before proceeding further, we demonstrate that the control input u(k) obtained by minimising (7) naturally carries dual effects, corresponding to exploration and exploitation, respectively. Intuitively, the control input u(k) influences the future system output y(k + 1|k) via the system dynamics (2), and at the same time affects the future information to be collected, I_{k+1|k}, via the reward function (1) from the environment, subject to uncertainties.
294
We define the predicted nominal operational condition as

r̄(k + 1|k) = E { r*(k + 1|k) | I^{k+1|k} }                       (8)

based on which the prediction error conditional on I^{k+1|k} can be written as

r̃(k + 1|k) = r*(k + 1|k) − r̄(k + 1|k).                          (9)
304
Expanding (7) and substituting (8) and (9) into (7), we have

E { ∥y(k + 1|k) − r̄(k + 1|k) − r̃(k + 1|k)∥² | I^{k+1|k} }
  = E { ∥y(k + 1|k) − r̄(k + 1|k)∥² | I^{k+1|k} }
    − 2 E { (y(k + 1|k) − r̄(k + 1|k))^T r̃(k + 1|k) | I^{k+1|k} }
    + E { ∥r̃(k + 1|k)∥² | I^{k+1|k} }.                           (10)

It follows from the definition of r̃(k + 1|k) in (9) that E { r̃(k + 1|k) | I^{k+1|k} } = 0. Thus, by further noting that y(k + 1|k) and r̄(k + 1|k) are deterministic, the cross term in (10) equals zero, yielding

D(u(k)) := E { ∥y(k + 1|k) − r̄(k + 1|k)∥² | I^{k+1|k} }
           + E { ∥r̃(k + 1|k)∥² | I^{k+1|k} }.                    (11)
341
Remark 1: The objective function in (11) exhibits dual effects. Minimising the first term in (11) drives the system output to the estimated nominal value, which corresponds to the exploitation effect. In control terminology, this can be understood as tracking a nominal reference, and it is thus also referred to as optimality tracking. The second term characterises the level of uncertainty (variance) associated with the predicted optimal operational condition, which is related to the exploration effect. According to the classic dual control concept [19], [20], a control input is said to have dual effects if it can affect at least one rth-order central moment of a state variable (r > 1), in addition to its effect on the state. In fact, the dual control framework developed in this paper generalises the classic one [19] in the sense that our formulation deals with not only system uncertainty but also environment uncertainty (the operational condition r* = l(θ*) is determined by the environment parameters θ*). This subtle difference endows the system with the capability of exploring the operational environment while exploiting its current belief. Recently, DCEE has demonstrated superior and promising performance in autonomous search [15], [21].
362
Remark 2: According to [22], the level of autonomy can be measured in terms of the set of goals that the system is able to accomplish subject to a set of uncertainties. As a result, the system is required to exploit its available knowledge to accomplish the goals and, at the same time, to actively explore the operational environment to reduce knowledge uncertainty. Effectively trading off exploration against exploitation has been a long-standing issue, particularly in artificial intelligence, control and decision-making in complex and uncertain environments. In the control community, some recent works explicitly introduce trade-off coefficients to incorporate exploration terms into model predictive control problems, e.g., [17], [23]. This inevitably incurs tedious effort in tuning the coefficients to balance exploration and exploitation. In view of the derivation of (11), it is clear that the dual effects in DCEE are naturally embedded, since they are derived from a physically meaningful value function in (7).
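The decomposition leading to (11) is an instance of the bias-variance identity: when r̄ is the conditional mean of r*, the expected squared tracking error splits exactly into a tracking term and a variance term. The short Monte Carlo check below illustrates this; the belief distribution and the candidate output are illustrative numbers, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples standing in for the posterior belief over r* given I_{k+1|k}
# (distribution and candidate output chosen purely for illustration).
r_star = rng.normal(loc=1.0, scale=0.3, size=100_000)
y_pred = 0.4  # a candidate predicted output y(k+1|k)

total = np.mean((y_pred - r_star) ** 2)      # E||y - r*||^2, as in (7)
exploit = (y_pred - r_star.mean()) ** 2      # ||y - r_bar||^2, tracking term
explore = r_star.var()                       # E||r_tilde||^2, variance term

# The cross term vanishes because E[r_tilde] = 0, so the two parts add up
assert abs(total - (exploit + explore)) < 1e-9
```

Whatever y_pred is chosen, the split holds exactly, which is why minimising (7) automatically trades tracking against uncertainty reduction.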
379
+ B. Ensemble based Active Learning
380
Efficient gradient-descent algorithms can be used to estimate the unknown parameters. However, the performance of a single-estimator based optimisation algorithm is often poor, due to noisy measurements and nonlinear modelling (see examples in autonomous search [18], [24]). Recently, ensemble-based approximation in the machine learning community has demonstrated great success with tractable computational load [25], [26]. In this paper, we develop a multi-estimator based learning method for parameter adaptation, which shows performance comparable to a particle filter while using far less computational resource in the autonomous search application [18].
393
Considering an ensemble of N estimators, the dual formulation in (11) becomes

min_{u(k) ∈ R^p}  D(u) = ∥y(k + 1|k) − r̄(k + 1|k)∥² + P(k + 1|k)
subject to  x(k + 1|k) = A x(k) + B u(k)
            y(k + 1|k) = C x(k + 1|k)                             (12)

where the nominal estimate and the variance of the estimated optimal condition are drawn from the ensemble, i.e.,

r̄(k + 1|k) = (1/N) Σ_{i=1}^{N} r_i(k + 1|k) = (1/N) Σ_{i=1}^{N} l(θ_i(k + 1|k))   (13)

P(k + 1|k) = (1/N) Σ_{i=1}^{N} (r_i(k + 1|k) − r̄(k + 1|k))^T (r_i(k + 1|k) − r̄(k + 1|k))   (14)
422
where the subscript i ∈ N denotes the index of the estimator, with N representing the set of the ensemble. Note that the relationship between the predicted optimal condition and the unknown parameter, i.e., r_i(k + 1|k) = l(θ_i(k + 1|k)), is usually known. For example, in the autonomous search application, θ* comprises the unknown source location and other environment parameters, such as wind direction and wind speed. The optimal operational condition r* in autonomous search is the source location, i.e., part of θ*, which serves as a tracking reference for the search agent.
434
In order to estimate the unknown parameter θ*, we apply a gradient-descent regression method [27], designed as

θ_i(k) = θ_i(k − 1) − η_i φ(y(k − 1)) [ φ(y(k − 1))^T θ_i(k − 1) − J(k − 1) ],  ∀i ∈ N   (15)
443
where η_i > 0 is the learning rate of the ith estimator, J(k − 1) denotes the reward observed at y(k − 1) with the measurement noise in (3), and θ_i(k) denotes the estimate of the unknown reward parameter θ*. The estimators are randomly initialised, or they can be initialised according to a priori pdfs of the unknown parameters if available. Denote the estimation error as θ̃_i(k) = θ_i(k) − θ*. Then, by noting J(k − 1) = φ(y(k − 1))^T θ* + v(k − 1), we have

θ̃_i(k) = [ I_m − η_i φ(y(k − 1)) φ(y(k − 1))^T ] θ̃_i(k − 1) − η_i φ(y(k − 1)) v(k − 1),  ∀i ∈ N.   (16)
456
Denoting the extended parameter error as Θ̃(k) = col{θ̃_1(k), . . . , θ̃_N(k)}, where col{·} denotes a column vector formed by stacking the elements on top of each other, (16) can be written in the compact form

Θ̃(k) = [ I_N ⊗ ( I_m − η_i φ(y(k − 1)) φ(y(k − 1))^T ) ] Θ̃(k − 1)
        − [ I_N ⊗ η_i φ(y(k − 1)) ] ( 1_N ⊗ v(k − 1) ).           (17)
478
In ensemble-based adaptation, we take the average of the estimators as the current estimate of the unknown parameters. Thus, averaging (17), we have

Θ̃_av(k) = (1/N) (1_N^T ⊗ I_m) Θ̃(k)
         = (1/N) (1_N^T ⊗ I_m) [ I_N ⊗ ( I_m − η φ φ^T ) ] Θ̃(k − 1)
           − (1/N) (1_N^T ⊗ I_m) ( I_N ⊗ η φ ) ( 1_N ⊗ v(k − 1) )
         = (1/N) (1_N^T ⊗ I_m) Θ̃(k − 1) − (1/N) (1_N^T ⊗ η φ φ^T) Θ̃(k − 1)
           − η φ v(k − 1).                                        (18)
504
Remark 3: An important observation is that even though all estimators share the same regressor φ at a given time instant, its excitation impact on each estimator differs, since φφ^T θ̃_i ≠ φφ^T θ̃_j, ∀i ≠ j, almost surely. Thanks to the parameter extension provided by the multiple estimators, at any time instant the average estimate can always be excited when there are sufficiently many estimators. In addition, by introducing a group of estimators, it is possible to evaluate and make full use of the estimation uncertainty by sampling the outcomes of the ensemble in an online manner, which proves crucial in DCEE [15], as discussed in the sequel. Another desirable feature of the ensemble approach is its resilience to measurement noise: in view of the last term in (18), instantaneous noise is averaged out across the estimators, so that the overall performance of the ensemble is improved.
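As a concrete illustration of the per-estimator update (15) and the averaging in (18), the following sketch runs an ensemble of gradient-descent estimators on a scalar-output example with a linear-in-parameter reward J = φ(y)^T θ* + v. The regressor shape, noise level, ensemble size and learning rates are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative ground truth and regressor, mirroring the linear-in-theta
# reward (1): J(k) = phi(y(k))^T theta* + v(k).
theta_star = np.array([1.0, 1.0])
phi = lambda y: np.array([2.0 * y, -y ** 2])

N = 50                                          # ensemble size
thetas = rng.uniform(0.0, 5.0, size=(N, 2))     # random initialisation
etas = rng.uniform(0.002, 0.01, size=N)         # per-estimator learning rates

for k in range(5000):
    y = rng.uniform(-2.0, 2.0)                  # exciting output trajectory
    J = phi(y) @ theta_star + 0.1 * rng.normal()  # noisy reward measurement
    # gradient-descent update (15) applied to every estimator at once
    errs = thetas @ phi(y) - J                  # phi^T theta_i - J, for all i
    thetas -= etas[:, None] * errs[:, None] * phi(y)[None, :]

theta_bar = thetas.mean(axis=0)                 # ensemble mean, as in (18)
spread = thetas.std(axis=0)                     # sampled estimation uncertainty
```

With a persistently exciting trajectory (Assumption 1) the ensemble mean settles near θ*, while the spread provides the online uncertainty measure that feeds the exploration term.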
519
+ III. DCEE FOR SINGLE INTEGRATOR
520
+ A. Algorithm Development
521
+ In high-level decision-making, system behaviours are usu-
522
+ ally simplified as single integrators by ignoring low-level
523
+ dynamics. In this paper, we begin with DCEE for this special
524
+ case
525
+ y(k + 1) = y(k) + u(k).
526
+ (19)
527
+ For general linear systems, we will use this as an internal
528
+ reference generator, as will be shown later in Section IV.
529
With the estimated environment parameter from (15), the dual controller can be designed as

y(k + 1) = y(k) + u(k)
u(k) = −δ_k [ ∇_y C(k + 1|k) + ∇_y P(k + 1|k) ]                   (20)
537
where C(k + 1|k) = ∥y(k) − r̄(k + 1|k)∥² denotes the exploitation term and P(k + 1|k) is the exploration term in the dual objective (12). To obtain the future mean and covariance, we utilise the classic principles of the extended Kalman filter.
541
According to the gradient-descent regression in (15), the predicted mean of the N-estimator ensemble θ_i(k + 1|k), denoted θ̄(k + 1|k), is given by

θ̄(k + 1|k) = (1/N) Σ_{i=1}^{N} θ_i(k + 1|k) = (1/N) Σ_{i=1}^{N} ( 1_m − η_i F_i(k + 1|k) )^T θ_i(k)   (21)

where

F_i(k + 1|k) = [ J(θ_i(k), y) − J(k + 1|k) ] φ(y)                 (22)
562
with J(k + 1|k) being the predicted future reward based on the current belief {θ_i(k), ∀i ∈ N}. Note that the predicted future reward is noise-free, as prediction involves no sensory devices. In this paper, we use the average of θ_i(k), ∀i ∈ N, to evaluate the predicted future reward. Similarly, the predicted variance of the ensemble is given by

P(k + 1|k) = trace( F(k + 1|k)^T P(k|k) F(k + 1|k) )              (23)

where

F(k + 1|k) = col{ F_1(k + 1|k), . . . , F_N(k + 1|k) }
P(k|k) = cov{ θ_i(k), ∀i ∈ N }
       = diag{ (θ_1(k) − θ̄(k))(θ_1(k) − θ̄(k))^T, . . . ,
               (θ_N(k) − θ̄(k))(θ_N(k) − θ̄(k))^T }                (24)

where cov{·} is a covariance operator evaluating the covariance matrix of the ensemble, and diag{·} denotes a block-diagonal matrix formed by placing the elements on its main diagonal. Using the predicted mean θ̄(k + 1|k) and the predicted covariance P(k + 1|k) of the unknown environmental parameter, the dual control terms in (20) can be obtained via the mapping between the operational condition and the unknown environmental parameter, i.e., r = l(θ).
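The pieces of this subsection can be assembled into a one-dimensional sketch of the dual controller (20) on the integrator (19): at each step the gradient of the predicted cost (12) is taken by finite differences through the predicted ensemble update, and the estimators are then refreshed with a new noisy measurement via (15). All numbers here are illustrative; the reward J = 2y − θy² with θ* = 1 anticipates the example of Section V, with the known term 2y subtracted so that θ is scalar and l(θ) = 1/θ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative scalar setup: J = 2y - theta*y^2 with theta* = 1, so the
# optimum is r* = l(theta*) = 1/theta* = 1. Subtracting the known part 2y
# leaves the scalar regression s = phi(y)*theta + v with phi(y) = -y^2.
theta_star, noise_std = 1.0, 0.1
phi = lambda y: -y ** 2

N, eta, delta = 20, 0.01, 0.2
thetas = rng.uniform(0.5, 3.0, size=N)          # random positive initialisation

def dual_cost(u, y, thetas):
    """Predicted cost D(u) in (12): exploitation term plus ensemble variance."""
    y_next = y + u                               # integrator prediction (19)
    # predicted update (15) using the noise-free reward from the current belief
    pred = thetas - eta * phi(y_next) * (phi(y_next) * thetas
                                         - phi(y_next) * thetas.mean())
    r = 1.0 / pred                               # r_i = l(theta_i)
    return (y_next - r.mean()) ** 2 + r.var()

y, eps = 0.2, 1e-4
for k in range(4000):
    # dual control (20): descend the predicted cost (finite-difference gradient)
    g = (dual_cost(eps, y, thetas) - dual_cost(-eps, y, thetas)) / (2 * eps)
    y -= delta * g
    # measure the reward at the new output and update the ensemble via (15)
    s = phi(y) * theta_star + noise_std * rng.normal()
    thetas -= eta * phi(y) * (phi(y) * thetas - s)
```

As the ensemble concentrates, the variance term vanishes and y settles near the unknown optimum r* = 1, which is the behaviour Theorems 1 and 2 formalise.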
584
+ B. Convergence Analysis
585
In this section, we examine the convergence of the proposed dual control algorithm by leveraging parameter adaptation and optimisation techniques. To this end, we introduce some fundamental assumptions that facilitate the convergence analysis.

Assumption 1: There exist positive constants T ∈ Z+ and β > 0 such that

Σ_{k=t}^{t+T} φ(y(k)) φ(y(k))^T ≥ β I_m > 0,  ∀t > 0.             (25)
598
Assumption 2: The measurement noise v(k) is independent and identically distributed with zero mean and bounded variance, i.e.,

E[v(k)] = 0,   E[ ∥v(k)∥² ] ≤ ϱ².                                 (26)

Assumption 3: The reward function J(θ, y) is twice differentiable and strictly concave in y for any θ ∈ R^m, that is,

∂²J(θ, y) / ∂y² < 0.                                              (27)
613
Remark 4: Assumption 1 is a standard persistent excitation (PE) condition to ensure the identifiability of the unknown environmental parameter θ. Extensive parameter adaptation techniques have been reported over the past few decades, aimed at relaxing or fulfilling PE conditions [9], [27]. If we introduce a memory-based regressor extension to the parameter adaptation algorithm in (15), the PE condition can be relaxed to interval excitation [27]. Assumption 2 implies that the noise imposed on sensory information is unbiased with bounded variance. Assumption 3 guarantees the existence and uniqueness of the optimal operational condition, i.e., r* = l(θ*), and is widely used in adaptive self-optimisation and extremum seeking control [9], [28]. Note that the mapping between the optimal operational condition and the parameter θ can be obtained by solving ∂J(θ, y)/∂y = 0.
630
First, we examine the convergence of the gradient-descent regression method in (15).

Theorem 1: Under Assumptions 1 and 2, there exists a constant η* > 0 such that, for any 0 < η_i < η*, the estimates θ̂_i(k), ∀i ∈ N, converge to a bounded neighbourhood of the true environmental parameter θ*. Moreover, the mean-square error of each estimator is convergent and bounded by

E ∥θ̃_i(k)∥² ≤ η_i² L² ϱ² / ( 1 − max_{j∈{1,...,k−1}} ∥A_i(j)∥ )   (28)

where A_i(j) = I_m − η_i φ(y(j)) φ(y(j))^T and L denotes the bound of the regressor φ. Moreover, in the absence of measurement noise, lim_{k→∞} E ∥θ̃_i(k)∥² = 0.
645
Proof: In view of (16) and Assumption 2, the expectation of the estimation error satisfies

E[θ̃_i(k)] = [ I_m − η_i φ(y(k − 1)) φ(y(k − 1))^T ] E[θ̃_i(k − 1)],  ∀i ∈ N.   (29)

According to Assumption 1, there exists a constant η* such that, for any 0 < η_i < η*, 0 < η_i φ(y(k − 1)) φ(y(k − 1))^T < I_m. Consequently, for any 0 < η_i < η*, we have

0 < I_m − η_i φ(y(k − 1)) φ(y(k − 1))^T < I_m.                    (30)

It follows from (29) that

∥E[θ̃_i(k)]∥ ≤ ∥ I_m − η_i φ(y(k − 1)) φ(y(k − 1))^T ∥ ∥E[θ̃_i(k − 1)]∥,  ∀i ∈ N.   (31)

Therefore,

∥E[θ̃_i(k)]∥ ≤ Π_{j=1}^{k} ∥ I_m − η_i φ(y(j − 1)) φ(y(j − 1))^T ∥ ∥E[θ̃_i(0)]∥,  ∀i ∈ N.   (32)

For any bounded initial error θ̃_i(0), the expectation of the estimation error converges to zero.
673
675
Moreover, the variance of the estimators can be bounded under Assumption 2. Taking the squared Euclidean norm of (16) yields

∥θ̃_i(k)∥² = ∥ [ I_m − η_i φ(y(k − 1)) φ(y(k − 1))^T ] θ̃_i(k − 1) ∥²
            + ∥ η_i φ(y(k − 1)) v(k − 1) ∥²
            − 2 [ ( I_m − η_i φ(y(k − 1)) φ(y(k − 1))^T ) θ̃_i(k − 1) ]^T
                [ η_i φ(y(k − 1)) v(k − 1) ],  ∀i ∈ N.            (33)
690
Applying the expectation operator to (33) leads to

E ∥θ̃_i(k)∥² = E ∥ [ I_m − η_i φ(y(k − 1)) φ(y(k − 1))^T ] θ̃_i(k − 1) ∥²
              + E ∥ η_i φ(y(k − 1)) v(k − 1) ∥²,  ∀i ∈ N,         (34)

where E[v(k − 1)] = 0 has been used to eliminate the cross term. Denoting A_i(k − 1) = I_m − η_i φ(y(k − 1)) φ(y(k − 1))^T and applying the variance bound in (26), we have

E ∥θ̃_i(k)∥² ≤ E ∥θ̃_i(k − 1)∥²_{A_i(k−1)} + η_i² L² ϱ².          (35)

For any 0 < η_i < η*, the mean-square error of the estimator is convergent and bounded by

E ∥θ̃_i(k)∥² ≤ η_i² L² ϱ² / ( 1 − max_{j∈{1,...,k−1}} ∥A_i(j)∥ ).   (36)

In the absence of measurement noise, v(k) = 0 and lim_{k→∞} E ∥θ̃_i(k)∥² = 0. This completes the proof.
718
+
719
Remark 5: Theorem 1 establishes the convergence of the estimators under mild assumptions on the measurement noise and persistent excitation. The parameter adaptation algorithm, together with its convergence analysis under measurement noise, forms a new feature of this paper, since existing studies mainly focus on noise-free scenarios [27], [29]. As discussed in Remark 4, PE is a standard and commonly used condition to guarantee the convergence of parameter estimators. Although significant research efforts have been dedicated to exploring weaker or alternative assumptions, very few results have been obtained (see the recent survey in [27]). In the proposed dual controller (20), a probing effort is inherently embedded, aiming to reduce the estimation uncertainty. Such an exploration effect from active learning is beneficial to environment acquisition, which has been validated in the autonomous search application [15], [18].
735
Remark 6: The proposed multi-estimator ensemble method for environment adaptation is a hybrid approach that combines model-based and model-free techniques. The model-based estimators are trained according to the model structure of the reward function in (1). A model-free ensemble approximation is used to estimate the mean and variance of the unknown environmental parameters in an online manner. It is widely perceived in the machine learning community that model-based approaches benefit from high learning efficiency, due to the utilisation of model knowledge, but inevitably inherit model-bias errors; on the other hand, model-free approaches provide a reliable way to quantify the level of estimation uncertainty but may incur additional computational burden. Recently, hybrid methods have demonstrated superior performance in machine learning simulations and experiments, owing to the combined strengths of model-based and model-free learning [25], [26]. Theoretical guarantees on the convergence and performance of such hybrid approaches have not been well established, being mainly verified by extensive simulation and experimental results. Inspired by this recent success, we develop a concurrent active-learning based ensemble algorithm and establish its formal properties in this paper.
757
Denote the tracking error between the current output and the unknown optimal condition r* as ỹ(k) = y(k) − r*. Then, it follows from (20) that

ỹ(k + 1) = ỹ(k) − δ_k [ ∇_y C(k + 1|k) + ∇_y P(k + 1|k) ].       (37)
766
Now, we analyse the convergence to the optimal operational condition.

Theorem 2: Under Assumptions 1-3, for any 0 < η_i < η*, y converges to a bounded neighbourhood of the optimal operational condition r* = l(θ*) if there exists a step size δ_k such that 0 < 2∥I_n − δ_k L(k)∥² < 1, where L(k) = ∫_0^1 ∇_y² C(r* + τ ỹ(k)) dτ.
775
Proof: To relate the gradient term ∇_y C(k + 1|k) to ỹ(k), we recall the mean value theorem [30]: for a twice-differentiable function h(y) : R^m → R,

∇h(y₁) = ∇h(y₂) + [ ∫_0^1 ∇²h( y₂ + τ(y₁ − y₂) ) dτ ] (y₁ − y₂),  ∀y₁, y₂ ∈ R^m.   (38)

Thus, we have

∇_y C(y(k)) = ∇_y C(r*) + [ ∫_0^1 ∇_y² C(r* + τ ỹ(k)) dτ ] ỹ(k)   (39)

where the time stamps in C(k + 1|k) have been dropped for notational convenience. Denoting L(k) = ∫_0^1 ∇_y² C(r* + τ ỹ(k)) dτ and applying ∇_y C(r*) = 0, we have

∇_y C(y(k)) = L(k) ỹ(k).                                          (40)
803
Applying (40) to (37) results in

ỹ(k + 1) = [ I_n − δ_k L(k) ] ỹ(k) − δ_k ∇_y P(k + 1|k).         (41)

To examine the boundedness of the tracking error, we take the squared Euclidean norm of both sides of (41), yielding

∥ỹ(k + 1)∥² = ∥ [ I_n − δ_k L(k) ] ỹ(k) ∥² + ∥ δ_k ∇_y P(k + 1|k) ∥²
              − 2 δ_k [ ( I_n − δ_k L(k) ) ỹ(k) ]^T ∇_y P(k + 1|k).   (42)

Taking the expectation of (42) leads to

E ∥ỹ(k + 1)∥² ≤ ∥ I_n − δ_k L(k) ∥² E ∥ỹ(k)∥² + E ∥ δ_k ∇_y P(k + 1|k) ∥²
               + E[ −2 δ_k ∇_y^T P(k + 1|k) ( I_n − δ_k L(k) ) ỹ(k) ].   (43)

The last term in (43) can be bounded as

E[ −2 δ_k ∇_y^T P(k + 1|k) ( I_n − δ_k L(k) ) ỹ(k) ]
  ≤ ∥ I_n − δ_k L(k) ∥² E ∥ỹ(k)∥² + E ∥ δ_k ∇_y P(k + 1|k) ∥².    (44)
823
825
Therefore, substituting (44) into (43) results in

E ∥ỹ(k + 1)∥² ≤ 2 ∥ I_n − δ_k L(k) ∥² E ∥ỹ(k)∥² + 2 E ∥ δ_k ∇_y P(k + 1|k) ∥².   (45)

From Theorem 1, the estimation errors are bounded:

E ∥θ̃_i(k)∥² ≤ max{ ∥θ̃_i(0)∥², η_i² L² ϱ² / ( 1 − max_{j∈{1,...,k−1}} ρ(A_i(j)) ) }.   (46)

As a result, 0 ≤ E ∥δ_k ∇_y P(k + 1|k)∥² ≤ µ is upper bounded, since it is a measure of the covariance of the bounded estimators. Consequently, we have

E ∥ỹ(k + 1)∥² ≤ 2 ∥ I_n − δ_k L(k) ∥² E ∥ỹ(k)∥² + µ.             (47)
844
If there exists a step size δ_k such that 0 < 2∥I_n − δ_k L(k)∥² < 1, then the expected mean square of the tracking error is convergent. Recursively iterating (47) gives

E ∥ỹ(k + 1)∥² ≤ ᾱ^k E ∥ỹ(0)∥² + Σ_{j=0}^{k−1} ᾱ^j µ             (48)

where ᾱ := max_{j∈{1,...,k}} α_j with 0 < α_j := 2∥I_n − δ_j L(j)∥² < 1. Since lim_{k→∞} ᾱ^k E ∥ỹ(0)∥² = 0, we have

lim_{k→∞} E ∥y(k) − r*∥² ≤ µ / (1 − ᾱ).                          (49)

This completes the proof.
861
+
862
Remark 7: In general, traditional adaptive control can be regarded as passive learning [9], [17], where parameter estimators are updated by incidentally collected data samples. For example, MPC in autonomous search is targeted at navigating the agent to the source position, and during this pure exploitation process the estimators are updated passively by incidentally collected concentration measurements from the environment [15], [31]. Recently, a wide range of engineering problems have involved balancing exploration against exploitation, e.g., in machine learning, control and decision-making in uncertain environments [20], [32]-[34]. In the control community, related works usually focus on stochastic model predictive control with active learning [17]; a similar concept is referred to as active reinforcement learning in artificial intelligence [34], [35]. Nevertheless, there is a critical distinction between previous works and the proposed DCEE framework for self-optimisation control. In existing dual control formulations, the probing effect is introduced to learn the system states or parameters (e.g., MPC with active learning [36] and active adaptive control [37], [38]), whereas in our formulation the probing effect is used to actively explore the operational environment. We believe that future autonomous control should be able to deal with not only system uncertainty but also environment uncertainty [15], [22].
886
+ IV. DCEE FOR LINEAR SYSTEMS
887
In this section, we deal with general linear systems. As the environment estimators are driven by the information measurements, the parameter adaptation algorithm in (15) can still be used and Theorem 1 remains valid. We now design a dual controller that regulates the system output y(k) so as to minimise the reformulated objective function defined in (12).
894
+ u(k) = −Kx(k) + (G + KΨ)ξ(k)
895
+ (50)
896
+ where the optimal reference ξ(k) is generated by
897
+ ξ(k) = ξ(k − 1) + ψ(k)
898
+ ψ(k) = −δk
899
+
900
+ ∇ξC(k + 1|k) + ∇ξP(k + 1|k)
901
+
902
+ (51)
903
+ where G and Ψ are gain matrices obtained by solving
904
+ (A − I)Ψ + BG = 0
905
+ CΨ − I = 0.
906
+ (52)
907
+ and K is chosen such that A − BK is Schur stable as (A, B)
908
+ is controllable. Note that ψ(k) is exactly the dual gradient
909
+ term used in the integrator dynamics in Section III. For
910
+ linear systems, the control input u(k) not only needs to have
911
+ dual effects for exploration and exploitation but additionally
912
+ requires control effort to stabilise the system dynamics as in
913
+ (50).
914
Assumption 4: The pair (A, B) is controllable, and

rank [ A − I   B
       C       0 ] = n + q.                                       (53)
924
Remark 8: The dual control design in (50)-(52) is partly inspired by conventional internal model approaches [39]. The solvability of (52) is guaranteed by (53); such equations are widely known as regulation equations [39]. The existence of Ψ ensures the existence of an optimal state x* = Ψ r* such that C x* = r*.
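Since (52) is linear in Ψ and G, it can be solved in one shot from the stacked coefficient matrix; when p = q, that matrix is square and invertible under Assumption 4. The sketch below does this for the example system (69) used later in Section V (the solve itself is generic).

```python
import numpy as np

# Example system (69) from Section V; the solve is generic whenever p = q.
A = np.array([[0.0, 1.0],
              [2.0, 1.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[0.0, 1.0]])
n, p, q = 2, 1, 1

# Stack (52) as [[A - I, B], [C, 0]] @ [Psi; G] = [0; I]
M = np.block([[A - np.eye(n), B],
              [C, np.zeros((q, p))]])
rhs = np.vstack([np.zeros((n, q)), np.eye(q)])
sol = np.linalg.solve(M, rhs)       # invertible by the rank condition (53)
Psi, G = sol[:n], sol[n:]
```

For this system the solution is Ψ = [1/3, 1]^T and G = −2/3, matching the gains quoted in Section V; one can also verify (A − I)Ψ + BG = 0 and CΨ = I directly.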
930
Define the state transformations x_s(k) = Ψ ξ(k), u_s(k) = G ξ(k), and let x̄(k) = x(k) − x_s(k) and ū(k) = u(k) − u_s(k). Applying the transformation to the system dynamics (2) leads to

x̄(k + 1) = x(k + 1) − x_s(k + 1)
          = A x(k) + B u(k) − Ψ( ξ(k) + ψ(k) )
          = A x̄(k) + B ū(k) − Ψ ψ(k)
e(k) = C x̄(k)                                                    (54)

where (52) has been used to derive the above dynamics. Applying the control input (50), we obtain the closed-loop dynamics

x̄(k + 1) = (A − BK) x̄(k) − Ψ ψ(k)
e(k) = C x̄(k).                                                   (55)
944
The following lemma can be regarded as an input-to-output stability property of the transformed dynamics (55), viewing ψ(k) as the input and e(k) as the output.

Lemma 1: Let Assumptions 1-4 hold, and suppose the conditions specified in Theorems 1-2 hold. If the gain matrices G and Ψ are designed according to (52) and K is chosen such that A − BK is Schur stable, then

lim sup_{k→∞} ∥e(k)∥ ≤ ( 1 / (1 − ∥A − BK∥) ) lim sup_{k→∞} ∥ψ(k)∥.   (56)

Furthermore, if lim sup_{k→∞} ψ(k) = 0, then lim sup_{k→∞} e(k) = 0.
961
Proof: Putting (52) into matrix form leads to

[ A − I   B ] [ Ψ ]   [ 0 ]
[ C       0 ] [ G ] = [ I ]                                       (57)

whose solvability is guaranteed under (53) in Assumption 4, by transforming the matrix equation (57) into standard linear algebraic equations. For notational convenience, we denote A_c = A − BK and B_c = −Ψ. Then, we have

x̄(k + 1) = A_c x̄(k) + B_c ψ(k).                                  (58)
982
Recursively iterating (58) results in

x̄(k) = A_c^k x̄(0) + Σ_{j=0}^{k−1} A_c^{k−j−1} B_c ψ(j).          (59)

Hence, we have

e(k) = C x̄(k) = C A_c^k x̄(0) − Σ_{j=0}^{k−1} A_c^{k−j−1} ψ(j)    (60)

where C Ψ − I = 0 has been used. Because A_c is Schur stable, we have lim_{k→∞} C A_c^k x̄(0) = 0.

The convergence of the reference generator (51) has been established in Theorem 2, and thereby ψ(k), i.e., the gradient term of the dual controller, is bounded and converges to zero as k → ∞. Denoting ϖ := lim sup_{k→∞} ∥ψ(k)∥, it follows that, for any small constant ε > 0, there exists a positive time index ζ > 0 such that

∥ψ(k)∥ < ϖ + ε,  ∀k > ζ.                                          (61)
1013
Now, the second term in (60) can be separated into two parts, written as

Σ_{j=0}^{k−1} A_c^{k−j−1} ψ(j) = Σ_{j=0}^{ζ} A_c^{k−j−1} ψ(j) + Σ_{j=ζ+1}^{k−1} A_c^{k−j−1} ψ(j).   (62)

Taking the Euclidean norm of (62) and invoking (61), we obtain

∥ Σ_{j=0}^{k−1} A_c^{k−j−1} ψ(j) ∥
  = ∥ Σ_{j=0}^{ζ} A_c^{k−j−1} ψ(j) + Σ_{j=ζ+1}^{k−1} A_c^{k−j−1} ψ(j) ∥
  ≤ ∥ A_c^{k−ζ−1} ∥ ∥ Σ_{j=0}^{ζ} A_c^{ζ−j} ψ(j) ∥ + (ϖ + ε) ∥ Σ_{j=ζ+1}^{k−1} A_c^{k−j−1} ∥.   (63)
1079
Therefore, combining (60) and (63) leads to

lim sup_{k→∞} ∥e(k)∥ ≤ ( 1 / (1 − ∥A_c∥) ) (ϖ + ε)                (64)

by noting that

Σ_{j=ζ+1}^{t−1} ∥A_c∥^{t−1−j} = ( 1 − ∥A_c∥^{t−ζ−1} ) / ( 1 − ∥A_c∥ ) < 1 / ( 1 − ∥A_c∥ )   (65)

and that

lim_{k→∞} ∥ A_c^{k−ζ−1} ∥ = 0                                     (66)

since A_c is Schur stable. As ε can be set arbitrarily small, it follows from (64) that

lim sup_{k→∞} ∥e(k)∥ ≤ ( 1 / (1 − ∥A_c∥) ) lim sup_{k→∞} ∥ψ(k)∥.   (67)

This completes the proof.
1114
+
1115
Now, combining the results in Theorems 1-2 and Lemma 1, we are ready to establish the convergence of the self-optimisation control for linear systems.

Theorem 3: Let Assumptions 1-4 hold, and suppose the conditions specified in Theorems 1-2 and Lemma 1 hold. The output y(k) of the linear system (2) converges to a neighbourhood of the optimum r* under the control input (50) together with the reference generator (51). Moreover, in the absence of measurement noise, y(k) converges to the true optimal solution r*.

Proof: Denoting x̃(k) = x(k) − Ψ r*, we have

x̃(k + 1) = A x(k) + B[ −K x(k) + (G + K Ψ) ξ(k) ] − Ψ r*
          = (A − BK) x̃(k) + B(G + K Ψ)( ξ(k) − r* ).              (68)

It follows from Theorems 1-2 that ξ(k) converges to a neighbourhood of r* with bounded error. Thus, the result can be concluded by treating B(G + K Ψ)( ξ(k) − r* ) as ψ(k) in Lemma 1.
1133
+
1134
Remark 9: The self-optimisation control in this paper is similar to the classic formulation of reinforcement learning in the sense that both aim to operate a system in an unknown and uncertain environment. There are two bottlenecks in the wide application of reinforcement learning, particularly deep RL: one is that a large number of trials is required to achieve satisfactory performance (big data), and the other is that its performance can significantly degrade when the real operational environment differs from the training environment (poor adaptiveness) [40]. DCEE establishes a new control framework that provides a promising and complementary alternative to reinforcement learning in the control and robotics communities. In fact, active learning for exploration and exploitation in machine intelligence finds strong evidence in human intelligence, supported by the biological principles of functional integration in the human brain and neuronal interactions (known as the free-energy principle and active inference in neuroscience [41]). Interested readers are referred to [40] for detailed discussions.
1153
+ V. NUMERICAL EXAMPLE
1154
In this section, we verify the effectiveness of the proposed algorithm using a dedicated numerical example. Consider a linear system (2) with

A = [ 0  1       B = [ 1       C = [ 0  1 ].                      (69)
      2  1 ],          1 ],

The reward function is given by

J(θ*, y) = 2y − θ* y² = [ 2y  −y² ] [ 1
                                      θ* ]                        (70)

where θ* is affected by the unknown environment. The true value is θ* = 1 but is unavailable a priori. The optimal operational condition r* is determined by θ*, i.e., r* = l(θ*) = 1/θ* = 1.
1185
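Since J in (70) is concave in y, the optimum can be checked directly: dJ/dy = 2 − 2θ∗y = 0 gives y∗ = 1/θ∗. A small sketch (a coarse grid search, purely illustrative):

```python
theta_star = 1.0
J = lambda y: 2 * y - theta_star * y ** 2     # reward function (70)

# Coarse grid search over candidate operating points y in [0, 3]
ys = [i / 1000 for i in range(3001)]
y_best = max(ys, key=J)

print(y_best)   # -> 1.0, matching r* = 1/theta* = 1
```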
We assume the measurements are subject to Gaussian noise v(k) ∼ N(0, 2), which implies that the observations from the environment are J(k) = J(θ∗, y(k)) + v(k). Decision-making under an uncertain environment with noisy measurements is of significant importance in promoting system intelligence.

In order to explore the uncertain environment, the first step is to quantify the level of uncertainty. An ensemble-based multi-estimator approach has been developed in the previous sections. Now, the size of the estimator ensemble is chosen as N = 100, and each estimator is randomly initialised according to a uniform distribution between 0 and 20, i.e., θi(0) ∼ U(0, 20), ∀i = 1, 2, . . . , 100. The step sizes are set as ηi = 0.005 and δk = 0.5. The system is controllable
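To see the ensemble idea in action, here is a minimal self-contained sketch. It is NOT the paper's exact update law (15): it uses a plain per-estimator LMS correction on the scalar regression J = 2y − θy² + v, an assumed sinusoidally dithered operating point, and an illustrative step size:

```python
import math, random

random.seed(0)
theta_true = 1.0
N, T, eta = 100, 3000, 0.02                    # ensemble size, steps, LMS gain (illustrative)
thetas = [random.uniform(0.0, 20.0) for _ in range(N)]

def std(v):
    m = sum(v) / len(v)
    return (sum((x - m) ** 2 for x in v) / len(v)) ** 0.5

std0 = std(thetas)                             # initial spread of the ensemble
tail = []                                      # ensemble means over the last 500 steps
for k in range(T):
    y = 1.0 + 0.5 * math.sin(0.05 * k)         # dither keeps the regressor y^2 exciting
    J = 2 * y - theta_true * y ** 2 + random.gauss(0.0, math.sqrt(2.0))
    # Each estimator predicts J and corrects theta_i along the regressor -y^2
    thetas = [th - eta * ((2 * y - th * y ** 2) - J) * (-y ** 2) for th in thetas]
    if k >= T - 500:
        tail.append(sum(thetas) / N)

mean_tail = sum(tail) / len(tail)
std_final = std(thetas)
print(mean_tail, std0, std_final)              # mean near theta* = 1, spread collapses
```

The qualitative behaviour matches Fig. 1: the ensemble mean settles near the true parameter while the spread among estimators, the uncertainty measure, shrinks rapidly.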
and the regulation condition in (53) is satisfied, such that the gain matrices can be obtained as Ψ = [1/3, 1]^T and G = −2/3. The gain matrix K = [−1.24, 1.14] is chosen by placing the poles of (A − BK) at [0.4; 0.7].
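These gains can be sanity-checked numerically: A − BK should have eigenvalues 0.4 and 0.7, and (Ψ, G) should satisfy the regulation conditions AΨ + BG = Ψ and CΨ = 1 (the regulation equations here are assumed from the standard constant-reference output-regulation setup):

```python
import math

A = [[0.0, 1.0], [2.0, 1.0]]
B = [1.0, 1.0]
C = [0.0, 1.0]
K = [-1.24, 1.14]
Psi = [1.0 / 3.0, 1.0]
G = -2.0 / 3.0

# Closed-loop matrix A - B K (column B times row K)
M = [[A[i][j] - B[i] * K[j] for j in range(2)] for i in range(2)]

# Eigenvalues of a 2x2 matrix via its characteristic polynomial
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
disc = math.sqrt(tr * tr - 4 * det)
eigs = sorted([(tr - disc) / 2, (tr + disc) / 2])
print(eigs)                                    # -> [0.4, 0.7] up to rounding

# Regulation conditions: A Psi + B G = Psi and C Psi = 1
lhs = [A[i][0] * Psi[0] + A[i][1] * Psi[1] + B[i] * G for i in range(2)]
cy = C[0] * Psi[0] + C[1] * Psi[1]
print(lhs, cy)                                 # -> Psi and 1
```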
Fig. 1 shows the estimated environmental parameters. Initially, the mean and standard deviation of the ensemble {θi, i = 1, . . . , 100} are 10.87 and 5.57, respectively, following the random initialisation from a uniform distribution. The mean of the estimators converges to the true environment parameter θ∗ = 1, and the standard deviation among the estimators shrinks quickly, indicating that the estimation uncertainty (quantified by the variance among the estimators in the ensemble) reduces. Even as the iteration k increases significantly, the estimated parameters keep fluctuating within a small neighbourhood of the true value due to the presence of noisy measurements. Fig. 2 displays the observed rewards from the environment. Even though we have imposed quite significant noise on the measurements, the performance of the estimators is fairly satisfactory, which demonstrates that ensemble-based active learning provides superior robustness against noise.

Implementing the dual control in (50) not only contributes to enhanced parameter-adaptation performance but also drives the system output to the optimal operational condition. The system output approaches the optimal operational point r∗ = 1 as shown in Fig. 3, and the system states are displayed in Fig. 4. It can be verified that x∗ = Ψr∗ = [1/3, 1]^T. The tracking error is determined by the estimation error. In this process, there is no need to tune the weights of exploration and exploitation. As a principled approach, the dual controller in (50) is derived from a physically meaningful objective function, which naturally embeds balanced dual effects for active environment learning and optimality tracking.
Fig. 1: Mean and standard deviation of the estimated θ(k) using ensemble-based estimators.
Fig. 2: Observed reward J(k) from the unknown and uncertain environment with measurement noise v(k).
Fig. 3: System output y(k) using DCEE.
Fig. 4: System state x(k).
Fig. 5: Time-varying solar irradiance profile.

VI. APPLICATION FOR MPPT

DCEE was originally developed to solve the autonomous search problem in [15], where it demonstrates outstanding performance compared with other existing approaches. In this section, we take the optimal control of photovoltaic (PV) systems as an example to illustrate that DCEE can be implemented to solve a much wider class of self-optimisation control problems in real-world applications. Extracting maximum power is a long-lasting pursuit in operating PV systems. Despite significant research efforts made over the past few decades [42]–[44], the energy conversion efficiency of PV systems remains very poor due to high environmental uncertainties in temperature, irradiance level, partial shading and other atmospheric conditions. The primary goal in PV operation is simply to extract as much solar energy as possible despite the changing operational environment, termed maximum power point tracking (MPPT). There is a wide variety of methods targeting this problem, which can be roughly classified into three categories: offline methods, online methods, and other methods. Detailed comparisons and classifications can be found in comprehensive survey papers, e.g., [42], [43]. In this section, the proposed DCEE is implemented as an alternative approach to achieve MPPT, and two representative approaches, the hill climbing method (HC) and the incremental conductance method (IC), are deployed for comparison. It is worth noting that all three algorithms can be classified as online methods. It has been widely perceived that online methods usually outperform their offline counterparts in terms of conversion efficiency due to their inherent adaptiveness to the changing environment. According to curve-fitting based MPPT [42], the power and voltage (P-V) characteristics can be modelled by

    P = φ^T(V)θ                                              (71)

where φ(V) is the polynomial regressor [1, V, V², . . . , V^n]^T and θ ∈ R^(n+1) is the polynomial coefficient vector. To solve the maximisation problem of (71), we need to estimate the unknown parameters θ and then maximise the power output by regulating the voltage V according to

    V(k + 1) = V(k) + u(k).                                  (72)
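To make the voltage-update loop (72) concrete, here is a minimal sketch of the hill climbing (perturb-and-observe) baseline that the section compares against. The quadratic P-V curve with its maximum at V = 35 V is an assumed stand-in, not the A10J-S72-175 panel model:

```python
# Illustrative P-V curve: quadratic with the maximum power point at V* = 35 V.
P = lambda V: -0.5 * (V - 35.0) ** 2 + 175.0

V_prev, V = 20.0, 20.5          # two initial operating points
delta = 0.5                     # perturbation magnitude, u(k) = +/- delta
for k in range(200):
    dP = P(V) - P(V_prev)
    dV = V - V_prev
    step = delta if dP * dV > 0 else -delta   # keep direction if power rose, else reverse
    V_prev, V = V, V + step                   # voltage update V(k+1) = V(k) + u(k), eq. (72)

print(V)   # ends oscillating within one step of the true MPP at 35 V
```

This also exhibits the steady-state oscillation discussed below: once at the peak, the perturbation keeps toggling the operating point around V∗.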
We use the A10Green Technology solar panel, model number A10J-S72-175, for this simulation [45]. To mimic the real operational environment of PV systems, a time-varying solar irradiance profile is simulated as shown in Fig. 5, and the temperature is initially set to 25°C and then jumps to 35°C at t = 1 s. It should be noted that the unknown environment parameter θ changes as the operational condition varies. Although the proposed algorithm is theoretically analysed for static parameter identification, the use of a constant learning rate ηi endows the adaptation algorithm in (15) with the capability of tracking drifting parameters.
Simulation results using the different algorithms (DCEE, HC and IC) are shown in Figs. 6, 7 and 8. To illustrate more detailed features of the different algorithms, enlarged sub-figures are displayed for the time intervals t ∈ [0, 0.1] and t ∈ [0.3, 0.4]. The power losses, as displayed in Fig. 9, are calculated by integrating the differences between the maximum power point and the real power outputs simulated using the different algorithms. Convergence speed, sensed signals, algorithm complexity and conversion efficiency are four commonly used criteria to assess the characteristics of MPPT techniques. According to the simulation results, we summarise and compare the features of the different approaches in Table I. Conversion efficiency directly influences the energy extracted from the PV system; it is the ratio between the real generated energy and the maximum energy (accumulated over the simulation time interval [0, 2]). DCEE produces quite high efficiency (99.1%). Due to the use of perturbation signals in the hill climbing method, there are very large voltage and current fluctuations in steady state. This undesirable property not only causes low conversion efficiency but also leads to fast degradation in low-level electronic devices. The oscillations are partially resolved by the incremental conductance method, which measures incremental current and voltage changes to predict the effect of a voltage change. Different from HC, the incremental conductance method is able to stay at the MPP without oscillations when there is no change in the operational environment. From the simulation results using HC and IC, there is a trade-off between transient convergence speed and steady-state oscillations. The steady-state oscillation of IC is reduced at the cost of slow tracking performance, leading to a larger power loss with a conversion efficiency of 97.2%. It is argued that DCEE, as a balanced approach, is able to optimally trade off between exploitation and exploration: when there is large uncertainty in the estimated MPP, it explores quickly to gain information and construct a more accurate estimate of the MPP; and when there is little change in the operational environment, it stays at the current belief of the MPP without causing large oscillations. All three algorithms need to measure voltage and current: DCEE requires voltage and power (calculated as the product of current and voltage) to construct the P-V curve in (71) (i.e., the reward-state mapping), while HC and IC use incremental power to decide the direction of voltage regulation. As mature MPPT techniques, both HC and IC are simple to implement using dedicated hardware devices. Since efficient ensemble approximation and gradient-based control are developed in this new approach, DCEE is ready to be implemented on real PV platforms without incurring a heavy computational load.
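The conversion-efficiency figures quoted above are energy ratios of exactly this form: delivered energy over available maximum energy on [0, 2] s. A sketch with made-up power traces (trapezoidal integration; the 50 ms ramp-up tracker is an assumed illustration, not the simulated panel data):

```python
# Conversion efficiency = (integral of delivered power) / (integral of maximum power)
def trapz(values, dt):
    return dt * (sum(values) - 0.5 * (values[0] + values[-1]))

dt = 0.001
t = [i * dt for i in range(2001)]                      # simulation window [0, 2] s
p_max = [175.0 for _ in t]                             # assumed available MPP power
p_out = [175.0 * min(1.0, ti / 0.05) for ti in t]      # assumed tracker: 50 ms ramp, then at MPP

eta = trapz(p_out, dt) / trapz(p_max, dt)
print(round(eta, 4))   # -> 0.9875
```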
TABLE I: Features of different MPPT techniques.

    #  Method                   Convergence speed  Sensed variables     Algorithm complexity  Conversion efficiency
    1  DCEE                     Fast               Voltage and current  Medium                99.1%
    2  Hill climbing            Fast               Voltage and current  Simple                98.3%
    3  Incremental conductance  Medium             Voltage and current  Simple                97.2%

Fig. 6: Power profile using different algorithms.
Fig. 7: Voltage profile using different algorithms.
Fig. 8: Current profile using different algorithms.
Fig. 9: Power losses using different algorithms.

VII. CONCLUSION

In this paper, a general framework of dual control for exploration and exploitation has been developed to solve a wide range of self-optimisation control problems in an uncertain environment. A real-time ensemble-based estimation approach is proposed for efficient environment acquisition, which consequently provides a measure of knowledge uncertainty about the unknown environment. The proposed DCEE algorithm optimally balances between exploration and exploitation to handle the intrinsic conflict between parameter identifiability and optimality tracking. Guaranteed convergence and performance are established in relation to the reward function and the noise characteristics. A numerical example and a classic application to MPPT are provided to validate the effectiveness and potential of DCEE.
REFERENCES

[1] C. Zhang and R. Ordonez, "Numerical optimization-based extremum seeking control with application to ABS design," IEEE Transactions on Automatic Control, vol. 52, no. 3, pp. 454–467, 2007.
[2] R. Leyva, C. Alonso, I. Queinnec, A. Cid-Pastor, D. Lagrange, and L. Martinez-Salamero, "MPPT of photovoltaic systems using extremum-seeking control," IEEE Transactions on Aerospace and Electronic Systems, vol. 42, no. 1, pp. 249–258, 2006.
[3] Z.-D. Zhong, H.-B. Huo, X.-J. Zhu, G.-Y. Cao, and Y. Ren, "Adaptive maximum power point tracking control of fuel cell power plants," Journal of Power Sources, vol. 176, no. 1, pp. 259–269, 2008.
[4] Y. Tan, W. H. Moase, C. Manzie, D. Nešić, and I. M. Mareels, "Extremum seeking from 1922 to 2010," in Proceedings of the 29th Chinese Control Conference. IEEE, 2010, pp. 14–26.
[5] M. Leblanc, "Sur l'electrification des chemins de fer au moyen de courants alternatifs de frequence elevee," Revue générale de l'électricité, vol. 12, no. 8, pp. 275–277, 1922.
[6] M. Krstić and H.-H. Wang, "Stability of extremum seeking feedback for general nonlinear dynamic systems," Automatica, vol. 36, no. 4, pp. 595–601, 2000.
[7] M. Krstić, "Performance improvement and limitations in extremum seeking control," Systems & Control Letters, vol. 39, no. 5, pp. 313–326, 2000.
[8] S. Skogestad, "Plantwide control: The search for the self-optimizing control structure," Journal of Process Control, vol. 10, no. 5, pp. 487–507, 2000.
[9] M. Guay and T. Zhang, "Adaptive extremum seeking control of nonlinear dynamic systems with parametric uncertainties," Automatica, vol. 39, no. 7, pp. 1283–1293, 2003.
[10] K. A. Sullivan and S. H. Jacobson, "A convergence analysis of generalized hill climbing algorithms," IEEE Transactions on Automatic Control, vol. 46, no. 8, pp. 1288–1293, 2001.
[11] Y. Tan, D. Nešić, and I. Mareels, "On non-local stability properties of extremum seeking control," Automatica, vol. 42, no. 6, pp. 889–903, 2006.
[12] C. Manzie and M. Krstic, "Extremum seeking with stochastic perturbations," IEEE Transactions on Automatic Control, vol. 54, no. 3, pp. 580–585, 2009.
[13] S.-J. Liu and M. Krstic, "Stochastic source seeking for nonholonomic unicycle," Automatica, vol. 46, no. 9, pp. 1443–1453, 2010.
[14] S. Xie and L. Y. Wang, "Adaptive optimization with decaying periodic dither signals," IEEE Transactions on Automatic Control, 2022.
[15] W.-H. Chen, C. Rhodes, and C. Liu, "Dual control for exploitation and exploration (DCEE) in autonomous search," Automatica, vol. 133, no. 109851, 2021.
[16] M. Guay and D. J. Burns, "A proportional integral extremum-seeking control approach for discrete-time nonlinear systems," International Journal of Control, vol. 90, no. 8, pp. 1543–1554, 2017.
[17] A. Mesbah, "Stochastic model predictive control with active uncertainty learning: A survey on dual control," Annual Reviews in Control, vol. 45, pp. 107–117, 2018.
[18] Z. Li, W.-H. Chen, and J. Yang, "Concurrent active learning in autonomous airborne source search: Dual control for exploration and exploitation," IEEE Transactions on Automatic Control, 2022.
[19] A. A. Feldbaum, "Dual control theory I," Avtomatika i Telemekhanika, vol. 21, no. 9, pp. 1240–1249, 1960.
[20] Y. Bar-Shalom and E. Tse, "Dual effect, certainty equivalence, and separation in stochastic control," IEEE Transactions on Automatic Control, vol. 19, no. 5, pp. 494–500, 1974.
[21] C. Rhodes, C. Liu, and W.-H. Chen, "Autonomous source term estimation in unknown environments: From a dual control concept to UAV deployment," IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 2274–2281, 2021.
[22] P. Antsaklis, "Autonomy and metrics of autonomy," Annual Reviews in Control, vol. 49, pp. 15–26, 2020.
[23] T. A. N. Heirung, B. E. Ydstie, and B. Foss, "Dual adaptive model predictive control," Automatica, vol. 80, pp. 340–348, 2017.
[24] M. Hutchinson, H. Oh, and W.-H. Chen, "A review of source term estimation methods for atmospheric dispersion events using static or mobile sensors," Information Fusion, vol. 36, pp. 130–148, 2017.
[25] K. Chua, R. Calandra, R. McAllister, and S. Levine, "Deep reinforcement learning in a handful of trials using probabilistic dynamics models," Advances in Neural Information Processing Systems (NIPS 2018), vol. 31, 2018.
[26] B. Lakshminarayanan, A. Pritzel, and C. Blundell, "Simple and scalable predictive uncertainty estimation using deep ensembles," Advances in Neural Information Processing Systems, vol. 30, 2017.
[27] R. Ortega, V. Nikiforov, and D. Gerasimov, "On modified parameter estimators for identification and adaptive control. A unified framework and some new schemes," Annual Reviews in Control, vol. 50, pp. 278–293, 2020.
[28] V. Adetola and M. Guay, "Parameter convergence in adaptive extremum-seeking control," Automatica, vol. 43, no. 1, pp. 105–110, 2007.
[29] F. Ding and T. Chen, "Performance analysis of multi-innovation gradient type identification methods," Automatica, vol. 43, no. 1, pp. 1–14, 2007.
[30] W. Rudin, Principles of Mathematical Analysis, 3rd ed. New York, NY, USA: McGraw-Hill, 1976.
[31] Z. Li, W.-H. Chen, and J. Yang, "A dual control perspective for exploration and exploitation in autonomous search," in European Control Conference, 2022.
[32] E. Tse and Y. Bar-Shalom, "An actively adaptive control for linear systems with random parameters via the dual control approach," IEEE Transactions on Automatic Control, vol. 18, no. 2, pp. 109–117, 1973.
[33] A. I. Cowen-Rivers, D. Palenicek, V. Moens, M. Abdullah, A. Sootla, J. Wang, and H. Ammar, "SAMBA: Safe model-based & active reinforcement learning," arXiv preprint arXiv:2006.09436, 2020.
[34] M. Ghavamzadeh, S. Mannor, J. Pineau, and A. Tamar, "Bayesian reinforcement learning: A survey," Foundations and Trends® in Machine Learning, vol. 8, no. 5-6, pp. 359–483, 2015.
[35] H. Jeong, B. Schlotfeldt, H. Hassani, M. Morari, D. D. Lee, and G. J. Pappas, "Learning Q-network for active information acquisition," arXiv preprint arXiv:1910.10754, 2019.
[36] A. Mesbah, "Stochastic model predictive control: An overview and perspectives for future research," IEEE Control Systems Magazine, vol. 36, no. 6, pp. 30–44, 2016.
[37] M. K. Bugeja, S. G. Fabri, and L. Camilleri, "Dual adaptive dynamic control of mobile robots using neural networks," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 1, pp. 129–141, 2008.
[38] T. Alpcan and I. Shames, "An information-based learning approach to dual control," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 11, pp. 2736–2748, 2015.
[39] J. Huang, Nonlinear Output Regulation: Theory and Applications. SIAM, 2004.
[40] W.-H. Chen, "Perspective view of autonomous control in unknown environment: Dual control for exploitation and exploration vs reinforcement learning," Neurocomputing, 2022.
[41] K. Friston, "The free-energy principle: a unified brain theory?" Nature Reviews Neuroscience, vol. 11, no. 2, pp. 127–138, 2010.
[42] P. Bhatnagar and R. Nema, "Maximum power point tracking control techniques: State-of-the-art in photovoltaic applications," Renewable and Sustainable Energy Reviews, vol. 23, pp. 224–241, 2013.
[43] A. R. Reisi, M. H. Moradi, and S. Jamasb, "Classification and comparison of maximum power point tracking techniques for photovoltaic system: A review," Renewable and Sustainable Energy Reviews, vol. 19, pp. 433–443, 2013.
[44] T. Esram and P. L. Chapman, "Comparison of photovoltaic array maximum power point tracking techniques," IEEE Transactions on Energy Conversion, vol. 22, no. 2, pp. 439–449, 2007.
[45] I. Shams, S. Mekhilef, and K. S. Tey, "Maximum power point tracking using modified butterfly optimization algorithm for partial shading, uniform shading, and fast varying load conditions," IEEE Transactions on Power Electronics, vol. 36, no. 5, pp. 5569–5581, 2020.
+
ANFLT4oBgHgl3EQfEi_Y/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
AdE0T4oBgHgl3EQfxgLI/content/2301.02648v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fb6aa85ee61ee47fe4c5ef3874775ddbc1a88d44933e9aa617d1ac4c1312956a
3
+ size 3106674
AdE1T4oBgHgl3EQfpAU_/content/2301.03326v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6df4e7b85bd917c82c0d4306342f575bc0e2a86915d22ce1008f1ac5dc8208c4
3
+ size 827157
AdE1T4oBgHgl3EQfpAU_/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4fefbed0747afa7c5f266b70bb5c4effc5646206b8ba726a44a5a5272b564573
3
+ size 524333
AdE1T4oBgHgl3EQfpAU_/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ba1ac8ee668f15b7cb5f6d0d71407a35943c51a4a613a6f872f5fa4cb4aa7a71
3
+ size 31323
B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e2dd0db1d5ca05de5245fcd4440d8d33bc547dec83ab624e08014c13a9492bae
3
+ size 1290766
B9E0T4oBgHgl3EQfgAGB/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ae8dab11e85786ee036dca19a41b018ba15f175b8a777433ab38eb0bd2bf57f9
3
+ size 4522029
B9E0T4oBgHgl3EQfgAGB/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:85da224537e2e32dda36901159df69d632faf9901c32fc6acb681ef517992300
3
+ size 170031
BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf ADDED
Binary file (86.2 kB). View file
 
BNFRT4oBgHgl3EQfuzgt/content/tmp_files/2301.13632v1.pdf.txt ADDED
@@ -0,0 +1,34 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
There are 2-tough 4-regular graphs with claws
W. Goddard, Clemson University

Chvátal [1] defined the toughness of a graph G to be the minimum value of |S|/k(G−S), where k(G−S) denotes the number of components of G−S and the minimum is taken over all cut-sets S ⊆ V(G). It is immediate that the toughness is at most half the connectivity. Matthews and Sumner [5] showed that there is equality if the graph is claw-free.

For cubic graphs, Jackson and Katerinis [4] showed that being claw-free is also necessary for the graph to have toughness 3/2. In [2] we conjectured that the analogous result holds for all r-regular graphs, and in [3] we expressed the belief that the analogous result does not hold for all r, thus ensuring that we have to be correct at least once.

We note here that it is the latter belief that is true. The graph below is 4-regular and has claws and its toughness is 2. It is one of the two of smallest order.

References
[1] V. Chvátal. Tough graphs and Hamiltonian circuits. Discrete Math. 5 (1973), 215–228.
[2] W. Goddard and H.C. Swart. On the toughness of a graph. Quaestiones Math. 13 (1990), 217–232.
[3] W. Goddard. The toughness of cubic graphs. Graphs Combin. 12 (1996), 17–22.
[4] B. Jackson and P. Katerinis. A characterization of 3/2-tough cubic graphs. Ars Combin. 38 (1994), 145–148.
[5] M.M. Matthews and D.P. Sumner. Hamiltonian results in K1,3-free graphs. J. Graph Theory 8 (1984), 139–146.

arXiv:2301.13632v1 [math.CO] 27 Jan 2023
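The toughness definition above can be checked by brute force on tiny graphs. The sketch below is illustrative only (exponential in the number of vertices, and it does not reproduce the note's 4-regular example, whose adjacency list is not given in the text):

```python
# Brute-force toughness: min |S| / k(G - S) over all cut-sets S.
from itertools import combinations

def components(n, edges, removed):
    """Count connected components of G - removed via union-find."""
    kept = [v for v in range(n) if v not in removed]
    parent = {v: v for v in kept}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in edges:
        if a in parent and b in parent:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
    return len({find(v) for v in kept})

def toughness(n, edges):
    best = None
    for size in range(1, n - 1):
        for S in combinations(range(n), size):
            c = components(n, edges, set(S))
            if c >= 2:                        # S is a cut-set
                ratio = size / c
                best = ratio if best is None or ratio < best else best
    return best

claw = [(0, 1), (0, 2), (0, 3)]               # K_{1,3} with centre 0
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # 5-cycle
print(toughness(4, claw), toughness(5, c5))    # toughness(claw) = 1/3, toughness(C5) = 1
```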
BNFRT4oBgHgl3EQfuzgt/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,44 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf,len=43
2
+ page_content='There are 2-tough 4-regular graphs with claws W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
3
+ page_content=' Goddard, Clemson University Chv´atal [1] defined the toughness of a graph G to be the minimum value of |S|/k(G−S) where k(G−S) denotes the number of components of G−S and the minimum is taken over all cut-sets S ⊆ V (G).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
4
+ page_content=' It is immediate that the toughness is at most half the connectivity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
5
+ page_content=' Matthews and Sumner [5] showed that there is equality if the graph is claw-free.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
6
+ page_content=' For cubic graphs, Jackson and Katerinis [4] showed that being claw-free is also necessary for the graph to have toughness 3 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
7
+ page_content=' In [2] we conjectured that the analogous result holds for all r-regular graphs, and in [3] we expressed the belief that the analogous result does not hold for all r, thus ensuring that we have to be correct at least once.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
8
+ page_content=' We note here that it is the latter belief that is true.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
9
+ page_content=' The graph below is 4-regular and has claws and its toughness is 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
10
+ page_content=' It is one of the two of smallest order.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
11
+ page_content=' References [1] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
12
+ page_content=' Chv´atal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
13
+ page_content=' Tough graphs and Hamiltonian circuits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
14
+ page_content=' Discrete Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
15
+ page_content=' 5 (1973), 215–28.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
16
+ page_content=' [2] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
17
+ page_content=' Goddard and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
18
+ page_content='C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
19
+ page_content=' Swart.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
20
+ page_content=' On the toughness of a graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
21
+ page_content=' Quaestiones Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
22
+ page_content=' 13 (1990), 217–232.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
23
+ page_content=' 1 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
24
+ page_content='13632v1 [math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
25
+ page_content='CO] 27 Jan 2023 [3] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
26
+ page_content=' Goddard.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
27
+ page_content=' The toughness of cubic graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
28
+ page_content=' Graphs Combin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
29
+ page_content=' 12 (1996), 17–22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
+ page_content=' [4] B. Jackson and P. Katerinis. A characterization of 3/2-tough cubic graphs. Ars Combin. 38 (1994), 145–148.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
+ page_content=' [5] M.M. Matthews and D.P. Sumner. Hamiltonian results in K1,3-free graphs. J. Graph Theory 8 (1984), 139–146.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
+ page_content=' 2' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNFRT4oBgHgl3EQfuzgt/content/2301.13632v1.pdf'}
CdE1T4oBgHgl3EQfDwOw/content/2301.02882v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ff451136110f1e452abb0bc13d917fd1b765e0d5e8762fae7edc2a84f970a80f
+ size 363403
CdE1T4oBgHgl3EQfDwOw/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d314e945f5954b3d7f384540a6a6448529c3a277064b25a7d0cbfb2932574274
+ size 1900589
CdE1T4oBgHgl3EQfDwOw/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:454a09e1e36caa988b41607c2c73605f0aa7272eaa1e2cbb02f223840e49997e
+ size 82160
JtAzT4oBgHgl3EQfj_0_/content/2301.01524v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c1eb26e5443c009842c29d1309c69b347ff74edfcf6063c5ae2a994a51a09bb
+ size 585954
JtAzT4oBgHgl3EQfj_0_/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e5512d6aaf88389c9599424d8a25efa928ce53850482290ca46e0a05ab9caf41
+ size 1638445
JtAzT4oBgHgl3EQfj_0_/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4ccc44cc6f1929e4a3f25fc9c093d4b463a2a2784a2244ec2270535b05cc385a
+ size 55378
LdAyT4oBgHgl3EQfsvlL/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:36931f7446a544a5013125cb5bab8bae999ff00c6ea2575970defbf7b50033bb
+ size 3276845
M9AyT4oBgHgl3EQfUPcx/content/2301.00120v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:97fd2d81bd97cad358608f579e1cee39eafea36120b55d5e18e287e636c87a8e
+ size 210871
M9AyT4oBgHgl3EQfUPcx/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d06cca14ee2c4929185d3332782931d9d2dda95a4f5960b11cc88b826c0452b5
+ size 2949165
M9AyT4oBgHgl3EQfUPcx/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:84883334eb61ef96f1d1a23a6bebad58b173de3aa53c656afd8ccbeeae6a0899
+ size 105203
OdFJT4oBgHgl3EQf0y3g/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c085e8ebfb7c67519e9029108455c3d4e39b2da5b5382f8108065f836b1cbcfb
+ size 2621485
OdFRT4oBgHgl3EQf4zhC/content/tmp_files/2301.13670v1.pdf.txt ADDED
@@ -0,0 +1,1563 @@
+ What Makes Good Examples for Visual In-Context Learning?
+ Yuanhan Zhang, Kaiyang Zhou, Ziwei Liu
+ Abstract
+ Large-scale models trained on broad data have recently become the mainstream architecture in computer vision due to their strong generalization performance. In this paper, the main focus is on an emergent ability in large vision models, known as in-context learning, which allows inference on unseen tasks by conditioning on in-context examples (a.k.a. prompt) without updating the model parameters. This concept has been well-known in natural language processing but has only been studied very recently for large vision models. We for the first time provide a comprehensive investigation on the impact of in-context examples in computer vision, and find that the performance is highly sensitive to the choice of in-context examples. To overcome the problem, we propose a prompt retrieval framework to automate the selection of in-context examples. Specifically, we present (1) an unsupervised prompt retrieval method based on nearest example search using an off-the-shelf model, and (2) a supervised prompt retrieval method, which trains a neural network to choose examples that directly maximize in-context learning performance. The results demonstrate that our methods can bring non-trivial improvements to visual in-context learning in comparison to the commonly-used random selection. The code and models are available at https://github.com/ZhangYuanhan-AI/visual_prompt_retrieval.
+ 1. Introduction
+ In recent years, large-scale models have emerged in computer vision: they have enormous parameter size and are pre-trained on broad data to gain wide-ranging knowledge. These models have demonstrated remarkable generalization performance and have great potential for numerous downstream applications (Bommasani et al., 2021). However, due to the large model size and the potentially proprietary data used for training, entities able to develop large-scale models typically only provide users with APIs, known as Model-as-a-Service (MaaS). Representative examples include the prominent text-to-image generation models, DALL·E (Ramesh et al., 2021) and Imagen (Saharia et al., 2022), and OpenAI's powerful language models like GPT-3/ChatGPT (Radford et al., 2021). As a result, users are unable to apply full fine-tuning or some parameter-efficient tuning techniques, such as prompt learning (Li & Liang, 2021; Lester et al., 2021; Zhou et al., 2022c;b; Zhang et al., 2022; Pan et al., 2022), for model adaptation, largely limiting downstream performance.
+ In-context learning, which is a "hidden" capability originally found in large autoregressive language models (Radford et al., 2021), has recently been investigated for large vision models (Bar et al., 2022), and more importantly, has the potential to become the mainstream approach for MaaS applications in the near future. Without the need to update any parameter for previously unseen tasks, in-context learning simply prepends some domain-specific input-output pairs, called in-context examples or prompt,1 to a test example, which together guide the model to produce an ideal result. For instance, in natural language processing one could prepend a French-English sentence pair to a French sentence, and the model would produce an English translation of the French sentence. In computer vision, Bar et al. (2022) pre-trained a neural network to fill missing patches in grid-like images, which allows the model to perform in-context learning for unseen tasks like image segmentation (see the grid images in Fig. 1(a) bottom).
+ In this work, we focus on visual in-context learning, a relatively new concept with little existing research regarding how to better apply it in practice. We for the first time conduct a comprehensive investigation on the impact of in-context examples for large vision models, and identify a critical issue: downstream performance is highly sensitive to the choice of in-context examples. This is evidenced by the large variances observed for a variety of test examples shown in Fig. 1(a) top. By visualizing the results in Fig. 1(a) bottom, it seems to suggest that the closer the in-context example to the query, the better the result. For example, the best prompt image is closer to the query as they are similar in object pose and background; on the other hand, the worst prompt image has a drastically different style than the query image, which might explain why the predicted mask focuses on the wrong region, i.e., the white pillar instead of the cat. Clearly, designing a proper prompt containing the optimal in-context example(s) by hand would be extremely difficult.
+ Figure 1: (a) Different choices of in-context examples (outlined in green) often lead to significantly different results. Here we show 30 random query images (x-axis) from Pascal-5i (Shaban et al., 2017) split 0, and measure the performance range using 50 different in-context examples. (b) We propose a prompt retrieval framework aiming to automate the selection of in-context examples. We provide two implementations of the idea: one is unsupervised while the other is supervised, both outperforming random selection by a clear margin.
+ To overcome the problem, we propose a prompt retrieval framework where the core component is a score function, which aims to give each source instance a score to indicate the level of suitability for being included in the prompt. Once the scoring process is done, we can simply pick one or multiple examples with the highest score(s) to construct a prompt. An overview of our framework is depicted in Fig. 1(b).
+ We provide two implementations for the prompt retrieval framework, both interpreting the score as the cosine distance measuring similarity between a query and a source example. The first is an unsupervised method based on nearest example search using an off-the-shelf model. The second is a supervised method, which learns a neural network to choose examples that directly maximize in-context learning performance. Since there is no ground-truth score to be used as the supervisory signal, we resort to a contrastive learning paradigm: source examples that result in better (or worse) in-context learning performance should get closer (or farther) to the query in feature space.
+ Our contributions and the main findings are summarized as follows. (1) We present the first comprehensive study concerning how to select good examples for the emerging visual in-context learning, and reveal a critical issue that the choice of in-context examples has a huge impact on performance. (2) From the technical perspective, we present a prompt retrieval framework that can automate the prompt selection process, and provide two simple implementations: an unsupervised method and a supervised method. (3) By conducting extensive experiments on three visual in-context learning tasks (which have not been seen during pre-training), namely foreground segmentation, single object detection and image colorization, we share valuable insights with the community on how to find good visual in-context examples, e.g., the supervised method performs the best and often finds examples that are both semantically close and spatially similar to a query.
+ 1 S-Lab, Nanyang Technological University, Singapore. Correspondence to: Ziwei Liu <[email protected]>. Preliminary work. Do not distribute.
+ 1 These two terms are used interchangeably in this paper.
+ arXiv:2301.13670v1 [cs.CV] 31 Jan 2023
+ 2. Methods
+ 2.1. Visual In-Context Learning
+ In-context learning is a new paradigm that originally emerged from large autoregressive language models pre-trained on broad data, such as GPT-3 (Brown et al., 2020). Unlike traditional learning methods, in-context learning does not require any parameter update and instead conditions prediction on some in-context examples in the form of input-output pairs. For example, in natural language processing one might give a French-English sentence pair and a test French sentence as input to the model, which then produces the English version of the sentence. In computer vision, such a paradigm has only been studied very recently. For example, Bar et al. (2022) trained a neural network to fill missing patches in grid-like images, which in turn allows the model to perform in-context learning on unseen tasks.
+ Formally, given a dataset D = {(x_n, y_n)}_{n=1}^N containing N image-label pairs (e.g., an image and its segmentation mask), a query example x_q, and a model g_τ, in-context learning can be formulated as:
+ y_q = g_τ(P, x_q),    (1)
+ where P is called a prompt, which consists of K input-output pairs, P = {x_{c1}, y_{c1}, ..., x_{cK}, y_{cK}} ⊂ D. In particular, the prompt P provides some context for guiding the model to produce the ideal y_q for x_q without updating the large model's parameters τ.
+ Problem. The most common approach for designing the prompt P in the vision domain is (within-class) random selection proposed by Bar et al. (2022): one or multiple image-label pairs (with the same label as the test example) are randomly chosen from the training dataset. As illustrated in Fig. 1(a), the performance is highly sensitive to the selection of in-context examples—the gap between the best and worst prompt could reach over 70% mIoU. Below we propose two automatic prompt selection methods to tackle this problem.
+ Figure 2: Overview of the supervised prompt retrieval method. The main idea is to compute the in-context learning result for each source example, and pick those with the highest/lowest results to form a positive/negative set for contrastive learning.
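The within-class random-selection baseline described above can be sketched as follows. This is a minimal illustration, not the paper's code: the dataset layout (image, annotation, class) and the string placeholders are hypothetical.

```python
import random

def random_within_class_prompt(dataset, query_class, k=1, seed=None):
    """Baseline prompt selection: randomly pick k (image, annotation) pairs
    whose class matches the query's class (within-class random selection)."""
    rng = random.Random(seed)
    candidates = [(x, y) for (x, y, cls) in dataset if cls == query_class]
    return rng.sample(candidates, k)

# Toy usage with strings standing in for images and segmentation masks.
dataset = [("img0", "mask0", "cat"), ("img1", "mask1", "dog"),
           ("img2", "mask2", "cat"), ("img3", "mask3", "cat")]
prompt = random_within_class_prompt(dataset, "cat", k=2, seed=0)
assert len(prompt) == 2
assert all(x in ("img0", "img2", "img3") for x, _ in prompt)
```

As Fig. 1(a) shows, the variance of this baseline across draws is the motivation for the retrieval methods below.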
+ 2.2. Prompt Retrieval
+ Our goal is to automatically select the most suitable example(s) from the training dataset for a query x_q. To this end, we propose a prompt retrieval framework in the following form,
+ x* = arg max_{x_n ∈ D} f_θ(x_n, x_q),    (2)
+ where f_θ is a function parameterized by θ, aiming to produce a score for a pair of x_n and x_q. When K = 1, we choose the optimal example pair as the prompt, P = {x*, y*}. When K > 1, we rank the training examples by their scores and choose the top-K example pairs. An overview of our methods is provided in Fig. 1(b).
+ In this work, we implement f_θ as a combination of a neural network for feature extraction and the cosine distance function for measuring similarity between two feature vectors.
+ 2.2.1. UNSUPERVISED PROMPT RETRIEVAL
+ Our first method is unsupervised prompt retrieval, where the key idea is to use an off-the-shelf feature extractor for extracting image features so that we can compare the cosine distance between the query x_q and each training example x_n ∈ D. In this case, the parameters θ for the score function f_θ correspond to the off-the-shelf feature extractor, which are kept fixed.
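The nearest-example search in Eq. 2 can be sketched as follows. The feature vectors here are synthetic placeholders standing in for embeddings from an off-the-shelf extractor (the paper's experiments use, e.g., CLIP's vision encoder), so the numbers are purely illustrative.

```python
import numpy as np

def retrieve_topk(query_feat, source_feats, k=1):
    """Score each source example by cosine similarity to the query feature
    and return the indices of the k highest-scoring examples."""
    q = query_feat / np.linalg.norm(query_feat)
    s = source_feats / np.linalg.norm(source_feats, axis=1, keepdims=True)
    scores = s @ q                  # cosine similarity per source example
    return np.argsort(-scores)[:k]  # highest score first

# Toy check: the source vector aligned with the query wins.
query = np.array([1.0, 0.0])
sources = np.array([[0.0, 1.0], [2.0, 0.1], [-1.0, 0.0]])
assert retrieve_topk(query, sources, k=1)[0] == 1
```

With K > 1, the same ranking directly yields the top-K in-context examples.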
+ 2.2.2. SUPERVISED PROMPT RETRIEVAL
+ The unsupervised method discussed above is not explicitly optimized for in-context learning; instead, it relies on how the feature extractor was pre-trained, and the objective (function) used in pre-training may well not align with that of in-context learning. We propose a second method based on supervised prompt retrieval, where we assume the source data contains labels. The goal is to directly optimize the score function f_θ such that the chosen in-context example(s) can maximize the log-likelihood,
+ max_P log p(y_q | P, x_q).    (3)
+ In this work, we present a simple implementation for the supervised method, which simply turns the unsupervised method into a supervised one by making the feature extractor learnable. In other words, we directly optimize Eq. 3 with respect to the feature extractor. Below we explain in detail how we train the feature extractor (see Fig. 2 for an overview).
+ Data. Recall that we interpret the score f_θ(·, ·) as the cosine distance between two images in feature space. We would like to learn a space such that an image pair (x_n, x_q) with high in-context learning performance is close to each other, or far away from each other if the performance is low. Since there is no label defining how close a distance should be, we resort to contrastive learning for training the feature extractor. The goal is then to find a positive and a negative set for each training example x_n ∈ D treated as a query. Specifically, for each example x_n we compute the prediction ŷ_n = g_τ((x_m, y_m), x_n), where g_τ is the large vision model defined in Sec. 2.1 and x_m ∈ D but x_m ≠ x_n. Since we have the ground truth y_n for x_n, we can measure the performance by comparing the prediction ŷ_n with the ground truth y_n. Then, for each x_n we choose the top-5 examples with the highest/lowest performance to form a positive/negative set.
+ Training. Let z_n denote the features of x_n extracted by the neural network we aim to optimize. At each iteration, we sample a mini-batch B from the training dataset. Then, for each example in B, we sample one example from the top-5 positive and negative sets, respectively. The contrastive loss is computed as
+ ℓ = −(1/|B|) Σ_{x_n ∈ B} log [ exp(cos(z_n, z_n^+)) / ( exp(cos(z_n, z_n^+)) + Σ_{z_n^− ∈ N} exp(cos(z_n, z_n^−)) ) ],    (4)
+ where cos(·, ·) is the cosine distance function, z_n^+ denotes the feature representation of a positive example, and z_n^− denotes the feature representation of a negative example. It is worth noting that for mini-batch training, the negative set N contains a negative example of x_n sampled from the top-5 negative set and other examples within the same mini-batch.
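The loss in Eq. 4 can be sketched in a few lines of numpy, assuming precomputed feature vectors and random placeholders for the positive/negative samples. This is only a forward-pass sketch: a real implementation would backpropagate through the learnable feature extractor, and here the in-batch negatives are approximated by pooling every row's sampled negative.

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(z, z_pos, z_neg):
    """Eq. 4 sketch: for each query feature z[i], pull its sampled positive
    z_pos[i] closer and push the pooled negatives z_neg away, via a
    softmax over exponentiated cosine similarities."""
    losses = []
    batch = len(z)
    for i in range(batch):
        pos = np.exp(cos(z[i], z_pos[i]))
        negs = sum(np.exp(cos(z[i], z_neg[j])) for j in range(batch))
        losses.append(-np.log(pos / (pos + negs)))
    return sum(losses) / batch

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
z_neg = rng.normal(size=(4, 8))
loss_aligned = contrastive_loss(z, z.copy(), z_neg)  # positives identical to queries
loss_opposed = contrastive_loss(z, -z, z_neg)        # positives point away
assert loss_aligned < loss_opposed                   # better alignment => lower loss
```

The final assertion reflects the intended geometry: queries that sit close to their positives in feature space incur a lower loss.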
+ 3. Experiments
+ In this section we conduct a comprehensive evaluation using different prompt selection methods (Sec. 3.1) and compare their robustness to distribution shifts (Sec. 3.2). We also provide extensive quantitative and qualitative analyses in Sec. 3.3 to help understand why our methods work and how to better apply them in practice. Source code will be released to the community for reproducing the full experiments.
+ Methods. All experiments are based on the image inpainting model pre-trained by Bar et al. (2022) on a dataset consisting of academic figures.2 We mainly compare the following methods: (1) Random, the baseline method that randomly samples in-context examples from the source training dataset; (2) Unsupervised prompt retrieval (UnsupPR), our first proposed method that uses off-the-shelf features for nearest example search. The main experiments are based on CLIP's vision encoder (Radford et al., 2021), which was pre-trained using multimodal contrastive learning; (3) Supervised prompt retrieval (SupPR), our second proposed method that fine-tunes CLIP's vision encoder by directly optimizing in-context learning performance on downstream datasets. A variety of backbones are evaluated in Sec. 3.3.
+ Training details for the supervised model. The supervised model is trained for 200 epochs using SGD. The initial learning rate is set to 0.005, decayed by the cosine annealing rule.
+ 3.1. Main Results
+ Setup. Following Bar et al. (2022), we evaluate our methods on three computer vision tasks, which have not been seen during the training of the image inpainting model. We provide the details about the datasets used for these tasks as follows. (1) Foreground segmentation: We use Pascal-5i (Shaban et al., 2017), which has four non-overlapping splits each containing five categories. The results are averaged over all splits. (2) Single object detection: The experiments are done on Pascal VOC (Everingham et al., 2015). (3) Colorization: We use ImageNet-2012 (Russakovsky et al., 2015), where the original validation set containing 50,000 images is used as our test set. The training data used to learn our supervised prompt retrieval model is created by randomly sampling 50,000 images from ImageNet's 1.2M training set. For all experiments, in-context examples come from the training set.
+ Results. Table 1 shows the results on the three benchmarks covering foreground segmentation, single object detection, and colorization. We summarize our findings as follows. First, prompt retrieval clearly outperforms random selection. In particular, the improvements of prompt retrieval over random selection are significant in foreground segmentation and single object detection: more than 6% on the former and 1% on the latter. However, the gains on colorization are only marginal (0.63 vs. 0.67), suggesting that the image inpainting model is probably weak at image colorization. Second, the supervised prompt retrieval method performs the best. This is not surprising as the supervised method optimizes in-context learning performance concerning the prompt selection module. In contrast, the unsupervised method relies more on the off-the-shelf feature extractor. Overall, the results well justify the design of the prompt retrieval framework, which can serve as a strong baseline for future research.
+ 2 https://github.com/amirbar/visual_prompting
+ Table 1: Main results. The two prompt retrieval methods outperform random selection, and the supervised method achieves the best performance.
+ Method | Seg. Split-0 | Seg. Split-1 | Seg. Split-2 | Seg. Split-3 | Seg. Avg (mIoU ↑) | Det. (mIoU ↑) | Color. (mse ↓)
+ Random | 28.66 | 30.21 | 27.81 | 23.55 | 27.56 | 25.45 | 0.67
+ UnsupPR | 34.75 | 35.92 | 32.41 | 31.16 | 33.56 | 26.84 | 0.63
+ SupPR | 37.08 | 38.43 | 34.40 | 32.32 | 35.56 | 28.22 | 0.63
+ Table 2: Results on distribution shifts (from Pascal to MSCOCO). Despite being a learning-based approach, SupPR shows stronger robustness than UnsupPR and Random, which do not require any training.
+ Method | Split-0 | Split-1 | Split-2 | Split-3 | Avg (Seg. mIoU ↑)
+ Random | 12.17 | 18.47 | 20.55 | 15.94 | 16.78
+ UnsupPR | 12.67 | 19.62 | 21.33 | 18.44 | 18.02
+ SupPR | 13.62 | 21.25 | 24.46 | 20.44 | 19.95
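For reference, the mIoU numbers above average per-query intersection-over-union; a minimal sketch for binary segmentation masks (a standard definition, not the paper's evaluation code) looks like this:

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union between two binary masks.
    An empty-vs-empty comparison is scored as a perfect match."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

# Toy 2x2 masks: one of two predicted pixels overlaps the ground truth.
pred = np.array([[1, 1], [0, 0]], dtype=bool)
gt   = np.array([[1, 0], [0, 0]], dtype=bool)
assert iou(pred, gt) == 0.5
```

Mean IoU (mIoU) is then the average of this score over all queries (and, for Pascal-5i, over splits).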
+ 3.2. Experiments on Distribution Shifts
+ Setup. Distribution shifts are commonly seen in real-world applications, and therefore AI models need to be robust to distribution shifts (Zhou et al., 2022a). To test this ability in visual in-context learning, we create a new protocol focusing on foreground segmentation where the source dataset is Pascal while the target dataset is MSCOCO (Lin et al., 2014). Specifically, we follow the design of Pascal-5i and create MSCOCO-5i, which also has four splits, each having the same set of categories as in the corresponding split in Pascal-5i. Note that such a shift mainly affects the supervised prompt retrieval method that requires training but not the unsupervised UnsupPR and Random.
+ Results. The results are shown in Table 2. First of all, the unsupervised prompt retrieval method beats the random selection method by a clear margin. By comparing the two prompt retrieval methods, we find that the supervised method again performs better than the unsupervised one despite being a learning-based approach—this is an exciting finding as it means the supervised method does not have the overfitting problem here. Nonetheless, we observe that the gains achieved by the prompt retrieval methods here are generally smaller than the gains achieved on the standard foreground segmentation benchmark: here SupPR is only around 3% better on average than Random (19.95% vs. 16.78%) while the improvement in Table 1 reaches 8% (35.56% vs. 27.56%). One potential solution to reduce the gap might be to improve the image inpainting model, which is beyond the scope of this paper.
+ Table 3: Comparison between different backbones pre-trained using different methods: multimodal contrastive learning for CLIP, self-supervised learning for EVA, and supervised learning for ViT. Overall, the performance is insensitive to the choice of different backbones.
+ Method | Backbone | Split-0 | Split-1 | Split-2 | Split-3 | Avg (Seg. mIoU ↑)
+ UnsupPR | CLIP | 34.75 | 35.92 | 32.41 | 31.16 | 33.56
+ UnsupPR | EVA | 34.75 | 36.09 | 32.11 | 31.61 | 33.64
+ UnsupPR | ViT | 35.10 | 37.37 | 32.05 | 30.80 | 33.83
+ SupPR | CLIP | 37.08 | 38.43 | 34.40 | 32.32 | 35.56
+ SupPR | EVA | 36.11 | 39.14 | 34.31 | 33.30 | 35.71
+ SupPR | ViT | 36.80 | 39.70 | 34.71 | 33.25 | 36.12
538
+ 3.3. Further Analysis
539
+ What are good in-context examples? To answer this ques-
540
+ tion, we visualize the in-context examples found by Un-
541
+ supPR and SupPR in Fig. 3. We focus on foreground seg-
542
+ mentation and choose two categories from Pascal (person
543
+ and cow).3 In each grid, the first row corresponds to the re-
544
+ trieved in-context example (i.e., an input-output pair) while
545
+ the second row contains the query and model prediction. By
546
+ comparing the in-context examples picked by UnsupPR and
547
+ those picked by SupPR, we find the reason why SupPR per-
548
+ forms better than UnsupPR: the examples found by SupPR
549
+ are more similar to the queries in terms of semantics (e.g.,
550
+ Fig. 3(e)), background (e.g., Fig. 3(a)), object pose (e.g.,
551
+ Fig. 3(b), object appearance (e.g., Fig. 3(i)), viewpoint (e.g.,
552
+ 3The results of the remaining categories of Pascal and the
553
+ results on other tasks are provided in the supplementary.
554
+
555
+ What Makes Good Examples for Visual In-Context Learning?
556
+ (g)
557
+ (h)
558
+ (i)
559
+ (j)
560
+ (k)
561
+ (l)
562
+ IoU: 37.85
563
+ IoU: 47.48
564
+ IoU: 42.36
565
+ IoU: 69.46
566
+ IoU: 26.47
567
+ IoU: 27.78
568
+ IoU: 59.34
569
+ IoU: 46.74
570
+ (a)
571
+ (b)
572
+ (c)
573
+ (d)
574
+ (e)
575
+ (f)
576
+ IoU: 49.12
577
+ IoU: 23.21
578
+ IoU: 66.93
579
+ IoU: 61.25
580
+ IoU: 29.34
581
+ IoU: 63.38
582
+ IoU: 8.45
583
+ IoU: 36.67
584
+ IoU: 86.44
585
+ IoU: 86.64
586
+ IoU: 92.32
587
+ IoU: 80.14
588
+ IoU: 63.14
589
+ IoU: 79.22
590
+ IoU: 57.48
591
+ IoU: 49.87
592
+ vvv
593
+ v v
594
+ vv
595
+ Figure 3: In-context examples retrieved by UnsupPR and SupPR. In each grid, the first row contains the prompt while the
596
+ second row contains the query and prediction. The in-context examples found by SupPR are more similar than those found
597
+ by UnsupPR to the queries in a numer of ways: semantics (e.g., (e)), background (e.g., (a)), object pose (e.g., (b), object
598
+ appearance (e.g., (i)), viewpoint (e.g., (k)), etc. More examples can be found in the supplementary.
599
+ Fig. 3(k)), and so on. We also observe similar patterns in
600
+ other categories/tasks (please refer to the supplementary).
601
Backbone. To understand whether using a different backbone than CLIP would make a big difference, we further evaluate our prompt retrieval methods, UnsupPR and SupPR, on the foreground segmentation benchmark using two other backbones: EVA (Fang et al., 2022), pre-trained using self-supervised learning (i.e., masked image modeling), and ViT (Dosovitskiy et al., 2020), pre-trained using supervised learning. The results are reported in Table 3. Although these three backbones perform differently on image recognition under the fine-tuning setting (EVA performed the best), the gap between them for both UnsupPR and SupPR is less than 1%. Therefore, we can conclude that the choice of backbone does not matter much for visual in-context learning.
Size of retrieval set. Recall that in-context examples are sampled from the training dataset, namely the retrieval set. We are interested to know whether its size has any impact on performance, especially for the supervised prompt retrieval method. To this end, we build seven subsets for each split in Pascal-5i, covering a wide range of sizes (see the x-axis in Fig. 4, left). The results are plotted in Fig. 4, left. For random selection, the size does not matter at all. In contrast, the two prompt retrieval methods clearly benefit from a bigger size, but their performance plateaus once the size reaches a certain level. It is worth noting that for the supervised method, 20% of the total data is sufficient for achieving decent performance.
[Figure 4 residue: the left plot reports mIoU for Random (≈27.4–27.8 at all sizes), UnsupPR (29.81–33.56), and SupPR (31.30–35.64) as the retrieval set grows from 1% to 100% of the full set; the right plot reports mIoU under cosine, Euclidean, and Manhattan metrics (UnsupPR ≈33.6–33.7, SupPR ≈35.6, AVG ≈34.0–34.2).]

Figure 4: (Left) Impact of the size of the retrieval set. (Right) Ablation study on the distance metric used to compute the score function in Eq. 2. It can be observed that different metrics perform similarly.
Table 4: Impact of the order of in-context examples.

Seg. (mIoU) ↑ | Split-0      | Split-1      | Split-2      | Split-3      | Avg
Random        | 17.93 ± 0.20 | 25.48 ± 0.27 | 21.34 ± 0.73 | 21.12 ± 0.53 | 21.46 ± 0.43
UnsupPR       | 20.22 ± 0.31 | 27.58 ± 0.40 | 22.42 ± 0.38 | 23.36 ± 0.42 | 23.39 ± 0.37
SupPR         | 20.74 ± 0.40 | 28.19 ± 0.37 | 23.09 ± 0.34 | 24.22 ± 0.48 | 24.06 ± 0.40
Number of in-context examples. We follow Bar et al. (2022) and create a grid large enough to fit at most 8 examples (as shown in Fig. 5, right). By varying the number of in-context examples from 1 to 7, we obtain a set of results and plot them in Fig. 5, left. Clearly, more in-context examples lead to better performance for all three methods, including SupPR, UnsupPR, and Random. This is probably because in-context examples can be viewed as "training data", and having more training data typically benefits performance—in visual in-context learning, more training data gives a more comprehensive "context." We show a few example cases in Fig. 5, right, to explain this observation.
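For concreteness, the grid construction described above can be sketched as stacking each example's (input, output) pair into a row and leaving the query's output cell blank for the inpainting model to fill. This is an illustrative reconstruction under assumed image shapes, not the exact implementation of Bar et al. (2022); `build_prompt_grid` is a hypothetical helper name.

```python
import numpy as np

def build_prompt_grid(examples, query):
    """Assemble a visual-prompt canvas in the spirit of Bar et al. (2022).

    examples: list of (input_img, output_img) pairs, each of shape (H, W, 3)
    query: (H, W, 3) input image whose output cell is left blank

    Inputs go in the left column, outputs in the right column; the query
    sits in the bottom-left and the bottom-right cell is the hole the
    inpainting model is asked to fill.
    """
    rows = [np.concatenate([x, y], axis=1) for x, y in examples]
    blank = np.zeros_like(query)  # the cell the model must inpaint
    rows.append(np.concatenate([query, blank], axis=1))
    return np.concatenate(rows, axis=0)
```

With one in-context example and a query, the canvas is a 2×2 grid whose bottom-right quadrant is empty.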
Order of in-context examples. To understand if changing the order of in-context examples makes a difference, we fix the number of in-context examples to 3, evaluate all possible combinations, and compute the mean and standard deviation. As shown in Table 4, the standard deviation is generally small, so the order is not a concern as long as good examples are chosen.
Distance metric. We use the cosine distance by default to compute the score function in Eq. 2. Here we evaluate other design choices, including Euclidean distance and Manhattan distance. As shown in Fig. 4, right, the results are very similar for different distance metrics.
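As a sketch of the ablation above, the score function of Eq. 2 can be instantiated with any of the three metrics over precomputed image features. The snippet below is a minimal illustration, not the paper's implementation; `score` and `retrieve` are hypothetical helper names, and all metrics are oriented so that higher is better.

```python
import numpy as np

def score(query_feat, example_feats, metric="cosine"):
    """Score candidate in-context examples against a query feature vector.

    query_feat: (d,) array; example_feats: (n, d) array.
    Distances are negated so that a higher score always means "closer".
    """
    if metric == "cosine":
        q = query_feat / np.linalg.norm(query_feat)
        e = example_feats / np.linalg.norm(example_feats, axis=1, keepdims=True)
        return e @ q
    if metric == "euclidean":
        return -np.linalg.norm(example_feats - query_feat, axis=1)
    if metric == "manhattan":
        return -np.abs(example_feats - query_feat).sum(axis=1)
    raise ValueError(f"unknown metric: {metric}")

def retrieve(query_feat, example_feats, k=1, metric="cosine"):
    """Return indices of the top-k in-context examples for the query."""
    s = score(query_feat, example_feats, metric)
    return np.argsort(-s)[:k]
```

Because all three metrics rank by proximity in the same feature space, they tend to return overlapping candidates, which is consistent with the similar results in Fig. 4, right.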
[Figure 5 residue: the left plot reports mIoU for Random, UnsupPR, and SupPR with 1, 3, 5, and 7 in-context examples; the right panels show example grids with per-grid IoU scores.]

Figure 5: (Left) Impact of the number of in-context examples. (Right) More in-context examples can lead to better performance. The query in each grid is shown in the bottom right.

4. Related Work

4.1. In-Context Learning

In-context learning is a novel paradigm that emerged in large language models, such as GPT-3 (Brown et al., 2020). It allows an autoregressive language model to perform inference on unseen tasks by conditioning the input on some target-specific input-output pairs serving as "context." Such a powerful paradigm allows users to customize a model's output according to their downstream datasets without changing the internal model parameters, which are often inaccessible. Recent research in natural language processing has shown that in-context learning can be applied to numerous language tasks, such as machine translation (Garcia & Firat, 2022), sentiment analysis (Min et al., 2021), and question answering (Press et al., 2022).

In computer vision, in-context learning is still a relatively new concept. One of the earliest works tackling in-context learning is Flamingo (Alayrac et al., 2022), a large visual language model that takes language as instruction and allows the processing of both images and videos. More relevant to our work is a pure vision model developed by Bar et al. (2022), which was pre-trained to fill in missing patches in images made of academic figures and infographics. Bar et al. (2022) found that such an image inpainting model can solve problems unseen during training, like foreground segmentation and image colorization.

Our work follows Bar et al. (2022) but studies visual in-context learning from a different dimension: how to find good visual in-context examples that benefit downstream performance.
4.2. Prompt Retrieval in NLP

The natural language processing community has found that the choice of in-context examples has a huge impact on performance (Agrawal et al., 2022; Liu et al., 2021). Moreover, the way in-context examples, also called prompts, are constructed can also affect performance, e.g., prompt length and the order of in-context examples, as reported in the literature (Agrawal et al., 2022). These findings prompted the community to study how to find good in-context examples for large language models, which has inspired our research.

Liu et al. (2021) assumed that good in-context examples should be semantically close to query sentences, based on which they proposed to select nearest neighbors in the training set as measured by a sentence encoder like RoBERTa (Liu et al., 2019). Rubin et al. (2021) first used an unsupervised method to retrieve candidates, among which top examples were chosen using a supervised prompt retriever trained to maximize downstream performance.
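The retrieve-then-rerank recipe attributed to Rubin et al. (2021) can be sketched as a two-stage pipeline: an unsupervised nearest-neighbor pass shortlists candidates, and a supervised scorer reranks them. This is only an illustrative sketch; `reranker_score` is a hypothetical placeholder for a trained supervised retriever, not their actual system.

```python
import numpy as np

def two_stage_retrieve(query_feat, pool_feats, reranker_score, m=16, k=1):
    """Two-stage prompt selection: unsupervised shortlist, supervised rerank.

    query_feat: (d,) query feature; pool_feats: (n, d) candidate features.
    reranker_score(query_feat, example_feat) -> float, higher is better
    (e.g., a learned predictor of downstream performance).
    """
    # Stage 1: cosine-similarity shortlist of m candidates.
    q = query_feat / np.linalg.norm(query_feat)
    p = pool_feats / np.linalg.norm(pool_feats, axis=1, keepdims=True)
    shortlist = np.argsort(-(p @ q))[:m]
    # Stage 2: rerank the shortlist with the supervised scorer.
    scores = [reranker_score(query_feat, pool_feats[i]) for i in shortlist]
    return shortlist[np.argsort(scores)[::-1][:k]]
```

The shortlist keeps the expensive supervised scorer off the full pool, which is the main practical appeal of the two-stage design.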
5. Discussion and Conclusion

Our research presents a timely study on an emergent ability termed in-context learning for large vision models. We systematically investigate how the choice of in-context examples impacts downstream performance, exposing a critical issue: different in-context examples can lead to drastically different results. We then propose an effective prompt retrieval framework for visual in-context learning, with two simple implementations provided: one based on unsupervised learning and the other based on supervised learning. Our methods obtain significant improvements over random selection under various problem settings, showing the potential of using prompt retrieval in vision applications with a Model-as-a-Service (MaaS) business structure.

Our research also unveils some intriguing phenomena. For instance, we show that a good in-context example should be semantically similar to the query and close in context, e.g., in viewpoint, background, and appearance. As such, state-of-the-art vision models like CLIP would not be sufficient, because these models often emphasize semantics but not the other elements critical to finding good visual in-context examples. A model that can better balance spatial and semantic closeness in feature space would be more ideal for visual in-context learning. We hope the insights presented in this work can pave the way for developing more effective prompt retrieval methods.

Our experiments show that our methods are not strong enough to cope with distribution shifts. Though they outperform random selection under distribution shifts, the gap is much smaller than that on a standard benchmark, suggesting huge room for improvement.
References

Agrawal, S., Zhou, C., Lewis, M., Zettlemoyer, L., and Ghazvininejad, M. In-context examples selection for machine translation. arXiv preprint arXiv:2212.02437, 2022.

Alayrac, J.-B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds, M., et al. Flamingo: a visual language model for few-shot learning. arXiv preprint arXiv:2204.14198, 2022.

Bar, A., Gandelsman, Y., Darrell, T., Globerson, A., and Efros, A. A. Visual prompting via image inpainting. arXiv preprint arXiv:2209.00647, 2022.

Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S., Bohg, J., Bosselut, A., Brunskill, E., et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in Neural Information Processing Systems (NeurIPS), 33:1877–1901, 2020.

Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.

Everingham, M., Eslami, S., Van Gool, L., Williams, C. K., Winn, J., and Zisserman, A. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision (IJCV), 111(1):98–136, 2015.

Fang, Y., Wang, W., Xie, B., Sun, Q., Wu, L., Wang, X., Huang, T., Wang, X., and Cao, Y. EVA: Exploring the limits of masked visual representation learning at scale. arXiv preprint arXiv:2211.07636, 2022.

Garcia, X. and Firat, O. Using natural language prompts for machine translation. arXiv preprint arXiv:2202.11822, 2022.

Lester, B., Al-Rfou, R., and Constant, N. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021.

Li, X. L. and Liang, P. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190, 2021.

Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. Microsoft COCO: Common objects in context. In European Conference on Computer Vision (ECCV), pp. 740–755. Springer, 2014.

Liu, J., Shen, D., Zhang, Y., Dolan, B., Carin, L., and Chen, W. What makes good in-context examples for GPT-3? arXiv preprint arXiv:2101.06804, 2021.

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.

Min, S., Lewis, M., Zettlemoyer, L., and Hajishirzi, H. MetaICL: Learning to learn in context. arXiv preprint arXiv:2110.15943, 2021.

Pan, J., Lin, Z., Zhu, X., Shao, J., and Li, H. ST-Adapter: Parameter-efficient image-to-video transfer learning for action recognition. arXiv preprint arXiv:2206.13559, 2022.

Press, O., Zhang, M., Min, S., Schmidt, L., Smith, N. A., and Lewis, M. Measuring and narrowing the compositionality gap in language models. arXiv preprint arXiv:2210.03350, 2022.

Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (ICML), pp. 8748–8763. PMLR, 2021.

Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. Zero-shot text-to-image generation. In International Conference on Machine Learning (ICML), pp. 8821–8831. PMLR, 2021.

Rubin, O., Herzig, J., and Berant, J. Learning to retrieve prompts for in-context learning. arXiv preprint arXiv:2112.08633, 2021.

Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.

Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E., Ghasemipour, S. K. S., Ayan, B. K., Mahdavi, S. S., Lopes, R. G., et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022.

Shaban, A., Bansal, S., Liu, Z., Essa, I., and Boots, B. One-shot learning for semantic segmentation. In British Machine Vision Conference (BMVC), 2017.

Zhang, Y., Zhou, K., and Liu, Z. Neural prompt search. arXiv preprint arXiv:2206.04673, 2022.

Zhou, K., Liu, Z., Qiao, Y., Xiang, T., and Loy, C. C. Domain generalization: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022a.

Zhou, K., Yang, J., Loy, C. C., and Liu, Z. Conditional prompt learning for vision-language models. In Conference on Computer Vision and Pattern Recognition (CVPR), 2022b.

Zhou, K., Yang, J., Loy, C. C., and Liu, Z. Learning to prompt for vision-language models. International Journal of Computer Vision (IJCV), 2022c.
A. Illustration of In-context Examples

In the supplementary material, we illustrate more in-context learning results for the foreground segmentation, single object detection, and colorization tasks.

A.1. Foreground Segmentation

The main paper presents the in-context examples from the person and cow categories. In the supplementary, as shown in Fig. 6–11, we present examples from the remaining 18 categories in Pascal-5i.

A.2. Single Object Detection

As shown in Fig. 12–13, we illustrate the in-context examples from the single object detection task. By comparing the in-context examples picked by UnsupPR and those picked by SupPR, we find that the examples found by SupPR are more similar to the queries in terms of object pose (e.g., Fig. 12(f)) and viewpoint (e.g., Fig. 12(r)).

A.3. Colorization

As shown in Fig. 14–15, we illustrate the in-context examples from the colorization task. This task aims to map a gray-scale image to a color image. By comparing the in-context examples picked by UnsupPR and those picked by SupPR, we find that the ground truth images of examples found by SupPR are more similar to those of the queries in terms of image style, e.g., the background color (e.g., Fig. 14(g)(h)).
[Figures 6–11 show grids (a)–(s) of foreground-segmentation in-context examples with per-grid IoU scores; Figures 12–13 show single-object-detection examples with per-grid IoU scores; Figures 14–15 show colorization examples with per-grid MSE scores.]

Figure 6: In-context examples, which are from the foreground segmentation task, retrieved by UnsupPR and SupPR. These grids show examples from the train, tv, and bus categories.

Figure 7: In-context examples, which are from the foreground segmentation task, retrieved by UnsupPR and SupPR. These grids show examples from the bottle, sheep, and bird categories.

Figure 8: In-context examples, which are from the foreground segmentation task, retrieved by UnsupPR and SupPR. These grids show examples from the boat, airplane, and bicycle categories.

Figure 9: In-context examples, which are from the foreground segmentation task, retrieved by UnsupPR and SupPR. These grids show examples from the car, cat, and chair categories.

Figure 10: In-context examples, which are from the foreground segmentation task, retrieved by UnsupPR and SupPR. These grids show examples from the dog, horse, and motorbike categories.

Figure 11: In-context examples, which are from the foreground segmentation task, retrieved by UnsupPR and SupPR. These grids show examples from the table, plant, and sofa categories.

Figure 12: In-context examples, which are from the single object detection task, retrieved by UnsupPR and SupPR. We find the examples found by SupPR are more similar to the queries in terms of object pose (e.g., (f)) and viewpoint (e.g., (r)).

Figure 13: In-context examples, which are from the single object detection task, retrieved by UnsupPR and SupPR. We find the examples found by SupPR are more similar to the queries in terms of object pose (e.g., (l)) and viewpoint (e.g., (m)).

Figure 14: In-context examples, which are from the colorization task, retrieved by UnsupPR and SupPR. We also show the ground truth of the query image; the query image is the gray-scale version of its ground truth. The ground truth images of the in-context examples found by SupPR are more similar than those found by UnsupPR to the ground truth images of the queries in terms of image style, e.g., the background color (g).

Figure 15: In-context examples, which are from the colorization task, retrieved by UnsupPR and SupPR. We also show the ground truth of the query image; the query image is the gray-scale version of its ground truth. The ground truth images of the in-context examples found by SupPR are more similar than those found by UnsupPR to the ground truth images of the queries in terms of image style, e.g., the background color (h).