jackkuo committed
Commit 181b144 · verified · 1 Parent(s): 60dae9e

Delete knowledge_base

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. knowledge_base/0tAzT4oBgHgl3EQfDPqY/content/2301.00973v1.pdf +0 -3
  2. knowledge_base/0tAzT4oBgHgl3EQfDPqY/content/tmp_files/2301.00973v1.pdf.txt +0 -1482
  3. knowledge_base/0tAzT4oBgHgl3EQfDPqY/content/tmp_files/load_file.txt +0 -0
  4. knowledge_base/0tAzT4oBgHgl3EQfDPqY/vector_store/index.faiss +0 -3
  5. knowledge_base/0tAzT4oBgHgl3EQfDPqY/vector_store/index.pkl +0 -3
  6. knowledge_base/0tFPT4oBgHgl3EQfTjTq/content/2301.13054v1.pdf +0 -3
  7. knowledge_base/0tFPT4oBgHgl3EQfTjTq/content/tmp_files/2301.13054v1.pdf.txt +0 -1564
  8. knowledge_base/0tFPT4oBgHgl3EQfTjTq/content/tmp_files/load_file.txt +0 -0
  9. knowledge_base/0tFPT4oBgHgl3EQfTjTq/vector_store/index.faiss +0 -3
  10. knowledge_base/0tFPT4oBgHgl3EQfTjTq/vector_store/index.pkl +0 -3
  11. knowledge_base/29E2T4oBgHgl3EQfjQdC/content/2301.03966v1.pdf +0 -3
  12. knowledge_base/29E2T4oBgHgl3EQfjQdC/content/tmp_files/2301.03966v1.pdf.txt +0 -1687
  13. knowledge_base/29E2T4oBgHgl3EQfjQdC/content/tmp_files/load_file.txt +0 -0
  14. knowledge_base/29E2T4oBgHgl3EQfjQdC/vector_store/index.faiss +0 -3
  15. knowledge_base/29E2T4oBgHgl3EQfjQdC/vector_store/index.pkl +0 -3
  16. knowledge_base/4tE1T4oBgHgl3EQf6QWC/content/2301.03521v1.pdf +0 -3
  17. knowledge_base/4tE1T4oBgHgl3EQf6QWC/content/tmp_files/2301.03521v1.pdf.txt +0 -1047
  18. knowledge_base/4tE1T4oBgHgl3EQf6QWC/content/tmp_files/load_file.txt +0 -0
  19. knowledge_base/4tE1T4oBgHgl3EQf6QWC/vector_store/index.faiss +0 -3
  20. knowledge_base/4tE1T4oBgHgl3EQf6QWC/vector_store/index.pkl +0 -3
  21. knowledge_base/7dE4T4oBgHgl3EQf2Q2c/content/2301.05297v1.pdf +0 -3
  22. knowledge_base/7dE4T4oBgHgl3EQf2Q2c/content/tmp_files/2301.05297v1.pdf.txt +0 -1046
  23. knowledge_base/7dE4T4oBgHgl3EQf2Q2c/content/tmp_files/load_file.txt +0 -0
  24. knowledge_base/7dE4T4oBgHgl3EQf2Q2c/vector_store/index.faiss +0 -3
  25. knowledge_base/7dE4T4oBgHgl3EQf2Q2c/vector_store/index.pkl +0 -3
  26. knowledge_base/99FJT4oBgHgl3EQfpCw2/content/2301.11598v1.pdf +0 -3
  27. knowledge_base/99FJT4oBgHgl3EQfpCw2/content/tmp_files/2301.11598v1.pdf.txt +0 -1960
  28. knowledge_base/99FJT4oBgHgl3EQfpCw2/content/tmp_files/load_file.txt +0 -0
  29. knowledge_base/99FJT4oBgHgl3EQfpCw2/vector_store/index.faiss +0 -3
  30. knowledge_base/99FJT4oBgHgl3EQfpCw2/vector_store/index.pkl +0 -3
  31. knowledge_base/9tAyT4oBgHgl3EQfQ_bX/content/2301.00059v1.pdf +0 -3
  32. knowledge_base/9tAyT4oBgHgl3EQfQ_bX/content/tmp_files/2301.00059v1.pdf.txt +0 -1576
  33. knowledge_base/9tAyT4oBgHgl3EQfQ_bX/content/tmp_files/load_file.txt +0 -0
  34. knowledge_base/9tAyT4oBgHgl3EQfQ_bX/vector_store/index.faiss +0 -3
  35. knowledge_base/9tAyT4oBgHgl3EQfQ_bX/vector_store/index.pkl +0 -3
  36. knowledge_base/B9FQT4oBgHgl3EQfNza6/content/2301.13273v1.pdf +0 -3
  37. knowledge_base/B9FQT4oBgHgl3EQfNza6/content/tmp_files/2301.13273v1.pdf.txt +0 -0
  38. knowledge_base/B9FQT4oBgHgl3EQfNza6/content/tmp_files/load_file.txt +0 -0
  39. knowledge_base/B9FQT4oBgHgl3EQfNza6/vector_store/index.faiss +0 -3
  40. knowledge_base/B9FQT4oBgHgl3EQfNza6/vector_store/index.pkl +0 -3
  41. knowledge_base/BdE2T4oBgHgl3EQfRge6/content/2301.03782v1.pdf +0 -3
  42. knowledge_base/BdE2T4oBgHgl3EQfRge6/content/tmp_files/2301.03782v1.pdf.txt +0 -1366
  43. knowledge_base/BdE2T4oBgHgl3EQfRge6/content/tmp_files/load_file.txt +0 -0
  44. knowledge_base/BdE2T4oBgHgl3EQfRge6/vector_store/index.faiss +0 -3
  45. knowledge_base/BdE2T4oBgHgl3EQfRge6/vector_store/index.pkl +0 -3
  46. knowledge_base/BdE4T4oBgHgl3EQf5Q5A/content/2301.05321v1.pdf +0 -3
  47. knowledge_base/BdE4T4oBgHgl3EQf5Q5A/content/tmp_files/2301.05321v1.pdf.txt +0 -2198
  48. knowledge_base/BdE4T4oBgHgl3EQf5Q5A/content/tmp_files/load_file.txt +0 -0
  49. knowledge_base/BdE4T4oBgHgl3EQf5Q5A/vector_store/index.faiss +0 -3
  50. knowledge_base/BdE4T4oBgHgl3EQf5Q5A/vector_store/index.pkl +0 -3
knowledge_base/0tAzT4oBgHgl3EQfDPqY/content/2301.00973v1.pdf DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:c13daab503f1b67bffffdb7c8a8381e73d08fb05ab7c1cd94ef385b4acbc2a33
- size 5629212
knowledge_base/0tAzT4oBgHgl3EQfDPqY/content/tmp_files/2301.00973v1.pdf.txt DELETED
@@ -1,1482 +0,0 @@
Detecting Severity of Diabetic Retinopathy from Fundus Images using Ensembled Transformers

Chandranath Adak, Senior Member, IEEE, Tejas Karkera, Soumi Chattopadhyay, Member, IEEE, and Muhammad Saqib

C. Adak is with the Dept. of CSE, IIT Patna, India-801106; T. Karkera is with Atharva College of Engineering, Mumbai, India-400095; S. Chattopadhyay is with the Dept. of CSE, IIIT Guwahati, India-781015; and M. Saqib is with Data61, CSIRO, Australia-2122. Corresponding author: C. Adak (e-mail: [email protected]).

Abstract— Diabetic Retinopathy (DR) is considered one of the primary concerns due to its effect on vision loss among most people with diabetes globally. The severity of DR is mostly assessed manually by ophthalmologists from fundus photography-based retina images. This paper deals with automated understanding of the severity stages of DR. In the literature, researchers have addressed this automation using traditional machine learning-based algorithms and convolutional architectures. However, past works have hardly focused on the essential parts of the retinal image needed to improve model performance. In this paper, we adopt transformer-based learning models to capture the crucial features of retinal images and thereby understand DR severity better. We work with ensembles of image transformers, adopting four models, namely ViT (Vision Transformer), BEiT (Bidirectional Encoder representation for image Transformer), CaiT (Class-Attention in image Transformers), and DeiT (Data-efficient image Transformers), to infer the degree of DR severity from fundus photographs. For experiments, we used the publicly available APTOS-2019 blindness detection dataset, where the performances of the transformer-based models were quite encouraging.

Index Terms— Blindness Detection, Diabetic Retinopathy, Deep Learning, Transformers.

I. INTRODUCTION

Diabetes Mellitus, also known as diabetes, is a disorder in which the patient experiences increased blood sugar levels over a long period. Diabetic Retinopathy (DR) is a microvascular complication of diabetes in which the retina's blood vessels get damaged, which can lead to poor vision and even blindness if untreated [1], [2]. Studies estimate that, by twenty years after diabetes onset, about 99% (or 60%) of patients with type-I (or type-II) diabetes might have DR [1]. With a worldwide DR population of about 126.6 million in 2010, the current estimate is roughly 191 million by 2030 [3], [4]. However, about 56% of new DR cases can be prevented by timely treatment and monitoring of the severity [5]. The ophthalmologist analyzes fundus images for lesion-based symptoms such as microaneurysms, hard/soft exudates, and hemorrhages to understand the severity stage of DR [1], [2]. Positive DR is divided into the following stages [5]: (1) mild: the earliest stage, which can contain microaneurysms; (2) moderate: the blood vessels lose their ability to transport blood; (3) severe: blockages can occur in the blood vessels, signaling the growth of new blood vessels; (4) proliferative: the advanced stage, in which new blood vessels start growing. Fig. 1 shows some fundus images representing the DR severity stages.

Fig. 1. Fundus images with DR severity stages (negative, mild, moderate, severe, proliferative) from APTOS-2019 [7].

Manually examining fundus images for DR severity grading may bring inconsistencies due to the high number of patients, the small number of well-trained clinicians, long diagnosis times, unclear lesions, etc. Moreover, there may be disagreement among ophthalmologists in choosing the correct severity grade [6]. Therefore, computer-aided techniques have entered the scenario for better diagnosis and for broadening the prospects of early-stage detection [2].
Automated DR severity stage detection from fundus photographs has been studied for the last two and a half decades. Earlier, some image processing tools were used [8], [9], but machine learning-based DR detection became popular in the early 2000s. The machine learning-based techniques mostly relied on hand-engineered features carefully extracted from the fundus images and then fed to a classifier, e.g., Random Forest (RF) [10], KNN (K-Nearest Neighbors) [11], SVM (Support Vector Machine) [12], and ANN (Artificial Neural Network) [13]. Although SVM- and ANN-based models were admired in the DR community, hand-engineered feature-based machine learning models require efficient prior feature extraction, which may introduce errors for complex fundus images [1], [2]. On the other hand, deep learning-based models extract features automatically through convolution operations [14], [15]. Besides, from 2012 onward, deep learning architectures rose to prominence in the computer vision community, which also influenced DR severity analysis from fundus images [1]. The past deep learning-based techniques mostly employed CNNs (Convolutional Neural Networks) [1], [16]. However, the ability to attend to certain regions/features while fading the remaining portions hardly exists in classical CNNs. For this reason, some contemporary methods incorporated an attention mechanism [17], [18]. Although multiple research works are present in the literature [1], [2] and efforts have been made to detect DR in the initial stages of its development, there is still room for improving performance by incorporating higher degrees of automated feature extraction using better deep learning models.
In this paper, we employ the transformer model, leveraging its MSA (Multi-head Self-Attention) [19] to focus on the DR-revealing regions of the fundus image for understanding the severity. Moreover, the transformer model has shown high performance in recent years [19], [20]. Initially, we adopted ViT (Vision Transformer) [19] for detecting DR severity due to its strong performance on image classification tasks. ViT divides the input image into a sequence of patches and applies global attention [19]. Since standard ViT requires hefty amounts of data, we also adopted some other image transformer models, namely CaiT (Class-attention in image Transformers) [21], DeiT (Data-efficient image Transformer) [22], and BEiT (Bidirectional Encoder representation for image Transformer) [23]. CaiT is a modified version of ViT that employs specific class-attention [21]. DeiT uses knowledge distillation, which transfers knowledge from one network to another through a teacher-student hierarchy [22]. BEiT is inspired by BERT (Bidirectional Encoder Representations from Transformers) [24]: it masks image patches and models them to pre-train the ViT [23]. For experiments, we used the publicly available APTOS-2019 blindness detection dataset [7], on which the individual image transformers did not perform well. Therefore, we ensembled the image transformers to seek better predictive performance. The ensembled image transformer obtained quite encouraging results for DR severity stage detection. This is one of the earliest attempts to adopt and ensemble image transformers for DR severity stage detection, which is the main contribution of this paper.

The rest of the paper is organized as follows. § II discusses the relevant literature on DR, and § III presents the proposed methodology. § IV then analyzes and discusses the experimental results. Finally, § V concludes the paper.
II. RELATED WORK

This section briefly presents the literature on DR severity detection from fundus images. The modern grading of DR severity stages can be traced to the report by the ETDRS research group [25]. In the past, some image processing-based strategies (e.g., wavelet transform [8], radon transform [9]) were published. For the last two decades, machine learning and deep learning-based approaches have shown dominance. We broadly categorize the related works into (a) hand-engineered feature-based models [11], [26], [27], and (b) deep feature-based models [2], which are discussed below.
A. Hand-engineered Feature-based Models

The hand-engineered feature-based models mostly employed RF [26], KNN [28], SVM [27], and ANN [29] for detecting DR severity stages. Acharya et al. [26] employed a decision tree with discrete wavelet/cosine transform-based features extracted from retinal images. Casanova et al. [10] introduced RF for DR severity stage classification. In [30], RF was also used to assess DR risk. A KNN classifier was employed in [11] to detect drusen, exudates, and cotton-wool spots for diagnosing DR. Tang et al. [28] used KNN for retinal hemorrhage detection from fundus photographs. In [27], retinal changes due to DR were detected using SVM. Akram et al. [12] used SVM and GMM (Gaussian Mixture Model) with enhanced features such as shape, intensity, and statistics of the affected region to identify microaneurysms for early detection of DR. ANN was employed in [13] to classify lesions for detecting DR severity. Osareh et al. [31] employed FCM (Fuzzy C-Means)-based segmentation and GA (Genetic Algorithm)-based feature selection with ANN to detect exudates in DR. In [29], PSO (Particle Swarm Optimization) was used for feature selection, followed by ANN-based DR severity classification.
B. Deep Feature-based Models

The past deep architectures mostly used CNNs for tackling DR severity. For example, Yu et al. [16] used a CNN for detecting exudates in DR; Chudzik et al. [32] worked on microaneurysm detection using a CNN with transfer learning and layer freezing; and Gargeya and Leng [33] employed CNN-based deep residual learning to identify fundus images with DR. In [4], a CNN was also used to identify DR severity stages and some related eye diseases, e.g., glaucoma and AMD (Age-related Macular Degeneration). In [34], some classical CNN architectures (e.g., AlexNet, VGG Net, GoogLeNet, ResNet) were employed for DR severity stage detection. Wang et al. [17] proposed Zoom-in-Net, which combined a CNN, an attention mechanism, and a greedy algorithm to zoom in on the region of interest for handling DR. A modified DenseNet169 architecture in conjunction with an attention mechanism was used in [18] to extract refined features for DR severity grading. In [35], a modified Xception architecture was employed for DR classification. TAN (Texture Attention Network) was proposed in [36] by leveraging a style (texture features) and content (semantic and contextual features) re-calibration mechanism. Tymchenko et al. [5] ensembled three CNN architectures (EfficientNet-B4 [37], EfficientNet-B5, and SE-ResNeXt50 [38]) for DR severity detection. Very recently, a few transformer-based models have come out, e.g., CoT-XNet [39], which combined a contextual transformer with the Xception architecture, and SSiT [40], which employed self-supervised image transformers guided by saliency maps.
III. METHODOLOGY

This section first formalizes the problem statement, followed by the proposed solution architecture.

A. Problem Formulation

In this work, we are given an image I captured by fundus photography, which is the input to the architecture. The task is to predict the severity stage of diabetic retinopathy (DR) from I, among negative, mild, moderate, severe, and proliferative. We formulate the task as a multi-class classification problem [15]. Here, features are extracted from I and fed to a classifier to predict the DR severity class label ĉ, where ĉ ∈ {0, 1, 2, 3, 4} corresponds to {negative, mild, moderate, severe, proliferative}, respectively.
B. Solution Architecture

For detecting the severity stage of DR from a fundus photograph, we adopt image transformers, i.e., ViT (Vision Transformer) [19], BEiT (Bidirectional Encoder representation for image Transformer) [23], CaiT (Class-attention in image Transformers) [21], and DeiT (Data-efficient image Transformers) [22], and ensemble them. However, we preprocess the raw fundus images before feeding them into the transformers, which we discuss first.
1) Preprocessing: The performance of deep learning models is susceptible to the quality and quantity of data passed to the model. Raw data as input can barely account for the best achievable performance of the model, due to possible pre-existing noise and inconsistency in the images. Therefore, a definite preprocessing flow is essential to train the model better [15].

We now discuss the various preprocessing and augmentation techniques [15], [41] applied to the raw fundus photographs for better learning. In a dataset, the fundus images may be of various sizes; therefore, we resize the image I into a 256 × 256 image Iz. We perform data augmentation on the training set (DBtr), where we use center cropping with central_fraction = 0.5, horizontal/vertical flips, random rotations within a range of [0°, 45°], random brightness changes with max_delta = 0.95, and random contrast changes in the interval [0.1, 0.9]. We also apply CLAHE (Contrast Limited Adaptive Histogram Equalization) [42] on 30% of the samples of DBtr, which ensures over-amplification of contrast in smaller regions instead of over the entire image. A minimal sketch of this pipeline is given below.
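The following TensorFlow-2 sketch (the framework named in § IV-B.1) illustrates one way to assemble the above augmentations; the op ordering, the resize back to 256 × 256 after cropping, and the CLAHE parameters are assumptions of this sketch rather than details stated in the paper.

```python
import tensorflow as tf

# Random rotation: factor is a fraction of 2*pi, so (0, 45/360) covers [0°, 45°].
rotate = tf.keras.layers.RandomRotation(factor=(0.0, 45.0 / 360.0))

def augment(image):
    """Augment one RGB fundus image with values in [0, 1] (sketch of § III-B.1)."""
    image = tf.image.resize(image, [256, 256])                     # I -> Iz
    image = tf.image.central_crop(image, central_fraction=0.5)     # center cropping
    image = tf.image.resize(image, [256, 256])                     # restore size (assumption)
    image = tf.image.random_flip_left_right(image)                 # horizontal flip
    image = tf.image.random_flip_up_down(image)                    # vertical flip
    image = rotate(image[tf.newaxis, ...], training=True)[0]       # rotation in [0°, 45°]
    image = tf.image.random_brightness(image, max_delta=0.95)      # brightness change
    image = tf.image.random_contrast(image, lower=0.1, upper=0.9)  # contrast change
    return tf.clip_by_value(image, 0.0, 1.0)

# CLAHE is applied to ~30% of DB_tr, e.g., offline with OpenCV (parameters assumed):
#   clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
```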
2) Transformer Networks: Deep learning models for computer vision tasks have long been dominated by CNNs (Convolutional Neural Networks), which extract high-level feature maps by passing the image through a series of convolution operations before feeding it into an MLP (Multi-Layer Perceptron) for classification [43]. In recent years, transformer models have shown a substantial rise in the NLP (Natural Language Processing) domain due to their higher performance [20]. In a similar quest to leverage high-level performance through transformers, they have been introduced for image classification and some other computer vision-oriented tasks [19]. Moreover, the transformer model has less image-specific inductive bias than a CNN [19].

To identify the severity stages of DR from fundus images, we adopt and ensemble the image transformers ViT [19], BEiT [23], CaiT [21], and DeiT [22]. Before focusing on our ensembled transformer model, we discuss the adaptation of the individual image transformers for our task, starting with ViT.
a) Vision Transformer (ViT): The ViT model adopts the idea of text-based transformer models [44]: it takes the input image as a series of image patches instead of textual words, and then extracts features to feed into an MLP [19]. A pictorial representation of ViT is presented in Fig. 2.

Fig. 2. Workflow of ViT: a linear projection of flattened patches with patch + position embeddings, a stack of transformer encoders (TE), and an MLP head with softmax.

Fig. 3. Internal view of a transformer encoder (TE): LN followed by MSA, then LN followed by MLP, each wrapped with a residual connection.

Here, the input image Iz is converted into a sequence of flattened patches $x_p^i$ (for $i = 1, 2, \ldots, n_p$), each of size $w_p \times w_p \times c_p$, where $c_p$ denotes the number of channels of Iz. Here, $c_p = 3$, since Iz is an RGB fundus image. In our task, Iz is of size 256 × 256, and empirically we choose $w_p = 64$, which results in $n_p = (256/64)^2 = 16$. Each patch $x_p^i$ is flattened further and mapped to a D-dimensional latent vector (i.e., the patch embedding $z_0$) through a trainable linear projection, as below:

$z_0 = [x_{class};\; x_p^1 E;\; x_p^2 E;\; \ldots;\; x_p^{n_p} E] + E_{pos}$   (1)

where $E$ is the patch embedding projection, $E \in \mathbb{R}^{(w_p \cdot w_p \cdot c_p) \times D}$; $E_{pos}$ is the position embedding added to the patch embeddings to preserve the positional information of the patches, $E_{pos} \in \mathbb{R}^{(n_p+1) \times D}$; and $x_{class} = z_0^0$ is a learnable embedding [24].

After mapping the patch images to the embedding space with positional information, we add a sequence of transformer encoders [19], [45]. The internal view of a transformer encoder can be seen in Fig. 3; it includes two blocks, As and Fn, which contain the MSA (Multi-head Self-Attention) [19] and MLP [15] modules, respectively. LN (Layer Normalization) [46] and a residual connection [15] are employed before and after each of these modules, respectively. This is shown in Eqn. 2 with general semantics. Here, the MLP module comprises two layers having 4D and D neurons with the GELU (Gaussian Error Linear Unit) non-linear activation function,
similar to [19]:

$z'_l = \mathrm{MSA}(\mathrm{LN}(z_{l-1})) + z_{l-1}; \quad z_l = \mathrm{MLP}(\mathrm{LN}(z'_l)) + z'_l; \quad l = 1, 2, \ldots, L$   (2)
where $L$ is the total number of transformer blocks. The core component of the transformer encoder is MSA with $h$ heads, where each head includes SA (Scaled dot-product Attention) [19], [45]. Each head $i \in \{1, 2, \ldots, h\}$ of MSA calculates a tuple comprising query, key, and value [19], i.e., $(Q^i, K^i, V^i)$, as follows:

$Q^i = X W_Q^i; \quad K^i = X W_K^i; \quad V^i = X W_V^i$   (3)

where $X$ is the input embedding, and $W_Q, W_K, W_V$ are the weight matrices used in the linear transformation. The tuple $(Q, K, V)$ is fed to SA, which computes the attention to pay to the input image patches, as below:

$\mathrm{SA}(Q, K, V) = \psi\left(\frac{Q K^T}{\sqrt{D_h}}\right) V$   (4)

where $\psi$ is the softmax function and $D_h = D/h$. The outcomes of the SAs across all heads are concatenated in MSA, as follows:

$\mathrm{MSA}(Q, K, V) = [\mathrm{SA}_1;\; \mathrm{SA}_2;\; \ldots;\; \mathrm{SA}_h]\, W_L$   (5)

where $W_L$ is a weight matrix. A minimal sketch of this encoder block is given below.
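As a concrete illustration of Eqns. 2–5, the following TensorFlow-2 sketch builds one encoder block; using Keras' built-in MultiHeadAttention and these layer names are our choices, not the authors' implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

D, H, L_BLOCKS = 384, 6, 12  # embedding dim, heads, depth (values from Sec. IV-B.1)

class EncoderBlock(layers.Layer):
    """One TE block: z' = MSA(LN(z)) + z;  z_out = MLP(LN(z')) + z'  (Eqn. 2)."""
    def __init__(self, **kw):
        super().__init__(**kw)
        self.ln1 = layers.LayerNormalization(epsilon=1e-6)
        self.msa = layers.MultiHeadAttention(num_heads=H, key_dim=D // H)  # Eqns. 3-5
        self.ln2 = layers.LayerNormalization(epsilon=1e-6)
        self.mlp = tf.keras.Sequential([
            layers.Dense(4 * D, activation="gelu"),  # hidden layer with 4D neurons
            layers.Dense(D),                         # project back to D
        ])

    def call(self, z):
        x = self.ln1(z)
        z = z + self.msa(x, x)              # self-attention (query = key = value) + residual
        return z + self.mlp(self.ln2(z))    # MLP branch + residual

# A stack of L = 12 blocks over (batch, n_p + 1, D) token embeddings:
encoder = tf.keras.Sequential([EncoderBlock() for _ in range(L_BLOCKS)])
```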
After multiple transformer encoder blocks, the <class> token [24] is enriched with contextual information. The state of the learnable embedding at the outcome of the transformer encoder, $z_L^0$, acts as the image representation $y$ [19]:

$y = \mathrm{LN}(z_L^0)$   (6)

Now, as shown in Fig. 2, we add an MLP head containing a hidden layer with 128 neurons. To capture non-linearity, we use Mish [47] here. In the output layer, we keep five neurons with the softmax activation function to obtain a probability distribution $s(j)$, in order to classify a fundus photograph into the abovementioned five severity stages of DR. A sketch of this head follows.
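A minimal sketch of that classification head, assuming a plain functional implementation of Mish (the paper only names the activation and the layer sizes):

```python
import tensorflow as tf
from tensorflow.keras import layers

def mish(x):
    # Mish activation [47]: x * tanh(softplus(x))
    return x * tf.math.tanh(tf.math.softplus(x))

# MLP head on the normalized <class> token y (Eqn. 6): 128 hidden units, 5-way softmax.
mlp_head = tf.keras.Sequential([
    layers.Dense(128, activation=mish),
    layers.Dense(5, activation="softmax"),  # s(j) over the five severity stages
])
```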
b) Data-efficient image Transformers (DeiT): For lower amounts of training data, ViT does not generalize well. In this scenario, DeiT can perform reasonably well while using less memory [22]. DeiT adopts the ViT-specific strategy and merges it with a teacher-student scheme through knowledge distillation [48]. The crux of DeiT is the knowledge distillation mechanism, which is basically knowledge transfer from one model (the teacher) to another (the student) [22]. Here, we use EfficientNet-B5 [37] as the teacher model, trained a priori. The student model is a transformer, which learns from the outcome of the teacher model through attention, depending on a distillation token [22]. In this work, we employ hard-label distillation [22], where the hard decision of the teacher is taken as a true label, i.e., $y_t = \mathrm{argmax}_c Z_t(c)$. The hard-label distillation objective is defined as follows:

$\mathcal{L}_{global}^{hard} = 0.5\, \mathcal{L}_{CE}(\psi(Z_s), y) + 0.5\, \mathcal{L}_{CE}(\psi(Z_s), y_t)$   (7)

where $\mathcal{L}_{CE}$ is the cross-entropy loss on the ground-truth labels $y$, $\psi$ is the softmax function, and $Z_s$ and $Z_t$ are the student's and teacher's logits, respectively. Using label smoothing, hard labels can be converted into soft ones [22].

Fig. 4. The distillation procedure of DeiT.

In Fig. 4, we present the distillation procedure of DeiT. Here, we add the <distillation> token to the transformer, which interacts with the <class> and <patch> tokens through the transformer encoders. The transformer encoder used here is similar to ViT's, including the As and Fn blocks shown in Fig. 3. The objective of the <distillation> token is to reproduce the teacher's predicted label instead of the ground-truth label. The <distillation> and <class> tokens are learned by back-propagation [15].

A linear classifier is used in DeiT instead of the MLP head of ViT [19], [22] to work efficiently with limited computational resources. A sketch of the distillation loss is given below.
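A minimal sketch of the hard-label distillation objective of Eqn. 7; the reduction and API choices here are ours:

```python
import tensorflow as tf

cce = tf.keras.losses.SparseCategoricalCrossentropy()

def hard_label_distillation_loss(student_logits, teacher_logits, y_true):
    """Eqn. 7: average of CE against the ground truth y and CE against the
    teacher's hard decision y_t = argmax_c Z_t(c)."""
    probs = tf.nn.softmax(student_logits)           # psi(Z_s)
    y_teacher = tf.argmax(teacher_logits, axis=-1)  # y_t from the frozen teacher
    return 0.5 * cce(y_true, probs) + 0.5 * cce(y_teacher, probs)
```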
c) Class-attention in image Transformers (CaiT): CaiT usually performs better than ViT and DeiT, with fewer FLOPs and learnable parameters [15], when we need to increase the depth of the transformer [21]. CaiT is basically an upgraded version of ViT that leverages layers with specific class-attention and LayerScale [21]. In Fig. 5, we show the workflow of CaiT.

LayerScale helps CaiT work at larger depths: we separately multiply a diagonal matrix $M_\lambda$ onto the outputs of the As and Fn blocks:

$z'_l = M_\lambda(\lambda_1^l, \ldots, \lambda_D^l) \times \mathrm{MSA}(\mathrm{LN}(z_{l-1})) + z_{l-1}; \quad z_l = M_\lambda(\lambda'^l_1, \ldots, \lambda'^l_D) \times \mathrm{MLP}(\mathrm{LN}(z'_l)) + z'_l$   (8)

where $\lambda_i^l$ and $\lambda'^l_i$ are learnable parameters, and the other symbols denote the same as in the abovementioned ViT.
Fig. 5. Workflow of CaiT: a self-attention stage among patches followed by a class-attention stage.

In CaiT, the transformer layers dealing with self-attention among patches are separated from the class-attention layers, which are introduced to dedicatedly extract the content of the patches into a vector that can be sent to a linear classifier [21]. The <class> token is inserted at this later stage, so that the initial layers can devotedly perform self-attention among patches. In the class-attention stage, we alternately use multi-head class-attention (Ac) [21] and Fn, as shown in Fig. 5, and update only the class embedding. A minimal sketch of LayerScale is given below.
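A minimal sketch of LayerScale (Eqn. 8) as a Keras layer; the 1e-5 initialization follows common practice and is our assumption, not a value given in the paper:

```python
import tensorflow as tf
from tensorflow.keras import layers

class LayerScale(layers.Layer):
    """Per-channel learnable scaling M_lambda applied to a residual branch (Eqn. 8)."""
    def __init__(self, dim, init=1e-5, **kw):
        super().__init__(**kw)
        self.gamma = self.add_weight(
            name="gamma", shape=(dim,),
            initializer=tf.keras.initializers.Constant(init))

    def call(self, x):
        return x * self.gamma  # multiplying by a diagonal matrix == element-wise scale

# Residual branch with LayerScale, as in z_l = M_lambda(...) x MLP(LN(z'_l)) + z'_l:
#   z = z + LayerScale(D)(mlp(ln(z)))
```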
d) Bidirectional Encoder representation for image Transformer (BEiT): BEiT is a self-supervised model with its roots in BERT (Bidirectional Encoder Representations from Transformers) [23], [24]. In Fig. 6, we present the workflow of BEiT pre-training.

Fig. 6. Workflow of BEiT pre-training: the original image is split into patches (some masked) and fed with patch + position embeddings to the BEiT encoder, whose MIM head predicts visual tokens; a tokenizer/decoder pair provides the tokens and reconstructs the image, with the decoder unused after pre-training.

The input image Iz is split into patches $x_p^i$ and flattened into vectors, similar to the early-mentioned ViT. In BEiT, a backbone transformer is engaged, for which we use ViT [19]. In parallel, Iz is represented as a sequence of visual tokens $vt = [vt_1, vt_2, \ldots, vt_{n_p}]$ obtained by a discrete VAE (Variational Auto-Encoder) [49]. For visual token learning, we employ a tokenizer $T_\phi(vt \mid x)$ to map image pixels $x$ to tokens $vt$, and a decoder $D_\theta(x \mid vt)$ to reconstruct the input image pixels $x$ from $vt$ [23].

Here, an MIM (Masked Image Modeling) [23] task is performed to pre-train the image transformers: some image patches are randomly masked, and the corresponding visual tokens are then predicted. The masked patches are replaced with a learnable embedding $e_{[M]}$. We feed the corrupted image patches $x^M = \{x_p^i : i \notin M\} \cup \{e_{[M]} : i \in M\}$ to the transformer encoder, where $M$ is the set of indices of the masked positions.

The encoded representation $h_i^L$ is the hidden vector of the last transformer layer $L$ for the $i$th patch. For each masked position, a softmax classifier $\psi$ is used to predict the respective visual token, i.e., $p_{MIM}(vt' \mid x^M) = \psi(W_M h_i^L + b_M)$, where $W_M$ and $b_M$ contain the learnable parameters of the linear transformation.
The pre-training objective of BEiT is to maximize the log-likelihood of the correct tokens $vt_i$ given $x^M$, as below:

$\max \sum_{x \in DB_{tr}} \mathbb{E}_M \Big[ \sum_{i \in M} \log p_{MIM}\big(vt_i \mid x^M\big) \Big]$

where $DB_{tr}$ is the training dataset. BEiT pre-training can also be perceived as VAE training [23], [49], where we follow two stages, i.e., stage-1: minimizing the loss for visual-token reconstruction, and stage-2: masked image modeling, i.e., learning the prior $p_{MIM}$ while keeping $T_\phi$ and $D_\theta$ fixed. It can be written as follows:

$\sum_{(x_i, x_i^M) \in DB_{tr}} \Big( \underbrace{\mathbb{E}_{vt_i \sim T_\phi(vt \mid x_i)}\big[\log D_\theta(x_i \mid vt_i)\big]}_{\text{stage-1}} + \underbrace{\log p_{MIM}\big(\hat{vt}_i \mid x_i^M\big)}_{\text{stage-2}} \Big)$

where $\hat{vt}_i = \mathrm{argmax}_{vt}\, T_\phi(vt \mid x_i)$. A small sketch of the patch-masking step is given below.
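For illustration, a sketch of building the corrupted input $x^M$; the 40% ratio and the uniform (rather than blockwise) masking are assumptions of this sketch, not details stated in the paper:

```python
import tensorflow as tf

def corrupt_patches(patch_embeddings, mask_token, mask_ratio=0.4):
    """Replace a random subset M of patch embeddings with the learnable e_[M]."""
    n = tf.shape(patch_embeddings)[0]                     # n_p patches, each D-dim
    m = tf.cast(tf.cast(n, tf.float32) * mask_ratio, tf.int32)
    idx = tf.random.shuffle(tf.range(n))[:m]              # the index set M
    hits = tf.scatter_nd(idx[:, None], tf.ones([m], tf.int32), [n])
    mask = hits > 0                                       # True where i is in M
    corrupted = tf.where(mask[:, None], mask_token[None, :], patch_embeddings)
    return corrupted, idx                                 # x^M goes to the encoder
```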
3) Ensembled Transformers: The abovementioned four image transformers, i.e., ViT [19], DeiT [22], CaiT [21], and BEiT [23], are pre-trained on the training set DBtr. We now ensemble the transformers for predicting the severity stages of the fundus images in the test set DBt, since ensembling multiple learning algorithms can achieve better performance than the constituent algorithms alone [50]. A pictorial representation of the ensembled transformers is presented in Fig. 7.

For an image sample from DBt, we obtain the softmax probability distribution $s(j) : \{P_1^j, P_2^j, \ldots, P_{n_c}^j\}$ from the $j$th transformer [15], for $j = 1, 2, \ldots, n_T$, where $n_c$ is the total number of classes (severity stages) and $n_T$ is the count of the employed image transformers. Here, $\sum_{i=1}^{n_c} P_i^j = 1$, $n_c = 5$ (refer to § III-A), and $n_T = 4$, since we use four separately trained distinct image transformers, as mentioned earlier.

We obtain the severity stages/class labels ĉ|wm and ĉ|mv separately using two combination methods, weighted mean and majority voting [50], respectively:

$\hat{c}|_{wm} = \mathrm{argmax}_i\, P_i^{\mu}; \quad P_i^{\mu} = \frac{\sum_{j=1}^{n_T} \alpha_j P_i^j}{\sum_{j=1}^{n_T} \alpha_j}; \quad i = 1, 2, \ldots, n_c$   (9)

Fig. 7. Ensembled transformers: the softmax outputs s(1), ..., s(4) of ViT, DeiT, CaiT, and BEiT are combined to predict one of the five severity stages (negative, mild, moderate, severe, proliferative).
In this task, we choose $\sum_{j=1}^{n_T} \alpha_j = 1$.

$\hat{c}|_{mv} = \mathrm{mode}\big\{\mathrm{argmax}_i(P_i^1), \mathrm{argmax}_i(P_i^2), \ldots, \mathrm{argmax}_i(P_i^{n_T})\big\} = \mathrm{mode}\big\{\mathrm{argmax}_i(s(1)), \mathrm{argmax}_i(s(2)), \ldots, \mathrm{argmax}_i(s(n_T))\big\}; \quad i = 1, 2, \ldots, n_c$   (10)

In this task, we use cross-entropy as the loss function [41] in the employed image transformers. The AdamW optimizer is used due to its weight-decay regularization effect for tackling overfitting [51]. The training details with hyper-parameter tuning are mentioned in § IV-B. A small sketch of the two combination rules is given below.
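The two combination rules of Eqns. 9 and 10 reduce to a few lines of NumPy; this sketch is ours, and ties in the majority vote are broken arbitrarily (the paper does not specify a tie-breaking rule):

```python
import numpy as np

def ensemble_predict(probs, alphas=None):
    """probs: (n_T, n_c) per-model softmax outputs; alphas: weights summing to 1.
    Returns (c_wm, c_mv), the Eqn. 9 and Eqn. 10 predictions."""
    probs = np.asarray(probs)
    n_T, n_c = probs.shape
    if alphas is None:
        alphas = np.full(n_T, 1.0 / n_T)
    alphas = np.asarray(alphas)
    # Weighted mean (Eqn. 9)
    p_mu = (alphas[:, None] * probs).sum(axis=0) / alphas.sum()
    c_wm = int(np.argmax(p_mu))
    # Majority voting (Eqn. 10)
    votes = probs.argmax(axis=1)
    c_mv = int(np.bincount(votes, minlength=n_c).argmax())
    return c_wm, c_mv

# e.g., with the best weights reported in Table V: alphas = [0.1, 0.1, 0.4, 0.4]
```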
IV. EXPERIMENTS AND DISCUSSIONS

In this section, we present the employed database, followed by the experimental results with discussions.

A. Database Employed

For our computational experiments, we used the publicly available training samples of the Kaggle APTOS (Asia Pacific Tele-Ophthalmology Society) 2019 Blindness Detection dataset [7], i.e., APTOS-2019. This database (DB) contains fundus image samples of the five severity stages of DR, i.e., negative, mild, moderate, severe, and proliferative. Fig. 1 shows some sample images from this dataset. DB contains a total of 3662 fundus images, which we divide into training (DBtr) and testing (DBt) sets with a ratio of 7 : 3; the DBtr and DBt sets are disjoint. The sample counts of the severity stages/class labels (ĉ) for DBtr and DBt are shown individually in Fig. 8. Here, 49.3% of the samples are negative DR (ĉ = 0). Among the positive classes, most samples are from the moderate stage (ĉ = 2). From this figure, it can be seen that DB is imbalanced, since it contains different numbers of samples for the various severity stages. Therefore, we augmented the data during the training of our model, as mentioned in § III-B.1. The data augmentation also helped in reducing the overfitting issue [15].

Fig. 8. Count of samples in APTOS-2019 [7] per severity stage/class label (ĉ), for DBtr and DBt respectively: negative 1264/541, mild 259/111, moderate 699/300, severe 135/58, proliferative 207/88.
B. Experimental Results

This section discusses the performed experiments, analyzes the model outcomes, and compares them with major state-of-the-art methods. We begin by discussing the experimental settings.

1) Experiment Settings: We performed the experiments on the TensorFlow-2 framework with Python 3.7.13, on a machine with the following configuration: Intel(R) Xeon(R) CPU @ 2.00 GHz with 52 GB RAM and a Tesla T4 16 GB GPU. All the results shown here were obtained on DBt.

The hyper-parameters of the framework were tuned and fixed during training with respect to the performance over some samples of DBt employed for hyper-parameter tuning. For all the image transformers used here (i.e., ViT, DeiT, CaiT, and BEiT), we empirically set the following hyper-parameters: transformer_layers (L) = 12, embedding_dimension (D) = 384, num_heads (h) = 6. The following hyper-parameters were selected for AdamW [51]: initial_learning_rate = 10⁻³; exponential decay rates for the 1st and 2nd moment estimates β1 = 0.9, β2 = 0.999; zero-denominator removal parameter ε = 10⁻⁸; and weight_decay = 10⁻³/4. For model training, the mini-batch size was fixed to 32. An equivalent optimizer configuration is sketched below.
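A sketch of that optimizer configuration; we read "weight_decay = 10⁻³/4" as 1e-3/4, which is our interpretation, and we use TensorFlow Addons' AdamW, since the built-in tf.keras.optimizers.AdamW only appeared in later TF releases:

```python
import tensorflow_addons as tfa

optimizer = tfa.optimizers.AdamW(
    learning_rate=1e-3,      # initial_learning_rate
    weight_decay=1e-3 / 4,   # decoupled weight decay [51]
    beta_1=0.9,              # 1st-moment decay rate
    beta_2=0.999,            # 2nd-moment decay rate
    epsilon=1e-8,            # zero-denominator removal parameter
)
```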
2) Model Performance: In Table I, we present the performance of our ensembled image transformer (EiT) using the combination schemes weighted mean (wm) and majority voting (mv), where we obtain 94.63% and 91.26% accuracy from EiTwm and EiTmv, respectively. We also ensembled multiple combinations of our employed transformers and present their performances in this table. Here, the wm scheme performed better than mv. As evident from this table, ensembling various types of transformers improved the performance. Among the single transformers (nT = 1), CaiT performed best. For nT = 2 and nT = 3, "BEiT + CaiT" and "DeiT + BEiT + CaiT" performed better than the other respective combinations. Overall, EiTwm attained the best accuracy here.
TABLE I
PERFORMANCE OVER VARIOUS ENSEMBLINGS OF TRANSFORMERS

nT   Ensembled Transformers            Weighted mean   Majority voting
1    ViT                               82.21
     DeiT                              85.65
     BEiT                              86.74
     CaiT                              86.91
2    ViT + DeiT                        87.03           86.55
     ViT + BEiT                        87.48           87.03
     ViT + CaiT                        87.77           87.21
     DeiT + BEiT                       88.18           87.69
     DeiT + CaiT                       88.86           87.93
     BEiT + CaiT                       89.28           88.12
3    ViT + DeiT + BEiT                 90.53           88.87
     ViT + DeiT + CaiT                 91.39           89.56
     ViT + BEiT + CaiT                 92.14           90.28
     DeiT + BEiT + CaiT                93.46           90.91
4    ViT + DeiT + BEiT + CaiT (EiT)    94.63           91.26

Accuracy (%); for nT = 1, a single accuracy is reported, as the two combination schemes coincide for one model.
In Fig. 10 of Appendix I, we present the coarse localization maps generated by Grad-CAM [52] from the employed individual image transformers to highlight the regions crucial for understanding the severity stages.

a) Various Evaluation Metrics: Besides accuracy, in Table II we present the performance of EiT with respect to some other evaluation metrics, e.g., kappa score, precision, recall, F1-score, specificity, and balanced accuracy [53]. Here, Cohen's quadratic weighted kappa measures the agreement between the human-assigned scores (i.e., DR severity stages) and the EiT-predicted scores. Precision analyzes the true positive samples among the total positive predictions. Recall, or sensitivity, finds the true positive rate. Similarly, specificity computes the true negative rate. The F1-score is the harmonic mean of precision and recall. Since the employed DB is imbalanced, we also compute the balanced accuracy, which is the arithmetic mean of sensitivity and specificity. In this table, we can see that for both EiTwm and EiTmv, the kappa scores are greater than 0.81, which indicates "almost perfect agreement" between the human rater and EiT [53]. Here, macro means the arithmetic mean of the per-class precision/recall/F1-scores. A sketch of computing these metrics is given below.
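These metrics can be computed as follows with scikit-learn; this sketch is ours (not the authors' evaluation code), and it derives per-class specificity as the recall of the complementary "rest" class:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             precision_score, recall_score, f1_score)

def report(y_true, y_pred):
    """Metrics of Table II for integer class labels 0..4."""
    out = {
        "accuracy": accuracy_score(y_true, y_pred),
        "kappa": cohen_kappa_score(y_true, y_pred, weights="quadratic"),
        "macro_precision": precision_score(y_true, y_pred, average="macro"),
        "macro_recall": recall_score(y_true, y_pred, average="macro"),
        "macro_f1": f1_score(y_true, y_pred, average="macro"),
    }
    # Macro specificity: mean over classes c of TN / (TN + FP), obtained as the
    # recall of the binary problem "is not class c".
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    specs = [recall_score(y_true != c, y_pred != c) for c in np.unique(y_true)]
    out["macro_specificity"] = float(np.mean(specs))
    # Balanced accuracy as defined in the paper: mean of sensitivity and specificity.
    out["balanced_accuracy"] = (out["macro_recall"] + out["macro_specificity"]) / 2
    return out
```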
TABLE II
PERFORMANCE OF EiT OVER VARIOUS EVALUATION METRICS

Metric                   Weighted mean (EiTwm)   Majority voting (EiTmv)
Accuracy (%)             94.63                   91.26
Kappa score              0.92                    0.87
Macro Precision (%)      90.55                   84.65
Macro Recall (%)         92.88                   88.81
Macro F1-score (%)       91.67                   86.55
Macro Specificity (%)    98.62                   97.74
Balanced Accuracy (%)    95.75                   93.27
b) Individual Class Performance: Table III presents the per-class performance of EiTwm and EiTmv for detecting every severity stage of DR. From this table, we can see that our models produced the best precision and recall for negative DR (ĉ = 0) and the lowest for severe DR (ĉ = 3).
TABLE III
PERFORMANCE OF EiT ON EVERY DR SEVERITY STAGE

class_label (ĉ)           0       1       2       3       4
EiTwm   Precision (%)     98.48   86.67   95.00   83.61   89.01
        Recall (%)        95.75   93.69   95.00   87.93   92.05
        F1-score (%)      97.09   90.04   95.00   85.71   90.50
        Specificity (%)   98.56   98.38   98.12   99.04   99.01
EiTmv   Precision (%)     96.74   79.67   94.14   70.59   82.11
        Recall (%)        93.35   88.29   91.00   82.76   88.64
        F1-score (%)      95.01   83.76   92.54   76.19   85.25
        Specificity (%)   96.95   97.47   97.87   98.08   98.32

In each row, the best result is for ĉ = 0 and the lowest is for ĉ = 3.
3) Comparison: In Table IV, we present a comparative analysis with some major contemporary deep learning architectures, e.g., ResNet50 [54], InceptionV3 [55], MobileNetV2 [56], Xception [57], DenseNet169 (Farag et al. [18]), EfficientNet [37], and SE-ResNeXt50 [38]. We also compared with recently published transformer-based models, i.e., CoT-XNet [39] and SSiT [40]. Comparisons with some major related works [5], [35], [36] can also be seen in this table. Our EiTwm outperformed the major state-of-the-art methods with respect to accuracy, balanced accuracy, sensitivity, and specificity. Our EiTmv also performed quite well in terms of balanced accuracy.
TABLE IV
COMPARATIVE STUDY

Method                  Accuracy (%)   Sensitivity (%)   Specificity (%)   Balanced Accuracy (%)
ResNet50 [54]           74.64          56.52             85.71             71.12
InceptionV3 [55]        78.72          63.64             85.37             74.51
MobileNetV2 [56]        79.01          76.47             84.62             80.55
Xception [57]           79.59          82.35             86.32             84.34
Farag et al. [18]       82.00          -                 -                 -
Kassani et al. [35]     83.09          88.24             87.00             87.62
TAN [36]                85.10          90.30             92.00             -
EfficientNet-B4 [37]    90.30          81.20             97.60             89.40
EfficientNet-B5 [37]    90.70          80.70             97.70             89.20
SE-ResNeXt50 [38]       92.40          87.10             98.20             92.65
Tymchenko et al. [5]    92.90          86.00             98.30             92.15
CoT-XNet [39]           84.18          -                 95.74             -
SSiT [40]               92.97          -                 -                 -
EiTmv [ours]            91.26          88.81             97.74             93.28
EiTwm [ours]            94.63          92.88             98.62             95.75

In each column, EiTwm [ours] gives the best result.

4) Impact of Hyper-parameters: We tuned the hyper-parameters and observed their impact on the experiments.

a) MSA Head Count: We analyzed the impact of the number of heads (h) of MSA (Multi-head Self-Attention) in the transformer encoder on performance, presented in Fig. 9. As evident from that figure, the accuracy of both EiTmv and EiTwm increased with h up to h = 6 and started decreasing thereafter.

Fig. 9. Impact of the number of heads (h) in MSA on model performance: accuracy of EiTwm and EiTmv vs. h, peaking at h = 6 (e.g., 94.63% for EiTwm).

b) Weights αj of EiTwm: We tuned the weights αj (refer to Eqn. 9) to see their impact on the performance of EiTwm. We obtained the best accuracy of 94.63% from EiTwm for α1 = α2 = 0.1 and α3 = α4 = 0.4. The performance of EiTwm while tuning the αj's is shown in Table V.

TABLE V
PERFORMANCE OF EiTwm BY TUNING WEIGHTS αj

α1      α2      α3      α4      Accuracy (%)
0.25    0.25    0.25    0.25    89.53
0.85    0.05    0.05    0.05    82.29
0.05    0.85    0.05    0.05    85.78
0.05    0.05    0.85    0.05    86.92
0.05    0.05    0.05    0.85    87.05
0.7     0.1     0.1     0.1     82.35
0.1     0.7     0.1     0.1     85.91
0.1     0.1     0.7     0.1     87.04
0.1     0.1     0.1     0.7     87.20
0.5     0.167   0.167   0.166   82.88
0.166   0.5     0.167   0.167   86.35
0.167   0.166   0.5     0.167   87.62
0.167   0.167   0.166   0.5     87.74
0.3     0.3     0.2     0.2     88.16
0.3     0.2     0.3     0.2     89.58
0.3     0.2     0.2     0.3     90.27
0.2     0.3     0.3     0.2     90.85
0.2     0.3     0.2     0.3     91.67
0.2     0.2     0.3     0.3     92.72
0.4     0.4     0.1     0.1     91.18
0.4     0.1     0.4     0.1     91.49
0.4     0.1     0.1     0.4     92.15
0.1     0.4     0.4     0.1     92.84
0.1     0.4     0.1     0.4     93.47
0.1     0.1     0.4     0.4     94.63

In Table VI, we also present the tuned αj's that yielded the best-performing ensembled transformers of Table I.

TABLE VI
TUNED WEIGHTS αj FOR TRANSFORMERS ENSEMBLED WITH WEIGHTED MEAN

Transformers_wm               α1     α2     α3     α4
ViT + DeiT                    0.25   0.75   -      -
ViT + BEiT                    0.4    0.6    -      -
ViT + CaiT                    0.4    0.6    -      -
DeiT + BEiT                   0.4    0.6    -      -
DeiT + CaiT                   0.3    0.7    -      -
BEiT + CaiT                   0.5    0.5    -      -
ViT + DeiT + BEiT             0.2    0.3    0.5    -
ViT + DeiT + CaiT             0.2    0.3    0.5    -
ViT + BEiT + CaiT             0.2    0.4    0.4    -
DeiT + BEiT + CaiT            0.3    0.3    0.4    -
ViT + DeiT + BEiT + CaiT      0.1    0.1    0.4    0.4

5) Ablation Study: We present the ablation study performed by ablating individual transformers. Our EiT is an ensemble of four different image transformers, i.e., ViT, DeiT, CaiT, and BEiT. We ablated each transformer and observed performance degradation compared to the full EiT. For example, considering the weighted mean scheme, when we ablated CaiT from EiT, the accuracy dropped by 4.1%. Similarly, ablating both BEiT and CaiT deteriorated the accuracy by 7.6%. For our task, the best individual transformer (CaiT) attained 7.72% lower accuracy than EiTwm. More examples can be observed in Table I.

6) Pre-training with Other Datasets: We checked the performance of our EiT model when pre-training with other datasets. We took the 1200 images of MESSIDOR [58] with the adjudicated grades of [59] (say, DBM). From IDRiD [60], we also used the "Disease Grading" dataset containing 516 images (say, DBI). We made four training-set setups from DBM by taking 25%, 50%, 75%, and 100% of its samples; similarly, four setups were generated from DBI. As mentioned in § IV-A, we divided the APTOS-2019 database (DB) into training (DBtr) and test (DBt) sets with a ratio of 7 : 3. In Table VII, we present the performance of EiT on DBt when pre-training with DBM and DBI and then training with DBtr. It can be observed that the performance of EiT improved slightly when pre-trained with more data from other datasets.

TABLE VII
ACCURACY (%) OF EiT WITH PRE-TRAINING

        Pre-training data   25%     50%     75%     100%
EiTwm   DBM                 94.71   94.78   94.83   94.88
        DBI                 94.65   94.67   94.70   94.79
        DBM + DBI           94.73   94.85   94.98   95.13
        N.A.                94.63
EiTmv   DBM                 91.35   91.48   91.56   91.61
        DBI                 91.27   91.32   91.34   91.35
        DBM + DBI           91.42   91.60   91.68   91.75
        N.A.                91.26

N.A.: without pre-training data
V. CONCLUSION

In this paper, we tackled the problem of automated severity stage detection of DR from fundus images. For this purpose, we proposed two ensembled image transformers, EiTwm and EiTmv, using the weighted mean and majority voting combination schemes, respectively. We adopted four transformer models, i.e., ViT, DeiT, CaiT, and BEiT. For experimentation, we employed the publicly available APTOS-2019 blindness detection dataset, on which EiTwm and EiTmv attained accuracies of 94.63% and 91.26%, respectively. Although the employed dataset was imbalanced, our models performed quite well. Our EiTwm outperformed the major state-of-the-art techniques. We also performed an ablation study and observed the importance of ensembling over the individual transformers.

In the future, we will endeavor to improve the model performance with some imbalanced-learning techniques. Currently, our model does not perform any lesion segmentation, which we will also attempt, in order to explore some implicit characteristics of fundus images due to DR.
APPENDIX I
QUALITATIVE VISUALIZATION

As mentioned in § IV-B.2, we present the Grad-CAM maps of the employed individual image transformers in Fig. 10.

Fig. 10. Fundus images of the five severity stages (negative, mild, moderate, severe, proliferative) in the 1st row, with Grad-CAM maps for ViT, DeiT, BEiT, and CaiT in the 2nd, 3rd, 4th, and 5th rows, respectively.
REFERENCES

[1] S. Stolte and R. Fang, "A survey on medical image analysis in diabetic retinopathy," Medical Image Analysis, vol. 64, p. 101742, 2020.
[2] N. Asiri et al., "Deep learning based computer-aided diagnosis systems for diabetic retinopathy: A survey," Artificial Intelligence in Medicine, vol. 99, p. 101701, 2019.
[3] Y. Zheng et al., "The worldwide epidemic of diabetic retinopathy," Indian Journal of Ophthalmology, vol. 60, no. 5, p. 428, 2012.
[4] D. S. W. Ting et al., "Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes," JAMA, vol. 318, no. 22, pp. 2211–2223, 2017.
[5] B. Tymchenko et al., "Deep learning approach to diabetic retinopathy detection," in ICPRAM, 2020, pp. 501–509.
[6] J. Krause et al., "Grader variability and the importance of reference standards for evaluating machine learning models for diabetic retinopathy," Ophthalmology, vol. 125, no. 8, pp. 1264–1272, 2018.
[7] APTOS 2019 Blindness Detection. [Online]. Available: https://www.kaggle.com/competitions/aptos2019-blindness-detection
[8] G. Quellec et al., "Optimal wavelet transform for the detection of microaneurysms in retina photographs," IEEE Transactions on Medical Imaging, vol. 27, no. 9, pp. 1230–1241, 2008.
[9] L. Giancardo et al., "Microaneurysm detection with radon transform-based classification on retina images," in EMBS, 2011, pp. 5939–5942.
[10] R. Casanova et al., "Application of random forests methods to diabetic retinopathy classification analyses," PLOS ONE, vol. 9, no. 6, p. e98587, 2014.
[11] M. Niemeijer et al., "Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis," Investigative Ophthalmology & Visual Science, vol. 48, no. 5, pp. 2260–2267, 2007.
[12] M. U. Akram et al., "Identification and classification of microaneurysms for early detection of diabetic retinopathy," Pattern Recognition, vol. 46, no. 1, pp. 107–116, 2013.
[13] D. Usher et al., "Automated detection of diabetic retinopathy in digital retinal images: a tool for diabetic retinopathy screening," Diabetic Medicine, vol. 21, no. 1, pp. 84–90, 2004.
[14] J. D. Bodapati et al., "Deep convolution feature aggregation: an application to diabetic retinopathy severity level prediction," Signal, Image and Video Processing, vol. 15, no. 5, pp. 923–930, 2021.
[15] A. Zhang, Z. C. Lipton, M. Li, and A. J. Smola, "Dive into deep learning," arXiv:2106.11342, 2021.
[16] S. Yu et al., "Exudate detection for diabetic retinopathy with convolutional neural networks," in EMBC, 2017, pp. 1744–1747.
[17] Z. Wang et al., "Zoom-in-net: Deep mining lesions for diabetic retinopathy detection," in MICCAI, 2017, pp. 267–275.
[18] M. M. Farag et al., "Automatic severity classification of diabetic retinopathy based on DenseNet and convolutional block attention module," IEEE Access, vol. 10, pp. 38299–38308, 2022.
[19] A. Dosovitskiy et al., "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv:2010.11929, 2020.
[20] A. M. Braşoveanu and R. Andonie, "Visualizing transformers for NLP: a brief survey," in Int. Conf. on Information Visualisation (IV), IEEE, 2020, pp. 270–279.
[21] H. Touvron et al., "Going deeper with image transformers," in ICCV, 2021, pp. 32–42.
[22] H. Touvron et al., "Training data-efficient image transformers & distillation through attention," in ICML, 2021, pp. 10347–10357.
[23] H. Bao, L. Dong, and F. Wei, "BEiT: BERT pre-training of image transformers," in ICLR, arXiv:2106.08254, 2022.
[24] J. Devlin et al., "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv:1810.04805, 2018.
[25] "Grading diabetic retinopathy from stereoscopic color fundus photographs—an extension of the modified Airlie House classification: ETDRS report number 10," Ophthalmology, vol. 98, no. 5, Supplement, pp. 786–806, 1991.
[26] U. R. Acharya et al., "Automated diabetic macular edema (DME) grading system using DWT, DCT features and maculopathy index," Computers in Biology and Medicine, vol. 84, pp. 59–68, 2017.
[27] K. M. Adal et al., "An automated system for the detection and classification of retinal changes due to red lesions in longitudinal fundus images," IEEE Transactions on Biomedical Engineering, vol. 65, no. 6, pp. 1382–1390, 2017.
[28] L. Tang et al., "Splat feature classification with application to retinal hemorrhage detection in fundus images," IEEE Transactions on Medical Imaging, vol. 32, no. 2, pp. 364–375, 2013.
[29] A. Herliana et al., "Feature selection of diabetic retinopathy disease using particle swarm optimization and neural network," in CITSM, IEEE, 2018, pp. 1–4.
[30] S. Sanromà et al., "Assessment of diabetic retinopathy risk with random forests," in ESANN, 2016.
[31] A. Osareh et al., "A computational-intelligence-based approach for detection of exudates in diabetic retinopathy images," IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 4, pp. 535–545, 2009.
[32] P. Chudzik et al., "Microaneurysm detection using deep learning and interleaved freezing," in Medical Imaging 2018: Image Processing, vol. 10574, SPIE, 2018, pp. 379–387.
[33] R. Gargeya and T. Leng, "Automated identification of diabetic retinopathy using deep learning," Ophthalmology, vol. 124, no. 7, pp. 962–969, 2017.
[34] S. Wan et al., "Deep convolutional neural networks for diabetic retinopathy detection by image classification," Computers & Electrical Engineering, vol. 72, pp. 274–282, 2018.
[35] S. H. Kassani et al., "Diabetic retinopathy classification using a modified Xception architecture," in ISSPIT, 2019, pp. 1–6.
[36] M. D. Alahmadi, "Texture attention network for diabetic retinopathy classification," IEEE Access, vol. 10, pp. 55522–55532, 2022.
[37] M. Tan and Q. Le, "EfficientNet: Rethinking model scaling for convolutional neural networks," in ICML, 2019, pp. 6105–6114.
[38] J. Hu et al., "Squeeze-and-excitation networks," in CVPR, 2018, pp. 7132–7141.
[39] S. Zhao et al., "CoT-XNet: Contextual transformer with Xception network for diabetic retinopathy grading," Physics in Medicine & Biology, 2022.
[40] Y. Huang et al., "SSiT: Saliency-guided self-supervised image transformer for diabetic retinopathy grading," arXiv:2210.10969, 2022.
[41] I. Goodfellow et al., Deep Learning. MIT Press, 2016.
[42] V. Stimper et al., "Multidimensional contrast limited adaptive histogram equalization," IEEE Access, vol. 7, pp. 165437–165447, 2019.
[43] D. Sarvamangala et al., "Convolutional neural networks in medical image understanding: a survey," Evolutionary Intelligence, pp. 1–22, 2021.
[44] T. Wolf et al., "Transformers: State-of-the-art natural language processing," in EMNLP: System Demonstrations, 2020, pp. 38–45.
[45] A. Vaswani et al., "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
[46] J. L. Ba, J. R. Kiros, and G. E. Hinton, "Layer normalization," arXiv:1607.06450, 2016.
[47] D. Misra, "Mish: A self regularized non-monotonic neural activation function," BMVC, Paper #928, 2020.
[48] G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," in NIPS Deep Learning and Representation Learning Workshop, arXiv:1503.02531, 2015.
[49] A. Ramesh et al., "Zero-shot text-to-image generation," in ICML, 2021, pp. 8821–8831.
[50] L. Rokach, "Ensemble-based classifiers," Artificial Intelligence Review, vol. 33, no. 1, pp. 1–39, 2010.
[51] I. Loshchilov and F. Hutter, "Decoupled weight decay regularization," in ICLR, arXiv:1711.05101, 2019.
[52] R. Selvaraju et al., "Grad-CAM: Visual explanations from deep networks via gradient-based localization," in ICCV, 2017, pp. 618–626.
[53] M. Grandini, E. Bagli, and G. Visani, "Metrics for multi-class classification: an overview," arXiv:2008.05756, 2020.
[54] K. He et al., "Deep residual learning for image recognition," in CVPR, 2016, pp. 770–778.
[55] C. Szegedy et al., "Rethinking the inception architecture for computer vision," in CVPR, 2016, pp. 2818–2826.
[56] M. Sandler et al., "MobileNetV2: Inverted residuals and linear bottlenecks," in CVPR, 2018, pp. 4510–4520.
[57] F. Chollet, "Xception: Deep learning with depthwise separable convolutions," in CVPR, 2017, pp. 1251–1258.
- lutions,” in CVPR, 2017, pp. 1251–1258.
1474
- [58] E. Decencière et al., “Feedback on a publicly distributed image database:
1475
- the Messidor database,” Image Analysis & Stereology, vol. 33, no. 3, pp.
1476
- 231–234, 2014.
1477
- [59] J. Krause et al., “Grader variability and the importance of reference stan-
1478
- dards for evaluating machine learning models for diabetic retinopathy,”
1479
- Ophthalmology, vol. 125, no. 8, pp. 1264–1272, 2018.
1480
- [60] P. Porwal et al., “Indian Diabetic Retinopathy Image Dataset (IDRiD),”
1481
- 2018. [Online]. Available: https://dx.doi.org/10.21227/H25W98
1482
-
knowledge_base/0tAzT4oBgHgl3EQfDPqY/content/tmp_files/load_file.txt DELETED
The diff for this file is too large to render. See raw diff
 
knowledge_base/0tAzT4oBgHgl3EQfDPqY/vector_store/index.faiss DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:7f36c8a3e98941ef37884da4ee60c4be0d76db65dbc1727ce93f077544cd5989
3
- size 4522029
 
 
 
 
knowledge_base/0tAzT4oBgHgl3EQfDPqY/vector_store/index.pkl DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:cc427607bd91fe7c63af46d8869ec43535372a3a594fd7019d44feeaa3e8c4cc
3
- size 149953
 
 
 
 
knowledge_base/0tFPT4oBgHgl3EQfTjTq/content/2301.13054v1.pdf DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:3befe2aaa2ac32e4e0859a3f418d920b15621b32f359ba6453dc18373c361681
3
- size 308095
 
 
 
 
knowledge_base/0tFPT4oBgHgl3EQfTjTq/content/tmp_files/2301.13054v1.pdf.txt DELETED
@@ -1,1564 +0,0 @@
arXiv:2301.13054v1 [cs.FL] 30 Jan 2023

Monadic Expressions and their Derivatives

Samira Attou¹, Ludovic Mignot², Clément Miklarz², and Florent Nicart²

¹ Université Gustave Eiffel, 5 Boulevard Descartes — Champs s/ Marne, 77454 Marne-la-Vallée Cedex 2
² GR2IF, Université de Rouen Normandie, Avenue de l'Université, 76801 Saint-Étienne-du-Rouvray, France
{ludovic.mignot,clement.miklarz1,florent.nicart}@univ-rouen.fr

Abstract. We propose another interpretation of the well-known derivative computations over regular expressions, due to Brzozowski, Antimirov, and Lombardy and Sakarovitch, in order to abstract the underlying data structures (e.g. sets or linear combinations) using the notion of monad. As an example of the advantages of this generalization, we first introduce a new derivation technique based on the graded module monad, and then show an application of this technique that generalizes the parsing of expressions with capture groups and back references. We also extend the operators defining expressions to arbitrary n-ary functions over value sets, such as classical operations (like negation or intersection for Boolean weights) or more exotic ones (like the algebraic mean for rational weights). Moreover, we present how to compute a (not necessarily finite) automaton from such an extended expression, using the Colcombet and Petrisan categorical definition of automata. These category-theoretic concepts allow us to perform this construction in a unified way, whatever the underlying monad. Finally, to illustrate our work, we present a Haskell implementation of these notions using advanced functional programming techniques, and we provide a web interface to manipulate concrete examples.
25
1 Introduction

This paper is an extended version of [2].

Regular expressions are a classical way to represent associations between words and value sets. As an example, classical regular expressions denote sets of words, and regular expressions with multiplicities denote formal series. From a regular expression, the membership test (determining whether a word belongs to the denoted language) or the weighting test (determining the weight of a word in the denoted formal series) can be solved, following the Kleene theorems [11,17], by computing a finite automaton such as the position automaton [9,3,5,6]. Another family of methods for solving these tests is the family of derivative computations, which does not require the construction of a whole automaton. The common point of these techniques is to transform the test for an arbitrary word into the test for the empty word, which can easily be solved in a purely syntactical way (i.e. by induction over the structure of expressions). Brzozowski [4] shows how to compute, from a regular expression E and a word w, a regular expression dw(E) denoting the set of words w′ such that ww′ belongs to the language denoted by E. Solving the membership test hence becomes the membership test for the empty word in the expression dw(E). Antimirov [1] modifies this method in order to produce sets of expressions instead of expressions, i.e. defines the partial derivatives ∂w(E) as a set of expressions the sum of which denotes the same language as dw(E). While the number of derivatives is exponential w.r.t. the length |E| of E in the worst case³, the partial derivatives produce at most a linear number of expressions w.r.t. |E|. Lombardy and Sakarovitch [13] extend these methods to expressions with multiplicities. Finally, Sulzmann and Lu [18] apply these derivation techniques to parse POSIX expressions.

It is well known that these methods are based on a common operation, the quotient of languages. Furthermore, Antimirov's method can be interpreted as the derivation of regular expressions with multiplicities in the Boolean semiring. However, the Brzozowski computation does not produce the same expressions (i.e. equality over the syntax trees) as the Antimirov one.

Main contributions: In this paper, we present a unification of these computations by applying notions of category theory to the category of sets, and show how to compute categorical automata as defined in [7], by reinterpreting the work started in [15]. We make use of classical monads to model well-known derivative computations. Furthermore, we deal with extended expressions in a general way: in this paper, expressions can support extended operators like complement and intersection, but also any n-ary function (algebraic mean, extrema multiplications, etc.). The main difference with [15] is that we formally state the languages and series that the expressions denote in an inherent way w.r.t. the underlying monads.

More precisely, this paper presents:
– an extension of expressions to any n-ary function over the value set,
– a monadic generalization of expressions,
– a solution for the membership/weight test for these expressions,
– a computation of categorical derivative automata,
– a new monad that fits with the extension to n-ary functions,
– an illustration implemented in Haskell using advanced functional programming,
– an extension to capture-group and back-reference expressions.

Motivation: The unification of derivation techniques is a goal by itself. Moreover, the formal tools used to achieve this unification are also useful: monads offer both theoretical and practical advantages. Indeed, from a theoretical point of view, these structures allow the abstraction of properties and focus on the principal mechanisms that allow solving the membership and weight problems. Besides, the introduction of exotic monads can also facilitate the study of the finiteness of derivated terms. From a practical point of view, monads are easy to implement (even in languages other than Haskell) and allow us to produce compact and safe code. Finally, we can easily combine different algebraic structures or add some technical functionalities (capture groups, logging, nondeterminism, etc.) thanks to notions like monad transformers [10], which we consider in this paper.

This paper is structured as follows. In Section 2, we gather some preliminary material, like algebraic structures and category theory notions. We also introduce some functions well known to the Haskell community that allow us to reduce the size of our equations. We then structurally define the expressions we deal with, the associated series and the weight test for the empty word in Section 3. In order to extend this test to any arbitrary word, we first state in Section 4 some properties required by the monads we consider. Once this so-called support is determined, we show in Section 5 how to compute the derivatives. The computation of derivative automata is explained in Section 6. A new monad and its associated derivatives computation is given in Section 7. An implementation is presented in Section 8. Finally, we show how to (alternatively to [18]) compute derivatives of capture-group expressions in Section 9 and show that, as far as the same operators are concerned, the derivative formulae are the same whatever the underlying monad is.

³ As far as rules of associativity, commutativity and idempotence of the sum are considered; possibly infinite otherwise.
83
2 Preliminaries

We denote by S → S′ the set of functions from a set S to a set S′. The notation λx → f(x) is an equivalent notation for a function f.

A monoid is a set S endowed with an associative operation and a unit element. A semiring is a structure (S, ×, +, 1, 0) such that (S, ×, 1) is a monoid, (S, +, 0) is a commutative monoid, × distributes over + and 0 is an annihilator for ×. A starred semiring is a semiring with a unary function ⋆ such that

k⋆ = 1 + k × k⋆ = 1 + k⋆ × k.

A K-series over the free monoid (Σ∗, ·, ε) associated with an alphabet Σ, for a semiring K = (K, ×, +, 1, 0), is a function from Σ∗ to K. The set of K-series can be endowed with the structure of a semiring as follows:

1(w) = 1 if w = ε, 0 otherwise,
0(w) = 0,
(S1 + S2)(w) = S1(w) + S2(w),
(S1 × S2)(w) = ∑_{u·v=w} S1(u) × S2(v).

Furthermore, if S1(ε) = 0 (i.e. S1 is said to be proper), the star of S1 is the series defined by

(S1)⋆(ε) = 1,
(S1)⋆(w) = ∑_{n≤|w|, w=u1···un, uj≠ε} S1(u1) × · · · × S1(un).

Finally, for any function f in Kⁿ → K, we set:

(f(S1, . . . , Sn))(w) = f(S1(w), . . . , Sn(w)).    (1)

A functor⁴ F associates with each set S a set F(S) and with each function f in S → S′ a function F(f) from F(S) to F(S′) such that

F(id) = id,    F(f ◦ g) = F(f) ◦ F(g),

where id is the identity function and ◦ the classical function composition.

A monad⁵ M is a functor endowed with two (families of) functions
– pure, from a set S to M(S),
– bind, sending any function f in S → M(S′) to M(S) → M(S′),
such that the three following conditions are satisfied:

bind(f)(pure(s)) = f(s),
bind(pure) = id,
bind(g)(bind(f)(m)) = bind(λx → bind(g)(f(x)))(m).

⁴ More precisely, a functor over a subcategory of the category of sets.
⁵ More precisely, a monad over a subcategory of the category of sets.
128
Example 1. The Maybe monad:
– associates any set S with the set Maybe(S) = {Just(s) | s ∈ S} ∪ {Nothing}, where Just and Nothing are two syntactic tokens allowing us to extend a set with one value;
– associates any function f with the function Maybe(f) defined by
  Maybe(f)(Just(s)) = Just(f(s)),    Maybe(f)(Nothing) = Nothing;
– is endowed with the functions pure and bind defined by:
  pure(s) = Just(s),    bind(f)(Just(s)) = f(s),    bind(f)(Nothing) = Nothing.
138
Example 2. The Set monad:
– associates with any set S the set 2^S,
– associates with any function f the function Set(f) defined by Set(f)(R) = ⋃_{r∈R} {f(r)},
– is endowed with the functions pure and bind defined by:
  pure(s) = {s},    bind(f)(R) = ⋃_{r∈R} f(r).
- Example 3. The LinComb(K) monad, for K = (K, ×, +, 1, 0), associates:
149
- – with any set S the set of K-linear combinations of elements of S, where a linear combination is a finite (formal,
150
- commutative) sum of couples (denoted by ⊞) in K × S where (k, s) ⊞ (k′, s) = (k + k′, s),
151
- – with any function f the function LinComb(K)(f) defined by
152
- LinComb(K)(f)(R) = ⊞
153
- (k,r)∈R
154
- (k, f(r)),
155
- – is endowed with the functions pure and bind defined by:
156
- pure(s) = (1, s),
157
- bind(f)(R) = ⊞
158
- (k,r)∈R
159
- k ⊗ f(r),
160
- where k ⊗ R = ⊞
161
- (k′,r)∈R
162
- (k × k′, r).
163
- To compact equations, we use the following operators for any monad M:
164
- f <$> s = M(f)(s),
165
- m >>= f = bind(f)(m).
166
- If <$> can be used to lift unary functions to the monadic level, >>= and pure can be used to lift any n-ary function
167
- f in S1 × · · · × Sn → S, defining a function liftn sending S1 × · · · × Sn → S to M(S1) × · · · × M(Sn) → M(S) as
168
- follows:
169
- liftn(f)(m1, . . . , mn) =m1 >>= (λs1 → . . .
170
- mn >>= (λsn → pure(f(s1, . . . , sn))) . . .)
171
- Let us consider the set
172
- 1 = {⊤} with only one element. The images of this set by some previously defined monads
173
- can be evaluated as value sets classically used to weight words in association with classical regular expressions. As
174
- an example, Maybe(1) and Set(1) are isomorphic to the Boolean set, and any set LinComb(K)(1) can be converted
175
- into the underlying set of K. This property allows us to extend in a coherent way classical expressions to monadic
176
- expressions, where the type of the weights is therefore given by the ambient monad.
177
- 5 More precisely, a monad over a subcategory of the category of sets.
178
-
179
- 3
180
- Monadic Expressions
181
- As seen in the previous section, elements in M(1) can be evaluated as classical value sets for some particular
182
- monads. Hence, we use these elements not only for the weights associated with words by expressions, but also for
183
- the elements that act over the denoted series.
184
- In the following, in addition to classical operators (+, · and ∗), we denote:
185
- – the action of an element over a series by ⊙,
186
- – the application of a function by itself.
187
- Definition 1. Let M be a monad. An M-monadic expression E over an alphabet Σ is inductively defined as follows:
188
- E = a,
189
- E = ε,
190
- E = ∅,
191
- E = E1 + E2,
192
- E = E1 · E2,
193
- E = E∗
194
- 1,
195
- E = α ⊙ E1,
196
- E = E1 ⊙ α,
197
- E = f (E1, . . . , En) ,
198
- where a is a symbol in Σ, (E1, . . . , En) are n M-monadic expressions over Σ, α is an element of M(1) and f is a
199
- function from (M(1))n to M(1).
200
- We denote by Exp(Σ) the set of monadic expressions over an alphabet Σ.
201
- Example 4. As an example of functions that can be used in our extension of classical operators, one can define the
202
- function ExtDist(x1, x2, x3) = max(x1, x2, x3) − min(x1, x2, x3) from N3 to N.
203
- Similarly to classical regular expressions, monadic expressions associate a weight with any word. Such a relation
204
- can be denoted via a formal series. However, before defining this notion, in order to simplify our study, we choose
205
- to only consider proper expressions. Let us first show how to characterize them by the computation of a nullability
206
- value.
207
- Definition 2. Let M be a monad such that the structure (M(1), +, ×, ⋆, 1, 0) is a starred semiring. The nullability
208
- value of an M-monadic expression E over an alphabet Σ is the element Null(E) of M(1) inductively defined as
209
- follows:
210
- Null(ε) = 1,
211
- Null(∅) = 0,
212
- Null(a) = 0,
213
- Null(E1 + E2) = Null(E1) + Null(E2),
214
- Null(E1 · E2) = Null(E1) × Null(E2),
215
- Null(E∗
216
- 1) = Null(E1)⋆,
217
- Null(α ⊙ E1) = α × Null(E1),
218
- Null(E1 ⊙ α) = Null(E1) × α,
219
- Null(f(E1, . . . , En)) = f(Null(E1), . . . , Null(En)),
220
- where a is a symbol in Σ, (E1, . . . , En) are n M-monadic expressions over Σ, α is an element of M(1) and f is a
221
- function from (M(1))n to M(1).
222
- When the considered semiring is not a starred one, we restrict the nullability value computation to expressions
223
- where a starred subexpression admits a null nullability value. In order to compute it, let us consider the Maybe
224
- monad, allowing us to elegantly deal with such a partial function.
225
- Definition 3. Let M be a monad such that the structure (M(1), +, ×, 1, 0) is a semiring. The partial nullability
226
- value of an M-monadic expression E over an alphabet Σ is the element PartNull(E) of Maybe(M(1)) defined as
227
- follows:
228
- PartNull(ε) = Just(1),
229
- PartNull(∅) = Just(0),
230
- PartNull(a) = Just(0),
231
- PartNull(E1 + E2) = lift2(+)(PartNull(E1), PartNull(E2)),
232
- PartNull(E1 · E2) = lift2(×)(PartNull(E1), PartNull(E2)),
233
- PartNull(E∗
234
- 1) =
235
-
236
- Just(1)
237
- if PartNull(E1) = Just(0),
238
- Nothing
239
- otherwise,
240
- PartNull(α ⊙ E1) = (λE → α × E) <$> PartNull(E1),
241
- PartNull(E1 ⊙ α) = (λE → E × α) <$> PartNull(E1),
242
- PartNull(f(E1, . . . , En)) = liftn(f)(PartNull(E1), . . . , PartNull(En)),
243
- where a is a symbol in Σ, (E1, . . . , En) are n M-monadic expressions over Σ, α is an element of M(1) and f is a
244
- function from (M(1))n to M(1).
245
-
246
- An expression E is proper if its partial nullability value is not Nothing, therefore if it is a value Just(v); in this
247
- case, v is its nullability value, denoted by Null(E) (by abuse).
248
- Definition 4. Let M be a monad such that the structure (M(1), +, ×, 1, 0) is a semiring, and E be a M-monadic
249
- proper expression over an alphabet Σ. The series S(E) associated with E is inductively defined as follows:
250
- S(ε)(w) =
251
-
252
- 1
253
- if w = ε,
254
- 0
255
- otherwise,
256
- S(∅)(w) = 0,
257
- S(a)(w) =
258
-
259
- 1
260
- if w = a,
261
- 0
262
- otherwise,
263
- S(E1 + E2) = S(E1) + S(E2),
264
- S(E1 · E2) = S(E1) × S(E2),
265
- S(E∗
266
- 1) = (S(E1))⋆,
267
- S(α ⊙ E1)(w) = α × S(E1)(w),
268
- S(E1 ⊙ α)(w) = S(E1)(w) × α,
269
- S(f(E1, . . . , En)) = f(S(E1), . . . , S(En)),
270
- where a is a symbol in Σ, (E1, . . . , En) are n M-monadic expressions over Σ, α is an element of M(1) and f is a
271
- function from (M(1))n to M(1).
272
- From now on, we consider the set Exp(Σ) of M-monadic expressions over Σ to be endowed with the structure of
273
- a semiring, and two expressions denoting the same series to be equal. The weight associated with a word w in Σ∗
274
- by E is the value weightw(E) = S(E)(w). The nullability of a proper expression is the weight it associates with ε,
275
- following Definition 3 and Definition 4.
276
- Proposition 1. Let M be a monad such that the structure (M(1), +, ×, 1, 0) is a semiring. Let E be an M-monadic
277
- proper expression over Σ. Then:
278
- Null(E) = weightε(E).
279
- The previous proposition implies that the weight of the empty word can be syntactically computed (i.e. inductively
280
- computed from a monadic expression). Now, let us show how to extend this computation by defining the computation
281
- of derivatives for monadic expressions.
282
- 4
283
- Monadic Supports for Expressions
284
- A K-left-semimodule, for a semiring K = (K, ×, +, 1, 0), is a commutative monoid (S, ±, 0) endowed with a function
285
- ⊲ from K × S to S such that:
286
- (k × k′) ⊲ s = k ⊲ (k′ ⊲ s),
287
- (k + k′) ⊲ s = k ⊲ s ± k′ ⊲ s,
288
- k ⊲ (s ± s′) = k ⊲ s ± k ⊲ s′,
289
- 1 ⊲ s = s,
290
- 0 ⊲ s = k ⊲ 0 = 0.
291
- A K-right-semimodule can be defined symmetrically.
292
- An operad [12,14] is a structure (O, (◦j)j∈N, id) where O is a graded set (i.e. O = �
293
- n∈N On), id is an element of
294
- O1, ◦j is a function defined for any three integers (i, j, k)6 with 0 < j ≤ k in Ok × Oi → Ok+i−1 such that for any
295
- elements p1 ∈ Om, p2 ∈ On, p3 ∈ Op:
296
- ∀0 < j ≤ m, id ◦1 p1 = p1 ◦j id = p1,
297
- ∀0 < j ≤ m, 0 < j′ ≤ n, p1 ◦j (p2 ◦j′ p3) = (p1 ◦j p2) ◦j+j′−1 p3,
298
- ∀0 < j′ ≤ j ≤ m, (p1 ◦j p2) ◦j′ p3 = (p1 ◦j′ p3) ◦j+p−1 p2.
299
- Combining these compositions ◦j, one can define a composition ◦ sending Ok × Oi1 × · · · × Oik to Oi1+···+ik: for
300
- any element (p, q1, . . . , qk) in Ok × Ok,
301
- p ◦ (q1, . . . , qk) = (· · · ((p ◦k qk) ◦k−1 qk−1 · · · ) · · · ) ◦1 q1.
302
- Conversely, the composition ◦ can define the compositions ◦j using the identity element: for any two elements (p, q)
303
- in Ok × Oi, for any integer 0 < j ≤ k:
304
- p ◦j q = p ◦ (id, . . . , id
305
-
306
- ��
307
-
308
- j−1 times
309
- , q, id, . . . , id
310
-
311
- ��
312
-
313
- k−j times
314
- ).
315
- As an example, the set of n-ary functions over a set, with the identity function as unit, forms an operad.
316
- A module over an operad (O, ◦, id) is a set S endowed with a function ⋇ from On × Sn to S such that
317
- f ⋇ (f1 ⋇ (s1,1, . . . , s1,i1), . . . , fn ⋇ (sn,1, . . . , sn,in))
318
- = (f ◦ (f1, . . . , fn)) ⋇ (s1,1, . . . , s1,i1, . . . , sn,1, . . . , sn,in).
319
- 6 every couple (i, k) unambiguously defines the domain and codomain of a function ◦j
320
-
321
- The extension of the computation of derivatives could be performed for any monad. Indeed, any monad could
322
- be used to define well-typed auxiliary functions that mimic the classical computations. However, some properties
323
- should be satisfied in order to compute weights equivalently to Definition 4. Therefore, in the following we consider
324
- a restricted kind of monads.
325
- A monadic support is a structure (M, +, ×, 1, 0, ±, 0, ⋉, ⊲, ⊳, ⋇) satisfying:
326
- – M is a monad,
327
- – R = (M(1), +, ×, 1, 0) is a semiring,
328
- – M = (M(Exp(Σ)), ±, 0) is a monoid,
329
- – (M, ⋉) is a Exp(Σ)-right-semimodule,
330
- – (M, ⊲) is a R-left-semimodule,
331
- – (M, ⊳) is a R-right-semimodule,
332
- – (M(Exp(Σ)), ⋇) is a module for the operad of the functions over M(1).
333
- An expressive support is a monadic support (M, +, ×, 1, 0, ±, 0, ⋉, ⊲, ⊳, ⋇) endowed with a function toExp from
334
- M(Exp(Σ)) to Exp(Σ) satisfying the following conditions:
335
- weightw(toExp(m)) = m >>= weightw
336
- (2)
337
- toExp(m ⋉ F) = toExp(m) · F,
338
- (3)
339
- toExp(m ± m′) = toExp(m) + toExp(m′),
340
- (4)
341
- toExp(m ⊲ x) = toExp(m) ⊙ x,
342
- (5)
343
- toExp(x ⊳ m) = x ⊙ toExp(m),
344
- (6)
345
- toExp(f ⋇ (m1, . . . , mn)) = f(toExp(m1), . . . , toExp(mn)).
346
- (7)
347
- Let us now illustrate this notion with three expressive supports that will allow us to model well-known derivatives
348
- computations.
349
- Example 5 (The Maybe support).
350
- toExp(Nothing) = 0,
351
- toExp(Just(E)) = E,
352
- Nothing + m = m,
353
- m + Nothing = m,
354
- Just(⊤) + Just(⊤) = Just(⊤),
355
- Nothing × m = Nothing,
356
- m × Nothing = Nothing,
357
- Just(⊤) × Just(⊤) = Just(⊤),
358
- Nothing ± m = m,
359
- m ± Nothing = m,
360
- Just(E) ± Just(E′) = Just(E + E′),
361
- 1 = Just(⊤),
362
- 0 = Nothing,
363
- 0 = Nothing,
364
- m ⋉ F = (λE → E · F) <$> m,
365
- m ⊲ m′ = m >>= (λx → m′),
366
- m ⊳ m′ = m′ >>= (λx → m),
367
- f ⋇ (m1, . . . , mn) = pure(f(toExp(m1), . . . , toExp(mn))).
368
- Example 6 (The Set support).
369
- toExp({E1, . . . , En}) = E1 + · · · + En,
370
- + = ∪,
371
- × = ∩,
372
- ± = ∪,
373
- 1 = {⊤},
374
- 0 = ∅,
375
- 0 = ∅,
376
- m ⋉ F = (λE → E · F) <$> m,
377
- m ⊲ m′ = m >>= (λx → m′),
378
- m ⊳ m′ = m′ >>= (λx → m),
379
- f ⋇ (m1, . . . , mn) = pure(f(toExp(m1), . . . , toExp(mn))).
380
- Example 7 (The LinComb(K) support).
381
- toExp((k1, E1) ⊞ · · · ⊞ (kn, En)) = k1 ⊙ E1 + · · · + kn ⊙ En,
382
- + = ⊞,
383
- (k, ⊤) × (k′, ⊤) = (k × k′, ⊤),
384
- 1 = (1, ⊤),
385
- 0 = (0, ⊤),
386
- ± = ⊞,
387
- 0 = (0, ⊤),
388
- m ⋉ F = (λE → E · F) <$> m,
389
- m ⊲ m′ = m >>= (λx → m′),
390
- m ⊳ k = (λE → E ⊙ k) <$> m,
391
- f ⋇ (m1, . . . , mn) = pure(f(toExp(m1), . . . , toExp(mn))).
392
-
393
- 5
394
- Monadic Derivatives
395
- In the following, (M, +, ×, 1, 0, ±, 0, ⋉, ⊲, ⊳, ⋇, toExp) is an expressive support.
396
- Definition 5. The derivative of an M-monadic expression E over Σ w.r.t. a symbol a in Σ is the element da(E)
397
- in M(Exp(Σ)) inductively defined as follows:
398
- da(ε) = 0,
399
- da(∅) = 0,
400
- da(b) =
401
-
402
- pure(ε)
403
- if a = b,
404
- 0
405
- otherwise,
406
- da(E1 + E2) = da(E1) ± da(E2),
407
- da(E∗
408
- 1) = da(E1) ⋉ E∗
409
- 1,
410
- da(E1 · E2) = da(E1) ⋉ E2 ± Null(E1) ⊲ da(E2),
411
- da(α ⊙ E1) = α ⊲ da(E1),
412
- da(E1 ⊙ α) = da(E1) ⊳ α,
413
- da(f(E1, . . . , En)) = f ⋇ (da(E1), . . . , da(En))
414
- where b is a symbol in Σ, (E1, . . . , En) are n M-monadic expressions over Σ, α is an element of M(1) and f is a
415
- function from (M(1))n to M(1).
416
- The link between derivatives and series can be stated as follows, which is an alternative description of the classical
417
- quotient.
418
- Proposition 2. Let E be an M-monadic expression over an alphabet Σ, a be a symbol in Σ and w be a word in
419
- Σ∗. Then:
420
- weightaw(E) = da(E) >>= weightw.
421
- Proof. Let us proceed by induction over the structure of E. All the classical cases (i.e. the function operator left
422
- aside) can be proved following the classical methods ([1,4,13]). Therefore, let us consider this last case.
423
- da(f(E1, . . . , En)) >>= weightw
424
- = weightw(toExp(da(f(E1, . . . , En))))
425
- (Eq (2))
426
- = weightw(toExp(f ⋇ (da(E1), . . . , da(En)))
427
- (Def 5))
428
- = weightw(f(toExp(da(E1)), . . . , toExp(da(En))))
429
- (Eq (7))
430
- = f(weightw(toExp(da(E1))), . . . , weightw(toExp(da(En))))
431
- (Def 4, Eq (1))
432
- = f(da(E1) >>= weightw, . . . , da(En) >>= weightw)
433
- (Eq (2))
434
- = f(weightaw(E1), . . . , weightaw(En))
435
- (Ind. hyp.)
436
- = weightaw(f(E1, . . . , En))
437
- (Def 4, Eq (1))
438
- Let us define how to extend the derivative computation from symbols to words, using the monadic functions.
439
- Definition 6. The derivative of an M-monadic expression E over Σ w.r.t. a word w in Σ∗ is the element dw(E)
440
- in M(Exp(Σ)) inductively defined as follows:
441
- dε(E) = pure(E),
442
- da·v(E) = da(E) >>= dv,
443
- where a is a symbol in Σ and v a word in Σ∗.
444
- Finally, it can be easily shown, by induction over the length of the words, following Proposition 2, that the
445
- derivatives computation can be used to define a syntactical computation of the weight of a word associated with an
446
- expression.
447
- Theorem 1. Let E be an M-monadic expression over an alphabet Σ and w be a word in Σ∗. Then:
448
- weightw(E) = dw(E) >>= Null.
449
- Notice that, restraining monadic expressions to regular ones,
450
- – the Maybe support leads to the classical derivatives [4],
451
- – the Set support leads to the partial derivatives [1],
452
- – the LinComb support leads to the derivatives with multiplicities [13].
453
- Example 8. Let us consider the function ExtDist defined in Example 4 and the LinComb(N)-monadic expression
454
- E = ExtDist(a∗b∗ + b∗a∗, b∗a∗b∗, a∗b∗a∗).
455
- da(E) = ExtDist(a∗b∗ + a∗, a∗b∗, a∗b∗a∗ + a∗)
456
- daa(E) = ExtDist(a∗b∗ + a∗, a∗b∗, a∗b∗a∗ + 2 ⊙ a∗)
457
-
458
- daaa(E) = ExtDist(a∗b∗ + a∗, a∗b∗, a∗b∗a∗ + 3 ⊙ a∗)
459
- daab(E) = ExtDist(b∗, b∗, b∗a∗)
460
- weightaaa(E) = daaa(E) >>= Null
461
- = ExtDist(1 + 1, 1, 1 + 3) = 4 − 1 = 3
462
- weightaab(E) = daab(E) >>= Null = ExtDist(1, 1, 1) = 0
463
- In the next section, we show how to compute the derivative automaton associated with an expression.
464
- 6
465
- Automata Construction
466
- A category C is defined by:
467
- – a class ObjC of objects,
468
- – for any two objects A and B, a set HomC(A, B) of morphisms,
469
- – for any three objects A, B and C, an associative composition function ◦C in HomC(B, C) −→ HomC(A, B) −→
470
- HomC(A, C),
471
- – for any object A, an identity morphism idA in HomC(A, A), such that for any morphisms f in HomC(A, B) and
472
- g in HomC(B, A), f ◦C idA = f and idA ◦C g = g.
473
- Given a category C, a C-automaton is a tuple (Σ, I, Q, F, i, δ, f) where
474
- – Σ is a set of symbols (the alphabet),
475
- – I is the initial object, in Obj(C),
476
- – Q is the state object, in Obj(C),
477
- – F is the final object, in Obj(C),
478
- – i is the initial morphism, in HomC(I, Q),
479
- – δ is the transition function, in Σ −→ HomC(Q, Q),
480
- – f is the value morphism, in HomC(Q, F).
481
- The function δ can be extended as a monoid morphism from the free monoid (Σ∗, ·, ε) to the morphism monoid
482
- (HomC(Q, Q), ◦C, idQ), leading to the following weight definition.
483
- The weight associated by a C-automaton A = (Σ, I, Q, F, i, δ, f) with a word w in Σ∗ is the morphism weight(w)
484
- in HomC(I, F) defined by
485
- weight(w) = f ◦C δ(w) ◦C i.
486
- If the ambient category is the category of sets, and if I =
487
- 1, the weight of a word is equivalently an element of
488
- F. Consequently, a deterministic (complete) automaton is equivalently a Set-automaton with
489
- 1 as the initial object
490
- and B as the final object.
491
- Given a monad M, the Kleisli composition of two morphisms f ∈ HomC(A, B) and g ∈ HomC(B, C) is the
492
- morphism (f >=> g)(x) = f(x) >>= g in HomC(A, C). This composition defines a category, called the Kleisli
493
- category K(M) of M, where:
494
- – the objects are the sets,
495
- – the morphisms between two sets A and B are the functions between A and M(B),
496
- – the identity is the function pure.
497
- Considering these categories:
498
- – a deterministic automaton is equivalently a K(Maybe)-automaton,
499
- – a nondeterministic automaton is equivalently a K(Set)-automaton,
500
- – a weighted automaton over a semiring K is equivalently a K(LinComb(K))-automaton,
501
- all with
502
- 1 as both the initial object and the final object.
503
- Furthermore, for a given expression E, if i = pure(E), δ(a)(E′) = da(E′) and f = Null, we can compute
504
- the well-known derivative automata using the three previously defined supports, and the accessible part of these
505
- automata are finite ones as far as classical expressions are concerned [4,1,13].
506
- More precisely, extended expressions can lead to infinite automata, as shown in the next example.
507
-
508
- Example 9. Considering the computations of Example 8, it can be shown that
509
- dan(E) = ExtDist(a∗b∗ + a∗, a∗b∗, a∗b∗a∗ + n ⊙ a∗).
510
- Hence, there is not a finite number of derivated terms, that are the states in the classical derivative automaton.
511
- This infinite automaton is represented in Figure 1, where the final weights of the states are represented by double
512
- edges. The sink states are omitted.
513
- ExtDist(a∗b∗ + b∗a∗, b∗a∗b∗, a∗b∗a∗)
514
- ExtDist(a∗b∗ + a∗, a∗b∗, a∗b∗a∗ + a∗)
515
- ExtDist(a∗b∗ + a∗, a∗b∗, a∗b∗a∗ + 2 ⊙ a∗)
516
- ExtDist(b∗, b∗, b∗a∗)
517
- ExtDist(0, 0, a∗)
518
- ExtDist(b∗ + b∗a∗, b∗a∗b∗ + b∗, b∗a∗)
519
- ExtDist(b∗ + b∗a∗, b∗a∗b∗ + 2 ⊙ b∗, b∗a∗)
520
- ExtDist(a∗, a∗b∗, a∗)
521
- ExtDist(0, b∗, 0)
522
- ExtDist(a∗b∗ + a∗, a∗b∗, a∗b∗a∗ + n ⊙ a∗)
523
- ExtDist(b∗ + b∗a∗, b∗a∗b∗ + n ⊙ b∗, b∗a∗)
524
- 1
525
- 1
526
- 2
527
- n
528
- 1
529
- 1
530
- 2
531
- 1
532
- n
533
- a
534
- b
535
- b
536
- a
537
- b
538
- a
539
- b
540
- a
541
- b
542
- a
543
- b
544
- a
545
- b
546
- a
547
- b
548
- a
549
- b
550
- a
551
- Fig. 1. The (infinite) derivative weighted automaton associated with E.
552
- In the following section, let us show how to model a new monad in order to solve this problem.
553
- 7
554
- The Graded Module Monad
555
- Let us consider an operad O = (O, ◦, id) and the association sending:
556
- – any set S to �
557
- n∈N On × Sn,
558
- – any f in S → S′ to the function g in �
559
- n∈N On × Sn → �
560
- n∈N On × S′n:
561
- g(o, (s1, . . . , sn)) = (o, (f(s1), . . . , f(sn)))
562
- It can be checked that this is a functor, denoted by GradMod(O). Moreover, it forms a monad considering the two
563
- following functions:
564
- pure(s) = (id, s),
565
- (o, (s1, . . . , sn)) >>= f = (o ◦ (o1, . . . , on), (s1,1, . . . , s1,i1, . . . , sn,1, . . . , sn,in))
566
- where f(sj) = (oj, sj,1, . . . , sj,ij). However, notice that GradMod(O)(1) cannot be easily evaluated as a value space.
567
- Thus, let us compose it with another monad. As an example, let us consider a semiring K = (K, ×, +, 1, 0) and
568
- the operad O of the n-ary functions over K. Hence, let us define the functor7 GradComb(O, K) that sends S to
569
- GradMod(O)(LinComb(K)(S)).
570
- 7 it is folk knowledge that the composition of two functors is a functor.
571
-
572
- To show that this combination is a monad, let us first define a function α sending GradComb(O, K)(S) to
573
- GradMod(O)(S). It can be easily done by converting a linear combination into an operadic combination, i.e. an
574
- element in GradMod(O)(S), with the following function toOp:
575
- toOp((k1, s1) ⊞ · · · ⊞ (kn, sn))
576
- = (λ(x1, . . . , xn) → k1 × x1 + · · · + kn × xn, (s1, . . . , sn)),
577
- α(o, (L1, . . . , Ln)) = (o ◦ (o1, . . . , on), (s1,1, . . . , s1,i1, . . . , sn,1, . . . , sn,in))
578
- where toOp(Lj) = (oj, (sj,1, . . . , sj,ij)).
579
- Consequently, we can define the monadic functions as follows:
580
- pure(s) = (id, (1, s)),
581
- (o, (L1, . . . , Ln)) >>= f = α(o, (L1, . . . , Ln)) >>= f
582
- where the second occurrence of >>= is the monadic function associated with the monad GradMod(O).
583
- Let us finally define an expressive support for this monad:
584
- toExp(o, (L1, . . . , Ln)) = o(toExp(L1), . . . , toExp(Ln)),
585
- (o, (L1, . . . , Ln)) + (o′, (L′
586
- 1, . . . , L′
587
- n′)) = (o + o′, (L1, . . . , Ln, L′
588
- 1, . . . , L′
589
- n′))
590
- (o, (L1, . . . , Ln)) × (o′, (L′
591
- 1, . . . , L′
592
- n′)) = (o × o′, (L1, . . . , Ln, L′
593
- 1, . . . , L′
594
- n′))
595
- ± = +,
596
- 1 = (id, (1, ⊤)),
597
- 0 = (id, (0, ⊤)),
598
- 0 = (id, (0, ⊤)),
599
- m ⋉ F = pure(toExp(m) · F),
600
- (o, (M1, . . . , Mk)) ⊲ (o′, (L1, . . . , Ln)) = (o(M1, . . . , Mk) × o′, (L1, . . . , Ln)),
601
- (o, (L1, . . . , Ln)) ⊳ (o′, (M1, . . . , Mk)) = (o × o′(M1, . . . , Mk), (L1, . . . , Ln))
602
- f ⋇ ((o1, (L1,1, . . . , L1,i1)), . . . , (on, (Ln,1, . . . , Ln,in)))
603
- = (f ◦ (o1, . . . , on), (L1,1, . . . , L1,i1, . . . , Ln,1, . . . , Ln,in))
604
- where (o + o′)(x1, . . . , xn+n′) = o(x1, . . . , xn) + o′(xn+1, . . . , xn+n′)
605
- (o × o′)(x1, . . . , xn+n′) = o(x1, . . . , xn) × o′(xn+1, . . . , xn+n′)
606
- Example 10. Let us consider that two elements in GradComb(O, K)(Exp(Σ)) are equal if they have the same image
607
- by toExp. Let us consider the expression E = ExtDist(a∗b∗ + b∗a∗, b∗a∗b∗, a∗b∗a∗) of Example 8.
608
- da(E) = ExtDist ⋇ ((+, (a∗b∗, a∗)), (id, a∗b∗), (+, (a∗b∗a∗, a∗)))
609
- = (ExtDist ◦ (+, id, +), (a∗b∗, a∗, a∗b∗, a∗b∗a∗, a∗))
610
- daa(E) = (ExtDist ◦ (+, id, + ◦ (+, id)), (a∗b∗, a∗, a∗b∗, a∗b∗a∗, a∗, a∗))
611
- = (ExtDist ◦ (+, id, + ◦ (id, 2×)), (a∗b∗, a∗, a∗b∗, a∗b∗a∗, a∗))
612
- daaa(E) = (ExtDist ◦ (+, id, + ◦ (id, 3×)), (a∗b∗, a∗, a∗b∗, a∗b∗a∗, a∗))
613
- daab(E) = (ExtDist ◦ (+, id, +), (b∗, ∅, b∗, b∗a∗, ∅))
614
- = (ExtDist, (b∗, b∗, b∗a∗))
615
- weightaaa(E) = daaa(E) >>= Null
616
- = ExtDist ◦ (+, id, +)(1, 1, 1, 1, 3)
617
- = ExtDist(1 + 1, 1, 1 + 3) = 4 − 1 = 3
618
- weightaab(E) = daab(E) >>= Null = ExtDist(1, 1, 1) = 0
619
- Using this monad, the number of derivated terms, that is the number of states in the associated derivative automaton,
620
- is finite. Indeed, the computations are absorbed in the transition structure. This automaton is represented in
621
- Figure 2. Notice that the dashed rectangle represent the functions that are composed during the traversal associated
622
- with a word. The final weights are represented by double edges. The sink states are omitted. The state b∗ is duplicated
623
- to simplify the representation.
624
-
625
- ExtDist(a∗b∗ + b∗a∗, b∗a∗b∗, a∗b∗a∗)
626
- ExtDist
627
- +
628
- +
629
- ExtDist
630
- +
631
- b∗a∗b∗
632
- +
633
- a∗b∗a∗
634
- +
635
- a∗b∗
636
- a∗
637
- b∗
638
- b∗a∗
639
- b∗
640
- 1
641
- 1
642
- 1
643
- 1
644
- 1
645
- 1
646
- 1
647
- 1
648
- a
649
- b
650
- a
651
- b
652
- b
653
- a
654
- b
655
- a
656
- a
657
- b
658
- b
659
- b
660
- Fig. 2. The Associated Derivative Automaton of ExtDist(a∗b∗ + b∗a∗, b∗a∗b∗, a∗b∗a∗).
661
However, notice that not every monadic expression produces a finite set of derivated terms, as shown in the next example.

Example 11. Let us consider the expression E of Example 8 and the expression F = E · c∗. It can be shown that

dan(F) = toExp(dan(E)) · c∗ = ExtDist(a∗b∗ + a∗, a∗b∗, a∗b∗a∗ + n ⊙ a∗) · c∗.

The study of necessary and sufficient conditions on monads that lead to a finite set of derivated terms is one of the next steps of our work.
668
8 Haskell Implementation

The notions described in the previous sections have been implemented in Haskell, as follows:
– the notion of monad over a subcategory of sets is a typeclass using the Constraint kind to specify a subcategory (see the sketch after this list);
– n-ary functions and their operadic structures are implemented using fixed-length vectors, the size of which is determined at compilation using type-level programming;
– the notion of graded module is implemented through an existential type to deal with unknown arities: its monadic structure is based on an extension of heterogeneous lists, the graded vectors, typed w.r.t. the list of the arities of the elements they contain;
– the parser and some type-level functions are based on dependently typed programming with singletons [8], allowing, for example, the type of the monads or the arity of the functions involved to be determined at run-time;
– an application is available here [16] illustrating the computations:
  • the backend uses servant to define an API;
  • the frontend is defined using Reflex, a functional reactive programming engine, and cross-compiled to JavaScript with GHCJS.

As an example, the monadic expression of the previous examples can be entered in the web application as the input ExtDist(a*.b*+b*.a*,b*.a*.b*,a*.b*.a*).
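As an illustration of the first item, here is a sketch (our own rendition, not the project's actual code) of a monad class over a subcategory of sets, using the ConstraintKinds extension; Data.Set is the motivating instance, since it is a monad on ordered types only.

```haskell
{-# LANGUAGE ConstraintKinds, TypeFamilies #-}
import Data.Kind (Constraint)
import qualified Data.Set as Set

-- A monad over a subcategory of Set: the constraint Suitable selects
-- the objects of the subcategory.
class CMonad m where
  type Suitable m a :: Constraint
  creturn :: Suitable m a => a -> m a
  cbind   :: (Suitable m a, Suitable m b) => m a -> (a -> m b) -> m b

instance CMonad Set.Set where
  type Suitable Set.Set a = Ord a
  creturn   = Set.singleton
  cbind s f = Set.unions (map f (Set.toList s))
```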
687
9 Capture Groups

Capture groups are a standard feature of POSIX regular expressions, where parentheses are used to memorize some part of the input string being matched in order to reuse it either for substitution or for matching. We give here an equivalent definition along with derivation formulae and a monadic definition. The semantics of this definition conforms to that of POSIX expressions. Precisely, when a capture group has been involved more than once due to a starred subexpression, the value of the corresponding variable corresponds to the last capture.
694
9.1 Syntax of Expressions with Capture Groups

A capture-group expression E over a symbol alphabet Σ and a variable alphabet Γ (or Σ, Γ-expression for short) is inductively defined as

E = a,    E = ε,    E = ∅,    E = F + G,    E = F · G,    E = F∗,    E = (F)x,    E = x,

where F and G are two Σ, Γ-expressions, a is a symbol in Σ and x is a variable in Γ. In the POSIX syntax, capture groups are implicitly mapped to variables according to the order of the opening parentheses. Here, each capture group is explicitly associated with a variable by indexing the closing parenthesis with the name of this variable.
710
9.2 Contextual Expressions and their Contextual Languages

In order to define the contextual language and the derivation of capture-group expressions, we need to extend the syntax of the expressions in order to attach to any capture group the part of the input string currently captured during an execution.

A contextual capture-group expression E over a symbol alphabet Σ and a variable alphabet Γ (or Σ, Γ-expression for short) is inductively defined as

E = a,    E = ε,    E = ∅,    E = F + G,    E = F · G,    E = F∗,    E = (F)^u_x,    E = x,

where F and G are two Σ, Γ-expressions, a is a symbol in Σ, u is a word in Σ∗ and x is a variable in Γ. Notice that a Σ, Γ-expression is equivalent to a contextual capture-group expression where u = ε for every occurrence of a capture group.

In the following, we consider that a context is a function from Γ to Maybe(Σ∗), modelling the possibility that a variable was initialized (or not) during the parsing. The set of contexts is denoted by Ctxt(Γ, Σ).
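In a Haskell sketch, contexts can be represented as finite maps, the absence of a binding encoding Nothing; the names are ours.

```haskell
import qualified Data.Map as Map

type Var  = Char
type Ctxt = Map.Map Var String  -- Γ → Maybe(Σ∗), Nothing encoded by absence

-- [ctxt]_{x←w}: record (or overwrite) the capture of x.
update :: Var -> String -> Ctxt -> Ctxt
update = Map.insert

-- the uninitialised context λ_ → Nothing
empty0 :: Ctxt
empty0 = Map.empty
```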
731
Using these notions of contexts, let us now explain the semantics of contextual capture-group expressions. While parsing, a context is built to memorize the successive assignments of words to variables. Therefore, a (contextual) language associated with an expression is a set of couples built from a language and the context that was used to compute it.

The classical atomic cases (a symbol, the empty word or the empty set) are easy to define, preserving the context. Another one is the case of a variable x: the context is applied here to compute the associated word (if it exists) and is preserved.

The recursive cases are interpreted as follows:
– The contextual language of a sum of two expressions is the union of their contextual languages, computed independently.
– The contextual language of a catenation of two expressions F and G is computed in three steps. First, the contextual language of F is computed. Secondly, for each couple (L, ctxt) of this contextual language, the function ctxt is considered as the new context to compute the contextual language of G, leading to new couples (L′, ctxt′). Finally, for each of these combinations, a couple (L · L′, ctxt′) is added to form the resulting contextual language.
– The contextual language of a starred expression is, classically, the infinite union of the powered contextual languages, computed by iterated catenations.
– The contextual language of a captured expression (F)^u_x is computed in two steps. First, the contextual language of F is computed. Then, for each couple (L, ctxt) of it, a word w is chosen in L and the context ctxt is updated coherently.
753
More formally, the contextual language of a Σ, Γ-expression E associated with a context ctxt in Ctxt(Γ, Σ) is the subset Lctxt(E) of 2^Σ∗ × Ctxt(Γ, Σ) inductively defined as follows:

Lctxt(a) = {({a}, ctxt)},
Lctxt(ε) = {({ε}, ctxt)},
Lctxt(∅) = ∅,
Lctxt(x) = ∅ if ctxt(x) = Nothing, {({w}, ctxt)} otherwise if ctxt(x) = Just(w),
Lctxt(F + G) = Lctxt(F) ∪ Lctxt(G),
Lctxt(F · G) = ⋃_{(L1,ctxt1)∈Lctxt(F), (L2,ctxt2)∈Lctxt1(G)} {(L1 · L2, ctxt2)},
Lctxt(F∗) = ⋃_{n∈N} (Lctxt(F))ⁿ,
Lctxt((F)^u_x) = ⋃_{(L1,ctxt1)∈Lctxt(F), w∈L1} {({w}, [ctxt1]_{x←uw})},

where F and G are two Σ, Γ-expressions, a is a symbol in Σ, x is a variable in Γ, u is in Σ∗, Lⁿ is defined, for any set L of couples (language, context), by

L⁰ = ⋃_{(L,ctxt)∈L} {({ε}, ctxt)},
Lⁿ = ⋃_{(L1,ctxt1)∈L, (L2,ctxt2)∈Lⁿ⁻¹} {(L1 · L2, ctxt2)} otherwise,

and [ctxt]_{x←w} is the context defined by

[ctxt]_{x←w}(y) = Just(w) if x = y, ctxt(y) otherwise.

The contextual language of an expression E is the set of couples obtained from an uninitialised context, where nothing is associated with any variable, that is the set Lλ_→Nothing(E). Finally, the language denoted by an expression E is the set of words obtained by forgetting the contexts, that is the set ⋃_{(L,_)∈Lλ_→Nothing(E)} L.
819
Example 12. Let us consider the three following expressions over the symbol alphabet {a, b, c} and the variable alphabet {x}:

E = E1 · E2,    E1 = ((a∗)x bx)∗,    E2 = cx.

The language denoted by E2 is empty, since it is computed from the empty context, where nothing is associated with x. However, parsing E1 allows us to compute contexts that define word values to assign to x. Let us thus show how the contextual language of E1 is defined:
– the contextual language of (a∗)x is the set ⋃_{n∈N} {({aⁿ}, λx → Just(aⁿ))}, where each word aⁿ is recorded in a context;
– the contextual language of (a∗)x bx is the set ⋃_{n∈N} {({aⁿbaⁿ}, λx → Just(aⁿ))}, where each word aⁿ is recorded in a context applied to evaluate the variable x;
– the contextual language of E1 is the union of the two following sets S1 and S2:
  S1 = {({ε}, λx → Nothing)},
  S2 = {({aⁿbaⁿ | n ∈ N}∗ · {aᵐbaᵐ}, λx → Just(aᵐ)) | m ∈ N},
  where each iteration of the outermost star produces a new record for the variable x in the context; however, notice that only the last one is recorded at the end of the process.

Finally, the language of E is obtained by considering the contexts obtained from the parsing of E1 to evaluate the occurrence of x in E2, leading to the set ⋃_{m∈N} ({aⁿbaⁿ | n ∈ N}∗ · {aᵐbaᵐcaᵐ}).
846
Obviously, some classical equations still hold with these computations:

Lemma 1. Let E, F and G be three Σ, Γ-expressions and ctxt be a context in Ctxt(Γ, Σ). The two following equations hold:

Lctxt(E · (F + G)) = Lctxt(E · F + E · G),
Lctxt(F∗) = Lctxt(ε + F · F∗).

Proof. Let us proceed by equality sequences:

Lctxt(E · (F + G))
= ⋃_{(L1,ctxt1)∈Lctxt(E), (L2,ctxt2)∈Lctxt1(F+G)} {(L1 · L2, ctxt2)}
= ⋃_{(L1,ctxt1)∈Lctxt(E), (L2,ctxt2)∈Lctxt1(F)∪Lctxt1(G)} {(L1 · L2, ctxt2)}
= ⋃_{(L1,ctxt1)∈Lctxt(E), (L2,ctxt2)∈Lctxt1(F)} {(L1 · L2, ctxt2)} ∪ ⋃_{(L1,ctxt1)∈Lctxt(E), (L2,ctxt2)∈Lctxt1(G)} {(L1 · L2, ctxt2)}
= Lctxt(E · F) ∪ Lctxt(E · G)
= Lctxt(E · F + E · G)

Lctxt(F∗)
= ⋃_{n∈N} (Lctxt(F))ⁿ
= (Lctxt(F))⁰ ∪ ⋃_{n∈N, n≥1} (Lctxt(F))ⁿ
= (Lctxt(F))⁰ ∪ ⋃_{n∈N} Lctxt(F) · (Lctxt(F))ⁿ
= (Lctxt(F))⁰ ∪ Lctxt(F) · ⋃_{n∈N} (Lctxt(F))ⁿ
= Lctxt(ε + F · F∗)
- = Lctxt(ε + F · F ∗)
898
- In order to solve the membership test for the contextual capture-group expressions, let us extend the classical
899
- derivation method. But first, let us show how to extend the nullability predicate, needed at the end of the process.
900
- 9.3
901
- Nullability Computation
902
- The nullability predicate allows us to determine whether the empty word belongs to the language denoted by
903
- an expression. As far as capture groups are concerned, a context has to be computed. Therefore, the nullability
904
- predicate can be represented as a set of contexts the application of which produces a language that contains the
905
- empty word.
906
- As we have seen, the nullability depends on the current context. Given an expression and a context ctxt, the
907
- nullability predicate is a set in 2Ctxt(Γ,Σ), computed as follows:
908
- Nullctxt(ε) = {ctxt}
909
- Nullctxt(∅) = ∅
910
- Nullctxt(a) = ∅
911
- Nullctxt(x) =
912
-
913
- {ctxt}
914
- if ctxt(x) = Just(ε)
915
-
916
- otherwise.
917
- Nullctxt(E + F) = Nullctxt(E) ∪ Nullctxt(F)
918
- Nullctxt(E · F) =
919
-
920
- ctxt′∈Nullctxt(F ),
921
- ctxt′′∈Nullctxt′ (G)
922
- {ctxt′′}
923
- Nullctxt(E∗) = {ctxt}
924
- Nullctxt((E)u
925
- x) =
926
-
927
- ctxt′∈Nullctxt(F )
928
- {[ctxt′]x←u}
929
- where E and F are two Σ, Γ-expressions, a is a symbol in Σ, x is a variable in Γ and u is in Σ∗.
930
- Example 13. Let us consider the three expressions of Example 12:
931
- E = E1 · E2,
932
- E1 = ((a∗)xbx)∗,
933
- E2 = cx.
934
- For any context ctxt,
935
- Nullctxt(E1) = {ctxt},
936
- Nullctxt(E2) = ∅,
937
- Nullctxt(E) = ∅.
938
- The nullability predicate allows us to determine whether there exists a couple in the contextual language of an
939
- expression such that its first component contains the empty word.
940
-
941
Proposition 3. Let E be a Σ, Γ-expression and ctxt be a context in Ctxt(Γ, Σ). Then the two following conditions are equivalent:
– Nullctxt(E) ≠ ∅,
– ∃(L, _) ∈ Lctxt(E) | ε ∈ L.

Proof. By induction over the structure of E:
– If E = a ∈ Σ or E = ∅, the property holds since Nullctxt(E) is empty and since there is no couple (L, ctxt′) in Lctxt(E) with ε in L.
– If E = ε, the two following conditions hold, satisfying the stated condition:
  Nullctxt(E) = {ctxt},    Lctxt(E) = {({ε}, ctxt)}.
– If E = F + G, the two following conditions hold:
  Nullctxt(F + G) = Nullctxt(F) ∪ Nullctxt(G),    Lctxt(F + G) = Lctxt(F) ∪ Lctxt(G).
  Since, by induction hypothesis, the two following conditions hold,
  Nullctxt(F) ≠ ∅ ⇔ ∃(L, ctxt′) ∈ Lctxt(F) | ε ∈ L,
  Nullctxt(G) ≠ ∅ ⇔ ∃(L, ctxt′) ∈ Lctxt(G) | ε ∈ L,
  the proposition holds.
– If E = F · G, the two following conditions hold:
  Nullctxt(F · G) = ⋃_{ctxt′∈Nullctxt(F), ctxt′′∈Nullctxt′(G)} {ctxt′′},
  Lctxt(F · G) = ⋃_{(L,ctxt′)∈Lctxt(F), (L′,ctxt′′)∈Lctxt′(G)} {(L · L′, ctxt′′)}.
  Since, by induction hypothesis, the two following conditions hold,
  Nullctxt(F) ≠ ∅ ⇔ ∃(L, ctxt′) ∈ Lctxt(F) | ε ∈ L,
  Nullctxt′(G) ≠ ∅ ⇔ ∃(L, ctxt′′) ∈ Lctxt′(G) | ε ∈ L,
  the proposition holds.
– If E = F∗, the stated condition holds since the two following conditions hold:
  Nullctxt(F∗) = {ctxt},    (Lctxt(F))⁰ = {({ε}, ctxt)} ⊆ Lctxt(F∗).
– If E = (F)^u_x, both following conditions hold:
  Nullctxt((F)^u_x) = ⋃_{ctxt′∈Nullctxt(F)} {[ctxt′]_{x←u}},
  Lctxt((F)^u_x) = ⋃_{(L,ctxt′)∈Lctxt(F), w∈L} {({w}, [ctxt′]_{x←uw})}.
  Then, following the induction hypothesis,
  Nullctxt(F) ≠ ∅ ⇔ ∃(L, ctxt′) ∈ Lctxt(F) | ε ∈ L,
  the stated condition holds.
– If E = x, both following conditions hold, and therefore the proposition holds:
  Nullctxt(x) = {ctxt} if ctxt(x) = Just(ε), ∅ otherwise,
  Lctxt(x) = ∅ if ctxt(x) = Nothing, {({w}, ctxt)} otherwise if ctxt(x) = Just(w).
1009
9.4 Derivation Formulae

Similarly to the nullability predicate, the derivation computation builds the context while parsing the expression. Therefore, the derivative of an expression with respect to a symbol and a context is a set of couples (expression, context), inductively computed as follows, for any Σ, Γ-expression and for any context ctxt in Ctxt(Γ, Σ):

d^ctxt_a(ε) = ∅,
d^ctxt_a(∅) = ∅,
d^ctxt_a(b) = ∅ if a ≠ b, {(ε, ctxt)} otherwise,
d^ctxt_a(x) = d^ctxt_a(w) if ctxt(x) = Just(w), ∅ otherwise,
d^ctxt_a(F + G) = d^ctxt_a(F) ∪ d^ctxt_a(G),
d^ctxt_a(F · G) = ⋃_{(ctxt′,F′)∈d^ctxt_a(F)} {(F′ · G, ctxt′)} ∪ ⋃_{ctxt′∈Nullctxt(F)} d^ctxt′_a(G),
d^ctxt_a(F∗) = ⋃_{(ctxt′,F′)∈d^ctxt_a(F)} {(F′ · F∗, ctxt′)},
d^ctxt_a((F)^u_x) = ⋃_{(ctxt′,F′)∈d^ctxt_a(F)} {((F′)^{u·a}_x, ctxt′)},

where F and G are two Σ, Γ-expressions, a and b are symbols in Σ, x is a variable in Γ and u is in Σ∗.
Example 14. Let us consider the three expressions of Example 12:

E = E1 · E2,   E1 = ((a∗)_x b x)∗,   E2 = c x.

Then, for any context ctxt,

d_a^ctxt(E) = {((a∗)_x^a b x ((a∗)_x b x)∗ c x, ctxt)},
d_b^ctxt(E) = {(x ((a∗)_x b x)∗ c x, λx → ε)},
d_c^ctxt(E) = {(x, ctxt)}.
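Continuing the Python sketch from above (again ours, for illustration only), the derivation formulae become the following; no simplification of ε · F into F is performed, so the computed derivatives may look more verbose than in Example 14 while denoting the same contextual languages.

```python
def word_expr(w):
    """Encode a word of Sigma* as an expression (epsilon for the empty word)."""
    e = ('eps',) if not w else ('sym', w[0])
    for a in w[1:]:
        e = ('cat', e, ('sym', a))
    return e

def deriv(e, a, ctxt):
    """d_a^ctxt(e): a list of (expression, context) pairs."""
    tag = e[0]
    if tag in ('eps', 'empty'):
        return []
    if tag == 'sym':
        return [(('eps',), ctxt)] if e[1] == a else []
    if tag == 'var':                      # derive the word bound to x, if any
        w = ctxt.get(e[1])
        return [(word_expr(w[1:]), ctxt)] if w and w[0] == a else []
    if tag == 'alt':
        return deriv(e[1], a, ctxt) + deriv(e[2], a, ctxt)
    if tag == 'cat':                      # derive F, or skip over a nullable F
        left = [(('cat', f1, e[2]), c) for (f1, c) in deriv(e[1], a, ctxt)]
        return left + [p for c in nullable(e[1], ctxt)
                       for p in deriv(e[2], a, c)]
    if tag == 'star':
        return [(('cat', f1, e), c) for (f1, c) in deriv(e[1], a, ctxt)]
    if tag == 'grp':                      # extend the pending capture by a
        _, f, x, u = e
        return [(('grp', f1, x, u + a), c) for (f1, c) in deriv(f, a, ctxt)]
```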
The derivation of an expression allows us to syntactically express the computation of the quotient of the language components in contextual languages, where the quotient w⁻¹(L) is the set {w′ | w w′ ∈ L}.

Proposition 4. Let E be a Σ, Γ-expression, ctxt be a context in Ctxt(Γ, Σ) and a be a symbol in Σ. Then:

⋃_{(E′, ctxt′) ∈ d_a^ctxt(E)} L_ctxt′(E′) = ⋃_{(L′, ctxt′) ∈ L_ctxt(E)} {(a⁻¹(L′), ctxt′)}.
Proof. By induction over the structure of E, assimilating ∅ and {(∅, ctxt)} for any context ctxt.
– If E = ε or E = ∅, the property vacuously holds.
– If E = b ∈ Σ:
  ⋃_{(E′, ctxt′) ∈ d_a^ctxt(b)} L_ctxt′(E′) = ∅ if b ≠ a, {({ε}, ctxt)} otherwise
    = {(a⁻¹({b}), ctxt)}
    = ⋃_{(L′, ctxt′) ∈ L_ctxt(b)} {(a⁻¹(L′), ctxt′)}.
– If E = F + G:
  ⋃_{(E′, ctxt′) ∈ d_a^ctxt(F+G)} L_ctxt′(E′)
    = ⋃_{(E′, ctxt′) ∈ d_a^ctxt(F) ∪ d_a^ctxt(G)} L_ctxt′(E′)
    = ⋃_{(E′, ctxt′) ∈ d_a^ctxt(F)} L_ctxt′(E′) ∪ ⋃_{(E′, ctxt′) ∈ d_a^ctxt(G)} L_ctxt′(E′)
    = ⋃_{(L′, ctxt′) ∈ L_ctxt(F)} {(a⁻¹(L′), ctxt′)} ∪ ⋃_{(L′, ctxt′) ∈ L_ctxt(G)} {(a⁻¹(L′), ctxt′)}
    = ⋃_{(L′, ctxt′) ∈ L_ctxt(F) ∪ L_ctxt(G)} {(a⁻¹(L′), ctxt′)}
    = ⋃_{(L′, ctxt′) ∈ L_ctxt(F+G)} {(a⁻¹(L′), ctxt′)}.
– If E = F · G:
  ⋃_{(E′, ctxt′) ∈ d_a^ctxt(F·G)} L_ctxt′(E′)
    = ⋃_{(F′, ctxt′) ∈ d_a^ctxt(F)} L_ctxt′(F′ · G) ∪ ⋃_{ctxt′ ∈ Null_ctxt(F), (G′, ctxt″) ∈ d_a^ctxt′(G)} L_ctxt″(G′)
    = ⋃_{(F′, ctxt′) ∈ d_a^ctxt(F), (L1, ctxt1) ∈ L_ctxt′(F′), (L2, ctxt2) ∈ L_ctxt1(G)} {(L1 · L2, ctxt2)} ∪ ⋃_{ctxt′ ∈ Null_ctxt(F), (G′, ctxt″) ∈ d_a^ctxt′(G)} L_ctxt″(G′)
    = ⋃_{(L1, ctxt1) ∈ L_ctxt(F), (L2, ctxt2) ∈ L_ctxt1(G)} {(a⁻¹(L1) · L2, ctxt2)} ∪ ⋃_{ctxt1 ∈ Null_ctxt(F), (L2, ctxt2) ∈ L_ctxt1(G)} {(a⁻¹(L2), ctxt2)}
    = ⋃_{(L1, ctxt1) ∈ L_ctxt(F), (L2, ctxt2) ∈ L_ctxt1(G)} {(a⁻¹(L1) · L2, ctxt2)} ∪ ⋃_{∃(L, ctxt1) ∈ L_ctxt(F) | ε ∈ L, (L2, ctxt2) ∈ L_ctxt1(G)} {(a⁻¹(L2), ctxt2)}
    = ⋃_{(L1, ctxt1) ∈ L_ctxt(F), (L2, ctxt2) ∈ L_ctxt1(G)} {(a⁻¹(L1) · L2, ctxt2)} ∪ ⋃_{(L1, ctxt1) ∈ L_ctxt(F), ε ∈ L1, (L2, ctxt2) ∈ L_ctxt1(G)} {(a⁻¹(L2), ctxt2)}
    = ⋃_{(L1, ctxt1) ∈ L_ctxt(F), (L2, ctxt2) ∈ L_ctxt1(G)} {(a⁻¹(L1 · L2), ctxt2)}
    = ⋃_{(L′, ctxt′) ∈ L_ctxt(F·G)} {(a⁻¹(L′), ctxt′)}.
– If E = F∗:
  ⋃_{(E′, ctxt′) ∈ d_a^ctxt(F∗)} L_ctxt′(E′)
    = ⋃_{(F′, ctxt′) ∈ d_a^ctxt(F)} L_ctxt′(F′ · F∗)
    = ⋃_{(F′, ctxt′) ∈ d_a^ctxt(F), (L1, ctxt1) ∈ L_ctxt′(F′), (L2, ctxt2) ∈ L_ctxt1(F∗)} {(L1 · L2, ctxt2)}
    = ⋃_{(L1, ctxt1) ∈ L_ctxt(F), (L2, ctxt2) ∈ L_ctxt1(F∗)} {(a⁻¹(L1) · L2, ctxt2)}
    = ⋃_{(L1, ctxt1) ∈ L_ctxt(F), (L2, ctxt2) ∈ L_ctxt1(F∗)} {(a⁻¹(L1 · L2), ctxt2)}
    = ⋃_{(L′, ctxt′) ∈ L_ctxt(F · F∗)} {(a⁻¹(L′), ctxt′)}
    = ⋃_{(L′, ctxt′) ∈ L_ctxt(ε + F · F∗)} {(a⁻¹(L′), ctxt′)}
    = ⋃_{(L′, ctxt′) ∈ L_ctxt(F∗)} {(a⁻¹(L′), ctxt′)}.
– If E = (F)_x^u:
  ⋃_{(E′, ctxt′) ∈ d_a^ctxt((F)_x^u)} L_ctxt′(E′)
    = ⋃_{(F′, ctxt′) ∈ d_a^ctxt(F)} L_ctxt′((F′)_x^{u·a})
    = ⋃_{(F′, ctxt′) ∈ d_a^ctxt(F), (L1, ctxt1) ∈ L_ctxt′(F′), w ∈ L1} {({w}, [ctxt1]_{x←uaw})}
    = ⋃_{(L1, ctxt1) ∈ L_ctxt(F), w ∈ a⁻¹(L1)} {({w}, [ctxt1]_{x←uaw})}
    = ⋃_{(L1, ctxt1) ∈ L_ctxt(F), aw ∈ L1} {({w}, [ctxt1]_{x←uaw})}
    = ⋃_{(L1, ctxt1) ∈ L_ctxt(F), aw ∈ L1} {(a⁻¹({aw}), [ctxt1]_{x←uaw})}
    = ⋃_{(L1, ctxt1) ∈ L_ctxt(F), w ∈ L1} {(a⁻¹({w}), [ctxt1]_{x←uw})}
    = ⋃_{(L′, ctxt′) ∈ L_ctxt((F)_x^u)} {(a⁻¹(L′), ctxt′)}.
– If E = x:
  ⋃_{(E′, ctxt′) ∈ d_a^ctxt(x)} L_ctxt′(E′)
    = ⋃_{(E′, ctxt′) ∈ d_a^ctxt(w)} L_ctxt′(E′) if ctxt(x) = Just(w), ∅ otherwise
    = ⋃_{(w, ctxt) ∈ d_a^ctxt(aw)} L_ctxt(w) if ctxt(x) = Just(aw), ∅ otherwise
    = {({w}, ctxt)} if ctxt(x) = Just(aw), ∅ otherwise
    = {(a⁻¹({aw}), ctxt)} if ctxt(x) = Just(aw), ∅ otherwise
    = {(a⁻¹({w}), ctxt)} if ctxt(x) = Just(w), ∅ otherwise
    = ⋃_{(L′, ctxt′) ∈ L_ctxt(x)} {(a⁻¹(L′), ctxt′)}. □
The derivation w.r.t. a word is, as usual, an iterated application of the derivation w.r.t. a symbol, recursively defined as follows, for any Σ, Γ-expression E, for any context ctxt in Ctxt(Γ, Σ), for any symbol a in Σ and for any word v in Σ∗:

d_ε^ctxt(E) = {(E, ctxt)},
d_{a·v}^ctxt(E) = ⋃_{(E′, ctxt′) ∈ d_a^ctxt(E)} d_v^ctxt′(E′).
Example 15. Let us consider the three expressions of Example 14:

E = E1 · E2,   E1 = ((a∗)_x b x)∗,   E2 = c x.

Then, for any context ctxt,

d_{ab}^ctxt(E) = d_b^ctxt((a∗)_x^a b x ((a∗)_x b x)∗ c x) = {(x ((a∗)_x b x)∗ c x, λx → a)},
d_{aba}^ctxt(E) = d_a^{λx→a}(x ((a∗)_x b x)∗ c x) = {(((a∗)_x b x)∗ c x, λx → a)},
d_{abac}^ctxt(E) = d_c^{λx→a}(((a∗)_x b x)∗ c x) = {(x, λx → a)},
d_{abaca}^ctxt(E) = d_a^{λx→a}(x) = {(ε, λx → a)}.
Such an operation allows us to syntactically compute the quotient.

Proposition 5. Let E be a Σ, Γ-expression, ctxt be a context in Ctxt(Γ, Σ) and w be a word in Σ∗. Then:

⋃_{(E′, ctxt′) ∈ d_w^ctxt(E)} L_ctxt′(E′) = ⋃_{(L′, ctxt′) ∈ L_ctxt(E)} {(w⁻¹(L′), ctxt′)}.

Proof. By a direct induction over the structure of words.

Finally, the membership test of a word w can be performed as usual by first computing the derivation w.r.t. w, and then by determining the existence of a nullable derivative, as a direct corollary of Proposition 3 and Proposition 5.

Theorem 2. Let E be a Σ, Γ-expression, ctxt be a context in Ctxt(Γ, Σ) and w be a word in Σ∗. Then the two following conditions are equivalent:
– ∃(L, _) ∈ L_ctxt(E) | w ∈ L,
– ∃(E′, ctxt′) ∈ d_w^ctxt(E) | Null_ctxt′(E′) ≠ ∅.
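As an illustration of Theorem 2 in the running Python sketch (same assumed encoding as before), iterating deriv over a word and testing nullability yields a membership test; on the expression E of Example 15, starting from the empty context:

```python
def deriv_word(e, w, ctxt):
    """d_w^ctxt(e), by iterated symbol derivation."""
    pairs = [(e, ctxt)]
    for a in w:
        pairs = [q for (f, c) in pairs for q in deriv(f, a, c)]
    return pairs

def member(e, w, ctxt):
    """w is accepted iff some derivative is nullable (Theorem 2)."""
    return any(nullable(f, c) for (f, c) in deriv_word(e, w, ctxt))

assert member(E, 'abaca', {}) and not member(E, 'abacb', {})
```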
We have shown how to compute the derivatives and solve the membership test in the classical way. Let us now show how to embed the context computation in a convenient monad, in order to generalize the definitions to structures other than sets.
9.5 The StateT Monad Transformer

Monads do not compose well in general. However, one can consider particular combinations of these objects. Among those, well-known patterns are the monad transformers, like the StateT monad transformer [10]. This combination allows us to mimic the use of global variables in a functional way. In our setting, it allows us to embed the context computation in an elegant way.

Let S be a set and M be a monad. We denote by StateT(S, M) the following mapping:

StateT(S, M)(A) = S → M(A × S).

In other terms, StateT(S, M)(A) is the set of functions from S to the monadic structure M(A × S) built over couples in the cartesian product A × S.

The mapping StateT(S, M) can be equipped with a structure of functor, defined for any function f from a set A to a set B by

StateT(S, M)(f)(state)(s) = M(λ(a, s) → (f(a), s))(state(s)).

It can also be equipped with a structure of monad, defined for any function f from a set A to the set StateT(S, M)(B) by:

pure(a) = λs → pure((a, s)),
bind(f)(state)(s) = state(s) >>= λ(a, s′) → f(a)(s′).
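A minimal Python rendering of this transformer, specialised to the set monad (with lists standing in for sets, and the names pure and bind being ours), might look as follows:

```python
def pure(a):
    """Inject a value: the state is threaded through unchanged."""
    return lambda s: [(a, s)]

def bind(m, f):
    """Sequence m : S -> [(A, S)] with f : A -> (S -> [(B, S)]),
    running f on every outcome of m with its updated state."""
    return lambda s: [q for (a, s1) in m(s) for q in f(a)(s1)]

# e.g. bind(pure(1), lambda n: pure(n + 1))(s0) evaluates to [(2, s0)]
```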
9.6 Monadic Definitions

The previous definitions associated with capture-group expressions can be equivalently restated using the StateT monad transformer specialised with the Set monad.

Let us first consider the following claims, where M = StateT(Ctxt(Γ, Σ), Set), allowing us to relate M to the previous notion of monadic support:

– R = (M(1), +, ×, 1, 0) is a semiring, by setting:
  f1 + f2 = λs → f1(s) ∪ f2(s),
  f1 × f2 = f1 >>= λ_ → f2,
  1 = λs → {(⊤, s)} = pure(⊤),
  0 = λs → ∅,
– M = (M(Exp(Σ)), ±, 0) is a monoid, by setting:
  ± = +,   0 = 0,
– (M, ⋉) is an Exp(Σ)-right-semimodule, by setting:
  f ⋉ F = λs → ⋃_{(E, ctxt) ∈ f(s)} {(E · F, ctxt)},
– (M, ⊲) is an R-left-semimodule, by setting:
  f1 ⊲ f2 = f1 >>= λ_ → f2.
Then, the nullability predicate formulae can be equivalently restated as an element in StateT(Ctxt(Γ, Σ), Set)(1), which is equal by definition to Ctxt(Γ, Σ) → Set(1 × Ctxt(Γ, Σ)), isomorphic to Ctxt(Γ, Σ) → Set(Ctxt(Γ, Σ)). It can be inductively computed as follows:

Null(ε) = 1
Null(∅) = 0
Null(a) = 0
Null(E + F) = Null(E) + Null(F)
Null(E · F) = Null(E) × Null(F)
Null(E∗) = 1
Null(x)(ctxt) = pure((⊤, ctxt)) if ctxt(x) = Just(ε), ∅ otherwise
Null((E)_x^u)(ctxt) = Set(λ(⊤, ctxt′) → (⊤, [ctxt′]_{x←u}))(Null(E)(ctxt))

where E and F are two Σ, Γ-expressions, a is a symbol in Σ, x is a variable in Γ and u is in Σ∗. Notice that these formulae are the same as the ones in Definition 2 as far as classical operators are concerned, and that these formulae can be easily generalized to other convenient monads than Set.
Moreover, the derivative of an expression is an element in StateT(Ctxt(Γ, Σ), Set)(Exp(Σ, Γ)):

d_a(ε) = 0
d_a(∅) = 0
d_a(b) = 0 if a ≠ b, pure(ε) otherwise
d_a(E + F) = d_a(E) ± d_a(F)
d_a(E · F) = d_a(E) ⋉ F + Null(E) ⊲ d_a(F)
d_a(E∗) = d_a(E) ⋉ E∗
d_a((E)_x^u) = StateT(Ctxt(Γ, Σ), Set)(λF → (F)_x^{u·a})(d_a(E))
d_a(x)(ctxt) = pure((w, ctxt)) if ctxt(x) = Just(aw), ∅ otherwise

where E and F are two Σ, Γ-expressions, a is a symbol in Σ, x is a variable in Γ and u is in Σ∗. Once again, notice that these formulae are the same as the ones in Definition 5 as far as classical operators are concerned, and that these formulae can be easily generalized to other convenient monads than Set.
Finally, the derivation w.r.t. a word is monadically defined as in the previous sections:

d_ε(E) = pure(E),
d_{a·v}(E) = d_a(E) >>= d_v,

and the membership test of a word w can be equivalently rewritten as follows:

(d_w(E) >>= Null)(λ_ → Nothing) ≠ ∅.
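In the Python sketch, this monadic recasting is immediate: deriv already has the StateT shape Ctxt → [(Exp, Ctxt)], so the word derivation and the membership test are just pure and bind, with the empty dict playing the role of the everywhere-Nothing context λ_ → Nothing:

```python
def null_m(e):
    # Null(e) viewed in StateT(Ctxt, Set)(1): ctxt -> [((), ctxt')]
    return lambda ctxt: [((), c) for c in nullable(e, ctxt)]

def deriv_word_m(e, w):
    m = pure(e)
    for a in w:            # d_{a.v} = d_a >>= d_v, unfolded left to right
        m = bind(m, lambda f, a=a: (lambda ctxt: deriv(f, a, ctxt)))
    return m

def member_m(e, w):
    # (d_w(E) >>= Null)(empty context) is non-empty iff w is accepted
    return bind(deriv_word_m(e, w), null_m)({}) != []
```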
10 Conclusion and Perspectives

In this paper, we achieved the first step of our plan to unify the derivative computation over word expressions. Monads are indeed useful tools to abstract the underlying computation structures, and thus may allow us to consider other functionalities, such as capture groups via the well-known StateT monad transformer [10]. We aim to study the conditions, satisfied by monads, that lead to a finite set of derived terms, and to extend this method to tree expressions using enriched categories. Finally, we plan to extend monadic derivation to other underlying monads for capture groups, for example linear combinations.
References

1. Antimirov, V.M.: Partial derivatives of regular expressions and finite automaton constructions. Theor. Comput. Sci. 155(2) (1996) 291–319
2. Attou, S., Mignot, L., Miklarz, C., Nicart, F.: Monadic expressions and their derivatives. In: NCMA. Volume 367 of EPTCS (2022) 49–64
3. Berry, G., Sethi, R.: From regular expressions to deterministic automata. Theoretical Computer Science 48 (1986) 117–126
4. Brzozowski, J.A.: Derivatives of regular expressions. J. ACM 11(4) (1964) 481–494
5. Caron, P., Flouret, M.: From Glushkov WFAs to K-expressions. Fundam. Informaticae 109(1) (2011) 1–25
6. Champarnaud, J., Laugerotte, É., Ouardi, F., Ziadi, D.: From regular weighted expressions to finite automata. Int. J. Found. Comput. Sci. 15(5) (2004) 687–700
7. Colcombet, T., Petrisan, D.: Automata and minimization. SIGLOG News 4(2) (2017) 4–27
8. Eisenberg, R.A., Weirich, S.: Dependently typed programming with singletons. In: Haskell, ACM (2012) 117–130
9. Glushkov, V.M.: The abstract theory of automata. Russian Mathematical Surveys 16(5) (1961) 1
10. Jones, M.P.: Functional programming with overloading and higher-order polymorphism. In: Adv. Func. Prog. Volume 925 of LNCS, Springer (1995) 97–136
11. Kleene, S.: Representation of events in nerve nets and finite automata. Automata Studies, Ann. Math. Studies 34 (1956) 3–41, Princeton U. Press
12. Loday, J.L., Vallette, B.: Algebraic operads. Volume 346. Springer Science & Business Media (2012)
13. Lombardy, S., Sakarovitch, J.: Derivatives of rational expressions with multiplicity. Theor. Comput. Sci. 332(1-3) (2005) 141–177
14. May, J.P.: The geometry of iterated loop spaces. Volume 271. Springer (2006)
15. Mignot, L.: Une proposition d'implantation des structures d'automates, d'expressions et de leurs algorithmes associés utilisant les catégories enrichies (in French). Habilitation à diriger des recherches, Université de Rouen Normandie (December 2020) 212 pages
16. Mignot, L.: Monadic derivatives. https://github.com/LudovicMignot/MonadicDerivatives (2022)
17. Schützenberger, M.P.: On the definition of a family of automata. Inf. Control. 4(2-3) (1961) 245–270
18. Sulzmann, M., Lu, K.Z.M.: POSIX regular expression parsing with derivatives. In: FLOPS. Volume 8475 of Lecture Notes in Computer Science, Springer (2014) 203–220
AdvBiom: Adversarial Attacks on Biometric Matchers

Debayan Deb, Vishesh Mistry, Rahul Parthe
TECH5, Troy, MI, USA
{debayan.deb, vishesh.mistry, rahul.parthe}@tech5-sa.com

Abstract

With the advent of deep learning models, face recognition systems have achieved impressive recognition rates. The workhorses behind this success are Convolutional Neural Networks (CNNs) and the availability of large training datasets. However, we show that small human-imperceptible changes to face samples can evade most prevailing face recognition systems. Even more alarming is the fact that the same generator can be extended to other traits in the future. In this work, we present how such a generator can be trained and also extended to other biometric modalities, such as fingerprint recognition systems.
1 Introduction

The last decade has seen a massive influx of deep learning-based technologies that have tackled problems once thought to be unsolvable. Much of this progress can be attributed to Convolutional Neural Networks (CNNs) [1, 2], which are now deployed in a plethora of applications ranging from cancer detection to driving autonomous vehicles. Akin to the computer vision domain, the use of CNNs has completely changed the face of biometrics, owing to the availability of powerful computing devices (GPUs, TPUs) and deep architectures capable of learning rich features [3–5]. Automated face recognition (AFR) systems have been proven to achieve accuracies as high as 99% True Accept Rate (TAR) @ 0.1% False Accept Rate (FAR) [6], majorly owing to publicly available large-scale face datasets.

Unfortunately, studies have shown that CNN-based networks are vulnerable to adversarial perturbations¹ [7–12]. It is not surprising that AFR systems, too, are not impervious to these attacks. Adversarial attacks on an AFR system can be classified into two categories: (i) impersonation attacks, where the hacker tries to perturb his face image to match it to a target victim, and (ii) obfuscation attacks, where the hacker's face image is perturbed to match with a random identity. Both of the above attacks involve the hacker adding targeted, human-imperceptible perturbations to the face image. These adversarial attacks are different from face digital manipulation, which includes attribute manipulation and synthetic faces, and also from presentation attacks, which involve the perpetrator wearing a physical artifact such as a mask or replaying a photograph/video of a genuine individual, and which may be conspicuous in scenarios where human operators are involved.

Let us consider, as an example, the largest deployment of biometric recognition systems: India's Aadhaar Project [13], which currently has an enrolled gallery size of about 1.35 billion faces from nearly all of its citizens. In September 2022 alone, Aadhaar received 1.3 billion authentication requests². In order to deny a citizen his/her rightful access to government benefits, healthcare, and financial services, an attacker can maliciously perturb enrolled face images such that they do not match to the genuine person during verification. In a typical AFR system, adversarial faces can be replaced with a captured face image in order to prevent the probe face from matching to any of its corresponding enrolled faces. Additionally, the attacker can compromise the entire gallery by inserting adversarial faces in the enrolled gallery, where no probe face will match to the correct identity's gallery.

¹ Adversarial perturbations refer to altering an input image instance with small, human-imperceptible changes in a manner that can evade CNN models.
² https://bit.ly/3BzlpZJ
Adversarial attacks can further be categorized into three types, based on how the attack vector is trained and generated:

1. White-box attacks: attacks in which the hacker has full knowledge of the recognition system and iteratively perturbs every pixel via various optimization schemes [14–22].
2. Black-box attacks: with no information about the parameters of the recognition system, black-box attacks are deployed either by transferring attacks learned from an available AFR system [23–28], or by querying the target system for a score [29–31] or a decision [32, 33].
3. Semi-whitebox attacks: here, a white-box model is utilized only during training, and adversarial examples are then synthesized during inference without any knowledge of the deployed AFR model.

We propose an automated adversarial synthesis method, named AdvBiom, which generates an adversarial image for a probe image and satisfies all of the above requirements. The contributions of the paper are as follows:

1. A GAN-based AdvBiom that learns to generate visually realistic adversarial face images that are misclassified by state-of-the-art automated biometric systems.
2. Adversarial images generated via AdvBiom are model-agnostic and transferable, and achieve a high success rate on 5 state-of-the-art automated face recognition systems.
3. Visualizing the regions where pixels are perturbed and analyzing the transferability of AdvBiom.
4. We show that AdvBiom achieves a significantly higher attack success rate under current defense mechanisms compared to baselines.
5. With the addition of the proposed Minutiae Displacement and Distortion modules, we show that AdvBiom can also be extended to successfully evade automated fingerprint recognition systems.
2 Related Work

2.1 Adversarial Attacks

As discussed earlier, adversarial attacks are broadly classified into white-box attacks and black-box attacks. A large number of white-box attacks are gradient-based: they analyze the gradients during the back-propagation of an available face recognition system and perform pixel-wise perturbations to the target face image. While approaches such as FGSM [14] and PGD [17] exploit the high-dimensional space of deep networks to generate adversarial attacks, C&W [18] focuses on minimizing objective functions for optimal adversarial perturbations. However, the basic assumption of white-box attacks, that the target recognition system will be available, is not plausible. In real-life scenarios, the hacker will not have any information regarding the architecture, training, and deployment of the recognition system.

Black-box attacks can be classified into three major categories: transfer-based, score-based, and decision-based attacks. Transfer-based attacks train their adversarial attack generator using readily available recognition systems and then deploy the attacks onto a black-box target system. Dong et al. [23] proposed the use of momentum for efficient transferability of the adversarial samples. DI2-FGSM [24] suggested increasing input data diversity for improving transferability. Other approaches in this category include AI-FGSM [27] and TI-FGSM [28]. Score-based attacks [29–31] query the target system for scores and try to estimate its gradients. Decision-based attacks have the most challenging setting, wherein only the decisions of the target system are queried. Some effective methods in this category include the Evolutionary attack [32] and the Boundary attack [33].
2.2 Adversarial Attacks on Face Recognition

Although adversarial attacks on face recognition systems have only recently been explored, there have been a significant number of effective approaches for evading AFR systems. Attacks on face recognition systems can be broadly categorized into physical attacks and digital attacks. Physical attacks involve generating adversarial physical artifacts which are "worn" on a face. Sharif et al. [34, 35] proposed generating adversarial eyeglass frames for attacking face recognition systems. In [36], adversarial printed stickers placed on a hat were generated. However, methods [34–36] are implemented in a white-box setting, which is unrealistic. Additionally, Nguyen et al. [37] proposed an adversarial light projection attack using an on-premise projector. Yin et al. [38] generated and printed eye makeup patches to be stuck around the eyes. More recently, the authors of [39] proposed an adversarial mask for impersonation attacks in a black-box setting. However, all the above methods suffer the major drawback of being unrealistic in an operational setting where a human operator is present.

Digital attacks refer to manipulating and perturbing the pixels of a digital face image before it is passed through a face recognition system. Early works [9, 18, 10, 8, 40] focused on gradient-based attacks for face recognition. However, these methods apply lp-norm perturbations to each pixel, resulting in decreased attack transferability and vulnerability to denoising models. Cauli et al. [41] implemented a backdoor attack where the target face recognition system's training samples were manipulated. Apart from the fact that gaining access to the target AFR's training samples is highly improbable, a thorough visual inspection of the samples can easily identify the digital artifacts. Other works employ stealthier attack approaches against face recognition models. Dong et al. [32] proposed an evolutionary optimization method for generating adversarial faces in decision-based black-box settings. However, they require a minimum of 1,000 queries to the target face recognition system before a realistic adversarial face can be synthesized. [42] added a conditional variational autoencoder and attention modules to generate adversarial faces in a transfer-based black-box setting. However, they solely focused on impersonation attacks and require at least 5 image samples of the target subject for training and inference. Zhong et al. [43] implemented dropout [44] to improve the transferability of the adversarial examples. [38] perturbed the eye region of a face to produce adversarial eyeshadow artifacts. However, the artifacts are visibly conspicuous under close inspection. Deb et al. [25] used a GAN to generate minimal perturbations in salient facial regions. More recently, [45] and [46] have focused on manipulating facial attributes for targeted adversarial attacks.
3 Adversarial Faces

3.1 Preliminaries

The goal of any attacker is to evade Automated Face Recognition (AFR) systems under either of the two settings:

• Obfuscation: manipulate input face images in a manner such that they cannot be identified as the hacker, or
• Impersonation: edit input face images such that they are identified as a target/desired individual (victim).

While the manipulated face image evades the AFR system, a key requirement of a successful attack is that the input face image should still appear as a legitimate face photo of the attacker. In other words, the attacker desires an automated method of adding small and human-imperceptible changes to an input face image such that it can evade AFR systems while appearing benign to human observers. These changes are denoted as adversarial perturbations and the manipulated image is hereby referred to as an adversarial image³. In addition, the automated method of synthesizing adversarial perturbations is named the adversarial generator.

Formally, given an input face image, x, an adversarial generator has two requirements under the obfuscation scenario:

• synthesize an adversarial face image, x_adv = x + δ, such that AFR systems fail to match x_adv and x, and
• limit the magnitude of the perturbation ||δ||_p such that x_adv appears very similar to x to humans.

When the attack aims to impersonate a target individual, we need an image of the victim, x_target, where the identities of x and x_target are different. Therefore, the constraints under the impersonation setting are as follows:

• synthesize an adversarial face image, x_adv = x + δ, such that AFR systems erroneously match x_adv and x_target, and
• limit the magnitude of the perturbation ||δ||_p such that x_adv appears very similar to x to humans.

Obfuscation attempts (faces are perturbed such that they cannot be identified as the attacker) are generally more effective [25], computationally efficient to synthesize [14, 17], and widely adopted [47] compared to impersonation attacks (perturbed faces can automatically match to a target subject). Therefore, this paper focuses on crafting obfuscation attacks; however, we will still show examples of synthesizing impersonation attacks.

³ We interchangeably use the terms adversarial images and adversarial faces in this paper.
3.2 Gradient-based Attacks

In white-box attacks, the attacker is assumed to have knowledge of, and access to, the AFR system's model and parameters. Naturally, we then expect a much better attack success rate under white-box settings, since the attacker can carefully craft adversarial perturbations that necessarily evade the target AFR system. However, such white-box manipulations of face recognition models are impractical in real-world scenarios. For instance, gaining access to an airport's already-deployed AFR system may be extremely difficult.

Nevertheless, it is advantageous to understand prevailing white-box methods. That is, if given access to a CNN-based AFR system, how could one utilize all of its model parameters to launch a successful adversarial attack?

A common approach is to utilize the gradients of the white-box AFR model: the attacker modifies the image in the direction of the gradient of the loss function with respect to the input image. There are two prevailing approaches to perform such gradient-based attacks:

• one-shot attacks, in which the attacker takes a single step in the direction of the gradient, and
• iterative attacks, where instead of a single step, several steps are taken until a successful adversarial pattern is obtained.
3.2.1 Fast Gradient Sign Method (FGSM)

This method computes an adversarial image by adding a pixel-wise perturbation of magnitude ϵ in the direction of the gradient [14]. Under an FGSM attack, we take a single step in the direction of the gradient, and therefore FGSM is very efficient in terms of computation time. Formally, given an input image x, we obtain an adversarial image x_adv:

x_adv = x + ϵ · sign(∇_x J(x, y))

where J is the loss function used to train the AFR system (typically, softmax cross-entropy loss), and y is the ground-truth class label of x (typically, the subject ID of the identity in x).

FGSM was first proposed for the object classification domain and therefore utilizes softmax probabilities for crafting adversarial perturbations, so the number of object classes is assumed to be known during training and testing. However, face recognition systems do not utilize the softmax layer for classification (as the number of identities is not fixed during deployment); instead, features from the last fully connected layer are used for comparing face images.

We first modify FGSM appropriately in order to evade AFR systems rather than object classifiers. Instead of considering the softmax cross-entropy loss as J, we craft a new loss function that models the real-world scenario⁴:

L_featureMatch = 1 − E_x [ (F(x) · F(x_adv)) / (||F(x)|| ||F(x_adv)||) ].

⁴ For brevity, we denote E_x ≡ E_{x∈P_data}.
where F is the matcher and F(x) is the feature representation of an input image x. The above feature-matching loss computes the cosine distance between a pair of images; minimizing it makes the features of the adversarial image x_adv and the input image x as close as possible. Stepping along the gradient of this loss therefore ensures the features do not match, and hence can be considered an obfuscation adversarial attack.

In Fig. 1, we show the results of launching our modified FGSM attack on a state-of-the-art AFR system, namely ArcFace [3]. We see that, with a single step and with minimal perturbations, the real and adversarial images of Tiger Woods do not match via ArcFace, while humans can easily identify both images as pertaining to the same subject.

Figure 1: Adversarial face synthesized via FGSM [14]: (a) real input image, (b) perturbation, (c) FGSM result. A state-of-the-art face matcher, ArcFace [3], fails to match the adversarial and input image. The cosine similarity score (∈ [−1, 1]) between the two images is 0.27, while a score above 0.36 (threshold @ 0.1% False Accept Rate) indicates that two faces are of the same subject.
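As a concrete sketch of this modified attack (our PyTorch illustration, not the authors' released code), assuming `embed` is any differentiable face-embedding network with inputs normalised to [−1, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm_obfuscation(embed, x, eps=0.03):
    """Single-step FGSM on L_featureMatch = 1 - cos(embed(x), embed(x_adv))."""
    x_adv = x.clone().detach().requires_grad_(True)
    target_feat = embed(x).detach()        # embedding of the clean probe
    loss = 1.0 - F.cosine_similarity(embed(x_adv), target_feat).mean()
    loss.backward()
    # step along the gradient sign to *increase* the loss,
    # i.e. to push the two embeddings apart
    return (x_adv + eps * x_adv.grad.sign()).clamp(-1.0, 1.0).detach()
```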
3.2.2 Projected Gradient Descent (PGD)

An extreme case of white-box attacks is the PGD attack [17], where we assume that the attacker also has an unlimited number of attempts to try to evade the deployed AFR system. Unlike FGSM, PGD is an iterative attack. PGD attempts to find the perturbation δ that maximises the loss of a model on a particular input while keeping the size of the perturbation smaller than a specified amount, referred to as ϵ. We keep iterating until such a δ is obtained. Similar to FGSM, we modify the loss function of PGD to fit the requirements of an AFR system, again using L_featureMatch as the loss. Fig. 2 shows the results of the PGD attack on the ArcFace matcher. Note that, due to multiple iterations, a PGD attack on AFR systems is more powerful (lower cosine similarity) but also more visible to humans than the single-step FGSM attack.

Figure 2: Adversarial face synthesized via PGD [17]: (a) real input image, (b) perturbation, (c) PGD result. A state-of-the-art face matcher, ArcFace [3], fails to match the adversarial and input image. The cosine similarity score (∈ [−1, 1]) between the two images is 0.12, while a score above 0.36 (threshold @ 0.1% False Accept Rate) indicates that two faces are of the same subject.
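A corresponding iterative sketch (again ours, with the same hypothetical `embed`): each step is a small signed-gradient move, followed by projection of the accumulated perturbation back into an L∞ ball of radius ϵ around x:

```python
def pgd_obfuscation(embed, x, eps=0.05, alpha=0.01, steps=40):
    target_feat = embed(x).detach()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = 1.0 - F.cosine_similarity(embed(x_adv), target_feat).mean()
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
            x_adv = x_adv.clamp(-1.0, 1.0)            # stay a valid image
    return x_adv
```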
3.3 Geometric Perturbations (GFLM)

Prior efforts in crafting adversarial faces have also tried non-linear deformations as a natural method for evading AFR systems [48]. Non-linear deformations are applied by performing geometric warping of the input face images.

Unlike traditional adversarial attacks that add an adversarial perturbation δ, the authors of [48] propose a fast method of generating adversarial faces by altering the landmark locations of the input images. The resulting adversarial faces completely lie on the manifold of natural images, which makes it extremely difficult to detect any adversarial perturbations. Results of geometrically warped adversarial faces are presented in Figure 3.

Figure 3: Adversarial face synthesized via GFLM [48]: (a) real input image, (b) perturbation, (c) GFLM result. A state-of-the-art face matcher, ArcFace [3], fails to match the adversarial and input image. The cosine similarity score (∈ [−1, 1]) between the two images is 0.33, while a score above 0.36 (threshold @ 0.1% False Accept Rate) indicates that two faces are of the same subject.
3.4 Attribute-based Perturbations

Unlike geometric-warping and gradient-based attacks that may perturb every pixel in the image, a few studies propose manipulating only salient regions in faces, e.g., eyes, nose, and mouth.

By restricting perturbations to semantic regions of the face, SemanticAdv [46] generates adversarial examples in a more controllable fashion by editing a single semantic aspect through attribute-conditioned image editing. Fig. 4 shows results from adversarially manipulating semantic attributes. We can see that, while the attacks are indeed successful, they come at the cost of altering the perceived identity as well as degrading image quality.

Figure 4: Adversarial faces synthesized via manipulating semantic attributes [46]: (a) real input image, with manipulated attributes (b) blond, (c) bangs, (d) mouth open, (e) eyeglasses, (f) makeup. All adversarial images (b–f) fail to match with the real image (a) via ArcFace [3].
4 AdvBiom: Learning to Synthesize Adversarial Attacks

We find that the majority of prior efforts on crafting adversarial attacks degrade the visual quality to the point where an observant human can still visually pick out the adversarial patterns. We also identify the following challenges with prior efforts:

• Gradient-based attacks rely on white-box settings, where the entire deployed CNN-based AFR system is available to the attacker to compute its gradients.
• Geometrically warping faces generally does not guarantee adversarial success and greatly distorts the face image.
• Semantic attribute manipulation can also degrade visual quality and may lead to more conspicuous changes.

Instead, we propose to train a network to "learn" the salient regions of the face that can be perturbed to evade AFR systems in a semi-whitebox setting. This leads to the following advantages over prior efforts:

• Perceptual Realism: given a large enough training dataset, a network can gradually learn to synthesize adversarial face images that are perceptually realistic, such that a human observer identifies the image as a legitimate face image.
• Higher Attack Success: the faces can be perturbed in a learned manner such that they cannot be identified as the hacker (obfuscation attack) or are automatically matched to a target subject (impersonation attack) by an AFR system.
• Controllability: the amount of perturbation is controllable by the attacker, so that they can examine the success of the learning model as a function of the amount of perturbation.
• Transferability: due to the semi-whitebox setting, once the network learns to generate the perturbed instances based on a single face recognition system, attacks can be transferred to any black-box AFR system.

We propose an automated adversarial biometric synthesis method, named AdvBiom, which generates an adversarial image for a probe face image and satisfies all the above requirements.
4.1 Methodology

Our goal is to synthesize a face image that visually appears to pertain to the target face, yet automatic face recognition systems either incorrectly match the synthesized image to another person or do not match it to the target's gallery images. AdvBiom comprises a generator G, a discriminator D, and a face matcher (see Figure 5).

Figure 5: Given a probe face image, AdvBiom automatically generates an adversarial mask that is then added to the probe to obtain an adversarial face image.
Generator. The proposed generator takes an input face image, x ∈ X, and outputs an image, G(x). The generator is conditioned on the input image x; for different input faces, we will get different synthesized images.

Since our goal is to obtain an adversarial image that is metrically similar to the probe x in the image space, it is not desirable to perturb all the pixels in the probe image. For this reason, we treat the output of the generator as an additive mask, and the adversarial face is defined as x + G(x). If the magnitude of the pixels in G(x) is minimal, then the adversarial image consists mostly of the probe x. Here, we denote G(x) as an "adversarial mask". In order to bound the magnitude of the adversarial mask, we introduce a perturbation loss during training by minimizing the L2 norm⁵:

L_perturbation = E_x [max(ϵ, ‖G(x)‖₂)]    (1)

where ϵ ∈ [0, ∞) is a hyperparameter that controls the minimum amount of perturbation allowed.

In order to achieve our goal of impersonating a target subject's face or obfuscating one's own identity, we need a face matcher, F, to supervise the training of AdvBiom. For an obfuscation attack, at each training iteration, AdvBiom tries to minimize the cosine similarity between the face embeddings of the input probe x and the generated image x + G(x) via an identity loss function:

L_identity = E_x [F(x, x + G(x))]    (2)

For an impersonation attack, AdvBiom maximizes the cosine similarity between the face embeddings of a randomly chosen target's probe, y, and the generated adversarial face x + G(x) via:

L_identity = E_x [1 − F(y, x + G(x))]    (3)

The perturbation and identity loss functions force the network to learn the salient facial regions that can be perturbed minimally in order to evade automatic face recognition systems.

Discriminator. Akin to previous work on GANs [49, 50], we introduce a discriminator in order to encourage perceptual realism of the generated images. We use a fully convolutional network as a patch-based discriminator [50]. Here, the discriminator, D, aims to distinguish between a probe, x, and a generated adversarial face image, x + G(x), via a GAN loss:

L_GAN = E_x [log D(x)] + E_x [log(1 − D(x + G(x)))]    (4)

Finally, AdvBiom is trained in an end-to-end fashion with the following objectives:

min_D L_D = −L_GAN    (5)
min_G L_G = L_GAN + λ_i L_identity + λ_p L_perturbation    (6)

where λ_i and λ_p are hyper-parameters controlling the relative importance of the identity and perturbation losses, respectively. Note that L_GAN and L_perturbation encourage the generated images to be visually similar to the original face images, while L_identity optimizes for a high attack success rate. After training, the generator G can generate an adversarial face image for any input image and can be tested on any black-box face recognition system.

The overall algorithm describing the training procedure of AdvBiom can be found in Algorithm 1.

⁵ For brevity, we denote E_x ≡ E_{x∈X}.
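Before turning to Algorithm 1, the three generator-side terms can be sketched as follows (our PyTorch illustration under the obfuscation setting; `G`, `D` and `matcher` are stand-ins for the generator, a patch discriminator with sigmoid outputs, and the cosine-similarity face matcher F):

```python
def generator_loss(G, D, matcher, x, eps=3.0, lam_i=10.0, lam_p=1.0):
    mask = G(x)                                    # adversarial mask G(x)
    x_adv = (x + mask).clamp(-1.0, 1.0)
    # Eq. (1): per-sample L2 norm of the mask, hinged at eps
    l_prt = torch.clamp(mask.flatten(1).norm(dim=1), min=eps).mean()
    # Eq. (2): cosine similarity between probe and adversary (obfuscation)
    l_idt = matcher(x, x_adv).mean()
    # generator part of Eq. (4): push D(x_adv) towards "real"
    l_gan = torch.log(1.0 - D(x_adv) + 1e-8).mean()
    return l_gan + lam_i * l_idt + lam_p * l_prt   # Eq. (6)
```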
4.2 Experimental Results

Evaluation Metrics. We quantify the effectiveness of the adversarial attacks generated by AdvBiom and other state-of-the-art baselines via (i) attack success rate and (ii) structural similarity (SSIM).

The attack success rate for an obfuscation attack is computed as

Attack Success Rate = (No. of comparisons < τ) / (Total no. of comparisons)    (7)

where each comparison consists of a subject's adversarial probe and an enrollment image. Here, τ is a pre-determined threshold computed at, say, 0.1% FAR⁶. The attack success rate for an impersonation attack is defined as

Attack Success Rate = (No. of comparisons ≥ τ) / (Total no. of comparisons)    (8)

Here, a comparison comprises an adversarial image synthesized with a target's probe, matched to the target's enrolled image. We evaluate the success rate for the impersonation setting via 10-fold cross-validation, where each fold consists of a randomly chosen target.

Similar to prior studies [42], in order to measure the similarity between the adversarial example and the input face, we compute the structural similarity index (SSIM) between the images. SSIM is a normalized metric ranging from −1 (completely different image pairs) to 1 (identical image pairs).

⁶ For each face matcher, we pre-compute the threshold at 0.1% FAR on all possible image pairs in LFW; e.g., the threshold @ 0.1% FAR for ArcFace is 0.28.
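For instance, Eqs. (7) and (8) amount to the following small helper over a list of comparison scores (ours, not the paper's evaluation harness):

```python
def attack_success_rate(scores, tau, impersonation=False):
    """Fraction of comparisons that evade the matcher: scores below tau
    for obfuscation (Eq. 7), at or above tau for impersonation (Eq. 8)."""
    hits = [s >= tau if impersonation else s < tau for s in scores]
    return sum(hits) / len(hits)
```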
Algorithm 1 — Training AdvBiom. All experiments in this work use α = 0.0001, β1 = 0.5, β2 = 0.9, λ_i = 10.0, λ_p = 1.0, m = 32. We set ϵ = 3.0 (obfuscation) and ϵ = 8.0 (impersonation).

Input:
  X — training dataset
  F — cosine similarity between an image pair, obtained by the biometric matcher
  G — generator with weights G_θ
  D — discriminator with weights D_θ
  m — batch size
  α — learning rate

1:  for number of training iterations do
2:    Sample a batch of probes {x(i)}, i = 1..m, from X
3:    if impersonation attack then
4:      Sample a batch of target images y(i) from X
5:      δ(i) = G(x(i), y(i))
6:    else if obfuscation attack then
7:      δ(i) = G(x(i))
8:    end if
9:    x_adv(i) = x(i) + δ(i)
10:   L_perturbation = (1/m) Σ_i max(ϵ, ‖δ(i)‖₂)
11:   if impersonation attack then
12:     L_identity = (1/m) Σ_i (1 − F(y(i), x_adv(i)))
13:   else if obfuscation attack then
14:     L_identity = (1/m) Σ_i F(x(i), x_adv(i))
15:   end if
16:   L_GAN^G = (1/m) Σ_i log(1 − D(x_adv(i)))
17:   L_D = −(1/m) Σ_i [log D(x(i)) + log(1 − D(x_adv(i)))]
18:   L_G = L_GAN^G + λ_i L_identity + λ_p L_perturbation
19:   G_θ = Adam(∇_G L_G, G_θ, α, β1, β2)
20:   D_θ = Adam(∇_D L_D, D_θ, α, β1, β2)
21: end for
Datasets. We train AdvBiom on CASIA-WebFace [51] and then test on LFW [52]⁷.

• CASIA-WebFace [51] comprises 494,414 face images belonging to 10,575 different subjects. We removed 84 subjects that are also present in LFW and in the testing images in this paper.
• LFW [52] contains 13,233 web-collected images of 5,749 different subjects. In order to compute the attack success rate, we only consider subjects with at least two face images. After this filtering, 9,614 face images of 1,680 subjects are available for evaluation.

All the testing images in this paper have no identity overlap with the training set, CASIA-WebFace [51].

Data Preprocessing. All face images are passed through the MTCNN face detector [53] to detect five landmarks (two eyes, nose, and two mouth corners). The face images are aligned via a similarity transformation. After the transformation, the images are resized to 160 × 160. Prior to training and testing, each pixel in the RGB image is normalized by subtracting 127.5 and dividing by 128.

Experimental Settings. We use ADAM optimizers in TensorFlow with β1 = 0.5 and β2 = 0.9 for the entire network. Each mini-batch consists of 32 face images. We train AdvBiom for 200,000 steps with a fixed learning rate of 0.0001. Since our goal is to generate adversarial faces with a high success rate, the identity loss is of utmost importance. We empirically set λ_i = 10.0 and λ_p = 1.0. We train two separate models, with ϵ = 3.0 and ϵ = 8.0 for obfuscation and impersonation attacks, respectively.

⁷ Training on CASIA-WebFace and evaluating on LFW is a common approach in the face recognition literature [3, 4].
Figure 6: Adversarial face synthesis results on the LFW dataset in (a) obfuscation and (b) impersonation attack settings (cosine similarity scores obtained from ArcFace [3] with threshold @ 0.1% FAR = 0.28). Panel (a) compares gallery/probe pairs against adversarial probes from the proposed AdvBiom, GFLM [48], PGD [17], and FGSM [14]; panel (b) compares the target's gallery and probe against the proposed AdvBiom, A3GN [42], and FGSM [14]. The proposed method synthesizes adversarial faces that are seemingly inconspicuous and maintain high perceptual quality.
Architecture. Let c7s1-k be a 7 × 7 convolutional layer with k filters and stride 1. dk denotes a 4 × 4 convolutional layer with k filters and stride 2. Rk denotes a residual block that contains two 3 × 3 convolutional layers. uk denotes a 2× upsampling layer followed by a 5 × 5 convolutional layer with k filters and stride 1. We apply Instance Normalization and Batch Normalization to the generator and discriminator, respectively. We use Leaky ReLU with slope 0.2 in the discriminator and ReLU activation in the generator. The architectures of the two modules are as follows (a code sketch of the generator follows below):

• Generator: c7s1-64, d128, d256, R256, R256, R256, u128, u64, c7s1-3
• Discriminator: d32, d64, d128, d256, d512

A 1 × 1 convolutional layer with 3 filters and stride 1 is attached to the last convolutional layer of the discriminator for the patch-based GAN loss L_GAN.
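Read as PyTorch, the generator specification unrolls roughly as follows (a sketch based on our reading of the notation; the padding sizes are our assumptions, as the paper does not state them):

```python
import torch.nn as nn

def c7s1(cin, cout):                  # 7x7 conv, stride 1
    return nn.Sequential(nn.Conv2d(cin, cout, 7, 1, 3),
                         nn.InstanceNorm2d(cout), nn.ReLU(True))

def d(cin, cout):                     # 4x4 conv, stride 2 (downsample)
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1),
                         nn.InstanceNorm2d(cout), nn.ReLU(True))

def u(cin, cout):                     # 2x upsample + 5x5 conv, stride 1
    return nn.Sequential(nn.Upsample(scale_factor=2),
                         nn.Conv2d(cin, cout, 5, 1, 2),
                         nn.InstanceNorm2d(cout), nn.ReLU(True))

class R(nn.Module):                   # residual block: two 3x3 convs
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, 1, 1), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, 1, 1), nn.InstanceNorm2d(ch))
    def forward(self, x):
        return x + self.body(x)

generator = nn.Sequential(            # c7s1-64,d128,d256,R256x3,u128,u64,c7s1-3
    c7s1(3, 64), d(64, 128), d(128, 256), R(256), R(256), R(256),
    u(256, 128), u(128, 64), nn.Conv2d(64, 3, 7, 1, 3), nn.Tanh())
```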
We apply the tanh activation function to the last convolutional layer of the generator to ensure that the generated image lies in [−1, 1]. In this paper, we denote the output of the tanh layer as the "adversarial mask", with G(x) ∈ [−1, 1] and x ∈ [−1, 1]. The final adversarial image is computed as

x_adv = 2 · clamp(G(x) + (x + 1)/2, 0, 1) − 1.

This ensures that G(x) can either add or subtract pixels from x when G(x) ≠ 0; when G(x) → 0, then x_adv → x.
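In code, that composition is a single line (tensor names are ours):

```python
# x, G_x in [-1, 1]; rescale to [0, 1], add the mask, clamp, rescale back
x_adv = 2.0 * torch.clamp(G_x + (x + 1.0) / 2.0, 0.0, 1.0) - 1.0
```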
Table 1: Attack success rates and structural similarities between probe and gallery images for obfuscation and impersonation attacks. Attack rates for obfuscation comprise 484,514 comparisons; for impersonation, the mean and standard deviation across 10 folds are reported. The mean and standard deviation of the structural similarities between adversarial and probe images, along with the time taken to generate a single adversarial image (on a Quadro M6000 GPU), are also reported. FaceNet [5] is the white-box matcher (used for training); all other matchers are black-box (never used in training).

Obfuscation Attack
                                    Proposed AdvBiom   GFLM [48]     PGD [17]      FGSM [14]
Attack Success Rate (%) @ 0.1% FAR
  FaceNet [5]                       99.67              23.34         99.70         99.96
  SphereFace [4]                    97.22              29.49         99.34         98.71
  ArcFace [3]                       64.53              03.43         33.25         35.30
  COTS-A                            82.98              08.89         18.74         32.48
  COTS-B                            60.71              05.05         01.49         18.75
Structural Similarity               0.95 ± 0.01        0.82 ± 0.12   0.29 ± 0.06   0.25 ± 0.06
Computation Time (s)                0.01               3.22          11.74         0.03

Impersonation Attack
                                    Proposed AdvBiom   A3GN [42]      PGD [17]       FGSM [14]
Attack Success Rate (%) @ 0.1% FAR
  FaceNet [5]                       20.85 ± 0.40       05.99 ± 0.19   76.79 ± 0.26   13.04 ± 0.12
  SphereFace [4]                    20.19 ± 0.27       07.94 ± 0.19   09.03 ± 0.39   02.34 ± 0.03
  ArcFace [3]                       24.30 ± 0.44       17.14 ± 0.29   19.50 ± 1.95   08.34 ± 0.21
  COTS-A                            20.75 ± 0.35       15.01 ± 0.30   01.76 ± 0.10   01.40 ± 0.08
  COTS-B                            19.85 ± 0.28       10.23 ± 0.50   12.49 ± 0.24   04.67 ± 0.16
Structural Similarity               0.92 ± 0.02        0.69 ± 0.04    0.77 ± 0.04    0.48 ± 0.75
Computation Time (s)                0.01               0.04           11.74          0.03
687
- Face Recognition Systems
688
- For all our experiments, we employ 5 state-of-the-art face matchers8.
689
- Three of them are publicly available, namely, FaceNet [5], SphereFace [4], and ArcFace [3]. We also
690
- report our results on two commercial-off-the-shelf (COTS) face matchers, COTS-A and COTS-B9.
691
- We use FaceNet [5] as the white-box face recognition model, F, during training. All the testing
692
- images in this paper are generated from the same model (trained only with FaceNet) and tested on
693
- different matchers.
694
- 4.2.1 Comparison with Prevailing Adversarial Face Generators
696
- We compare our adversarial face synthesis method with state-of-the-art methods that have specifi-
697
- cally been implemented or proposed for faces, including GFLM [48], PGD [17], FGSM [14], and
698
- A3GN [42]10. In Table 1, we find that, compared to the state of the art, AdvBiom generates adversarial
- faces that are similar to the probe (Figure 6).
- Moreover, the adversarial images attain a high attack success rate on 4 state-of-the-art
- black-box AFR systems in both obfuscation and impersonation settings. AdvBiom learns to perturb
702
- the salient regions of the face, unlike PGD [17] and FGSM [14], which alter every pixel in the
703
- image. GFLM [48], on the other hand, geometrically warps the face images and thereby, results
704
- in low structural similarity. In addition, the state-of-the-art matchers are robust to such geometric
705
- deformation which explains the low success rate of GFLM on face matchers. A3GN, another
706
- GAN-based method, however, fails to achieve a reasonable success rate in an impersonation setting.
707
- 8 All the open-source and COTS matchers achieve 99% accuracy on LFW under the LFW protocol.
- 9 Both COTS-A and COTS-B utilize CNNs for face recognition. COTS-B is one of the top performers in the
- NIST Ongoing Face Recognition Vendor Test (FRVT) [54].
- 10 We train the baselines using their official implementations (detailed in the supplementary material).
711
- 4.2.2 Ablation Study
715
- In order to analyze the importance of each module in our system, in Figure 7, we train three variants
716
- of AdvBiom for comparison by removing the discriminator (D), perturbation loss Lperturbation, and
717
- identity loss Lidentity, respectively.
718
- Figure 7: Variants of AdvBiom trained without the discriminator (w/o D), the perturbation loss (w/o Lprt), and the identity loss (w/o Lidt), respectively, compared against the input and the full model ("with all").
- Every component of AdvBiom is necessary.
725
- The discriminator helps ensure that the visual quality of the synthesized faces is maintained. With
- the generator alone, undesirable artifacts are introduced. Without the proposed perturbation loss,
- perturbations in the adversarial mask are unbounded, which leads to a lack of perceptual
- quality. The identity loss is imperative in ensuring an adversarial image is obtained. Without the
729
- identity loss, the synthesized image cannot evade state-of-the-art face matchers. We find that every
730
- component of AdvBiom is necessary in order to obtain an adversarial face that is not only perceptually
731
- realistic but can also evade state-of-the-art face matchers.
732
- 4.2.3 What is AdvBiom Learning?
734
- Via Lperturbation, during training, AdvBiom learns to perturb only the salient facial regions that can
735
- evade the face matcher, F (FaceNet [5] in our case). In Figure 8, AdvBiom synthesizes the adversarial
736
- masks corresponding to the probes. We then threshold the mask to extract pixels with perturbation
737
- magnitudes exceeding 0.40. It can be inferred that the eyebrows, eyeballs, and nose contain highly
738
- discriminative information that an AFR system utilizes to identify an individual. Therefore, perturbing
739
- these salient regions is enough to evade state-of-the-art face recognition systems.
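- A small sketch of this thresholding step (NumPy); the 0.40 cut-off is the one quoted above, and the per-pixel magnitude reduction over channels is our choice.
```python
# Keep only mask pixels whose perturbation magnitude exceeds the threshold.
import numpy as np

def salient_regions(mask, threshold=0.40):
    """mask: HxWx3 adversarial mask in [-1, 1]; returns a boolean saliency map."""
    magnitude = np.abs(mask).max(axis=-1)   # per-pixel perturbation magnitude
    return magnitude > threshold
```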
740
- 4.2.4 Transferability of AdvBiom
742
- In Table 1, we find that attacks synthesized by AdvBiom, when trained on a white-box matcher
- (FaceNet), can successfully evade 4 other face matchers that are not utilized during training, in both
744
- obfuscation and impersonation settings. In order to investigate the transferability property of AdvBiom,
745
- we extract face embeddings of real images and their corresponding adversarial images, under the
746
- obfuscation setting, via the white-box matcher (FaceNet) and a black-box matcher (ArcFace). In total,
747
- we extract feature vectors from 1,456 face images of 10 subjects in the LFW dataset [52]. In Figure 9,
748
- we plot the correlation heatmap between face features of real images, their corresponding adversarial
749
- masks and adversarial images. First, we observe that face embeddings of real images extracted by
750
- FaceNet and ArcFace are correlated in a similar fashion. This indicates that both matchers extract
751
- features with related pairwise correlations. Consequently, perturbing salient features for FaceNet
752
- can lead to high attack success rates for ArcFace as well. The similarity among the correlation
753
- distributions of both matchers can also be observed when adversarial masks and adversarial images
754
- are input to the matchers. That is, receptive fields for automatic face recognition systems attend to
755
- similar regions in the face.
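- The correlation analysis above reduces to a pairwise correlation matrix over embeddings; the sketch below shows this with NumPy, where embed_facenet and embed_arcface in the usage comment are placeholders for the respective feature extractors, not real API names.
```python
# Pairwise correlation heatmap between face embeddings from one matcher.
import numpy as np

def correlation_heatmap(embeddings):
    """embeddings: (N, d) array of face features; returns an N x N correlation matrix."""
    return np.corrcoef(embeddings)

# corr_real = correlation_heatmap(embed_facenet(real_images))
# corr_adv  = correlation_heatmap(embed_facenet(adv_images))
# Comparing the heatmaps produced for FaceNet vs. ArcFace exposes the similar
# pairwise structure discussed above.
```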
756
- (Figure 8 panels: Probe, Adv. Mask, Visualization, Adv. Image; ArcFace cosine similarities 0.12 and 0.26 for the two examples.)
764
- Figure 8: State-of-the-art face matchers can be evaded by slightly perturbing salient facial regions, such as
765
- eyebrows, eyeballs, and nose (cosine similarity obtained via ArcFace [3]).
766
- Figure 9: Correlation between face features extracted via FaceNet and ArcFace from 1,456 images belonging to
767
- 10 subjects.
768
- To further illustrate the distributions of the embeddings of real and synthesized images, we plot
769
- the 2D t-SNE visualization of the face embeddings for the 10 subjects in Figure 10. The identity
770
- clusterings can be clearly observed from both real and adversarial images. In particular, the adversarial
771
- counterpart of each subject forms a new cluster that draws closer to the adversarial clusterings of
772
- other subjects. This shows that AdvBiom perturbs only salient pixels related to face identity while
773
- maintaining a semantic meaning in the feature space, resulting in a similar manifold of synthesized
774
- faces to that of real faces.
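- The visualization step can be sketched with scikit-learn's t-SNE; the perplexity and seed below are arbitrary choices, not values from the paper.
```python
# 2D t-SNE projection of stacked real and adversarial face embeddings.
import numpy as np
from sklearn.manifold import TSNE

def tsne_2d(features, perplexity=30, seed=0):
    """features: (N, d) embeddings; returns (N, 2) coordinates for plotting."""
    return TSNE(n_components=2, perplexity=perplexity,
                random_state=seed).fit_transform(features)
```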
775
- 4.2.5 Controllable Perturbation
777
- The perturbation loss Lperturbation is governed by a hyper-parameter ϵ: the L2 norm of the
- adversarial mask is penalized only once it exceeds ϵ, so ϵ acts as a perturbation budget. Without this slack, the adversarial mask collapses to a blank
- image with no changes to the probe. With ϵ, we can observe a trade-off between the attack success
780
- rate and the structural similarity between the probe and synthesized adversarial face (Fig. 11). A
781
- higher ϵ leads to less perturbation restriction, resulting in a higher attack success rate at the cost of a
782
- lower structural similarity. For an impersonation attack, this implies that the adversarial image may
783
- contain facial features from both the hacker and the target. In our experiments, we chose ϵ = 8.0 and
- ϵ = 3.0 for impersonation and obfuscation attacks, respectively.
- (Figure 9 heatmap panels: Real Image, Adversarial Mask, Adversarial Image for FaceNet and ArcFace; correlation colorbar from -0.4 to 0.8.)
- Figure 10: 2D t-SNE visualization of face representations extracted via FaceNet and ArcFace from 1,456 images
- belonging to 10 subjects. (Panels: FaceNet and ArcFace; Real Image and Adversarial Image (Obfuscation).)
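- A plausible hinge-style implementation of this ϵ-bounded perturbation loss is sketched below; the exact functional form is our assumption, chosen to match the budget behavior described above (no gradient below ϵ, linear penalty above it).
```python
# Hinge-style perturbation loss: penalize mask L2 norms only beyond epsilon.
import torch

def perturbation_loss(mask, eps):
    """mask: (B, C, H, W) adversarial mask; eps: perturbation budget."""
    norms = mask.flatten(1).norm(p=2, dim=1)   # per-sample L2 norm
    return torch.clamp(norms, min=eps).mean()  # max(eps, ||G(x)||_2)
```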
801
- (Figure 11 plot data: Hyper-parameter (ε) on the x-axis from 4 to 16; Success Rate (%) rising from 5 to 66; Structural Similarity between 0.69 and 0.95; marked operating points at ε = 4.0, 8.0, 10.0, and 16.0.)
827
- Figure 11: Trade-off between attack success rate and structural similarity for impersonation attacks.
828
- 4.2.6 Attacks via AdvBiom Beyond Faces
- We now show that the AdvBiom method, coupled with the proposed Minutiae Displacement and
831
- Distortion Modules, can be extended to effectively generate adversarial fingerprints which are visually
832
- similar to corresponding probe fingerprints while evading two state-of-the-art COTS fingerprint
833
- matchers as well as a deep network-based fingerprint matcher.
834
- (Figure 12 panel scores, two example rows each: (a) Enrolled Mate: FS 0.97 / FS 0.92; (b) Input Probe: VS 235, FS 0.96 / VS 172, FS 0.99; (c) AdvBiom: VS 31, FS 0.92 / VS 10, FS 0.92; (d) DeepFool [16]: VS 134, FS 0.96 / VS 104, FS 0.96; (e) PGD [17]: VS 139, FS 0.95 / VS 104, FS 0.96.)
851
- Figure 12: Example probe and corresponding mate fingerprints along with synthesized adversarial probes. (a)
- Two example mate fingerprints from NIST SD4 [55], and (b) the corresponding probes. Adversarial probe
- fingerprints using different approaches are shown in: (c) the proposed synthesis method, AdvBiom; (d-e) the
- state-of-the-art methods DeepFool and PGD, respectively. The VeriFinger v11.0 match score (probe v. mate), VS,
- and the fingerprintness score (degree of similarity of a given image to a fingerprint pattern), FS ∈ [0, 1] [56]
- (1 highest, 0 lowest), are given below each image. A VS above 48 (at 0.01% FAR)
- indicates a successful match between the probe and the mate. The proposed attack AdvBiom successfully
- evades COTS and deep network-based matchers, while maintaining visual fingerprint perceptibility and high
- fingerprintness scores.
860
- Grosz et al. [57] showed that random minutiae position displacements and non-linear distortions
861
- drastically affected the performance of COTS fingerprint matchers. AdvBiom builds upon these two
862
- perturbations and, when given a probe fingerprint, can synthesize an adversarial fingerprint image that
- retains all of the original fingerprint attributes except the identity, i.e., a fingerprint recognition system
864
- should not match the adversarial fingerprint to the probe fingerprint (obfuscation attack).
865
- Figure 13 shows the schematic of AdvBiom conditioned for fingerprints. The following subsections
866
- explain the major components of the approach in detail.
867
- Minutiae Displacement Module
868
- While the authors in [57] showed the effectiveness of random
869
- minutiae position displacements on COTS matchers, they studied the effect of this perturbation by
870
- directly modifying the minutiae template instead of the fingerprint image (pixel space). However, an
- attacker may only have access to the source fingerprint image itself, and obtaining its minutiae
- template via COTS minutiae extractors may not be feasible. Thus, we propose a minutiae displacement
873
- module Gdisp which, given a fingerprint image, displaces its minutiae points in random directions by
874
- a predefined distance. To extract minutiae points from a fingerprint image, we employ a minutiae map
875
- extractor (M) from [58]. For a fingerprint image of width w and height h, M outputs a 12 channel
876
- heat map H ∈ Rh×w×12, where if H(i,j,c), value of the heat map at position (i,j) and channel c, is
877
- greater than a threshold mt and is the local maximum in its 5 × 5 × 3 neighboring cube, a minutia is
- marked at (i, j). The minutia direction θ is calculated by maximising the quadratic interpolation with
- respect to:
- f((c − 1)π/6) = H(i, j, (c − 1) mod 12)    (9)
- f(cπ/6) = H(i, j, c)    (10)
- f((c + 1)π/6) = H(i, j, (c + 1) mod 12)    (11)
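- A sketch of this extraction procedure (NumPy), combining the threshold test, the 5 × 5 × 3 local-maximum check, and the quadratic orientation interpolation of Eqs. (9)-(11); the closed-form peak-offset formula is a standard interpolation choice, not quoted from the paper.
```python
# Detect minutiae (location + orientation) from the 12-channel heat map H.
import numpy as np

def minutiae_from_heatmap(H, mt=0.2):
    h, w, _ = H.shape
    minutiae = []
    for i in range(2, h - 2):
        for j in range(2, w - 2):
            for c in range(12):
                v = H[i, j, c]
                if v <= mt:
                    continue
                # 5 x 5 x 3 neighborhood; the orientation axis wraps around
                cube = H[i-2:i+3, j-2:j+3, :][:, :, [(c-1) % 12, c, (c+1) % 12]]
                if v < cube.max():
                    continue
                # Quadratic peak interpolation over Eqs. (9)-(11)
                f0, f1, f2 = H[i, j, (c-1) % 12], v, H[i, j, (c+1) % 12]
                denom = f0 - 2 * f1 + f2
                offset = 0.5 * (f0 - f2) / denom if denom != 0 else 0.0
                theta = (c + offset) * np.pi / 6
                minutiae.append((i, j, theta % (2 * np.pi)))
    return minutiae
```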
901
- Figure 14 shows a fingerprint image and its corresponding 12 channel minutiae map. Once M
902
- extracts a minutiae map Hprobe from the input probe fingerprint x, we detect minutiae points by
903
- applying a threshold of 0.2 on Hprobe and finding closed contours. Each detected contour, at say
- location (i, j), is displaced by a predefined L1 distance d = |∆i| + |∆j|, giving us the target minutiae
- map Htarget.
- (Figure 13 schematic components: probe fingerprint, Minutiae Map Extractor (M), original/target/predicted minutiae maps, minutiae pixel displacement, Minutiae Displacement Module (Gdisp), displaced fingerprint, Distortion Module (Gdist), adversarial fingerprint, Discriminator (D), and the losses Lgan, Lmmap_sim, Lmmap_dis, and Lpixel.)
- Figure 13: Schematic of AdvBiom for generating adversarial fingerprints. Given a probe fingerprint image, it is
- passed to Gdisp, which randomly displaces its minutiae points. The distortion module (Gdist) identifies control
- points on the displaced fingerprint and non-linearly distorts the image to output the adversarial fingerprint. The
- solid black arrows show the forward pass of the network while the dotted black arrows show the propagation of
- the losses.
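- The contour-based detection and random displacement can be sketched with OpenCV; representing each contour by its centroid and splitting the L1 budget d randomly between the two axes are our simplifications.
```python
# Build displaced target minutiae locations from the probe's heat map.
import numpy as np
import cv2

def displaced_minutiae_targets(H_probe, d=20, threshold=0.2, rng=None):
    """Detect minutiae as contour centroids per channel and shift each by a
    random direction with fixed L1 distance d."""
    rng = rng or np.random.default_rng()
    targets = []
    for c in range(H_probe.shape[2]):
        binary = (H_probe[:, :, c] > threshold).astype(np.uint8)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for cnt in contours:
            m = cv2.moments(cnt)
            if m["m00"] == 0:
                continue
            i, j = int(m["m01"] / m["m00"]), int(m["m10"] / m["m00"])
            di = int(rng.integers(-d, d + 1))          # random split of the budget
            dj = (d - abs(di)) * int(rng.choice([-1, 1]))
            targets.append((i + di, j + dj, c))        # displaced minutia, channel c
    return targets
```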
939
- (Figure 14 channel orientations: 0, π/6, π/3, π/2, 2π/3, 5π/6, π, 7π/6, 4π/3, 3π/2, 5π/3, 11π/6.)
- Figure 14: The 12 channel minutiae map of an example fingerprint image shown on the left. The minutiae points
- (shown in red) are marked by a COTS minutiae extractor. The bright spots in each channel image indicate the
- spatial locations of minutiae points, while the kth channel (k ∈ [0, 11]) indicates the contribution of minutiae
- points to the kπ/6 orientation.
955
- The minutiae displacement module Gdisp is essentially an autoencoder conditioned on the probe
956
- fingerprint x and the target minutiae map Htarget. It learns to generate a displaced fingerprint xdisp
957
- whose predicted minutiae map Hpred is as close as possible to the target minutiae map Htarget in the
958
- pixel space. To achieve this, we have three losses that govern Gdisp:
959
- Lmmap_sim = ||Htarget − Hpred||1    (12)
- Lmmap_dis = 1 / ||Htarget − Hprobe||1    (13)
- where Lmmap_sim is the minutiae map similarity loss, which minimises the distance between the
- predicted and target minutiae maps, while the minutiae map dissimilarity loss Lmmap_dis maximises
967
- the distance between the predicted and probe minutiae maps. In Figure 15, we show two example
976
- probe fingerprints and their corresponding displaced fingerprints after passing through Gdisp.
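- The two losses of Eqs. (12)-(13) translate directly into code; the small eps guard against division by zero is ours.
```python
# Minutiae-map similarity and dissimilarity losses, Eqs. (12)-(13).
import torch

def mmap_losses(H_target, H_pred, H_probe, eps=1e-6):
    l_sim = (H_target - H_pred).abs().sum()                  # Eq. (12)
    l_dis = 1.0 / ((H_target - H_probe).abs().sum() + eps)   # Eq. (13)
    return l_sim, l_dis
```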
977
- Distortion Module
978
- One of the most noteworthy conclusions from [57] was that non-linear distortion
- of minutiae points was one of the most successful perturbations for lowering the similarity scores
980
- between perturbed and corresponding unperturbed fingerprints. Again, the non-linear distortion
981
- was applied to all the minutiae points in the template and not to the image. Thus, our next step in
982
- generating adversarial fingerprints consists of a distortion module Gdist which learns to distort salient
983
- points in a fingerprint image.
984
- The architecture of Gdist consists of an encoder conditioned on the input probe fingerprint x and the
985
- target minutiae map Hprobe. The output from the encoder is a predefined number of control points c (see footnote 11).
986
- The non-linear distortion model proposed in [59], learned using a thin plate spline (TPS) model [60]
987
- from 320 already distorted fingerprint videos, was employed to calculate the displacements of the
988
- predicted control points. The hyper-parameter σ is used to indicate the extent of the distortion. The
989
- control points and their displacements are then fed to a differentiable warping module [61] to get the
990
- resultant adversarial fingerprint xadv.
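- The actual system uses the statistical TPS distortion model of [59] and the differentiable warping module of [61]. As a self-contained stand-in, the sketch below warps the image with Gaussian displacement fields centered at the predicted control points via grid_sample; this substitute warp is an assumption, not the paper's module.
```python
# Differentiable non-linear distortion around predicted control points.
import torch
import torch.nn.functional as F

def distort(x, points, offsets, sigma=0.1):
    """x: (B,1,H,W); points/offsets: (B,c,2) in normalized [-1,1] coordinates."""
    B, _, H, W = x.shape
    ys = torch.linspace(-1, 1, H, device=x.device)
    xs = torch.linspace(-1, 1, W, device=x.device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack([gx, gy], dim=-1).expand(B, H, W, 2).clone()
    for k in range(points.shape[1]):
        p = points[:, k].view(B, 1, 1, 2)
        o = offsets[:, k].view(B, 1, 1, 2)
        weight = torch.exp(-((grid - p) ** 2).sum(-1, keepdim=True)
                           / (2 * sigma ** 2))
        grid = grid - weight * o   # bend sampling locations near control point
    return F.grid_sample(x, grid, align_corners=True)
```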
991
- To limit the magnitude of non-linear distortion and to ensure that xdisp and xadv are close to the
992
- probe fingerprint x, we introduce pixel loss between the image pairs (x, xdisp) and (x, xadv):
993
- Lpixel = (1/n) Σi,j |xi,j − xdisp,i,j| + (1/n) Σi,j |xi,j − xadv,i,j|    (14)
1003
- Figure 16 shows two displaced fingerprints and their corresponding output from Gdist.
1004
- Discriminator
1005
- In order to guide the generative modules Gdisp and Gdist to synthesize realistic
1006
- fingerprint images, we introduce a fully convolutional network as a patch-based discriminator D.
1007
- The job of the discriminator is to distinguish between real fingerprint images x and the generated
1008
- adversarial fingerprint images xadv. This is accomplished through the GAN loss:
1009
- Lgan = log D(x) + log(1 − D(xadv))    (15)
1011
- The proposed approach AdvBiom is trained in an end-to-end manner with respect to the following
1012
- objective function:
1013
- L = Lgan + λmmap_sim Lmmap_sim + λmmap_dis Lmmap_dis + λpixel Lpixel    (16)
1015
- where the hyper-parameters λmmap_sim, λmmap_dis, and λpixel denote the relative importance of
1016
- their respective losses. Once trained, AdvBiom can generate an adversarial fingerprint image for
1017
- any input probe fingerprint and can be tested on any fingerprint matcher regardless of the feature
1018
- extraction method (minutiae or deep-features).
1019
- 11 Control points are points in an image to which non-linear distortion is applied.
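- Putting the pieces together, a sketch of the full objective of Eq. (16) with the loss weights reported in the experimental settings below; the log form and the eps guard are our choices, and the discriminator outputs are assumed to be probabilities.
```python
# Combined training objective, Eq. (16).
import torch

def total_loss(d_real, d_fake, l_mmap_sim, l_mmap_dis, l_pixel,
               lam_sim=0.05, lam_dis=500000.0, lam_pixel=1000.0, eps=1e-6):
    # Eq. (15): D maximizes this term, the generators minimize the total.
    l_gan = torch.log(d_real + eps).mean() + torch.log(1 - d_fake + eps).mean()
    return l_gan + lam_sim * l_mmap_sim + lam_dis * l_mmap_dis + lam_pixel * l_pixel
```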
1020
- (Figure 15 panels: Probe Fingerprint → Displaced Fingerprint, two examples.)
1024
- Figure 15: Example probe fingerprints from NIST SD4 [55] and their corresponding output from the minutiae
1025
- displacement module Gdisp. The minutiae points (shown in red) are marked using a COTS minutiae extractor.
1026
- (Figure 16 panels: Displaced Fingerprint → Distorted Fingerprint, two examples.)
1032
- Figure 16: Fingerprints in the left column are example displaced fingerprints from Gdisp. The distortion module
1033
- Gdist predicts control points (marked in blue) and distorts the images based on their displacements (red arrows)
1034
- using the non-linear distortion model from [59]. The resultant distorted fingerprint images are shown in the right
1035
- column.
1036
- (Figure 17 layout: columns Original Probe, Adv. Probe (AdvBiom), Mate; successful attacks with VS: 36 | FS: 0.82 and VS: 47 | FS: 0.88, failed attacks with VS: 109 | FS: 0.87 and VS: 76 | FS: 0.89.)
1056
- Figure 17: Example successful and failed adversarial fingerprint attacks using AdvBiom on NIST SD4 [55]. The
1057
- VeriFinger matching scores (probe v. mate): VS, and fingerprintness [56] scores: FS, of adversarial probes are
1058
- shown below their respective triplet. Note that the VeriFinger matching threshold is 48 at 0.01% FAR.
1059
- Evaluation Metrics:
1060
- A good adversarial fingerprint generator must evade
- fingerprint matchers while preserving fingerprint attributes and remaining model-agnostic. Thus, in order
1062
- to quantify the performance of adversarial attacks generated by AdvBiom and other state-of-the-art
1063
- baselines, we employ the following evaluation metrics:
1064
- • True Accept Rate (TAR): The extent to which an adversarial attack can evade a fingerprint
1065
- matcher is measured by the drop in TAR at an operational setting, say 0.01% False Accept
1066
- Rate (FAR).
1067
- • Fingerprintness: Soweon and Jain [56] proposed a domain-specific metric called finger-
1068
- printness to measure the degree of similarity of a given image to a fingerprint pattern.
1069
- Fingerprintness ranges over [0, 1]; the higher the score, the higher the probability that the
- pattern in the image corresponds to a fingerprint pattern.
1071
- • NFIQ 2.0: Lastly, we use NFIQ 2.0 [62] quality scores to evaluate the fingerprint quality
1072
- of adversarial fingerprint images. NFIQ scores range over [0, 100], where a score of 100
- denotes the highest fingerprint quality.
1074
- Note that since non-linear distortions change the structure of the image, using the structural similarity
1075
- index (SSIM) metric is inappropriate as it essentially measures the local change in structures of the
1076
- image pairs.
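- A sketch of the TAR @ FAR computation used throughout the evaluation; selecting the threshold as a quantile of the imposter scores is a standard choice, not a detail quoted from the paper.
```python
# True Accept Rate at a fixed False Accept Rate, e.g., 0.01% FAR.
import numpy as np

def tar_at_far(genuine_scores, imposter_scores, far=1e-4):
    threshold = np.quantile(imposter_scores, 1.0 - far)  # e.g., 48 for VeriFinger
    return float(np.mean(np.asarray(genuine_scores) >= threshold))
```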
1077
- Datasets: We train AdvBiom on an internal dataset of 120,000 rolled fingerprint images. Furthermore,
1078
- we evaluate the performance of the proposed fingerprint adversarial attack and other baselines on:
1079
- • 2,000 fingerprint pairs from NIST SD4 [55]
1080
- • 27,000 fingerprint pairs from NIST SD14 [63]
1081
1165
- Accuracy: TAR (%) at 0.01% FAR
-                                Original    Adversarial Attacks
-                                Probes      FGSM     I-FGSM   DeepFool   PGD      AdvBiom
- NIST SD4        VeriFinger     99.05       95.20    98.30    95.00      97.60    56.25
-                 Innovatrics    97.00       93.00    95.50    92.65      94.75    41.35
-                 DeepPrint      94.55       36.20    64.15    30.40      68.75    46.35
- NIST SD14       VeriFinger     99.42       95.20    98.30    95.00      97.60    37.67
-                 Innovatrics    98.24       90.84    95.68    91.32      94.01    25.69
-                 DeepPrint      96.52       48.70    84.48    31.44      64.28    69.42
- FVC 2004        VeriFinger     94.89       91.60    91.53    86.92      92.69    22.31
- DB1 A           Innovatrics    94.15       87.36    85.68    82.32      88.75    5.52
-                 DeepPrint      75.36       13.22    33.31    6.87       27.39    20.62
1251
- Table 2: True Accept Rate (TAR) @ 0.01% FAR of AdvBiom along with state-of-the-art baseline attacks on
- three datasets - NIST SD4 [55], NIST SD14 [63], and FVC 2004 DB1 A [64]. Two COTS fingerprint matchers -
- VeriFinger v11.0 [65] and Innovatrics v7.6.0.627 [66] - and a deep network-based matcher, DeepPrint [67], were
- employed for the evaluation. It is observed that DeepPrint, a deep network-based matcher, is susceptible to all
- types of adversarial attacks while VeriFinger and Innovatrics are more robust.
1256
- • 558 fingerprints from DB1 A of FVC 2004 [64], consisting of 1,369 genuine pairs.
1257
- Experimental Settings: AdvBiom was trained using the Adam optimizer with β1 as 0.5 and β2 as
1258
- 0.9. The hyper-parameters were empirically set to λmmap_sim = 0.05, λmmap_dis = 500000, and
1259
- λpixel = 1000 for convergence. Based on the conclusions drawn in [57], d, c, and σ were set to
- 20, 16, and 2.0, respectively, for optimal effectiveness against fingerprint matchers while ensuring
1261
- fingerprint realism. AdvBiom was trained for 16,000 steps using Tensorflow r1.14.0 on an Intel Core
1262
- i7-11700F @ 2.50GHz CPU with an RTX 3070 GPU. On the same machine, AdvBiom can synthesize
1263
- an adversarial fingerprint within 0.35 seconds.
1264
- Fingerprint Authentication Systems: Since AdvBiom is a black-box attack, we do not require any
1265
- fingerprint authentication system while training the network. However, we evaluate AdvBiom and
1266
- other baseline attacks on two COTS fingerprint matchers and one deep network-based matcher:
1267
- • VeriFinger v11.0 [65]
1268
- • Innovatrics v7.6.0.627 [66]
1269
- • DeepPrint [67]
1270
- Comparison with Prevailing Fingerprint Adversarial Generators
1271
- We show the performance
1272
- of our method AdvBiom as compared to other state-of-the-art attacks in Table 2. We report
- the TAR of two COTS and a deep network-based fingerprint matcher for the aforementioned three
- datasets. Note that all the baseline attacks [14, 15, 17, 16] are white-box attacks and were
1275
- trained using DeepPrint [67]. It is evident from Table 2 that AdvBiom is the most successful attack on
1276
- COTS matchers VeriFinger and Innovatrics, and is also able to effectively evade a deep network-based
1277
- fingerprint matcher, namely DeepPrint. It can also be observed that while COTS fingerprint matchers
1278
- are robust to most adversarial attacks, DeepPrint is very susceptible to the same attacks since it
1279
- heavily relies on the texture of the fingerprint, which is strongly affected by adversarial attacks.
1280
- A successful adversarial attack should not only evade fingerprint matchers but should also preserve
1281
- fingerprint attributes. In order to observe the effect of adversarial attacks on fingerprint pattern
1282
- in images, we plot the fingerprintness [56] distribution of 2,000 probes from NIST SD4 [55] for
1283
- AdvBiom as well as for other baseline attacks. Since all the state-of-the-art baselines essentially add
1284
- noise to each pixel in the image, they do not change the structure of the fingerprint and thus do not
1285
- (Figure 18 legend, mean fingerprintness: Original Probes µ = 0.91; AdvBiom µ = 0.86; FGSM µ = 0.89; I-FGSM µ = 0.91; PGD µ = 0.90; DeepFool µ = 0.90.)
- Figure 18: Fingerprintness [56] distribution of 2,000 probes from NIST SD4 with respect to AdvBiom and
- other state-of-the-art baseline attacks.
- (Figure 19 legend, mean NFIQ 2.0 score: Original Probes µ = 41.70; AdvBiom µ = 31.46; FGSM µ = 43.30; I-FGSM µ = 44.23; PGD µ = 41.28; DeepFool µ = 43.42.)
- Figure 19: NFIQ 2.0 [62] quality score distribution of 2,000 probes from NIST SD4 [55] with respect to
- AdvBiom and other baseline attacks.
1335
- affect fingerprintness scores. AdvBiom, on the other hand, displaces minutiae points and non-linearly
- distorts the image, yet still maintains a high mean fingerprintness score of µ = 0.86.
1337
- Furthermore, we also compute the NFIQ 2.0 [62] quality score distribution (Figure 19) of the original
- and adversarial probes from NIST SD4 [55]. As shown in Figure 12, baseline attacks tend to minutely
- perturb image pixels to generate adversarial fingerprints and, as a result, have little effect
- on the quality scores. AdvBiom, on the other hand, provides an optimal solution by successfully
- attacking fingerprint matchers while maintaining high fingerprintness and NFIQ scores.
1342
- Genuine and Imposter Scores Distribution
1343
- To determine the effect of adversarial fingerprints
- on both genuine and imposter pairs, we plot the genuine and imposter score distributions of NIST
- SD4 [55] in Figure 20 before and after applying AdvBiom. We computed a total of 2,000 genuine
1346
- and 20,000 imposter scores for the evaluation. It can be observed that the genuine scores drastically
1347
- decrease and shift to the left of the axis as their mean drops from 183.87 to 55.55 after the attack.
1348
- However, the imposter scores remain unaffected with the mean imposter score changing by only 0.53.
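- A sketch of the corresponding histogram plot (matplotlib), mirroring the layout of Figure 20; bin count and styling are arbitrary choices.
```python
# Overlay genuine/imposter score distributions before and after the attack.
import matplotlib.pyplot as plt

def plot_score_shift(gen_before, gen_after, imp_before, imp_after, threshold):
    pairs = [(gen_before, "Genuine | Before Attack"),
             (gen_after, "Genuine | After Attack"),
             (imp_before, "Imposter | Before Attack"),
             (imp_after, "Imposter | After Attack")]
    for scores, label in pairs:
        plt.hist(scores, bins=50, density=True, alpha=0.5, label=label)
    plt.axvline(threshold, linestyle="--",
                label=f"{threshold}: Matching Threshold at 0.01% FAR")
    plt.xlabel("Matching Scores")
    plt.ylabel("Probability of Occurrence")
    plt.legend()
    plt.show()
```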
1349
- (Figure 20 legends. (a) VeriFinger: genuine scores µ = 183.88 before vs. 55.55 after; imposter scores µ = 6.00 before vs. 6.52 after; matching threshold 48 at 0.01% FAR. (c) DeepPrint: genuine scores µ = 0.94 before vs. 0.78 after; imposter scores µ = 0.10 before vs. 0.09 after; matching threshold 0.837 at 0.01% FAR.)
- Figure 20: Genuine and imposter score distributions of NIST SD4 [55] before and after the adversarial attack
- AdvBiom using three state-of-the-art fingerprint matchers: (a) VeriFinger v11.0 [65], (b) Innovatrics v7.6.0.627 [66],
- and (c) DeepPrint [67]. Here, µ refers to the mean of the score distribution. In all three cases, the genuine
- scores shift towards the left while the imposter scores are not affected by the attack.
1399
- Is AdvBiom Biased Towards Certain Fingerprint Types?
- The generated adversarial fingerprint
- from AdvBiom is conditioned on the input probe fingerprint. Thus, it is essential to check if there is a
- relation between the amount of perturbation applied and the fingerprint type. The confusion matrix
- for the five fingerprint types (left loop, right loop, whorl, arch, tented arch) before and after applying
- AdvBiom on the 2,000 probes of NIST SD4 [55] is shown in Table 3. Note that we use NIST SD4 for
- this evaluation since it has a uniform number of fingerprint images per type (400 fingerprints
- per type). It is evident from the table that all five fingerprint types are almost equally susceptible to
- the attack, and thus the attack crafted by AdvBiom is not biased towards a particular fingerprint type.
1408
- (Figure 20(b) Innovatrics legend: genuine scores µ = 590.00 before vs. 48.72 after; imposter scores µ = 1.34 before vs. 1.40 after; matching threshold 40 at 0.01% FAR.)
1441
- Before Attack:
-   TAR:  L: 99.75%   R: 99.25%   W: 99.50%   T: 99.25%   A: 97.50%
-   FAR:  L: 0%       R: 0%       W: 0%       T: 0%       A: 0%
-   FRR:  L: 0.25%    R: 0.75%    W: 0.50%    T: 0.75%    A: 2.50%
-   TRR:  L: 100%     R: 100%     W: 100%     T: 100%     A: 100%
- After Attack:
-   TAR:  L: 59.00%   R: 56.50%   W: 58.75%   T: 56.00%   A: 57.00%
-   FAR:  L: 0%       R: 0%       W: 0%       T: 0%       A: 0%
-   FRR:  L: 41.00%   R: 43.50%   W: 41.25%   T: 44.00%   A: 43.00%
-   TRR:  L: 100%     R: 100%     W: 100%     T: 100%     A: 100%
1491
- Table 3: Confusion matrix for five fingerprint types (left loop: L, right loop: R, whorl: W, tented arch: T, arch:
1492
- A) from NIST SD4 [55] before and after the adversarial attack using AdvBiom. Here, TAR = True Accept Rate,
1493
- FAR = False Accept Rate, FRR = False Reject Rate, and TRR = True Reject Rate. Note that the matching
1494
- threshold was 48 at 0.01% FAR using the COTS fingerprint matcher VeriFinger. AdvBiom is not biased towards
1495
- any fingerprint type.
1496
- 5 Conclusions
- We propose a new method of adversarial synthesis, namely AdvBiom, that automatically generates
- adversarial face images with imperceptible perturbations evading state-of-the-art biometric matchers.
1500
- With the help of a GAN, and the proposed perturbation and identity losses, AdvBiom learns the set
1501
- of pixel locations required by face matchers for identification and only perturbs those salient facial
1502
- regions (such as eyebrows and nose). Once trained, AdvBiom generates high quality and perceptually
1503
- realistic adversarial examples that are benign to the human eye but can evade state-of-the-art black-
1504
- box face matchers, while outperforming other state-of-the-art adversarial face methods. Beyond
1505
- faces, we show for the first time that such a method with the proposed Minutiae Displacement and
1506
- Distortion Modules can also evade state-of-the-art automated fingerprint recognition systems.
1507
- References
1508
- [1] D. E. Rumelhart and J. L. McClelland, Learning Internal Representations by Error Propagation,
1509
- pp. 318–362. 1987.
1510
- [2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional
1511
- neural networks,” Commun. ACM, vol. 60, p. 84–90, may 2017.
1512
- [3] J. Deng, J. Guo, N. Xue, and S. Zafeiriou, “Arcface: Additive angular margin loss for deep
1513
- face recognition,” in Proceedings of the IEEE/CVF conference on computer vision and pattern
1514
- recognition, pp. 4690–4699, 2019.
1515
- [4] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song, “Sphereface: Deep hypersphere embedding
1516
- for face recognition,” in Proceedings of the IEEE conference on computer vision and pattern
1517
- recognition, pp. 212–220, 2017.
1518
- [5] F. Schroff, D. Kalenichenko, and J. Philbin, “Facenet: A unified embedding for face recogni-
1519
- tion and clustering,” in Proceedings of the IEEE conference on computer vision and pattern
1520
- recognition, pp. 815–823, 2015.
1521
- [6] P. Grother, G. Quinn, and P. Phillips, “Report on the evaluation of 2d still-image face recognition
1522
- algorithms,” 2010-06-17 2010.
1523
- [7] N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in 2017 IEEE
- Symposium on Security and Privacy (SP), pp. 39–57, IEEE, 2017.
1525
- [8] Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, “Boosting adversarial attacks with
1526
- momentum,” in Proceedings of the IEEE conference on computer vision and pattern recognition,
1527
- pp. 9185–9193, 2018.
1528
- [9] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,”
1529
- arXiv preprint arXiv:1412.6572, 2014.
1530
- [10] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models
1531
- resistant to adversarial attacks,” arXiv preprint arXiv:1706.06083, 2017.
1532
1534
- [11] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus,
1535
- “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013.
1536
- [12] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, “Universal adversarial pertur-
1537
- bations,” in Proceedings of the IEEE conference on computer vision and pattern recognition,
1538
- pp. 1765–1773, 2017.
1539
- [13] UIDAI, “Unique Identification Authority of India.” https://uidai.gov.in, 2022.
1540
- [14] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,”
1541
- in 3rd International Conference on Learning Representations, ICLR, 2015.
1542
- [15] A. Kurakin, I. J. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” in
1543
- 5th International Conference on Learning Representations, ICLR, 2017.
1544
- [16] S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, “Deepfool: A simple and accurate method to
1545
- fool deep neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition,
1546
- CVPR, pp. 2574–2582, IEEE Computer Society, 2016.
1547
- [17] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models
1548
- resistant to adversarial attacks,” in 6th International Conference on Learning Representations,
1549
- ICLR, 2018.
1550
- [18] N. Carlini and D. A. Wagner, “Towards evaluating the robustness of neural networks,” in IEEE
1551
- Symposium on Security and Privacy, SP, pp. 39–57, IEEE Computer Society, 2017.
1552
- [19] C. Xiao, J. Zhu, B. Li, W. He, M. Liu, and D. Song, “Spatially transformed adversarial examples,”
1553
- in 6th International Conference on Learning Representations, ICLR, 2018.
1554
- [20] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and
1555
- D. Song, “Robust physical-world attacks on deep learning visual classification,” in IEEE/CVF
1556
- Conference on Computer Vision and Pattern Recognition, pp. 1625–1634, 2018.
1557
- [21] N. Papernot, P. D. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, “The limitations
1558
- of deep learning in adversarial settings,” in IEEE European Symposium on Security and Privacy,
1559
- EuroS&P, pp. 372–387, IEEE, 2016.
1560
- [22] A. Kurakin, I. J. Goodfellow, and S. Bengio, “Adversarial machine learning at scale,” in 5th
1561
- International Conference on Learning Representations, ICLR, 2017.
1562
- [23] Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, “Boosting adversarial attacks
1563
- with momentum,” in IEEE Conference on Computer Vision and Pattern Recognition, CVPR,
1564
- pp. 9185–9193, IEEE Computer Society, 2018.
1565
- [24] C. Xie, Z. Zhang, Y. Zhou, S. Bai, J. Wang, Z. Ren, and A. L. Yuille, “Improving transferability
1566
- of adversarial examples with input diversity,” in Proceedings of the IEEE/CVF Conference on
1567
- Computer Vision and Pattern Recognition, pp. 2730–2739, 2019.
1568
- [25] D. Deb, J. Zhang, and A. K. Jain, “Advfaces: Adversarial face synthesis,” in 2020 IEEE
1569
- International Joint Conference on Biometrics (IJCB), pp. 1–10, IEEE, 2020.
1570
- [26] Y. Liu, X. Chen, C. Liu, and D. Song, “Delving into transferable adversarial examples and
1571
- black-box attacks,” in 5th International Conference on Learning Representations, ICLR, 2017.
1572
- [27] W. Xiang, H. Su, C. Liu, Y. Guo, and S. Zheng, “Improving the robustness of adversarial attacks
1573
- using an affine-invariant gradient estimator,” Available at SSRN 4095198, 2021.
1574
- [28] Y. Dong, T. Pang, H. Su, and J. Zhu, “Evading defenses to transferable adversarial examples by
1575
- translation-invariant attacks,” in Proceedings of the IEEE/CVF Conference on Computer Vision
1576
- and Pattern Recognition, pp. 4312–4321, 2019.
1577
- [29] S. Cheng, Y. Dong, T. Pang, H. Su, and J. Zhu, “Improving black-box adversarial attacks with a
1578
- transfer-based prior,” Advances in neural information processing systems, vol. 32, 2019.
1579
1581
- [30] A. Ilyas, L. Engstrom, A. Athalye, and J. Lin, “Black-box adversarial attacks with limited
1582
- queries and information,” in International Conference on Machine Learning, pp. 2137–2146,
1583
- PMLR, 2018.
1584
- [31] Y. Li, L. Li, L. Wang, T. Zhang, and B. Gong, “Nattack: Learning the distributions of adver-
1585
- sarial examples for an improved black-box attack on deep neural networks,” in International
1586
- Conference on Machine Learning, pp. 3866–3876, PMLR, 2019.
1587
- [32] Y. Dong, H. Su, B. Wu, Z. Li, W. Liu, T. Zhang, and J. Zhu, “Efficient decision-based black-
1588
- box adversarial attacks on face recognition,” in Proceedings of the IEEE/CVF Conference on
1589
- Computer Vision and Pattern Recognition, pp. 7714–7722, 2019.
1590
- [33] W. Brendel, J. Rauber, and M. Bethge, “Decision-based adversarial attacks: Reliable attacks
1591
- against black-box machine learning models,” arXiv preprint arXiv:1712.04248, 2017.
1592
- [34] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, “Accessorize to a crime: Real and stealthy
1593
- attacks on state-of-the-art face recognition,” CCS ’16, (New York, NY, USA), p. 1528–1540,
1594
- Association for Computing Machinery, 2016.
1595
- [35] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, “A general framework for adversarial
1596
- examples with objectives,” ACM Transactions on Privacy and Security (TOPS), vol. 22, no. 3,
1597
- pp. 1–30, 2019.
1598
- [36] S. Komkov and A. Petiushko, “Advhat: Real-world adversarial attack on arcface face id system,”
1599
- in 2020 25th International Conference on Pattern Recognition (ICPR), pp. 819–826, IEEE,
1600
- 2021.
1601
- [37] D.-L. Nguyen, S. S. Arora, Y. Wu, and H. Yang, “Adversarial light projection attacks on face
1602
- recognition systems: A feasibility study,” in Proceedings of the IEEE/CVF conference on
1603
- computer vision and pattern recognition workshops, pp. 814–815, 2020.
1604
- [38] B. Yin, W. Wang, T. Yao, J. Guo, Z. Kong, S. Ding, J. Li, and C. Liu, “Adv-makeup: A new
1605
- imperceptible and transferable attack on face recognition,” arXiv preprint arXiv:2105.03162,
1606
- 2021.
1607
- [39] X. Liu, F. Shen, J. Zhao, and C. Nie, “Rstam: An effective black-box impersonation attack on
1608
- face recognition using a mobile and compact printer,” arXiv preprint arXiv:2206.12590, 2022.
1609
- [40] A. J. Bose and P. Aarabi, “Adversarial attacks on face detectors using neural net based con-
1610
- strained optimization,” in 2018 IEEE 20th International Workshop on Multimedia Signal
1611
- Processing (MMSP), pp. 1–6, IEEE, 2018.
1612
- [41] N. Cauli, A. Ortis, and S. Battiato, “Fooling a face recognition system with a marker-free
1613
- label-consistent backdoor attack,” in Image Analysis and Processing – ICIAP 2022: 21st Inter-
1614
- national Conference, Lecce, Italy, May 23–27, 2022, Proceedings, Part II, (Berlin, Heidelberg),
1615
- p. 176–185, Springer-Verlag, 2022.
1616
- [42] L. Yang, Q. Song, and Y. Wu, “Attacks on state-of-the-art face recognition using attentional
1617
- adversarial attack generative network,” Multimedia tools and applications, vol. 80, no. 1,
1618
- pp. 855–875, 2021.
1619
- [43] Y. Zhong and W. Deng, “Towards transferable adversarial attack against deep face recognition,”
1620
- IEEE Transactions on Information Forensics and Security, vol. 16, pp. 1452–1466, 2020.
1621
- [44] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A
1622
- simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research,
1623
- vol. 15, no. 56, pp. 1929–1958, 2014.
1624
- [45] S. Jia, B. Yin, T. Yao, S. Ding, C. Shen, X. Yang, and C. Ma, “Adv-attribute: Inconspicuous and
1625
- transferable adversarial attack on face recognition,” arXiv preprint arXiv:2210.06871, 2022.
1626
- [46] H. Qiu, C. Xiao, L. Yang, X. Yan, H. Lee, and B. Li, “Semanticadv: Generating adversarial
1627
- examples via attribute-conditioned image editing,” in European Conference on Computer Vision,
1628
- pp. 19–37, Springer, 2020.
1629
1631
- [47] S. Shan, E. Wenger, J. Zhang, H. Li, H. Zheng, and B. Y. Zhao, “Fawkes: protecting privacy
1632
- against unauthorized deep learning models,” in USENIX, pp. 1589–1604, 2020.
1633
- [48] A. Dabouei, S. Soleymani, J. Dawson, and N. Nasrabadi, “Fast geometrically-perturbed adver-
1634
- sarial faces,” in WACV, 2019.
1635
- [49] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and
1636
- Y. Bengio, “Generative adversarial networks,” Communications of the ACM, vol. 63, no. 11,
1637
- pp. 139–144, 2020.
1638
- [50] E. L. Denton, S. Chintala, R. Fergus, et al., “Deep generative image models using a laplacian
1639
- pyramid of adversarial networks,” Advances in neural information processing systems, vol. 28,
1640
- 2015.
1641
- [51] D. Yi, Z. Lei, S. Liao, and S. Z. Li, “Learning face representation from scratch,” arXiv preprint
1642
- arXiv:1411.7923, 2014.
1643
- [52] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database
1644
- for studying face recognition in unconstrained environments,” Tech. Rep. 07-49, University of
1645
- Massachusetts, Amherst, October 2007.
1646
- [53] K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, “Joint face detection and alignment using multitask
1647
- cascaded convolutional networks,” IEEE SPL, vol. 23, no. 10, pp. 1499–1503, 2016.
1648
- [54] P. J. Grother, M. Ngan, and K. Hanaoka, “Ongoing Face Recognition Vendor Test (FRVT), Part
1649
- 2: Identification,” NIST Interagency Report, 2018.
1650
- [55] NIST, “NIST Special Database 4.” https://www.nist.gov/srd/nist-special-database-4, 2022.
1657
- [56] S. Yoon and A. K. Jain, “Is there a fingerprint pattern in the image?,” in 2013 International
1658
- Conference on Biometrics (ICB), pp. 1–8, 2013.
1659
- [57] S. A. Grosz, J. J. Engelsma, N. G. P. Jr., and A. K. Jain, “White-box evaluation of fingerprint
1660
- matchers,” CoRR, vol. abs/1909.00799, 2019.
1661
- [58] K. Cao, D. Nguyen, C. Tymoszek, and A. K. Jain, “End-to-end latent fingerprint search,” IEEE
1662
- Trans. Inf. Forensics Secur., vol. 15, pp. 880–894, 2020.
1663
- [59] X. Si, J. Feng, J. Zhou, and Y. Luo, “Detection and rectification of distorted fingerprints,” IEEE
1664
- Trans. Pattern Anal. Mach. Intell., vol. 37, no. 3, pp. 555–568, 2015.
1665
- [60] F. L. Bookstein, “Principal warps: Thin-plate splines and the decomposition of deformations,”
1666
- IEEE Trans. Pattern Anal. Mach. Intell., vol. 11, no. 6, pp. 567–585, 1989.
1667
- [61] F. Cole, D. Belanger, D. Krishnan, A. Sarna, I. Mosseri, and W. T. Freeman, “Synthesizing
1668
- normalized faces from facial identity features,” in Proceedings of the IEEE Conference on
1669
- Computer Vision and Pattern Recognition (CVPR), July 2017.
1670
- [62] E. Tabassi, “NFIQ 2.0: NIST Fingerprint image quality,” NISTIR 8034, 2016.
1671
- [63] NIST, “NIST Special Database 14.” https://www.nist.gov/srd/nist-special-database-14, 2022.
1678
- [64] D. Maio, D. Maltoni, R. Cappelli, J. L. Wayman, and A. K. Jain, “Fvc2004: Third fingerprint
1679
- verification competition,” in International conference on biometric authentication, pp. 1–7,
1680
- Springer, 2004.
1681
- [65] Neurotechnology, “VeriFinger SDK.” https://www.neurotechnology.com, 2022.
1682
- [66] Innovatrics, “Innovatrics SDK.” https://www.innovatrics.com/innovatrics-abis/,
1683
- 2022.
1684
- [67] J. J. Engelsma, K. Cao, and A. K. Jain, “Learning a fixed-length fingerprint representation,”
1685
- IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1–1, 2019.
1686
 
knowledge_base/29E2T4oBgHgl3EQfjQdC/content/tmp_files/load_file.txt DELETED
The diff for this file is too large to render. See raw diff
 
knowledge_base/29E2T4oBgHgl3EQfjQdC/vector_store/index.faiss DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:59d30beb04d3903d4c95d4164264e82fb3f7c721e6f3113c723ad0538ecbdc77
3
- size 5242925
 
 
 
 
knowledge_base/29E2T4oBgHgl3EQfjQdC/vector_store/index.pkl DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:c8b823ec5ac82801cbc2d54bf720d9a4d7fe9befb4dd70962abdcd688d6e782e
3
- size 193207
 
 
 
 
knowledge_base/4tE1T4oBgHgl3EQf6QWC/content/2301.03521v1.pdf DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:810cee56052123f932475999c9959ec9eeb021a93ea1440772efbfb652bd9535
3
- size 250161
 
 
 
 
knowledge_base/4tE1T4oBgHgl3EQf6QWC/content/tmp_files/2301.03521v1.pdf.txt DELETED
@@ -1,1047 +0,0 @@
1
- arXiv:2301.03521v1 [math.SP] 9 Jan 2023
2
- GREEN’S FUNCTIONS FOR FIRST-ORDER SYSTEMS OF
3
- ORDINARY DIFFERENTIAL EQUATIONS WITHOUT THE
4
- UNIQUE CONTINUATION PROPERTY
5
- STEVEN REDOLFI AND RUDI WEIKARD
6
- Abstract. This paper is a contribution to the spectral theory associated with
7
- the differential equation Ju′ + qu = wf on the real interval (a, b) when J is
8
- a constant, invertible skew-Hermitian matrix and q and w are matrices whose
9
- entries are distributions of order zero with q Hermitian and w non-negative.
10
- Under these hypotheses it may not be possible to uniquely continue a solution
11
- from one point to another, thus blunting the standard tools of spectral theory.
12
- Despite this fact we are able to describe symmetric restrictions of the maximal
13
- relation associated with Ju′ + qu = wf and show the existence of Green’s
14
- functions for self-adjoint relations even if unique continuation of solutions fails.
15
- 1. Introduction
16
- This paper is a contribution to the spectral theory for the differential equation
17
- Ju′ + qu = wf
18
- posed on the real interval (a, b) when J is a constant, invertible, and skew-Hermitian
19
- n × n-matrix while the entries of the matrices q and w are distributions of order
20
- zero1 with q Hermitian and w non-negative. Ghatasheh and Weikard [7] studied this
21
- equation under the additional hypothesis that initial value problems have unique
22
- balanced2 solutions in the space of functions of locally bounded variation.
23
- The equation Ju′ + qu = wf has, of course, been investigated by many people
24
- when the coefficients q and w are locally integrable. In that situation initial value
25
- problems always have unique solutions. This is not necessarily the case when the
26
- measures induced by q or w have discrete components. It appears that an equation
27
- with measure coefficients was first considered in 1952, when Krein [8] modelled a
28
- vibrating string. In 1964 Atkinson [2] suggested to unify the treatment of differen-
29
- tial and difference equations by writing them as systems of integral equations where
30
- integrals were to be viewed as matrix-valued Riemann-Stieltjes integrals. Atkinson
31
- explained that the presence of point masses may prevent the continuation of so-
32
- lutions across such points and posed a condition avoiding that problem but more
33
- Date: 11. May 2022.
34
- This is a preprint of an article published in Integral Equations and Operator Theory which is
35
- available online at https://doi.org/10.1007/s00020-022-02703-6.
36
- ©2022.
37
- This manuscript version is made available under the CC-BY-NC-ND 4.0 license
38
- http://creativecommons.org/licenses/by-nc-nd/4.0/.
39
- 1Recall that distributions of order 0 are distributional derivatives of functions of locally
40
- bounded variation and hence may be thought of, on compact subintervals of (a, b), as measures.
41
- For simplicity we might use the word measure instead of distribution of order 0 below.
42
- 2A function of locally bounded variation is called balanced, if its values at any given point are
43
- averages of its left- and right-hand limits at that point.
44
48
- restrictive than the one posed in [7]. In 1999 Savchuk and Shkalikov [10] treated
49
- Schrödinger equations with potentials in the Sobolev space W^{−1,2}_loc. Their paper was
52
- very influential and spurred many further developments. Nevertheless, Eckhardt
53
- et al. [5] showed in 2013, with the help of quasi-derivatives or, equivalently, by
54
- writing the equation as a system, that a treatment without leaving the realm of
55
- locally integrable coefficients is possible. In the same year Eckhardt and Teschl [6]
56
- investigated 2×2-systems with diagonal measure-valued matrices q and w requiring
57
- essentially Atkinson’s condition.
58
- A more thorough account of the subject’s history is given in [7]. The papers
59
- [5] and [6], mentioned above, may also serve as excellent sources, with perhaps
60
- different emphases, of this history.
61
- One feature of systems of first-order equations is that, generally, they are repre-
62
- sented by linear relations rather than linear operators. There is a well-developed
63
- spectral theory for linear relations initiated by Arens [1], see also Orcutt [9], and
64
- Bennewitz [3]. The most important results (for our purposes) are also surveyed in
65
- Appendix B of [7].
66
- Existence or uniqueness of solutions of an initial value problem for Ju′+qu = wf
67
- fails when, for some x ∈ (a, b), the matrices
68
- B±(x, 0) = J ± (1/2)∆q(x)
70
- are not invertible. Here ∆q(x) = Q+(x)−Q−(x) when Q denotes an anti-derivative
71
- of q. Equivalently, ∆q(x) = dQ({x}) where dQ is the measure (locally) generated
72
- by q. Assuming the unique continuation property for solutions of Ju′ + qu = wf
73
- Ghatasheh and Weikard defined maximal and minimal relations Tmax and Tmin
74
- associated with the differential equation Ju′ + qu = wf and showed that Tmax
75
- is the adjoint of Tmin. They characterized the self-adjoint restrictions of Tmax, if
76
- any, with the aid of boundary conditions and proved that resolvents are given as
77
- integral operators, i.e., the existence of a Green’s function for any such self-adjoint
78
- relation T . Under even more restrictive conditions they also showed the existence
79
- of a Fourier transform diagonalizing T .
80
- Campbell, Nguyen, and Weikard [4] defined maximal and minimal relations and
81
- showed that Tmax = T ∗
82
- min without the hypothesis of unique continuation of so-
83
- lutions. Our goal here is to advance their ideas. In particular, even though the
84
- equation Ju′ + qu = w(λu + f) may have infinitely many linearly independent
85
- solutions the deficiency indices, i.e., the number of linearly independent solutions
86
- of Ju′ + qu = ±iwu of finite positive norm, is still bounded by n, the size of the
87
- system. We show that symmetric restrictions of Tmax, in particular the self-adjoint
88
- ones, are still given by posing boundary conditions and we show that the resolvents
89
- of self-adjoint restrictions are integral operators by proving the existence of Green’s
90
- functions.
91
- We will not approach the problem of Fourier transforms and eigenfunction ex-
92
- pansions but hope to return to it in future work.
93
- The material in this paper is arranged as follows. In Section 2 we recall the
94
- circumstances under which existence and uniqueness of solutions to initial value
95
- problems does hold and investigate the sets of those x ∈ (a, b) and λ ∈ C giving
96
- rise to trouble.
97
- Then, in Section 3 we discuss the manifold of solutions of our
98
- differential equation in the special case when a and b are regular endpoints. These
99
- results are instrumental in Section 4 where we investigate the deficiency indices
100
-
101
- GREEN’S FUNCTIONS
102
- 3
103
- of the minimal relation and its symmetric extensions but without the assumption
104
- that a and b are regular. Before we prove the existence of Green’s functions for
105
- self-adjoint restrictions of the maximal relation in Section 6 we discuss the role
106
- played by non-trivial solutions of zero norm in Section 5.
107
- Let us add a few words about notation. D′0((a, b)) is the space of distributions of
108
- order 0, i.e., the space of distributional derivatives of functions of locally bounded
109
- variation. Any function u of locally bounded variation has left- and right-hand
110
- limits denoted by u− and u+, respectively. Also, u is called balanced if u = u# =
111
- (u+ + u−)/2. The space of balanced functions of bounded variation defined on
112
- (a, b) is denoted by BV#((a, b)) while BV#
113
- loc((a, b)) stands for the space of balanced
114
- functions of locally bounded variation.
115
- We use
116
- 1 to denote an identity matrix
117
- of appropriate size and superscripts ⊤ and ∗ indicate transposition and adjoint,
118
- respectively. The sum of two closed only trivially intersecting subspaces S and T of
119
- some Hilbert space (i.e., their direct sum) is denoted by S ⊎ T ; if S and T are even
120
- orthogonal we may use ⊕ instead of ⊎. The orthogonal complement of a subspace
121
- S of a Hilbert space H is denoted by H ⊖ S or by S⊥. For c1, ..., cN ∈ Cn we
122
- abbreviate the column vector (c⊤
123
- 1 , ..., c⊤
124
- N)⊤ ∈ CnN by (c1, ..., cN)⋄.
125
- 2. Preliminaries
126
- Throughout this paper we assume the following hypothesis to be in force.
127
- Hypothesis 2.1. J is a constant, invertible and skew-Hermitian n × n-matrix.
128
- Both q and w are in D′0((a, b))n×n, w is non-negative and q Hermitian.
129
- Given that w is non-negative it gives rise to a positive measure on (a, b) and we
130
- denote the space of functions f which satisfy
131
-
132
- f ∗wf < ∞ by L2(w). This space
133
- permits the semi-inner product ⟨f, g⟩ =
134
-
135
- f ∗wg (note that ⟨f, f⟩ may be 0 without
136
- f being 0).
137
- Consider the differential equation
138
- Ju′ + (q − λw)u = wf
139
- (2.1)
140
- where λ is a complex parameter and f an element of L2(w). The latter condition
141
- guarantees that wf is in D′0((a, b))n. We will search for solutions in BV#
142
- loc((a, b))n.
143
- In this case each term in (2.1) is a distribution of order 0 so that it makes sense to
144
- pose the equation.
145
- The point a is called a regular endpoint for Ju′ + qu = wf, if there is a point
146
- c ∈ (a, b) such that the left-continuous anti-derivatives Q and W of q and w are
147
- of bounded variation on (a, c). In this case q and w may be thought of as finite
148
- measures on (a, c). Similarly, b is called regular, if Q and W are of bounded variation
149
- on (c, b). If an endpoint is not regular, it is called singular. Not surprisingly, the
150
- study of our problem is less complicated when the endpoints are regular and we
151
- will use this fact to our advantage.
152
- Despite our earlier denigration of the existence and uniqueness theorem of so-
153
- lutions of initial value problems it continues to play a crucial role. The following
154
- theorem was proved in [7].
155
- Theorem 2.2. Suppose r ∈ D′0((a, b))n×n, g ∈ D′0((a, b))n and that the matrices
156
- 1 ± ∆r(x)/2 are invertible for all x ∈ (a, b). Let x0 be a point in (a, b). Then the
157
- initial value problem u′ = ru + g, u(x0) = u0 ∈ Cn has a unique balanced solution
158
- u ∈ BV#
159
- loc((a, b))n.
160
-
161
- 4
162
- STEVEN REDOLFI AND RUDI WEIKARD
163
- If a is a regular endpoint we may pose an initial condition (for u+) at a. Simi-
164
- larly, if b is regular we may prescribe u−(b) as the initial condition.
165
- Suppose now that u is a solution of (2.1). Treating either side of this equation
166
- as a measure (restricted to a compact subset of (a, b)) evaluation at a singleton {x}
167
- shows that
168
- J(u+(x) − u−(x)) + ∆q−λw(x)u#(x) = ∆w(x)f(x)
169
- or, equivalently,
170
- B+(x, λ)u+(x) − B−(x, λ)u−(x) = ∆w(x)f(x)
171
- (2.2)
172
- when we define
173
- B±(x, λ) = J ± 1
174
- 2
175
-
176
- ∆q(x) − λ∆w(x)
177
-
178
- .
179
- Note that, if B+(x, λ) is not invertible, we could be in one of the following two
180
- situations: (i) a solution given on (a, x) may fail to exist on (x, b) or (ii) there
181
- are infinitely many ways to continue a solution on (a, x) to (x, b). An analogous
182
- statement holds, of course, if B−(x, λ) is not invertible.
183
- Let us now investigate the circumstances when a pair (x, λ) gives such trouble.
184
- Define the sets Λx = {λ ∈ C : det(B+(x, λ)) det(B−(x, λ)) = 0} and Ξλ = {x ∈
185
- (a, b) : det(B+(x, λ)) det(B−(x, λ)) = 0}. First note, since B−(x, λ) = −B+(x, λ)∗,
186
- we have that Ξλ = Ξλ and that each Λx is symmetric with respect to the real axis.
187
- Also, Λx is empty unless at least one of ∆q(x) and ∆w(x) is different from 0 and
188
- hence for all but countably many x. Next, we claim that Λx is finite as soon as it
189
- misses one point. To see this suppose that B+(x, λ0) is invertible and that λ ̸= λ0.
190
- Since
191
- B+(x, λ) = (λ0 − λ)B+(x, λ0)
192
- �1
193
- 2B+(x, λ0)−1∆w(x) − 1/(λ − λ0)
194
-
195
- we see that B+(x, λ) fails to be invertible only if 1/(λ−λ0) is an eigenvalue of some
196
- n × n-matrix. A similar statement holds, of course, for B− proving our claim.
197
- The really bad points x, namely those where Λx = C, are thus contained in Ξ0.
198
- Here we wish to remove the hypothesis Ξ0 = ∅ posed in [7]. On any subinterval of
199
- (a, b) on which q gives rise to a finite measure we find that �∞
200
- k=1 ∥∆q(xk)∥ must be
201
- finite, when k �→ xk is a sequence of distinct points in that interval. It follows now
202
- that Ξ0 is a discrete set. One shows similarly that, for any fixed complex number
203
- λ the set Ξλ is discrete.
204
- Lemma 2.3. Suppose [s, t] ⊂ (a, b) and (s, t)∩Ξ0 = ∅. Then we have that Λ(s,t) =
205
-
206
- x∈(s,t) Λx is a discrete subset of C.
207
- Proof. There are only finitely many points x in (s, t) where ∥J−1∆q(x)∥ > 1. Using
208
- a Neumann series one sees that only at such points the norm of B+(x, 0)−1 can be
209
- larger than 2∥J−1∥. Thus there is a positive number C such that ∥B+(x, 0)−1∥ ≤ C
210
- for all x ∈ (s, t). Now suppose that B+(x, λ) is not invertible and that |λ| ≤ R.
211
- Then 1/λ is an eigenvalue of 1
212
- 2B+(x, 0)−1∆w(x). This requires that ∥∆w(x)∥ ≥
213
- 2/(RC) and thus can happen only for finitely many x ∈ (s, t). Since similar argu-
214
- ments work for B− the number of points in �
215
- x∈(s,t) Λx which lie in a disk of radius
216
- R centered at 0 must be finite.
217
-
218
- We remark that, when one of the anti-derivatives of q and w is only locally of
219
- bounded variation, the set �
220
- x∈(a,b) Λx need not be discrete even if every Λx is finite.
221
-
222
- GREEN’S FUNCTIONS
223
- 5
224
- Theorem 2.4. Suppose [s, t] ⊂ (a, b) and (s, t) ∩ Ξ0 = ∅. If u0 ∈ Cn and λ ∈
225
- C \ Λ(s,t), then the initial value problem Ju′ + qu = λwu, u+(s) = u0 has a unique
226
- balanced solution in (s, t). Moreover, u(x, ·) for x ∈ (s, t) as well as u−(t, ·) are
227
- analytic in C \ Λ(s,t) and meromorphic on C. An analogous statement holds when
228
- the initial condition is posed at t.
229
- Proof. The first claim is simply a consequence of Theorem 2.2. When x ∈ (s, t)
230
- the analyticity of u(x, ·) in C \ Λ(s,t), which is an open set, was proved in Section
231
- 2.3 of [7]. If we modify q and w by setting them 0 on [t, b) we do not change the
232
- solution on (s, t). The solution for the modified problem evaluated at t is analytic
233
- and coincides with u−(t, ·) proving its analyticity. It remains to show that a point
234
- λ0 ∈ Λ(s,t) can merely give rise to poles.
235
- We know already that there are only finitely many points x in (s, t) where one of
236
- B±(x, λ0) fails to be invertible. Suppose x′ and x′′ are two consecutive such points.
237
- If we know the solution on (s, x′) and that u−(x′, ·) has, at worst, a pole at λ0,
238
- then the solution in (x′, x′′) is determined by the initial value
239
- u+(x′, λ) = B+(x′, λ)−1B−(x′, λ)u−(x′, λ)
240
- which also has, at worst, a pole at λ0 since this is true for B+(x′, λ)−1. For x ∈ (s, t)
241
- the claim follows now by induction. To prove that u−(t, ·) is also meromorphic we
242
- proceed as before and modify q and w on [t, b).
243
-
244
- 3. Solving the differential equation
245
- Our goal in this section is to investigate the set of solutions of the differential
246
- equation Ju′ + (q − λw)u = wf on (a, b) under a strengthened hypothesis.
247
- Hypothesis 3.1. In addition to Hypothesis 2.1 we ask that a and b are regular
248
- endpoints for Ju′ + qu = wf.
249
- Moreover, given the partition
250
- a = x0 < x1 < x2 < ... < xN < xN+1 = b
251
- (3.1)
252
- of (a, b) we require that Ξ0 ⊂ {x1, ..., xN}. We then consider only λ for which both
253
- B+(x, λ) and B−(x, λ) are invertible unless x is in {x1, ..., xN}.
254
- This hypothesis is in force throughout this section but later only if explicitly
255
- mentioned. We emphasize that Ξ0 is finite when a and b are regular. Also, the set
256
- of permissible λ, which we call Ω0, is symmetric with respect to the real axis and
257
- avoids only a discrete set.
258
- On each interval (xj, xj+1) we let Uj(·, λ) be a fundamental matrix of balanced
259
- solutions of the homogeneous differential equation Ju′ + (q − λw)u = 0 such that
260
- limx↓xj Uj(x, λ) =
261
- 1. The existence of these fundamental matrices is guaranteed by
262
- Theorem 2.2. The general balanced solution u of the non-homogeneous equation
263
- Ju′ + (q − λw)u = wf on (xj, xj+1) satisfies, according to Lemma 3.3 in [7],
264
- u−(x) = U −
265
- j (x, λ)
266
-
267
- cj + J−1
268
-
269
- (xj,x)
270
- Uj(·, λ)∗wf
271
-
272
- for any cj ∈ Cn. Define
273
- Uj(xj+1, λ) =
274
- lim
275
- x↑xj+1 Uj(x, λ)
276
- and
277
- Ij(f, λ) =
278
-
279
- (xj,xj+1)
280
- Uj(·, λ)∗wf.
281
-
282
- 6
283
- STEVEN REDOLFI AND RUDI WEIKARD
284
- Using u+(xj) = cj and u−(xj) = Uj−1(xj, λ)(cj−1 + J−1Ij−1(f, λ)) in equation
285
- (2.2) gives
286
- (−B−(xj, λ)Uj−1(xj, λ), B+(xj, λ))
287
- �cj−1
288
- cj
289
-
290
- = ∆w(xj)f(xj) + B−(xj, λ)Uj−1(xj, λ)J−1Ij−1(f, λ).
291
- We need to consider these equations for j = 1, ..., N simultaneously. This gives rise
292
- to the system
293
- B(λ)˜u = F0(f, λ)
294
- (3.2)
295
- where ˜u = (c0, ..., cN)⋄, B(λ), to be specified presently, is in CnN×n(N+1), and
296
- F0(f, λ) is in CnN.
297
- The two-diagonal block-matrix structure of B suggests the
298
- introduction of matrices E⊤ and E⊥, which, respectively, strip the first and last
299
- n components off a vector in their domain Cn(N+1). If we also define the block-
300
- matrices
301
- B(λ) = diag(B+(x1, λ), ..., B+(xN, λ)),
302
- U(λ) = diag(U0(x1, λ), ..., UN−1(xN, λ)),
303
- and J = diag(J, ..., J) and when we note that
304
- B(λ)∗ = diag(−B−(x1, λ), ..., −B−(xN, λ)),
305
- we obtain
306
- B(λ) = B(λ)∗U(λ)E⊥ + B(λ)E⊤.
307
- (3.3)
308
- The vector F0(f, λ) is given by
309
- F0(f, λ) = R(f) − B(λ)∗U(λ)J −1I(f, λ)
310
- with R(f) = ((∆wf)(x1), ..., (∆wf)(xN))⋄ and I(f, λ) = (I0(f, λ), ..., IN−1(f, λ))⋄.
311
- We now have the following theorem.
312
- Theorem 3.2. The differential equation Ju′ + (q − λw)u = wf has a solution u
313
- on (a, b) if and only if ˜u = (u+(x0), ..., u+(xN))⋄ is a solution of equation (3.2).
314
- In particular, in the homogeneous case, where f = 0, the space of solutions has
315
- dimension n(N + 1) − rk B(λ) ≥ n.
316
- We note that rk B(λ) = n when N = 1 so that the space of solutions of Ju′ +
317
- (q − λw)u = 0 is then exactly n-dimensional. For N = 2, however, consider the
318
- example (a, b) = R, J =
319
- � 0 −1
320
- 1
321
- 0
322
-
323
- , q =
324
- � 0 2
325
- 2 0
326
-
327
- (δ1 − δ2), w =
328
- � 2 0
329
- 0 0
330
-
331
- (δ1 + δ2), where the
332
- δk are Dirac point measures concentrated on {k}. It shows that the dimension of
333
- the space of solutions of Ju′ + (q − λw)u = 0 may be strictly larger than n.
334
- Next we investigate the connection between the right-hand limits of a solution u
335
- of the homogeneous equation Ju′ + (q − λw)u = 0 at the points x0, ..., xN (given
336
- by the vector ˜u) and the vector ˆu = (u(x1), ..., u(xN))⋄. We have ˆu = D(λ)˜u where
337
- D(λ) = 1
338
- 2(U(λ)E⊥ + E⊤)
339
- (3.4)
340
- is again a two-diagonal block-matrix. If N ≥ 2 we will also introduce the matrices
341
- Bm(λ) and Dm(λ) which are obtained by deleting the first and last n columns from
342
- B(λ) and D(λ), respectively. If N = 1 we should think of Bm(λ) and Dm(λ) as
343
- maps from the trivial vector space to Cn. Their adjoints are the map from Cn to
344
- {0}. With this understanding the following results hold also for N = 1 even though
345
- they then involve “matrices” with no rows or columns.
346
-
347
- GREEN’S FUNCTIONS
348
- 7
349
- Lemma 3.3. D(λ)∗B(λ) − B(λ)∗D(λ) = diag(−J, 0, ..., 0, J) and Dm(λ)∗B(λ) −
350
- Bm(λ)∗D(λ) = 0.
351
- Proof. This follows since U(λ)∗J U(λ) = J which, in turn, follows from Lemma 3.2
352
- in [7].
353
-
354
- Lemma 3.4. The map v �→ B(λ)v, restricted to ker D(λ), is a bijection onto
355
- ker Dm(λ)∗. Similarly, the map v �→ D(λ)v, restricted to ker B(λ), is a bijection
356
- onto ker Bm(λ)∗. In particular, dim ker D(λ) = dim ker Dm(λ)∗ and dim ker B(λ) =
357
- dim ker Bm(λ)∗.
358
- Proof. The identity Dm(λ)∗B(λ)−Bm(λ)∗D(λ) = 0 shows that B(λ) maps ker D(λ)
359
- to ker Dm(λ)∗ as well as that D(λ) maps ker B(λ) to ker Bm(λ)∗.
360
- If v ∈ ker B(λ) ∩ ker D(λ) one shows that E⊥v = E⊤v = 0 using the definitions
361
- (3.3) and (3.4) of B and D and the fact that B(λ) − B(λ)∗ = 2J . This, of course,
362
- implies that v = 0 and hence the injectivity of both B(λ)|ker D(λ) and D(λ)|ker B(λ).
363
- Clearly, both D(λ) and Dm(λ)∗, having invertible matrices along their main
364
- diagonal, are of full rank.
365
- The rank-nullity theorem shows therefore that their
366
- kernels both have dimension n. This proves surjectivity of B(λ)|ker D(λ).
367
- Finally, assume that v ∈ ker Bm(λ)∗. Then v = D(λ)x for some x ∈ Cn(N+1)
368
- which implies that 0 = Bm(λ)∗D(λ)x = Dm(λ)∗B(λ)x. The first part of the proof
369
- shows that there is a y ∈ ker D(λ) such that B(λ)y = B(λ)x. Hence v = D(λ)(x���y)
370
- where x − y ∈ ker B(λ).
371
-
372
- The following theorem establishes a connection between solutions of the differ-
373
- ential equation Ju′ + (q − λw)u = 0 and elements of ker Bm(λ)∗.
374
- Theorem 3.5. If u is a solution of Ju′ + (q − λw)u = 0 on (a, b), then ˆu =
375
- (u(x1), ..., u(xN))⋄ is in ker Bm(λ)∗.
376
- If, in addition, u+(a) = u−(b) = 0, then
377
- ˆu ∈ ker B(λ)∗ (a subspace of ker Bm(λ)∗).
378
- Conversely, if ˆu ∈ ker Bm(λ)∗, then Ju′ + (q − λw)u = 0 has a unique solution
379
- u on (a, b) such that (u(x1), ..., u(xN))⋄ = ˆu. If, indeed, ˆu ∈ ker B(λ)∗, we further
380
- have u+(a) = u−(b) = 0.
381
- Let us emphasize that supp u ⊂ [x1, xN] when u+(a) = u−(b) = 0.
382
- Proof. If u solves Ju′ + (q − λw)u = 0, then, by Theorem 3.2, ˜u ∈ ker B(λ).
383
- Lemma 3.4 shows then that ˆu = D(λ)˜u is in ker Bm(λ)∗. If u+(a) = u−(b) = 0,
384
- then Lemma 3.3 gives 0 = B(λ)∗D(λ)˜u = B(λ)∗ˆu.
385
- Conversely, assume that ˆu ∈ ker Bm(λ)∗ = D(λ)(ker B(λ)).
386
- Then there is a
387
- unique vector ˜u ∈ ker B(λ) such that ˆu = D(λ)˜u, which, in turn, defines a unique
388
- solution u of Ju′+(q−λw)u = 0 such that (u(x1), ..., u(xN))⋄ = ˆu. If ˆu ∈ ker B(λ)∗,
389
- then, according to Lemma 3.3, diag(−J, 0, ..., 0, J)˜u = 0 which shows that u+(a) =
390
- u−(b) = 0.
391
-
392
- Given an algebraic system Ax = b we know that there exist solutions only if
393
- b ∈ ran A = (ker A∗)⊥. For the differential equation Ju′ + (q − λw)u = wf with
394
- integrable coefficients q and w the unique continuation property for the solutions
395
- gives rise to the variation of constants formula, which then guarantees the existence
396
- of solutions for any non-homogeneity f (within reason). In the present situation,
397
-
398
- 8
399
- STEVEN REDOLFI AND RUDI WEIKARD
400
- however, the problem of existence raises its head and we now set out to give neces-
401
- sary and sufficient conditions for f guaranteeing the existence of a solution in the
402
- spirit of Linear Algebra.
403
- Lemma 3.6. If ˜v ∈ ker B(λ) and ˆv = D(λ)˜v, then
404
- ˜v∗E∗
405
- ⊥ = −ˆv∗B(λ)∗U(λ)J −1
406
- and
407
- ˜v∗E∗
408
- ⊤ = ˆv∗B(λ)J −1.
409
- Moreover, if f ∈ L2(w) and Jv′ + (q − λw)v = 0, then
410
-
411
- v∗wf = ˆv∗F0(f, λ) + ˆv∗B(λ)J −1˜I(f, λ) = ˆv∗F0(f, λ) + ˜v∗E∗
412
- ⊤˜I(f, λ)
413
- where (v(x1), ..., v(xN))⋄ = ˆv = D(λ)˜v and ˜I(f, λ) = (0, ..., 0, IN(f, λ))⋄ ∈ CnN.
414
- Proof. Using the definitions (3.3) and (3.4) of B and D and the identities B(λ) −
415
- B(λ)∗ = 2J and U(λ)∗J U(λ) = J we obtain that B(λ)˜v = 0 implies
416
- B(λ)D(λ)˜v = U(λ)∗−1J E⊥˜v
417
- and
418
- B(λ)∗D(λ)˜v = −J E⊤˜v.
419
- Taking adjoints gives the first claim since ˆv = D(λ)˜v.
420
- The second claim is an immediate consequence of this, since
421
-
422
- v∗wf = ˆv∗R(f) + ˜v∗(I0(f, λ), ..., IN (f, λ))⋄
423
- = ˆv∗R(f) + ˜v∗E∗
424
- ⊥I(f, λ) + ˜v∗E∗
425
- ⊤˜I(f, λ)
426
- = ˆv∗R(f) − ˆv∗B(λ)∗U(λ)J −1I(f, λ) + ˆv∗B(λ)J −1˜I(f, λ)
427
- = ˆv∗F0(f, λ) + ˆv∗B(λ)J −1˜I(f, λ).
428
-
429
- Theorem 3.7. The differential equation Ju′ + (q − λw)u = wf has a solution on
430
- (a, b) if and only if
431
-
432
- v∗wf = 0 for every solution v of Jv′ + (q − λw)v = 0 which
433
- vanishes at a and b.
434
- Proof. By Theorem 3.2 the solution u exists if and only if the system (3.2) has a
435
- solution ˜u = (u+(x0), ..., u+(xN))⋄. This, in turn, happens if and only if F0(f, λ) ∈
436
- ran B(λ) = (ker B(λ)∗)⊥.
437
- By Theorem 3.5 the solutions of Jv′ +(q −λw)v = 0 which vanish at a and b are
438
- in one-to-one correspondence with elements of ker B(λ)∗. Since v+(xN) = 0 we have
439
- ˜v∗E∗
440
- ⊤˜I(f, λ) = 0 and then, from Lemma 3.6, we obtain ˆv∗F0(f, λ) =
441
-
442
- v∗wf.
443
-
444
- In the case of unique continuation of solutions the condition that v vanishes at a
445
- or b implies, of course, that v = 0. Consequently, Ju′ + (q − λw)u = wf has then a
446
- solution for any f ∈ L2(w). The set of all solutions is thus obtained by adding the
447
- general solution of Ju′ +(q −λw)u = 0 whose dimension is n(N + 1)−rkB(λ) ≥ n.
448
- Theorem 3.8. The differential equation Ju′ + (q − λw)u = wf has a solution on
449
- (a, b) which vanishes at a and b if and only if
450
-
451
- v∗wf = 0 for every solution v of
452
- Jv′ + (q − λw)v = 0.
453
- Proof. For u to vanish at a and b it is required that u+(x0) = 0 and u+(xN) =
454
- −J−1IN(f, λ). The system (3.2) is therefore equivalent to
455
- Bm(λ)(c1, ..., cN−1)⋄ = F0(f, λ) + B(λ)J −1˜I(f, λ).
456
- The proof is now analogous to the one for Theorem 3.7.
457
-
458
-
459
- GREEN’S FUNCTIONS
460
- 9
461
- We conclude this section by “counting” the solutions of Ju′ + qu = λwu which
462
- are not compactly supported.
463
- More precisely, we will determine the dimension
464
- of the quotient space of all solutions of Ju′ + qu = λwu modulo the space of
465
- compactly supported solutions. Theorem 3.5 shows that the space of all solutions of
466
- Ju′+qu = λwu is in one-to-one correspondence with ker Bm(λ)∗ and that the space
467
- of compactly supported solutions of Ju′+qu = λwu is in one-to-one correspondence
468
- with ker B(λ)∗. We therefore define
469
- ˜n(λ) = dim(ker Bm(λ)∗/ ker B(λ)∗) = dim ker Bm(λ)∗ − dim ker B(λ)∗.
470
- Lemma 3.9. ˜n(λ) + ˜n(λ) = 2n.
471
- Proof. Since rk B(λ) = rk B(λ)∗, the rank-nullity theorem implies
472
- dim ker B(λ) = n(N + 1) − rk B(λ)∗ = n + dim ker B(λ)∗.
473
- Hence, using also the analogous equation for λ,
474
- dim ker B(λ) − dim ker B(λ)∗ + dim ker B(λ) − dim ker B(λ)∗ = 2n.
475
- Lemma 3.4 gives that dim ker B(λ) = dim ker Bm(λ)∗ yielding the claim.
476
-
477
- From Theorem 2.4 we know that the matrices Uj(xj+1, ·) are meromorphic on C
478
- with poles at most at points in the complement of Ω0. It follows that the entries of
479
- B are also meromorphic. Since the meromorphic functions on C form a field there
480
- is a row-echelon matrix ˜B with meromorphic entries such that B˜u = 0 has the same
481
- solutions as ˜B˜u = 0. Now define a set Ω as Ω0 without the set of all poles of ˜B as
482
- well as their complex conjugates, and the set of zeros and their conjugates of any
483
- of the pivots of ˜B.
484
- Theorem 3.10. If λ ∈ Ω, then dim ker B(λ) = dim ker B(λ) and ˜n(λ) = n.
485
- Proof. The construction of Ω entails that rk B(λ) = rk ˜B(λ) = rk B(λ) if λ ∈ Ω.
486
- Since ˜n(λ) = dim ker B(λ) − dim ker B(λ)∗ = dim ker B(λ) + n − dim ker B(λ) we
487
- obtain ˜n(λ) = n.
488
-
489
- 4. Symmetric restrictions of Tmax
490
- Given a differential equation Ju′ + qu = wf we now define associated minimal
491
- and maximal relations. Recall that L2(w) is the space of functions f such that
492
-
493
- f ∗wf < ∞. First we define
494
- Tmax = {(u, f) ∈ L2(w) × L2(w) : u ∈ BV#
495
- loc((a, b))n, Ju′ + qu = wf}.
496
- Subsequently we will always tacitly assume that u ∈ BV#
497
- loc((a, b))n, when we use
498
- u′. Next, let
499
- Tmin = {(u, f) ∈ Tmax : supp u is compact in (a, b)}.
500
- Note that these are spaces of pairs of functions. To employ the power of functional
501
- analysis we need to realize these relations in Hilbert spaces. Therefore we introduce,
502
- as usual, the space L2(w) as the quotient of L2(w) modulo the subspace of all u ∈
503
- L2(w) for which ∥u∥2 =
504
-
505
- u∗wu = 0. Denoting the equivalence class corresponding
506
- to u by [u] we now set
507
- Tmax = {([u], [f]) ∈ L2(w) × L2(w) : (u, f) ∈ Tmax}
508
-
509
- 10
510
- STEVEN REDOLFI AND RUDI WEIKARD
511
- and
512
- Tmin = {([u], [f]) ∈ Tmax : (u, f) ∈ Tmin}.
513
- Here (and elsewhere) we choose brevity over precision: whenever we have a pair
514
- ([u], [f]) in Tmax we choose u and f such that (u, f) ∈ Tmax.
515
- Define the vector space
516
- L0 = {u ∈ BV#
517
- loc((a, b))n : Ju′ + qu = 0 and ∥u∥ = 0}.
518
- In many cases this space is trivial and some authors restrict their attention to the
519
- case where it is; this is then called the definiteness condition. However, we will
520
- not do so here. Note that ∥u∥ = 0 if and only if wu is the zero distribution. The
521
- significance of L0 stems from the following fact. Suppose ([u], [f]) ∈ Tmax and that
522
- there are u, v ∈ [u] and f, g ∈ [f] such that Ju′ + qu = wf and Jv′ + qv = wg.
523
- Then J(u − v)′ + q(u − v) = w(f − g) = 0 as well as w(u − v) = 0, i.e., u − v ∈ L0.
524
- In other words, in the presence of a non-trivial space L0, the class [u] has many
525
- representatives of locally bounded variation satisfying the differential equation for a
526
- given class [f] (the choice of a representative of [f], on the other hand, is irrelevant).
527
- In Section 5 we will describe a procedure to choose a representative of [u] in a
528
- distinctive way.
529
- In [4] it was proved that Tmin is symmetric, indeed that T ∗
530
- min = Tmax. In this case
531
- it is well-known that von Neumann’s theorem holds. Setting Dλ = {([u], λ[u]) ∈
532
- Tmax} it states that
533
- Tmax = Tmin ⊎ Dλ ⊎ Dλ
534
- when Im λ ̸= 0. Moreover, when λ = ±i, these direct sums are even orthogonal. It
535
- is also known that the dimension of Dλ does not change as λ varies in either the
536
- upper or the lower half plane. The numbers n± = dim D±i are called deficiency
537
- indices of Tmin and we are now setting out to investigate these.
538
- If u is a solution of Ju′ +qu = λwu which is compactly supported then (u, λu) ∈
539
- Tmin and ([u], λ[u]) ∈ Tmin ∩ Dλ. If λ is not real, then Tmin ∩ Dλ is trivial and it
540
- follows that compactly supported solutions of Ju′ + qu = λwu do not contribute to
541
- the corresponding deficiency index. We now have, as a corollary of Theorem 3.10,
542
- that the deficiency indices of Tmin cannot be more than n if a and b are regular
543
- endpoints. We do not state this result separately since it is included in the next
544
- theorem about the general case.
545
- Thus, to emphasize, we allow in the following a and b to be either regular or
546
- singular endpoints. Let τk, k ∈ Z, be a strictly increasing sequence in (a, b) having
547
- a and b as its only limit points and such that all points in Ξ0 are among the
548
- τk. Considering now only the interval Ik = (τ−k, τk) we set xj = τ−k+j for j =
549
- 0, ..., N + 1 = 2k. We can then introduce the objects from Section 3. To emphasize
550
- their dependence on k we will add a superscript (k) to those objects. We have then,
551
- in particular, the matrices B(k), B(k)
552
- m and the sets Ω(k) of permissible values of λ.
553
- We now define Ω = �∞
554
- k=1 Ω(k) and note that Ω is symmetric with respect to the
555
- real axis and misses only countably many values from C.
556
- Now fix a non-real λ ∈ Ω. If u is a solution of Ju′+qu = λwu on (a, b) we denote
557
- its restriction to the interval Ik by u(k). We are interested in the quotient space Xk
558
- of all solutions of Ju′ +qu = λwu on Ik modulo the compactly supported solutions.
559
- If u is a solution of Ju′+qu = λwu on Ik we denote the associated equivalence class
560
- in Xk by ⌊u⌋k. A compactly supported solution u of Ju′ + qu = λwu on Ik can be
561
- extended by 0 to all of (a, b) yielding an element in Tmin ∩ Dλ. This implies, since
562
-
563
- GREEN’S FUNCTIONS
564
- 11
565
- Im λ ̸= 0, that ∥u∥2 =
566
-
567
- Ik u∗wu = 0 and shows that Xk is a normed space with the
568
- norm given by ∥u∥2
569
- k =
570
-
571
- Ik u∗wu. According to Theorem 3.5 the quotient space Xk
572
- is isomorphic to ker B(k)
573
- m (λ)∗/ ker B(k)(λ)∗ and, by Theorem 3.10, its dimension is
574
- equal to n since λ ∈ Ω ⊂ Ω(k).
575
- Theorem 4.1. The deficiency indices of Tmin are less than or equal to n.
576
- Proof. Fix a non-real λ ∈ Ω. Suppose u1, ..., um are solutions of Ju′ + qu = λwu
577
- such that [u1], ..., [um] are linearly independent elements of Dλ. We will show below
578
- that there is an interval Ip = (τ−p, τp) such that ⌊u(p)
579
- 1 ⌋p, ..., ⌊u(p)
580
- m ⌋p are linearly
581
- independent elements of Xp. Hence m ≤ n, the dimension of Xp. Since deficiency
582
- indices are constant in either half-plane they cannot be larger than n.
583
- We will now prove the existence of Ip by induction. That is we prove that, for
584
- every k ∈ {1, ..., m}, there is an interval Iℓk such that the restrictions of u1, ..., uk
585
- to Iℓk generate linearly independent elements ⌊u(ℓk)
586
- 1
587
- ⌋ℓk, ..., ⌊u(ℓk)
588
- k
589
- ⌋ℓk of Xℓk. Once
590
- this is achieved we set p = ℓm.
591
- Suppose k = 1 and let Iℓ1 be an interval such that ∥u(ℓ1)
592
- 1
593
- ∥ > 0. By what we
594
- argued above we know that u(ℓ1)
595
- 1
596
- is not compactly supported in Iℓ1 and thus gives
597
- rise to a non-zero (and hence linearly independent) element of Xℓ1.
598
- Now suppose we had already shown our claim for some k < m. If ⌊u(ℓk)
599
- 1
600
- ⌋ℓk, ...,
601
- ⌊u(ℓk)
602
- k+1⌋ℓk are already linearly independent as elements of Xℓk we choose ℓk+1 = ℓk
603
- and our induction step is complete. Otherwise, there are unique complex numbers
604
- α1, ..., αk such that
605
- ∥(α1u1 + ... + αkuk + uk+1)(ℓk)∥ℓk = 0.
606
- However, there must be an interval Iℓk+1 ⊃ Iℓk where
607
- ∥(α1u1 + ... + αkuk + uk+1)(ℓk+1)∥ℓk+1 > 0
608
- on account that [u1], ..., [uk+1] are linearly independent. It follows now that, as ele-
609
- ments of Xℓk+1 the vectors ⌊u(ℓk+1)
610
- 1
611
- ⌋ℓk+1, ..., ⌊u(ℓk+1)
612
- k+1
613
- ⌋ℓk+1 are linearly independent.
614
- This completes our induction step also in this case.
615
-
616
- Corollary 4.2. If a and b are regular, then n+ = n−.
617
- Proof. Fix a non-real λ in Ω. Since a and b are regular, the set Ξλ = Ξλ is finite.
618
- Thus we may assume that it is contained in Ik = (τ−k, τk) for some appropriate k.
619
- Then dim ker B(k)(λ) is the number of linearly independent solutions of Ju′ + qu =
620
- λwu. Theorem 3.10 shows that Ju′ + qu = λwu has the same number of linearly
621
- independent solutions. Any of these solutions has finite norm but some may have
622
- norm 0. Now note, that if u is a solution of Ju′+qu = λwu of norm 0, then we have
623
- wu = 0, so that u is also a solution of Ju′ + qu = λwu. Therefore n+ = n−.
624
-
625
- As mentioned above, it is well-known, even in the case of relations, that von
626
- Neumann’s theorem E∗ = E⊕Di⊕D−i holds when E is a closed symmetric relation
627
- in H × H when H is a Hilbert space. In our case, when d = dim Di ⊕ D−i is finite,
628
- as we just showed, we can use Theorem B.5 in [7] to characterize the symmetric
629
- restriction of Tmax in terms of boundary conditions. We state that theorem here
630
- for easy reference. The operator J appearing there is defined by J (u, f) = (f, −u)
631
- for u, f ∈ H.
632
-
633
- 12
634
- STEVEN REDOLFI AND RUDI WEIKARD
635
- Theorem 4.3. Suppose E is a closed symmetric relation in H × H with d =
636
- dim Di ⊕ D−i < ∞ and that m ≤ d/2 is a natural number or 0. If A : E∗ → Cd−m
637
- is a surjective linear operator such that E ⊂ ker A and AJ A∗ has rank d−2m then
638
- ker A is a closed symmetric restriction of E∗ for which the dimension of (ker A)⊖E
639
- is m. Conversely, every closed symmetric restriction of E∗ is the kernel of such a
640
- linear operator A. Finally, ker A is self-adjoint if and only if AJ A∗ = 0 (entailing
641
- m = d/2).
642
- A second ingredient for our next considerations is Lagrange’s identity (or Green’s
643
- formula). If (u, f) and (v, g) are in Tmax, then v∗wf and g∗wu are finite measures.
644
- Therefore v∗Ju′ + v′∗Ju = v∗wf − g∗wu is also a finite measure. Its antiderivative
645
- v∗Ju is of bounded variation and thus has limits at a and b. Integration now gives
646
- Lagrange’s identity
647
- (v∗Ju)−(b) − (v∗Ju)+(a) = ⟨v, f⟩ − ⟨g, u⟩.
648
- (4.1)
649
- Note the right-hand side, and hence the left-hand side, does not change upon choos-
650
- ing different representatives in place of u, f, v, or g.
651
- Now, if (v, g) is an element of Di⊕D−i, then (u, f) �→ ⟨(v, g), (u, f)⟩ is a bounded
652
- linear functional on Tmax. Conversely, since Tmax is a Hilbert space, a bounded
653
- linear functional on Tmax is given by (u, f) �→ ⟨(v, g), (u, f)⟩ for some (v, g) ∈ Tmax.
654
- When it is also known that Tmin is in the kernel of this functional, (v, g) may be
655
- chosen in Di ⊕ D−i. Hence, in our situation, the operator A from Theorem 4.3
656
- is given by d − m linearly independent elements in Di ⊕ D−i. Lagrange’s identity
657
- implies that the entries of the matrix AJ A∗ are then given by
658
- (AJ A∗)k,ℓ = ⟨(vk, gk), (gℓ, −vℓ)⟩ = (g∗
659
- kJgℓ)−(b) − (g∗
660
- kJgℓ)+(a).
661
- (4.2)
662
- Therefore we arrive at the following theorem.
663
- Theorem 4.4. Let d = n+ + n− and suppose that m ≤ min{n+, n−}. If (v1, g1),
664
- ..., (vd−m, gd−m) are linearly independent elements of Di⊕D−i such that the matrix
665
- defined in (4.2) has rank d − 2m, then
666
- T = {(u, f) ∈ Tmax : (g∗
667
- j Ju)−(b) − (g∗
668
- j Ju)+(a) = 0 for j = 1, ..., d − m}
669
- (4.3)
670
- is a closed symmetric restriction of Tmax.
671
- Conversely, if T is a closed symmetric restriction of Tmax and m is the dimen-
672
- sion of T ⊖ Tmin, then T is given by (4.3) for appropriate elements (v1, g1), ...,
673
- (vd−m, gd−m) of Di ⊕ D−i for which the matrix defined in (4.2) has rank d − 2m.
674
- For self-adjoint restrictions of Tmax it is hence necessary and sufficient that n+ =
675
- n− = m = d−m and that (g∗
676
- kJgℓ)−(b)−(g∗
677
- kJgℓ)+(a) = 0 for all 1 ≤ k, ℓ ≤ m = d/2.
678
- 5. The space L0
679
- We mentioned earlier that the class [u] does not have a unique balanced repre-
680
- sentative when ([u], [f]) ∈ Tmax, if the space L0 has non-trivial elements. In this
681
- section we describe a procedure to choose a representative in a distinctive way.
682
- To this end we assume, without loss of generality, that B+(τ0, 0) = B−(τ0, 0) = J
683
- so that solutions of our differential equations are continuous at τ0. Define N0 =
684
- {h(τ0) : h ∈ L0} and for each k ∈ N both Nk = {h+(τk) : h ∈ L0, supp h ⊂ [τk, b)}
685
- and N−k = {h−(τ−k) : h ∈ L0, supp h ⊂ (a, τ−k]}. Then, for k ∈ N0, we say that a
686
- function u ∈ BV#
687
- loc((a, b))n satisfies condition (±k), if u±(τ±k) is perpendicular to
688
- N±k (using always the upper sign or always the lower sign).
689
-
690
- GREEN’S FUNCTIONS
691
- 13
692
- Lemma 5.1. Suppose ([u], [f]) ∈ Tmax. Then there is a unique balanced v ∈ [u]
693
- such that (v, f) ∈ Tmax and v satisfies condition (k) for every k ∈ Z.
694
- Proof. First consider uniqueness. Suppose u and v are two functions satisfying the
695
- given conditions. Then u − v ∈ L0 and hence (u − v)(τ0)∗t(τ0) = 0 for t = u and
696
- t = v. Subtract these equations to find (u−v)(τ0) = 0, and thus u = v on (τ−1, τ1).
697
- Moreover, h1 = (u − v)χ[τ1,b) and h−1 = (u − v)χ(a,τ−1] are in L0. Conditions (1)
698
- and (−1) show therefore that (u−v)+(τ1) and (u−v)−(τ−1) are also 0 which proves
699
- that u = v on (τ−2, τ2). Induction informs us now that u = v everywhere.
700
- We now turn to existence. Pick a balanced representative u ∈ [u] such that
701
- (u, f) ∈ Tmax. There is an element h0 ∈ L0 such that the orthogonal projection of
702
- u(τ0) onto N0 equals h0(τ0). Thus v0 = u − h0 satisfies (v0, f) ∈ Tmax, v0 ∈ [u],
703
- and condition (0).
704
- Next, there is an element h1 ∈ L0 with support in [τ1, b) such that the orthogonal
705
- projection of v+
706
- 0 (τ1) onto N1 equals h+
707
- 1 (τ1). We now define v1 = v0 − h1. Then
708
- (v1, f) ∈ Tmax, v1 ∈ [u], and v1 satisfies condition (1). Notice that v1 = v0 on
709
- (a, τ1) implying that v1 also satisfies condition (0).
710
- Proceeding recursively, we may define, for each k ∈ N, functions hk ∈ L0 sup-
711
- ported in [τk, b) such that vk = u−�k
712
- j=0 hj satisfies conditions (0), ..., (k), vk ∈ [u],
713
- and (vk, f) ∈ Tmax.
714
- Since, for a fixed x ∈ [τ0, b), only finitely many of the numbers hk(x) are different
715
- from 0, we find that the sequence k �→ vk converges pointwise to a function ˜v ∈ [u]
716
- satisfying conditions (k) for all k ∈ N0 and (˜v, f) ∈ Tmax. We can now repeat
717
- this process for negative integers starting from the function ˜v instead of u arriving
718
- eventually at a function v ∈ [u] satisfying conditions (k) for all k ∈ Z and (v, f) ∈
719
- Tmax.
720
-
721
- We denote the operator which assigns the function v just constructed to a given
722
- element ([u], [f]) ∈ Tmax by E. If Im = (τ−m, τm) we also define Em : Tmax →
723
- BV#(Im)n by composing E with the restriction to the interval Im.
724
- Note that
725
- BV#(Im)n is a Banach space with the norm |||u|||m defined as the sum of the
726
- variation of u over Im and the norm of u(τ0).
727
- Theorem 5.2. The operator Em : Tmax → BV#(Im)n is bounded.
728
- Proof. Due to the closed graph theorem we merely have to show that Em is a
729
- closed operator. Thus assume that the sequence ([uj], [fj]) converges to ([u], [f]) in
730
- Tmax and that Em([uj], [fj]) converges to v in BV#(Im)n and hence pointwise. To
731
- simplify notation we assume that Em([uj], [fj])) and Em([u], [f]) are the restrictions
732
- of uj and u, respectively, to the interval Im. We need to show that u = v on Im.
733
- First note that u±
734
- j (τ±k) ∈ N ⊥
735
- ±k and
736
- ��u±
737
- j (τ±k) − v±(τ±k)
738
- �� → 0 imply that v
739
- satisfies conditions (±k) for each k ∈ {0, ..., m − 1}. For ℓ ∈ {−m, m − 1} and
740
- x ∈ (τℓ, τℓ+1) we have
741
- u−
742
- j (x) = U −
743
- ℓ (x)
744
-
745
- u+
746
- j (τℓ) + J−1
747
-
748
- (τℓ,x)
749
- U ∗
750
- ℓ wfj
751
-
752
- when Uℓ denotes the fundamental matrix of Ju′ + qu = 0 on the interval (τℓ, τℓ+1)
753
- satisfying U +
754
- ℓ (τℓ) =
755
- 1. Taking the limit as j → ∞ gives
756
- v−(x) = U −
757
- ℓ (x)
758
-
759
- v+(τℓ) + J−1
760
-
761
- (τℓ,x)
762
- U ∗
763
- ℓ wf
764
-
765
-
766
- 14
767
- STEVEN REDOLFI AND RUDI WEIKARD
768
- since the integral may be considered as a vector of scalar products which are, of
769
- course, continuous. The variation of constants formula shows that v is a balanced
770
- solution for Jv′ + qv = wf on (τℓ, τℓ+1). We also have
771
- J(u+
772
- j (τℓ) − u−
773
- j (τℓ)) + ∆q(τℓ)uj(τℓ) = ∆w(τℓ)fj(τℓ).
774
- (5.1)
775
- The fact that [fj] converges to [f] in L2(w) implies, on account of the positivity
776
- of w, that ∆w(τℓ)fj(τℓ) converges to ∆w(τℓ)f(τℓ).
777
- Therefore taking a limit in
778
- (5.1) shows, in conjunction with the previous observations, that Jv′ + qv = wf on
779
- the interval Im. Since u satisfies the same equation we have that u − v satisfies
780
- J(u − v)′ + q(u − v) = 0 on Im.
781
- Next we show w(u − v) = 0 on Im. Fatou’s lemma implies
782
- 0 ≤
783
-
784
- Im
785
- (u − v)∗w(u − v) ≤ lim inf
786
- j→∞
787
-
788
- Im
789
- (u − uj)∗w(u − uj) = 0.
790
- It follows that w(u − v) = 0 on Im.
791
- Finally, a variant of Lemma 5.1 shows now that u = v.
792
-
793
- 6. Green’s function
794
- Now suppose that we have a self-adjoint restriction T of Tmax. The resolvent set
795
- of T is the set of those λ for which T − λ : dom(T ) → L2(w) is bijective, i.e.,
796
- ̺(T ) = {λ ∈ C : ker(T − λ) = {0}, ran(T − λ) = L2(w)}
797
- which is an open set. We denote its complement, the spectrum of T , by σ(T ).
798
- Since T is self-adjoint, σ(T ) is a subset of R.
799
- If λ ∈ ̺(T ), then the resolvent
800
- Rλ = (T − λ)−1 is a bounded linear operator from L2(w) to dom(T ). We now
801
- define Rλ : L2(w) → BV#
802
- loc((a, b))n by
803
- Rλ[f] = E((Rλ[f], λRλ[f] + [f])).
804
- Thus Rλ[f] is the unique solution of Ju′ + qu = w(λu + f) in L2(w) satisfying
805
- condition (k) for every k ∈ Z.
806
- We will now show that Rλ is an integral operator. Its kernel G is called a Green’s
807
- function for T .
808
- Theorem 6.1. If T is a self-adjoint restriction of Tmax, then there exists, for given
809
- x ∈ (a, b) and λ ∈ ̺(T ), a matrix G(x, ·, λ) such that the columns of G(x, ·, λ)∗ are
810
- in L2(w) and
811
- (Rλ[f])(x) =
812
-
813
- G(x, ·, λ)wf.
814
- (6.1)
815
- Proof. Fix x ∈ Im and λ ∈ ̺(T ). Consider the restriction of Rλ[f] to the interval
816
- Im. Since Em and Rλ are bounded operators the map [f] �→ (Rλ[f])(x) is a bounded
817
- linear map from L2(w) to Cn. Hence there are elements [g1], ..., [gn] ∈ L2(w) such
818
- that the k-th component of (Rλ[f])(x) equals ⟨[gk], [f]⟩. Let these be the columns
819
- of the matrix-valued function G(x, ·, λ)∗. Then we obtain (6.1).
820
-
821
- One wishes to complement this fairly abstract existence result by a more concrete
822
- one where Green’s function is given in terms of solutions of the differential equation
823
- as is done in the classical case, see, for instance, Zettl [11]. This was also achieved
824
- in [7] under the assumption that Ξ0 is empty and minor generalizations of this
825
- are certainly possible. Such an explicit construction of Green’s function, where
826
- possible, is the cornerstone of many other results in spectral theory, in particular
827
-
828
- GREEN’S FUNCTIONS
829
- 15
830
- the development of a spectral transformation and more detailed information about
831
- the resolvent, e.g., the compactness of the resolvent in the regular case. Due to
832
- the difficulties posed by the absence of an existence and uniqueness theorem for
833
- initial value problems we have, so far, not been able to obtain such a construction
834
- in general. However, we hope to return to this issue in the future.
835
- 7. Example
836
- In this section we treat an example where the matrices B±(x, λ) fail to be invert-
837
- ible for infinitely many x and all λ, in other words where Ξ0 is infinite and Λx = C
838
- for all x ∈ Ξ0 (recall that in [7] the hypothesis Ξ0 = ∅ was made causing each Λx
839
- to be finite). The example is Ju′ + qu = wf on (a, b) = R where
840
- J =
841
-
842
- 0
843
- −1
844
- 1
845
- 0
846
-
847
- , q =
848
-
849
- 0
850
- 2
851
- 2
852
- 0
853
- � �
854
- k∈Z
855
- (δ2k − δ2k+1), and, w =
856
-
857
- 2
858
- 0
859
- 0
860
- 0
861
- � �
862
- k∈Z
863
- δk
864
- with δk denoting the Dirac point measure concentrated on {k}. Since we are seeking
865
- balanced solutions we need the matrices
866
- B−(2k − 1, λ) =
867
- �λ
868
- 0
869
- 2
870
- 0
871
-
872
- and
873
- B+(2k − 1, λ) =
874
- �−λ
875
- −2
876
- 0
877
- 0
878
-
879
- as well as
880
- B−(2k, λ) =
881
-
882
- λ
883
- −2
884
- 0
885
- 0
886
-
887
- and
888
- B+(2k, λ) =
889
-
890
- −λ
891
- 0
892
- 2
893
- 0
894
-
895
- .
896
- If x is not an integer we have B±(x, λ) = J. Note that f ∈ L2(w) if and only if
897
- k �→ f1(k) is in ℓ2(Z) and any element in L2(w) is uniquely determined by these
898
- values (here f1 denotes the first component of f).
899
- In any interval (k, k + 1) solutions of Ju′ + qu = w(λu + f) are constant, say
900
- (αk, βk)⊤. At x = 2k − 1 the equation
901
- B+(2k − 1, λ)u+(2k − 1) − B−(2k − 1, λ)u−(2k − 1) = (2f1(2k − 1), 0)⊤
902
- implies α2k−2 = 0 and
903
- − λα2k−1 − 2β2k−1 = 2f1(2k − 1).
904
- (7.1)
905
- Similarly, at x = 2k we get α2k = 0 and
906
- − λα2k−1 + 2β2k−1 = 2f1(2k).
907
- (7.2)
908
- We can now describe the space Tmax. A pair (u, f) is in Tmax if and only if the
909
- sequences k �→ f1(k) and k �→ u1(k) are in ℓ2(Z), f1(2k) = −f1(2k − 1), u1(2k) =
910
- u1(2k − 1), and
911
- u =
912
-
913
- k∈Z
914
- � �2u1(2k)
915
- f1(2k)
916
-
917
- χ#
918
- (2k−1,2k) +
919
- � 0
920
- β2k
921
-
922
- χ#
923
- (2k,2k+1)
924
-
925
- with arbitrary numbers β2k. Note that ∥u∥2 = 4 �
926
- k∈Z |u1(2k)|2.
927
- Choosing here f = 0 shows that 0 is an eigenvalue of Tmax with infinite multi-
928
- plicity. Choosing f = 0 and requiring ∥u∥ = 0 determines the space L0. Indeed,
929
- L0 =
930
- � �
931
- k∈Z
932
-
933
- 0
934
- β2k
935
-
936
- χ#
937
- (2k,2k+1) : β2k ∈ C
938
-
939
-
940
- 16
941
- STEVEN REDOLFI AND RUDI WEIKARD
942
- which is infinite-dimensional. We now define the sequence τ setting τ0 = 1/2 and,
943
- for k ∈ N, τk = k and τ−k = 1 − k. A solution u of Ju′ + qu = w(λu + f) always
944
- satisfies condition (2k + 1) and it satisfies condition (2k) exactly when β2k = 0.
945
- For f = 0 equations (7.1) and (7.2) show that no non-zero λ can be an eigenvalue
946
- of Tmax. In particular, the deficiency indices n± are 0, i.e., Tmax is self-adjoint. Now
947
- choose λ ̸= 0 and f arbitrary in L2(w). Then
948
- (Rλf)(x) = − 1
949
-
950
-
951
- k∈Z
952
- �2f1(2k − 1) + 2f1(2k)
953
- λf1(2k − 1) − λf1(2k)
954
-
955
- χ#
956
- (2k−1,2k)(x)
957
- (7.3)
958
- is the unique solution of Ju′ + qu = w(λu + f) satisfying condition (k) for any
959
- k ∈ Z. Since
960
- ∥Rλf∥2 =
961
-
962
- k∈Z
963
- 2|(Rλf)1(k)|2 =
964
- 1
965
- |λ|2
966
-
967
- k∈Z
968
- |f1(2k − 1) + f1(2k)|2
969
- (7.4)
970
- is finite we have that C \ {0} is the resolvent set of Tmax.
971
- We now define H = {u ∈ L2(w) : u1(2k − 1) = u1(2k)} and H∞ = {f ∈ L2(w) :
972
- f1(2k − 1) = −f1(2k)}. These spaces are orthogonal to each other and their direct
973
- sum is L2(w). Equation (7.4) shows that ker Rλ = H∞. Moreover, we have
974
- Tmax = (H × {0}) ⊕ ({0} × H∞).
975
- This is an instance of a general feature for a self-adjoint linear relation T : if H is
976
- the closure of the domain of T , H∞ the orthogonal complement of H, and T0 =
977
- T ∩ (H × H), then T = T0 ⊕ ({0} × H∞). The former summand is then a linear
978
- operator densely defined in H called the operator part of T . The latter summand
979
- is called the multi-valued part of T .
980
- We end this example by identifying Green’s function for our example. It may
981
- be guessed by looking at equation (7.3). In any case one can check directly that
982
- (Rλf)(x) =
983
-
984
- G(x, ·, λ)wf. Note that the second column of G is irrelevant since
985
- the second row of w is 0. When x is not integer G(x, y, λ) is given by
986
-
987
- k∈Z
988
-
989
- − 1
990
- λ
991
- �1
992
- 0
993
- 0
994
- 0
995
-
996
- + 1
997
- 2
998
- � 0
999
- 1
1000
- −1
1001
- 0
1002
-
1003
- sgn(x − y)
1004
-
1005
- χ#
1006
- (2k−1,2k)(x)χ#
1007
- (2k−1,2k)(y).
1008
- If x is an integer we have instead
1009
- G(2k − 1, y, λ) = 1
1010
- 2
1011
- lim
1012
- x↓2k−1 G(x, y, λ)
1013
- and
1014
- G(2k, y, λ) = 1
1015
- 2 lim
1016
- x↑2k G(x, y, λ).
1017
- References
1018
- [1] Richard Arens. Operational calculus of linear relations. Pacific J. Math., 11:9–23, 1961.
1019
- [2] F. V. Atkinson. Discrete and continuous boundary problems. Mathematics in Science and
1020
- Engineering, Vol. 8. Academic Press, New York-London, 1964.
1021
- [3] Christer Bennewitz. Symmetric relations on a Hilbert space. Pages 212–218. Lecture Notes
1022
- in Math., Vol. 280, 1972.
1023
- [4] Kevin Campbell, Minh Nguyen, and Rudi Weikard. On the spectral theory for first-order
1024
- systems without the unique continuation property. Linear Multilinear Algebra, 69(12):2315–
1025
- 2323, 2021. Published online: 04 Oct 2019.
1026
- [5] Jonathan Eckhardt, Fritz Gesztesy, Roger Nichols, and Gerald Teschl. Weyl-Titchmarsh the-
1027
- ory for Sturm-Liouville operators with distributional potentials. Opuscula Math., 33(3):467–
1028
- 563, 2013.
1029
- [6] Jonathan Eckhardt and Gerald Teschl. Sturm-Liouville operators with measure-valued coef-
1030
- ficients. J. Anal. Math., 120:151–224, 2013.
1031
-
1032
- GREEN’S FUNCTIONS
1033
- 17
1034
- [7] Ahmed Ghatasheh and Rudi Weikard. Spectral theory for systems of ordinary differential
1035
- equations with distributional coefficients. J. Differential Equations, 268(6):2752–2801, 2020.
1036
- [8] M. G. Kre˘ın. On a generalization of investigations of Stieltjes. Doklady Akad. Nauk SSSR
1037
- (N.S.), 87:881–884, 1952.
1038
- [9] Bruce Call Orcutt. Canonical differential equations. PhD thesis, University of Virginia, 1969.
1039
- [10] A. M. Savchuk and A. A. Shkalikov. Sturm-Liouville operators with singular potentials. Math-
1040
- ematical Notes, 66(6):741–753, 1999. Translated from Mat. Zametki, Vol. 66, pp. 897–912
1041
- (1999).
1042
- [11] Anton Zettl. Sturm-Liouville theory, volume 121 of Mathematical Surveys and Monographs.
1043
- American Mathematical Society, Providence, RI, 2005.
1044
- Department of Mathematics, University of Alabama at Birmingham, Birmingham, AL
1045
- 35226-1170, USA
1046
1047
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
knowledge_base/4tE1T4oBgHgl3EQf6QWC/content/tmp_files/load_file.txt DELETED
The diff for this file is too large to render. See raw diff
 
knowledge_base/4tE1T4oBgHgl3EQf6QWC/vector_store/index.faiss DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:4e5b91ee67af6419b43c683b45077dc110aee1b85bb6a33416483eb4951a17a8
3
- size 2883629
 
 
 
 
knowledge_base/4tE1T4oBgHgl3EQf6QWC/vector_store/index.pkl DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:a3a2fdaf23922132289c2ddca0bdd11b4606a47be3d41f0e82dcce2277fab048
3
- size 115273
 
 
 
 
knowledge_base/7dE4T4oBgHgl3EQf2Q2c/content/2301.05297v1.pdf DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:0cadfce4eb7cc1141eda21e2e2c5ab84220a616ffae7ff50568c41671f0d0173
3
- size 1544777
 
 
 
 
knowledge_base/7dE4T4oBgHgl3EQf2Q2c/content/tmp_files/2301.05297v1.pdf.txt DELETED
@@ -1,1046 +0,0 @@
1
- Towards Dependable Autonomous Systems
2
- Based on Bayesian Deep Learning Components
3
- Fabio Arnez∗, Huascar Espinoza†, Ansgar Radermacher∗ and Franc¸ois Terrier∗
4
- ∗Universit´e Paris-Saclay, CEA, List, F-91120, Palaiseau, France
5
- {name.lastname}@cea.fr
6
- †KDT JU, TO 56 05/16, B-1049 Brussels, Belgium
7
8
- Abstract—As
9
- autonomous
10
- systems
11
- increasingly
12
- rely
13
- on
14
- Deep Neural Networks (DNN) to implement the navigation
15
- pipeline functions, uncertainty estimation methods have be-
16
- come paramount for estimating confidence in DNN predictions.
17
- Bayesian Deep Learning (BDL) offers a principled approach to
18
- model uncertainties in DNNs. However, in DNN-based systems,
19
- not all the components use uncertainty estimation methods
20
- and typically ignore the uncertainty propagation between them.
21
- This paper provides a method that considers the uncertainty
22
- and the interaction between BDL components to capture the
23
- overall system uncertainty. We study the effect of uncertainty
24
- propagation in a BDL-based system for autonomous aerial
25
- navigation. Experiments show that our approach allows us to
26
- capture useful uncertainty estimates while slightly improving the
27
- system’s performance in its final task. In addition, we discuss the
28
- benefits, challenges, and implications of adopting BDL to build
29
- dependable autonomous systems.
30
- Index Terms—Bayesian Deep Learning, Uncertainty Propaga-
31
- tion, Unmanned Aerial Vehicle, Navigation, Dynamic Depend-
32
- ability
33
- I. INTRODUCTION
34
- Navigation in complex environments still represents a big
35
- challenge for autonomous systems (AS). Particular instances
36
- of this problem are autonomous driving and autonomous aerial
37
- navigation in the context of self-driving cars and Unmanned
38
- Aerial Vehicles (UAVs), respectively. In both cases, the naviga-
39
- tion task is addressed by first acquiring rich and complex raw
40
- sensory information (e.g., from camera, radar, LiDAR, etc.),
41
- which is then processed to drive the autonomous agent towards
42
- its goal. Usually, this process is done in sequence, where tasks
43
- and specific software components are linked together in the so-
44
- called perception-planning-control software pipeline [1], [2].
45
- Over the last decade, Deep Neural Networks (DNNs) have
46
- become a popular choice to implement navigation pipeline
47
- components thanks to their effectiveness in processing com-
48
- plex sensory inputs, and their powerful representation learning
49
- that surpasses the performance of traditional methods. Cur-
50
- rently, three main paradigms exist to develop and train navi-
51
- gation components based on DNNs: Modular (isolated), End-
52
- to-End (E2E) learning, and mixed or hybrid approaches [2].
53
- Preprint version. Accepted and presented at the 18th European Depend-
54
- able Computing Conference (EDCC), Zaragoza, Spain, 2022. Digital Object
55
- Identifier (DOI) is available in the preprint description.
56
- Fig. 1. UAV BDL-based Aerial Navigation Pipeline: The downstream control
57
- component gets predictions of the previous perception component as input and
58
- must take their uncertainty into account.
59
- Despite the remarkable progress in representation learning,
60
- DNNs should also represent the confidence in their predictions
61
- to deploy them in safety-critical systems. McAllister et al. [2]
62
- proposed using Bayesian Deep Learning (BDL) to implement
63
- the components from navigation pipelines or stacks. Bayesian
64
- methods offer a principled framework to model and capture
65
- system uncertainty. However, if the Bayesian approach is
66
- followed, all the components in the system pipeline should
67
- use BDL to enable uncertainty propagation in the pipeline.
68
- Hence, BDL components should admit uncertainty information
69
- as an input to account for the uncertainty from the outputs of
70
- preceding BDL components (See Fig. 1).
71
- In recent years, a large body of literature has employed
72
- uncertainty estimation methods in robotic tasks thanks to
73
- its potential to improve the safety of automated functions
74
- [3], and the capacity to increase the task performance [4],
75
- [5]. However, uncertainty is captured partially in navigation
76
- pipelines that utilize DNNs. BDL methods are used mainly in
77
- perception tasks, and downstream components (e.g., planning
78
- and control) usually ignore the uncertainty from the preceding
79
- components or do not capture uncertainty in their predictions.
80
- Although some works propagate downstream perceptual
81
- uncertainty from intermediate representations [6]–[8], the
82
- overall system output does not take into account all the
83
- uncertainty sources from DNN components in the pipeline.
84
- Moreover, proposed frameworks for dynamic dependability
85
- management that use uncertainty information focus only on
86
- DNN-based perception tasks [9], [10], ignoring uncertainty
87
- propagation through the system pipeline, the interactions be-
88
- arXiv:2301.05297v1 [cs.RO] 12 Jan 2023
89
-
90
- Perception
91
- Controltween uncertainty-aware components, and the potential impact
92
- on system performance and safety.
93
Quantifying uncertainty in a BDL-based system (i.e., a pipeline of BDL components) remains a challenging task. Uncertainties from BDL components must be assembled in a principled way to provide a reliable measure of the overall system uncertainty, based on which safe decisions can be made [2], [11]. In this paper, we propose to capture the uncertainty along a pipeline of BDL components and study the impact of uncertainty propagation on the aerial navigation task of a UAV. In addition, we propose an uncertainty-centric dynamic dependability management framework to cope with the challenges that arise from propagating uncertainty through BDL-based systems.
II. RELATED WORK

A. Neural Network Uncertainty Estimation

Bayesian neural networks (BNNs) have been widely used to represent the confidence in predictions. A proper confidence representation in DNN predictions can be achieved by modeling two sources of uncertainty: aleatoric (data) and epistemic (model) uncertainty. For epistemic uncertainty, Bayesian inference is used to estimate the posterior predictive distribution. In practice, approximate Bayesian inference methods are often used [12]–[15], since the posterior on the model parameters p(θ | D) is intractable in DNNs.

To model data uncertainty, [14], [16] propose to incorporate additional outputs representing the parameters (mean and variance) of a Gaussian distribution. Loquercio et al. [17] forward-propagate sensor noise through the DNN. This approach does not require retraining; however, it assumes a fixed uncertainty value for the sensor noise at the input. Another family of methods aims to capture complex stochastic patterns, such as multimodality or heteroscedasticity (aleatoric uncertainty), using latent variables (LVs) as input. When BNNs are used with LVs (BNN+LV), both types of uncertainty can be captured [18], [19]. In this approach, a BNN receives an input combined with a random disturbance coming from an LV (i.e., the features are partially stochastic). In contrast, this paper considers that a BNN can receive completely stochastic features at the input.
B. Uncertainty in DNN-Based Navigation

In an autonomous driving context, perception uncertainty is captured from implicit [8] and explicit representations [7] and used downstream for scene motion forecasting and trajectory planning, respectively. In reinforcement learning, input uncertainty has been employed for model-based [20] and model-free control policies [21]. In the former case, a collision predictor's uncertainty is passed to a model predictive controller. In the latter, perception uncertainty is mapped to the control policy uncertainty using heuristics. In the context of aerial navigation, few works have considered uncertainty. [17] uses a fixed uncertainty value for sensors as an input to a control policy. [6] extends the work from [22] to use the uncertainty from noisy perception representations downstream in a BNN control policy. Although these approaches use perception uncertainty in downstream components, not all the DNN components in the pipeline employ uncertainty estimation methods.
C. Uncertainty-Based Dependability Frameworks

For the deployment of dependable autonomous systems that use machine learning (ML) components, Trapp et al. [23] and Henne et al. [9] conceptualized the use and runtime monitoring of perception uncertainty to ensure safe behavior in autonomous systems. To model system behavior, probabilistic graphical models (PGMs) and, in particular, Bayesian networks (BNs) have been used in dependability research for safety and reliability analyses and risk assessment applications [24]. BNs allow incorporating expert domain knowledge, modeling complex relationships between components, and enabling decision-making under uncertainty. In the context of autonomous aviation systems, [10] proposes a method for quantifying system assurance using perception component uncertainty and dynamic BNs. For autonomous vehicles, [25] offers a framework for dynamic risk assessment, using BNs to predict the behavior intents of other traffic participants. Unlike these works, this paper considers uncertainty from Bayesian deep learning components beyond perception.
III. SYSTEM TASK FORMULATION

In this paper, we address the problem of autonomous aerial navigation. The goal of the autonomous agent (i.e., the UAV) is to navigate through a set of gates with unknown locations disposed in a circular track. Following prior work from [6], [22], the navigation architecture consists of two DNN-based components: one for perception and the other for control (see Fig. 2). Both DNNs are trained following the hybrid paradigm. To achieve the agent's goal, the navigation task is formulated as a sequential decision-making problem, where a sequence of control actions is produced given environment observations. In this regard, at each time step the simulation environment provides an observation comprised of an RGB image x acquired from a front-facing camera on the UAV. The perception component defines an encoder function qφ : X → Z that maps the input image x to a rich low-dimensional representation z ∈ R^10. Next, a control policy πw : Z → Y maps the compact representation z to control commands y = [ẋ, ẏ, ż, ψ̇] ∈ R^4, corresponding to linear and angular (yaw) velocities in the UAV body frame.
In the perception component, a cross-modal variational autoencoder (CMVAE) [22], [26] is used to learn a rich and robust compact representation. A CMVAE is a variant of the traditional variational autoencoder (VAE) [27] that learns a single latent representation for multiple data modalities. In this case, the perception dataset Dp has two data modalities: the RGB images and the pose of the gate relative to the UAV body frame. During training, the CMVAE encoder qφ maps an input image x to a noisy representation with mean µφ(x) and variance σ²φ(x) in the latent space, from which latent vectors z are sampled, z ∼ N(µφ, σ²φ). Next, a latent vector z is used to reconstruct the input image and estimate the gate pose (i.e., recover the two data modalities) using two DNNs: a decoder and a feed-forward network. The CMVAE encoder qφ is based on the Dronet architecture [28], and additional constraints on the latent space are imposed through the loss function to promote the learning of robust disentangled representations.

Fig. 2. System architecture for aerial navigation: the CMVAE perception encoder produces latent vectors z that feed an ensemble of probabilistic control policies {πn(y | z, wn)}, which together yield the predictive distribution p(y* | x*, Dc, Dp).
Once the perception component is trained, the downstream control task (control policy π) uses a feed-forward network that operates on the latent vectors z at the output of the CMVAE encoder qφ to predict UAV velocities. To this end, the control policy network is added at the output of the perception encoder qφ, forming the navigation pipeline DNN. The control component π uses a control imitation learning dataset (Dc). During training, we freeze the perception encoder qφ and update only the control policy network. For more information about the general architecture for aerial navigation, the datasets, and the training procedures, we refer the reader to [6], [22].
IV. METHODOLOGY

A. Uncertainty from Perception Representations

Although the CMVAE encoder qφ employs Bayesian inference to obtain latent vectors z, the CMVAE does not capture epistemic uncertainty, since the encoder lacks a distribution over its parameters φ. To capture uncertainty in the perception encoder, we follow prior work from [29], [30] that attempts to capture epistemic uncertainty in VAEs. We adapt the CMVAE to capture the posterior qΦ(z | x, Dp), as shown in (1):

    qΦ(z | x, Dp) = ∫ q(z | x, φ) p(φ | Dp) dφ.    (1)

To approximate (1), we take a set Φ = {φm}, m = 1, ..., M, of encoder parameter samples φm ∼ p(φ | Dp) to obtain a set of latent samples {zm}, m = 1, ..., M, drawn from qΦ(z | x, Dp) at the output of the encoder. In practice, we modify the CMVAE by adding a dropout layer in the encoder. Then, we use Monte Carlo Dropout (MCD) [12] to approximate the posterior on the encoder weights p(φ | Dp). Finally, for a given input image x, we perform M stochastic forward passes (with dropout “turned on”) to compute a set of M latent vector samples z at runtime.
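To make the sampling step concrete, the following is a minimal PyTorch sketch (not the paper's code); the `encoder` module returning Gaussian moments and its dropout placement are assumptions standing in for the modified CMVAE encoder:

import torch

def mc_dropout_latents(encoder, x, num_samples=32):
    """Draw M latent samples with dropout kept active (Monte Carlo Dropout),
    approximating q_Phi(z | x, Dp) at the output of the perception encoder."""
    encoder.train()  # keep the dropout layer "turned on" at inference time
    samples = []
    with torch.no_grad():
        for _ in range(num_samples):
            mu, logvar = encoder(x)               # assumed Gaussian-moment head
            std = torch.exp(0.5 * logvar)
            samples.append(mu + std * torch.randn_like(std))  # reparameterized draw
    return torch.stack(samples)                   # shape: (M, batch, latent_dim)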
B. Input Uncertainty for Control

In BDL, downstream uncertainty propagation assumes that a neural network component is able to handle, or admit, uncertainty at its input. In our navigation case, this implies that the DNN-based controller must be able to handle the uncertainty coming from the perception encoder qΦ. To capture the navigation model uncertainty (the overall system uncertainty at the output of the controller), we use the Bayesian approach to compute the posterior predictive distribution of the target variable y* associated with a new input image x*, as shown in (2):

    p(y* | x*, Dc, Dp) = ∫∫ π(y | z, w) p(w | Dc) qΦ(z | x*, Dp) dw dz.    (2)
The integrals in (2) are intractable, and we rely on approximations to obtain an estimate of the predictive distribution. The posterior p(w | Dc) is difficult to evaluate; thus, we approximate the inner integral using an ensemble of neural networks [15]. In practice, we train an ensemble of N probabilistic control policies {πn(y | z, wn)}, n = 1, ..., N, with weights wn ∼ p(w | Dc). Each control policy πn in the ensemble predicts the mean µ and variance σ² for each velocity command, i.e., yµ = [µẋ, µẏ, µż, µψ̇] and yσ² = [σ²ẋ, σ²ẏ, σ²ż, σ²ψ̇]. To train the control policies we use imitation learning and the heteroscedastic loss function, as suggested by [14], [16].
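For reference, a minimal PyTorch sketch of the heteroscedastic Gaussian negative log-likelihood in the spirit of [14], [16]; the `mu`/`log_var` head is an assumption, and the constant term of the likelihood is dropped:

import torch

def heteroscedastic_nll(mu, log_var, target):
    """Gaussian negative log-likelihood with a learned, input-dependent variance:
    penalizes both the squared error and over-/under-confident variance."""
    inv_var = torch.exp(-log_var)
    return 0.5 * (inv_var * (target - mu) ** 2 + log_var).mean()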
The outer integral is approximated by taking a set of samples from the perception component's latent space. In [6], latent samples are drawn using the encoder mean and variance, z ∼ N(µφ, σ²φ). For the sake of simplicity, we directly use the samples {zm} ∼ qΦ(z | x, Dp) obtained in the perception component to take into account the epistemic uncertainty from the previous stage. Finally, the predictions that we get from passing each latent vector z through each ensemble member are used to estimate the posterior predictive distribution in (2). From the control policy's perspective, using multiple latent samples z can be seen as taking a better “picture” of the latent space (perception representation) to gather more information about the environment.
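A minimal sketch of how the two approximations combine at runtime, assuming hypothetical `policies` (the N ensemble members) and the latent samples from the MC-Dropout step above; the mixture moments follow the standard equally weighted Gaussian mixture formulas:

import torch

def predictive_mixture(policies, z_samples):
    """Pass every latent sample through every ensemble member and collapse the
    resulting M*N Gaussians into the mixture mean and variance."""
    mus, variances = [], []
    with torch.no_grad():
        for policy in policies:          # N ensemble members
            for z in z_samples:          # M latent samples from perception
                mu, log_var = policy(z)
                mus.append(mu)
                variances.append(torch.exp(log_var))
    mus = torch.stack(mus)
    variances = torch.stack(variances)
    mix_mu = mus.mean(dim=0)
    # law of total variance: aleatoric part + epistemic part
    mix_var = variances.mean(dim=0) + mus.var(dim=0, unbiased=False)
    return mix_mu, mix_var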
V. EXPERIMENTS & DISCUSSION

In our experiments, we seek to study the impact of uncertainty propagation in the navigation pipeline. In particular, we seek to answer the following research questions. RQ1: How does uncertainty from perception representations affect the uncertainty estimation quality of downstream components? RQ2: Can uncertainty propagation improve system performance? RQ3: Can uncertainty-aware components in the pipeline help detect challenging scenes that threaten the system mission? To answer these questions, we perform a quantitative and qualitative comparison between uncertainty-aware aerial navigation models.
A. Experimental Setup

1) Navigation Model Baselines: All the navigation architectures are based on [22] and are implemented using PyTorch. Table I shows the uncertainty-aware navigation architectures used in our experiments, detailing the type of perception component, the number of latent variable samples (LVS), the type of control policy, and the number of control prediction samples (CPS) at the output of the system.

TABLE I
UNCERTAINTY-AWARE NAVIGATION MODELS IN THE EXPERIMENTS

Model   Perception (qφ)   LVS   Control Policy (π)        CPS
M0      MCD-CMVAE         32    Ensemble (N = 5), Prob.   160
M1      CMVAE             32    Ensemble (N = 5), Prob.   160
M2      CMVAE             1     Ensemble (N = 5), Prob.   5
M3      CMVAE             32    Deterministic             32
M4      CMVAE             1     Prob.                     1

For instance, M0 represents our Bayesian navigation pipeline: its perception component captures epistemic uncertainty using MCD, performing 32 forward passes per input to obtain 32 latent variable predictions. For the sake of simplicity, the perception predictions are used directly as latent variable samples in the downstream control. The control component uses an ensemble of 5 probabilistic control policies, yielding 160 control prediction samples. M1 to M4 capture uncertainty in the pipeline only partially, since they use a deterministic perception component (CMVAE). For the control component, M1 and M2 take 32 and 1 latent variable samples, respectively, and pass them through an ensemble of 5 probabilistic control policies, capturing both epistemic and aleatoric uncertainty; M3 uses 32 LVS with a completely deterministic control component; M4 uses 1 LVS with a single probabilistic control policy capturing aleatoric uncertainty. For UAV control, we use the expected value of the predicted velocity means at the output of the control component [14], i.e., ŷµ = E([µẋ, µẏ, µż, µψ̇]).
2) Datasets: We use two independent datasets, one for each component of the navigation pipeline. The perception CMVAE uses a dataset (Dp) of 300k images in which a gate is visible and gate-pose annotations are available. The control component uses a dataset (Dc) of 17k images with UAV velocity annotations. Dc is collected by flying the UAV in a circular track with gates, using traditional methods for trajectory planning and control (see [22] for more details). The perception dataset is divided into 80% for training and the remaining 20% for validation and testing. The control dataset uses a split of 90% for training and the remainder for validation and testing. In both cases the image size is 64x64 pixels. In addition, using the validation data from Dp and Dc, we generate refined validation sub-datasets with images that have: exactly one visible gate (the ideal situation), no gate visible in front, and multiple gates visible. The last two types of images represent situations that can pose a risk to the system task. Each sub-dataset contains 200 images.
B. Experiments

In the context of RQ1, we use the validation dataset from the control component to measure the regression Expected Calibration Error (ECE) [31], comparing the quality of the uncertainty estimates of the navigation models at the output of the system (i.e., the control component output).
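As one possible reading of the calibration metric from [31] (the paper does not spell out its binning), a minimal NumPy sketch assuming Gaussian predictive marginals with per-sample `mu` and `sigma`:

import numpy as np
from scipy.stats import norm

def regression_ece(mu, sigma, y, num_levels=10):
    """Calibration error for regression: compare the empirical frequency with
    which targets fall below each predicted quantile against the nominal level."""
    levels = np.linspace(0.05, 0.95, num_levels)
    err = 0.0
    for p in levels:
        q = norm.ppf(p, loc=mu, scale=sigma)   # per-sample predicted p-quantile
        err += (np.mean(y <= q) - p) ** 2      # observed vs. expected coverage
    return err / num_levels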
In order to answer RQ2, we evaluate our navigation architecture under controlled simulations using the AirSim simulation environment. The UAV mission resembles the scenario and the conditions observed in the training dataset. Therefore, we use a circular track with eight equally spaced gates, positioned initially at a radius of 8 m and constant height. To assess the system's robustness to perturbations in the environment, we generate new tracks by adding random noise to each gate's radius and height.

Fig. 3. UAV mission: navigation tracks and scenes from a bird's-eye view, and the view from the UAV perspective. (a) Circular track view without noise (left) and with noise (right). (b) UAV mission scenes.
In the AirSim [32] simulation environment, a track is entirely defined by a set of gates, their poses in three-dimensional space, and the expected navigation direction of the agent. For perception-based navigation, the complexity of a track resides in the “gate-visibility” difficulty [33], [34], i.e., how well the camera field of view (FoV) captures the gate. A natural way to increase track complexity is to add a random displacement to the position of each gate. A track without random gate displacement is circular. Gate position randomness alters the shape of the track, affecting gate visibility: gates may be not visible, partially visible, or multiple gates may be captured in the UAV FoV, as presented in Fig. 3. The images from these scenarios are challenging given their potential impact on system performance.
To measure system performance we take the average number of gates passed over all generated tracks. For track generation we use a random seed to produce circular tracks with two levels of noise in the gate offsets, i.e., each random seed generates two (reproducible) noisy tracks. In total, we use 6 random seeds to produce 12 tracks, 6 tracks per noise level. The two noise levels are a combination of Gate Radius Noise (GRN) and Gate Height Noise (GHN). Finally, all navigation models are tested on the same generated tracks for a fair comparison, and each model has 3 trials per track.
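A hypothetical reconstruction of the track-generation step (the paper does not publish this code); gate placement in AirSim coordinates and the base-height handling are assumptions:

import numpy as np

def noisy_track(seed, n_gates=8, radius=8.0, grn=(-1.0, 1.0), ghn=(0.0, 2.0)):
    """Reproducible noisy circular track: perturb each gate's radius and height
    with uniform noise matching the GRN/GHN ranges reported in Table II."""
    rng = np.random.default_rng(seed)
    angles = np.linspace(0.0, 2.0 * np.pi, n_gates, endpoint=False)
    radii = radius + rng.uniform(*grn, size=n_gates)
    heights = rng.uniform(*ghn, size=n_gates)   # offset from the base gate height
    return np.stack([radii * np.cos(angles), radii * np.sin(angles), heights], axis=1)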
To address RQ3, we perform a qualitative comparison of the components' predicted densities using scenes (images) from challenging situations during the UAV mission. To this end, we first use the images from the generated sub-datasets. Next, we use the scenes from Fig. 3b as inputs to the Bayesian navigation model M0 to analyze the effect of uncertainty propagation under specific situations.
TABLE II
UNCERTAINTY-AWARE NAVIGATION MODELS: ECE & AVG. NUMBER OF GATES PASSED

                 Performance with Track Gate Noise (↑)
Model  ECE (↓)   GRN ∼ U[−1.0, 1.0),    GRN ∼ U[−1.5, 1.5),
                 GHN ∼ U[0, 2.0)        GHN ∼ U[0, 3.0)
M0     0.00700   19.77                  9.22
M1     0.00129   17.67                  6.0
M2     0.00136   17.33                  4.0
M3     0.05709   8.33                   5.0
M4     0.00050   15.16                  4.38
C. Results

Table II summarizes the ECE for all the navigation models using the validation dataset from the control component. M4 has the best uncertainty quality, since the model learned to predict the noise in the data using the heteroscedastic loss function. On the contrary, M3 has the worst calibration results, caused by the deterministic control choice and its inability to learn the data uncertainty. M1 and M2 have similar values, since both receive the same noisy encoding from perception. However, M1 takes multiple samples from the noisy perception encoding, which reduces the ECE value. Finally, M0 shows a higher ECE value compared to the previous models. This is caused by applying MCD in the perception CMVAE and the resulting dispersion of the latent codes at the output of the perception encoder qΦ. The uncertainty quality of the downstream control is slightly affected because the control component did not see the same dispersion (uncertainty) of the perception encoding during training.
For RQ2, Table II presents the navigation performance results for all the navigation models. In general, learning to predict uncertainty in the control component can boost performance significantly. However, for M3, sampling from a noisy perception representation adds sufficient diversity to the downstream control predictions, resulting in better decisions than M2 in the tracks with the higher noise level. For M4, the good performance suggests that the track noise observed at test time (lower noise level) resembles the data noise observed during the training of the single probabilistic model.

In the case of M0, the diversity from the perception prediction samples improves the performance. Interestingly, the performance difference with respect to the other models is not significant. This observation may raise the question of whether uncertainty estimation is needed along the whole pipeline. Nonetheless, we believe that the performance similarity is rooted in how we use our model predictions and uncertainties. The control output is computed by taking the mean and variance of the policy ensemble mixture, and only the mean values are passed to the UAV control. However, the multimodal predictions in Fig. 5 show that admitting perception uncertainty (samples) at the input of the control component permits the representation of ambiguity in the predictions. Hence, proper use of the predictions and associated uncertainties is needed. For example, for a bi-modal predictive distribution at the output, we can use the modes (i.e., the distribution peaks) instead of the expected value to avoid sub-optimal control decisions (e.g., near distribution valleys).

Fig. 4. Navigation model standard deviation (σ̂) prediction comparison: (a) visible gate sub-dataset; (b) no visible gate sub-dataset; (c) multiple gates visible sub-dataset.
In the context of RQ3, Fig. 4 shows the estimated uncertainty densities (σ̂) for each velocity command at the output of the system, using the images from the generated sub-datasets. In this case, M0 yields higher uncertainty estimates while reducing the dispersion within the sub-dataset of each situation. Fig. 5 shows M0's predictions at the output of the perception (z) and control (µ̂, σ̂) components. The predictions are made using the three sample images from Fig. 3b, using the LVS and CPS to estimate the densities.

M0's perception and control outputs show high uncertainty (dispersion) values when a gate is not visible (mid-right). The µ̂ẏ density suggests that the UAV control predictions follow the bias of the training dataset (Dc), rotating clockwise and moving to the right when no gate is in front. Interestingly, the predicted densities in the bottom plots show that M0 is able to represent the ambiguity in the input, i.e., the sample image with two gates: the predicted densities have a multimodal distribution (two peaks) for the µ̂ẏ and σ̂ẏ commands. Further, the predicted densities for the latent vector z show that the uncertainty at the perception output differs for each type of sample, which is suitable for the early detection of anomalies based on uncertainty information. In addition, detecting multimodality in the prediction distributions can help express situations where decisions must be made.

Fig. 5. Bayesian navigation model M0: perception qΦ predicted z density (left); predicted velocity µ̂ density (mid); predicted velocity σ̂ density (right). (a) Single gate prediction densities. (b) No visible gate prediction densities. (c) Double gate prediction densities.
D. Dynamic Dependability Management Using Uncertainty from DNN-Based Systems

Based on the results and observations in the previous subsections, uncertainty propagation through a DNN-based system can impact downstream component predictions and their performance. Thus, using uncertainty information to improve system dependability or safety can be a challenging task. For example, building monitoring functions based on uncertainty information is not simple: the uncertainty intervals we observed for different situations present overlaps that can lead to false-positive or false-negative verdicts. Moreover, the multimodal nature of some predictions under specific conditions or scenes demands knowledge of multiple intervals around the monitored uncertainty value. Therefore, dependable and safe automated systems require more than a simple composition of predicates around some confidence measures.
- pose to align with previous frameworks that leverage percep-
620
- tion uncertainty (cf. subsection II-C). However existing frame-
621
- works for system dependability do not consider the impact
622
- of uncertainty propagation in uncertainty-aware systems. To
623
- overcome these new challenges, we propose to capture and
624
- use uncertainty beyond perception and consider as well the
625
- uncertainty from downstream components along the navigation
626
- pipeline, as presented in Fig.6 1 . Our approach for dynamic
627
- dependability management takes inspiration from [35] and
628
- focuses on safety. Therefore, we propose an architecture for
629
- dynamic risk assessment and management where we devise
630
- three functional blocks, as shown in Fig. 6 2 : Monitoring
631
- functions, risk estimation and behavior arbitration modules.
632
1) Monitoring Functions: Monitoring is a widely known dependability technique for runtime verification, intended to track system variables (e.g., component inputs and outputs). In the automotive domain, SOTIF and ISO 26262 suggest the use of monitoring functions as a solution for error detection in hardware and software components [36]. Monitoring functions are designed using a set of rules, based on a model of the system and its environment and on the properties they should guarantee. Hence, monitors basically perform a binary classification task to check whether a property holds or not.

Designing monitoring functions for ML components is different, given the probabilistic nature of the outputs and the difficulty of specifying the component behavior at design time. For ML-based components in general, typical monitoring function tasks include Out-of-Distribution (OoD) or Out-of-Boundary (OoB) detection, and they can be implemented with rules, data-driven methods, or a mix of both.
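As a toy illustration only, a rule-based uncertainty monitor could look like the following sketch; the threshold and the mixture-variance input are hypothetical:

def uncertainty_monitor(mix_var, threshold=0.5):
    """Binary verdict: flag the scene when the predictive variance of any
    velocity command exceeds a calibrated threshold."""
    return bool((mix_var > threshold).any())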
- Predicted velocity (m/s)or (deg/s)Fig. 6. Runtime risk assessment & management framework
813
- function tasks include Out-of-Distribution (OoD) detection or
814
- Out-of-Boundary (OoB) detection and can be implemented
815
- with rules, data-driven methods or a mix of both.
816
2) Probabilistic Inference for Risk Assessment: To enable dynamic uncertainty-aware reasoning and provide context to risk estimates, we propose to use Bayesian networks. Following the methodology described in [24], BNs for risk assessment and safety can be constructed using a combination of expert domain knowledge and data. The experts provide a model of the causal relations, with support from traditional dependability analysis (e.g., fault tree analysis), to build the BN structure, while system data is used to provide the conditional probabilities between random variables.

In our framework, the BN of the system can receive the predictions from the components in the pipeline (probability distributions) and the verdicts from the monitoring functions applied to system sensors, component predictions, and relevant environmental variables. The output of the BN is represented by all the critical events identified by the experts. Hence, during inference, the BN estimates the probability of a critical event, which is used along with its severity to compute the system's risk at runtime [37]. Although we focus on risk assessment, in general the output of BNs can be any assurance measure variable linked to dependability attributes [10]. Further, the BN should handle uncertain evidence [38] to preserve the probabilistic nature of component and monitor predictions.
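To make the risk computation concrete, a minimal sketch following the probability-times-severity notion from [37]; the event probabilities are assumed to come from BN inference, which is abstracted away here:

def runtime_risk(event_probs, severities):
    """Risk as the severity-weighted probability of the identified critical
    events, e.g., event_probs = {"gate_collision": 0.08, "track_exit": 0.02}."""
    return sum(p * severities[event] for event, p in event_probs.items())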
3) Behavior Arbitration: The last building block in our framework aims at keeping the system in a safe state by accepting or discarding navigation pipeline predictions. Safe decisions must be made in the presence of high risk values in a given context, caused by erroneous component predictions or their associated uncertainties and by external environmental variables. To this end, we propose using behavior trees (BTs) to adopt different system behaviors when facing high-risk situations. BTs are sophisticated, modular decision-making engines for reactive and fault-tolerant task execution [39]. Compositions of BTs can preserve safety and robustness properties [40] and are widely adopted tools in robotics. In the context of our system, we can have a dedicated behavior to search for a gate when we detect that there are no gates in the UAV FoV. This behavior puts the system back into a state where the levels of uncertainty do not represent a risk.
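The paper gives no implementation, but a fallback-style composition can be sketched in plain Python: run the nominal policy and switch to a hypothetical gate-search behavior when the risk assessment flags a high-risk scene:

def arbitrate(scene, high_risk, nominal_policy, search_behavior):
    """Fallback node of a behavior tree: run the nominal navigation policy
    unless the risk assessment reports a high-risk scene, in which case
    switch to the dedicated gate-search behavior."""
    if high_risk:                      # e.g., no gate in the UAV FoV
        return search_behavior(scene)
    return nominal_policy(scene)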
VI. CONCLUSION

We presented a method to capture and propagate uncertainty along a navigation pipeline implemented with Bayesian deep learning components for UAV aerial navigation. We analyzed the effect of uncertainty propagation on system component predictions and performance. Our experiments show that our approach to capturing and propagating uncertainty along the system can provide valuable predictions for decision-making and for identifying situations that are critical for the system. However, proper use and management of component predictions and uncertainty estimates are needed to create dependable and highly performant systems. In this sense, and based on our observations, we also proposed a framework for system dependability management that uses system uncertainty and focuses on safety. In future work, we aim to implement our proposed dependability framework and explore sampling-free methods [41] for uncertainty estimation to reduce the computational budget and memory footprint of our approach.
ACKNOWLEDGMENT

This work has received funding from the COMP4DRONES project, under ECSEL Joint Undertaking (JU) grant agreement N°826610. The ECSEL JU receives support from the European Union's Horizon 2020 research and innovation programme and from Spain, Austria, Belgium, Czech Republic, France, Italy, Latvia, and the Netherlands.
REFERENCES

[1] S. Grigorescu, B. Trasnea, T. Cocias, and G. Macesanu, “A survey of deep learning techniques for autonomous driving,” Journal of Field Robotics, vol. 37, no. 3, pp. 362–386, 2020.
[2] R. McAllister, Y. Gal, A. Kendall, M. Van Der Wilk, A. Shah, R. Cipolla, and A. Weller, “Concrete problems for autonomous vehicle safety: Advantages of Bayesian deep learning,” in Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence, Inc., 2017.
[3] R. Michelmore, M. Kwiatkowska, and Y. Gal, “Evaluating uncertainty quantification in end-to-end autonomous driving control,” arXiv preprint arXiv:1811.06817, 2018.
[4] F. Nozarian, C. Müller, and P. Slusallek, “Uncertainty quantification and calibration of imitation learning policy in autonomous driving,” in TAILOR, 2020, pp. 146–162.
[5] E. Ohn-Bar, A. Prakash, A. Behl, K. Chitta, and A. Geiger, “Learning situational driving,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11296–11305.
[6] F. Arnez, H. Espinoza, A. Radermacher, and F. Terrier, “Improving robustness of deep neural networks for aerial navigation by incorporating input uncertainty,” in International Conference on Computer Safety, Reliability, and Security. Springer, 2021, pp. 219–225.
[7] B. Ivanovic, K.-H. Lee, P. Tokmakov, B. Wulfe, R. McAllister, A. Gaidon, and M. Pavone, “Heterogeneous-agent trajectory forecasting incorporating class uncertainty,” arXiv preprint arXiv:2104.12446, 2021.
[8] S. Casas, C. Gulino, S. Suo, K. Luo, R. Liao, and R. Urtasun, “Implicit latent variable model for scene-consistent motion forecasting,” in Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIII. Springer, 2020, pp. 624–641.
[9] M. Henne, A. Schwaiger, and G. Weiss, “Managing uncertainty of AI-based perception for autonomous systems,” in AISafety@IJCAI, 2019.
[10] E. Asaadi, E. Denney, and G. Pai, “Quantifying assurance in learning-enabled systems,” in International Conference on Computer Safety, Reliability, and Security. Springer, 2020, pp. 270–286.
[11] A. Lavin, C. M. Gilligan-Lee, A. Visnjic, S. Ganju, D. Newman, S. Ganguly, D. Lange, A. G. Baydin, A. Sharma, A. Gibson et al., “Technology readiness levels for machine learning systems,” arXiv preprint arXiv:2101.03989, 2021.
[12] Y. Gal and Z. Ghahramani, “Dropout as a Bayesian approximation: Representing model uncertainty in deep learning,” in International Conference on Machine Learning, 2016, pp. 1050–1059.
[13] Y. Gal, J. Hron, and A. Kendall, “Concrete dropout,” in Advances in Neural Information Processing Systems, 2017, pp. 3581–3590.
[14] B. Lakshminarayanan, A. Pritzel, and C. Blundell, “Simple and scalable predictive uncertainty estimation using deep ensembles,” in Advances in Neural Information Processing Systems, 2017, pp. 6402–6413.
[15] F. K. Gustafsson, M. Danelljan, and T. B. Schön, “Evaluating scalable Bayesian deep learning methods for robust computer vision,” arXiv preprint arXiv:1906.01620, 2019.
[16] A. Kendall and Y. Gal, “What uncertainties do we need in Bayesian deep learning for computer vision?” in Advances in Neural Information Processing Systems, 2017, pp. 5574–5584.
[17] A. Loquercio, M. Segu, and D. Scaramuzza, “A general framework for uncertainty estimation in deep learning,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 3153–3160, 2020.
[18] S. Depeweg, J. Hernández-Lobato, F. Doshi-Velez, and S. Udluft, “Learning and policy search in stochastic dynamical systems with Bayesian neural networks,” in 5th International Conference on Learning Representations, ICLR 2017 – Conference Track Proceedings, 2017.
[19] S. Depeweg, J.-M. Hernandez-Lobato, F. Doshi-Velez, and S. Udluft, “Decomposition of uncertainty in Bayesian deep learning for efficient and risk-sensitive learning,” in International Conference on Machine Learning. PMLR, 2018, pp. 1184–1193.
[20] B. Lütjens, M. Everett, and J. P. How, “Safe reinforcement learning with model uncertainty estimates,” in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 8662–8668.
[21] T. Fan, P. Long, W. Liu, J. Pan, R. Yang, and D. Manocha, “Learning resilient behaviors for navigation under uncertainty,” in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 5299–5305.
[22] R. Bonatti, R. Madaan, V. Vineet, S. Scherer, and A. Kapoor, “Learning visuomotor policies for aerial navigation using cross-modal representations,” arXiv preprint arXiv:1909.06993, 2019.
[23] M. Trapp, D. Schneider, and G. Weiss, “Towards safety-awareness and dynamic safety management,” in 2018 14th European Dependable Computing Conference (EDCC). IEEE, 2018, pp. 107–111.
[24] S. Kabir and Y. Papadopoulos, “Applications of Bayesian networks and Petri nets in safety, reliability, and risk assessments: A review,” Safety Science, vol. 115, pp. 154–175, 2019.
[25] J. Reich, M. Wellstein, I. Sorokos, F. Oboril, and K.-U. Scholl, “Towards a software component to perform situation-aware dynamic risk assessment for autonomous vehicles,” in Dependable Computing–EDCC 2021 Workshops: DREAMS, DSOGRI, SERENE 2021, Munich, Germany, September 13, 2021, Proceedings. Springer Nature, 2021, p. 3.
[26] A. Spurr, J. Song, S. Park, and O. Hilliges, “Cross-modal deep variational hand pose estimation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 89–98.
[27] D. P. Kingma and M. Welling, “Auto-encoding variational Bayes,” arXiv preprint arXiv:1312.6114, 2013.
[28] A. Loquercio, A. I. Maqueda, C. R. Del-Blanco, and D. Scaramuzza, “Dronet: Learning to fly by driving,” IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 1088–1095, 2018.
[29] E. Daxberger and J. M. Hernández-Lobato, “Bayesian variational autoencoders for unsupervised out-of-distribution detection,” arXiv preprint arXiv:1912.05651, 2019.
[30] A. Jesson, S. Mindermann, U. Shalit, and Y. Gal, “Identifying causal-effect inference failure with uncertainty-aware models,” Advances in Neural Information Processing Systems, vol. 33, pp. 11637–11649, 2020.
[31] V. Kuleshov, N. Fenner, and S. Ermon, “Accurate uncertainties for deep learning using calibrated regression,” arXiv preprint arXiv:1807.00263, 2018.
[32] S. Shah, D. Dey, C. Lovett, and A. Kapoor, “AirSim: High-fidelity visual and physical simulation for autonomous vehicles,” in Field and Service Robotics. Springer, 2018, pp. 621–635.
[33] R. Madaan, N. Gyde, S. Vemprala, M. Brown, K. Nagami, T. Taubner, E. Cristofalo, D. Scaramuzza, M. Schwager, and A. Kapoor, “AirSim drone racing lab,” arXiv preprint arXiv:2003.05654, 2020.
[34] Y. Song, M. Steinweg, E. Kaufmann, and D. Scaramuzza, “Autonomous drone racing with deep reinforcement learning,” in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021, pp. 1205–1212.
[35] P. Moosbrugger, K. Y. Rozier, and J. Schumann, “R2U2: Monitoring and diagnosis of security threats for unmanned aerial systems,” Formal Methods in System Design, vol. 51, no. 1, pp. 31–61, 2017.
[36] S. Mohseni, M. Pitale, V. Singh, and Z. Wang, “Practical solutions for machine learning safety in autonomous vehicles,” arXiv preprint arXiv:1912.09630, 2019.
[37] J. Eggert, “Risk estimation for driving support and behavior planning in intelligent vehicles,” at-Automatisierungstechnik, vol. 66, no. 2, pp. 119–131, 2018.
[38] A. B. Mrad, V. Delcroix, S. Piechowiak, P. Leicester, and M. Abid, “An explication of uncertain evidence in Bayesian networks: Likelihood evidence and probabilistic evidence,” Applied Intelligence, vol. 43, no. 4, pp. 802–824, 2015.
[39] M. Colledanchise and L. Natale, “On the implementation of behavior trees in robotics,” IEEE Robotics and Automation Letters, vol. 6, no. 3, pp. 5929–5936, 2021.
[40] M. Colledanchise and P. Ögren, “How behavior trees modularize robustness and safety in hybrid systems,” in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2014, pp. 1482–1488.
[41] B. Charpentier, O. Borchert, D. Zügner, S. Geisler, and S. Günnemann, “Natural posterior network: Deep Bayesian predictive uncertainty for exponential family distributions,” arXiv preprint arXiv:2105.04471, 2021.
knowledge_base/7dE4T4oBgHgl3EQf2Q2c/content/tmp_files/load_file.txt DELETED
The diff for this file is too large to render. See raw diff
 
knowledge_base/7dE4T4oBgHgl3EQf2Q2c/vector_store/index.faiss DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:5f0e796f813dff3147f30dcf7b1a2a215c861cf939a3ea5e8c3454d462feb851
-size 2949165

knowledge_base/7dE4T4oBgHgl3EQf2Q2c/vector_store/index.pkl DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:be8a5013faccdd81061f3c8f73f926d2610b93bff75439220031b3032a7c9b77
-size 111673

knowledge_base/99FJT4oBgHgl3EQfpCw2/content/2301.11598v1.pdf DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:3909851f657db119e0c1d90718d9b4ae49d9a8602af2ce63c86108b8784323f5
-size 1654502

knowledge_base/99FJT4oBgHgl3EQfpCw2/content/tmp_files/2301.11598v1.pdf.txt DELETED
@@ -1,1960 +0,0 @@
arXiv:2301.11598v1 [math.NA] 27 Jan 2023

Practical Sketching Algorithms for Low-Rank Tucker Approximation of Large Tensors

Wandi Dong1, Gaohang Yu1*, Liqun Qi2,1,3 and Xiaohao Cai4

1 Department of Mathematics, Hangzhou Dianzi University, Hangzhou, 310018, China.
2 Huawei Theory Research Lab, Hong Kong, China.
3 Department of Applied Mathematics, Hong Kong Polytechnic University, Hong Kong, China.
4 School of Electronics and Computer Science, University of Southampton, Southampton, SO17 1BJ, UK.

*Corresponding author(s). E-mail(s): [email protected];
Contributing authors: [email protected];
Abstract

Low-rank approximation of tensors has been widely used in high-dimensional data analysis. It usually involves the singular value decomposition (SVD) of large-scale matrices with high computational complexity. Sketching is an effective data compression and dimensionality reduction technique applied to the low-rank approximation of large matrices. This paper presents two practical randomized algorithms for the low-rank Tucker approximation of large tensors based on sketching and the power scheme, with a rigorous error-bound analysis. Numerical experiments on synthetic and real-world tensor data demonstrate the competitive performance of the proposed algorithms.

Keywords: tensor sketching, randomized algorithm, Tucker decomposition, subspace power iteration, high-dimensional data

MSC Classification: 68W20, 15A18, 15A69
1 Introduction

In practical applications, high-dimensional data, such as color images, hyperspectral images, and videos, often exhibit a low-rank structure. Low-rank approximation of tensors has become a general tool for compressing and approximating high-dimensional data and has been widely used in scientific computing, machine learning, signal/image processing, data mining, and many other fields [1]. The classical low-rank tensor factorization models include, e.g., Canonical Polyadic (CP) decomposition [2, 3], Tucker decomposition [4–6], Hierarchical Tucker (HT) [7, 8], and Tensor Train (TT) decomposition [9]. This paper focuses on low-rank Tucker decomposition, also known as the low multilinear rank approximation of tensors. When the target rank of the Tucker decomposition is much smaller than the original dimensions, it has good compression performance. For a given Nth-order tensor X ∈ R^{I1×I2×...×IN}, the low-rank Tucker decomposition can be formulated as the following optimization problem:

    min_Y ∥X − Y∥F²,    (1)

where Y ∈ R^{I1×I2×...×IN} with rank(Y(n)) ≤ rn for n = 1, 2, ..., N, Y(n) is the mode-n unfolding matrix of Y, and rn is the rank of the mode-n unfolding matrix of X.
For the Tucker approximation of higher-order tensors, the most frequently used non-iterative algorithms are the improved algorithms for the higher-order singular value decomposition (HOSVD) [5]: the truncated higher-order SVD (THOSVD) [10] and the sequentially truncated higher-order SVD (STHOSVD) [11]. Although the results of THOSVD and STHOSVD are usually sub-optimal, they can be used as reasonable initial solutions for iterative methods such as higher-order orthogonal iteration (HOOI) [10]. However, both algorithms rely directly on the SVD when computing the singular vectors of intermediate matrices, requiring large memory and high computational complexity when the tensors are large.

Strikingly, randomized algorithms can reduce the communication among different levels of memory and are parallelizable. In recent years, many scholars have become increasingly interested in randomized algorithms for finding approximate Tucker decompositions of large-scale data tensors [12–17, 19, 20]. For example, Zhou et al. [12] proposed a randomized version of the HOOI algorithm for Tucker decomposition. Che and Wei [13] proposed an adaptive randomized algorithm to solve for the multilinear rank of tensors. Minster et al. [14] designed randomized versions of the THOSVD and STHOSVD algorithms, e.g., R-STHOSVD. Sun et al. [17] presented a single-pass randomized algorithm to compute the low-rank Tucker approximation of tensors based on a practical matrix sketching algorithm for streaming data; see also [18] for more details. Regarding more randomized algorithms proposed for Tucker decomposition, please refer to [15, 16, 19, 20] for a detailed review of randomized algorithms for solving the Tucker decomposition of tensors in recent years, involving, e.g., random projection, sampling, count-sketch, random least-squares, single-pass, and multi-pass algorithms.
This paper presents two efficient randomized algorithms for finding the low-rank Tucker approximation of tensors, i.e., Sketch-STHOSVD and sub-Sketch-STHOSVD, summarized in Algorithms 6 and 8, respectively. The main contributions of this paper are threefold. Firstly, we propose a new one-pass sketching algorithm (i.e., Algorithm 6) for low-rank Tucker approximation, which can significantly improve the computational efficiency of STHOSVD. Secondly, we present a new matrix sketching algorithm (i.e., Algorithm 7) by combining the two-sided sketching algorithm proposed by Tropp et al. [18] with subspace power iteration. Algorithm 7 can accurately and efficiently compute the low-rank approximation of large-scale matrices. Thirdly, the proposed Algorithm 8 can deliver a more accurate Tucker approximation than simpler randomized algorithms by leveraging subspace power iteration. More importantly, sub-Sketch-STHOSVD converges quickly for any data tensor, independently of singular value gaps.
The rest of this paper is organized as follows. Section 2 briefly introduces some basic notations, definitions, and tensor-matrix operations used in this paper, and recalls some classical algorithms for low-rank Tucker approximation, including THOSVD, STHOSVD, and R-STHOSVD. Our proposed two-sided sketching algorithm for STHOSVD is given in Section 3. In Section 4, we present an improved algorithm with subspace power iteration. The effectiveness of the proposed algorithms is validated thoroughly in Section 5 by numerical experiments on synthetic and real-world data tensors. We conclude in Section 6.
2 Preliminary

2.1 Notations and basic operations

Some common symbols used in this paper are shown in Table 1.

Table 1 Common symbols used in this paper.

Symbol    Meaning
a         scalar
A         matrix
X         tensor
X(n)      mode-n unfolding matrix of X
×n        mode-n product of a tensor and a matrix
In        identity matrix of size n × n
σi(A)     the ith largest singular value of A
A⊤        transpose of A
A†        pseudo-inverse of A
We denote an Nth-order tensor as X ∈ R^{I1×I2×...×IN}, with entries x_{i1,i2,...,iN}, 1 ≤ in ≤ In, n = 1, 2, ..., N. The Frobenius norm of X is defined as

    ∥X∥F = ( Σ_{i1,i2,...,iN} x²_{i1,i2,...,iN} )^{1/2}.
The mode-n tensor-matrix multiplication is a frequently encountered operation in tensor computation. The mode-n product of a tensor X ∈ R^{I1×I2×...×IN} by a matrix A ∈ R^{K×In} (with entries a_{k,in}) is denoted by Y = X ×n A ∈ R^{I1×...×In−1×K×In+1×...×IN}, with entries

    y_{i1,...,in−1,k,in+1,...,iN} = Σ_{in=1}^{In} x_{i1,...,in−1,in,in+1,...,iN} a_{k,in}.
- xi1,...,in−1,in,in+1,...,iNak,in.
155
- The mode-n matricization of higher-order tensors is the reordering of ten-
156
- sor elements into a matrix. The columns of mode-n unfolding matrix X(n) ∈
157
- RIn×(�
158
- N̸=n IN ) are the mode-n fibers of X. More specifically, a element
159
- (i1, i2, ..., iN) of X is maps on a element (in, j) of X(n), where
160
- j = 1 +
161
- N
162
-
163
- k=1,k̸=n
164
- [(ik − 1)
165
- k−1
166
-
167
- m=1,m̸=n
168
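A minimal NumPy sketch of the mode-n unfolding and the mode-n product (hypothetical helper names; libraries such as TensorLy provide equivalents). Note that the column ordering of an unfolding differs across references; any consistent ordering works for the mode-n product, since fold simply inverts unfold:

import numpy as np

def unfold(X, n):
    """Mode-n unfolding: the mode-n fibers of X become the columns."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: rebuild the tensor from its mode-n unfolding."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full), 0, n)

def mode_n_product(X, A, n):
    """Y = X ×n A, computed via the matrix identity Y(n) = A @ X(n)."""
    shape = list(X.shape)
    shape[n] = A.shape[0]
    return fold(A @ unfold(X, n), n, tuple(shape))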
- Im].
169
- Let the rank of mode-n unfolding matrix X(n) is rn, the integer array
170
- (r1, r2, ..., rN) is Tucker-rank of Nth-order tensor X, also known as the mul-
171
- tilinear rank. The Tucker decomposition of X with rank (r1, r2, ..., rN) is
172
- expressed as
173
- X = G ×1 U (1) ×2 U (2) . . . ×N U (N),
174
- (2)
175
- where G ∈ Rr1×r2×...×rN is the core tensor, and {U (n)}N
176
- n=1 with U (n) ∈ RIn×rn
177
- is the mode-n factor matrices. The graphical illustration of Tucker decom-
178
- position for a third-order tensor shows in Figure 1. We denote an optimal
179
- rank-(r1, r2, ..., rN) approximation of a tensor X as ˆ
180
- Xopt, which is the optimal
181
- Tucker approximation by solving the minimization problem in (1). Below we
182
- Fig. 1 Tucker decomposition of a third-order tensor.
183
- present the definitions of some concepts used in this paper.
184
-
185
- B
186
- 9
187
- 3
188
- 2
189
- ASketching Algorithms for Low-Rank Tucker Approximation
190
- 5
191
- Definition 1 (Kronecker products) The Kronecker product of matrices A ∈ Rm×n
192
- and B ∈ Rk×l is defined as
193
- A ⊗ B =
194
-
195
- 
196
- a11B
197
- a12B
198
- ... a1nB
199
- a21B
200
- a22B
201
- ... a2nB
202
- :
203
- :
204
- ...
205
- :
206
- am1B am2B ... amnB
207
-
208
-  ∈ Rmk×nl.
209
- The Kronecker product helps express Tucker decomposition. The Tucker
210
- decomposition in (2) implies
211
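Reusing the hypothetical helpers above, the Tucker reconstruction in (2) can be sketched as successive mode-n products:

def tucker_to_tensor(G, factors):
    """Reconstruct X = G ×1 U(1) ×2 U(2) ... ×N U(N) from the core tensor
    and the factor matrices by successive mode-n products."""
    X = G
    for n, U in enumerate(factors):
        X = mode_n_product(X, U, n)
    return X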
- X(n) = U (n)G(n)(U (N) ⊗ ... ⊗ U (n+1) ⊗ U (n−1) ⊗ ... ⊗ U (1))⊤.
212
- Definition 2 (Standard normal matrix) The elements of a standard normal matrix
213
- follow the real standard normal distribution (i.e., Gaussian with mean zero and
214
- variance one) form an independent family of standard normal random variables.
215
- Definition 3 (Standard Gaussian tensor) The elements of a standard Gaussian
216
- tensor follow the standard Gaussian distribution.
217
- Definition 4 (Tail energy) The jth tail energy of a matrix X is defined as
218
- τ 2
219
- j (X) :=
220
- min
221
- rank(Y )<j ∥X − Y ∥2
222
- F =
223
-
224
- i≥j
225
- σ2
226
- i (X).
227
2.2 Truncated higher-order SVD

Since the actual Tucker rank of a large-scale higher-order tensor is hard to compute, the truncated Tucker decomposition with a pre-determined truncation (r1, r2, ..., rN) is widely used in practice. THOSVD is a popular approach for computing the truncated Tucker approximation, also known as the best low multilinear rank-(r1, r2, ..., rN) approximation, which reads
$$\min_{\mathcal{G};\, U^{(1)}, U^{(2)}, \ldots, U^{(N)}} \|\mathcal{X} - \mathcal{G} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_N U^{(N)}\|_F^2 \quad \text{s.t.} \quad U^{(n)\top} U^{(n)} = I_{r_n},\; n \in \{1, 2, \ldots, N\}.$$
Algorithm 1 THOSVD
Require: tensor X ∈ R^{I1×I2×...×IN} and target rank (r1, r2, ..., rN)
Ensure: Tucker approximation X̂ = G ×1 U^{(1)} ×2 U^{(2)} ··· ×N U^{(N)}
1: for n = 1, 2, ..., N do
2:   (U^{(n)}, ∼, ∼) ← truncatedSVD(X(n), rn)
3: end for
4: G ← X ×1 U^{(1)⊤} ×2 U^{(2)⊤} ··· ×N U^{(N)⊤}
Algorithm 1 summarizes the THOSVD approach, in which each mode is processed individually. The low-rank factor matrices of the mode-n unfolding matrix X(n) are computed through the truncated SVD, i.e.,
$$X_{(n)} = \begin{bmatrix} U^{(n)} & \tilde{U}^{(n)} \end{bmatrix} \begin{bmatrix} S^{(n)} & \\ & \tilde{S}^{(n)} \end{bmatrix} \begin{bmatrix} V^{(n)\top} \\ \tilde{V}^{(n)\top} \end{bmatrix} \cong U^{(n)} S^{(n)} V^{(n)\top},$$
where U^{(n)} S^{(n)} V^{(n)⊤} is a rank-rn approximation of X(n), the orthogonal matrix U^{(n)} ∈ R^{In×rn} is the mode-n factor matrix of X in the Tucker decomposition, S^{(n)} ∈ R^{rn×rn}, and V^{(n)} ∈ R^{I1...In−1In+1...IN×rn}. Once all factor matrices have been computed, the core tensor G can be computed as
$$\mathcal{G} = \mathcal{X} \times_1 U^{(1)\top} \times_2 U^{(2)\top} \cdots \times_N U^{(N)\top} \in \mathbb{R}^{r_1 \times r_2 \times \ldots \times r_N}.$$
Then the Tucker approximation X̂ of X can be computed as
$$\hat{\mathcal{X}} = \mathcal{G} \times_1 U^{(1)} \times_2 U^{(2)} \cdots \times_N U^{(N)} = \mathcal{X} \times_1 (U^{(1)} U^{(1)\top}) \times_2 (U^{(2)} U^{(2)\top}) \cdots \times_N (U^{(N)} U^{(N)\top}).$$
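As a concrete illustration of Algorithm 1, here is a minimal NumPy sketch of THOSVD built on the unfold and mode_n_product helpers introduced in Section 2.1; truncatedSVD is realized via a full thin SVD for simplicity, which is adequate for small examples but not for large-scale tensors.

```python
import numpy as np

def thosvd(X, ranks):
    """Minimal THOSVD (Algorithm 1): factor matrices from truncated SVDs of
    the unfoldings of X, then the core by multilinear projection."""
    N = X.ndim
    U = []
    for n in range(N):
        # Leading r_n left singular vectors of the mode-n unfolding.
        Un, _, _ = np.linalg.svd(unfold(X, n), full_matrices=False)
        U.append(Un[:, :ranks[n]])
    G = X
    for n in range(N):
        G = mode_n_product(G, U[n].T, n)  # G = X x_1 U1^T ... x_N UN^T
    return G, U

# Usage: form the rank-(2,2,2) Tucker approximation of X.
X = np.random.randn(10, 12, 14)
G, U = thosvd(X, (2, 2, 2))
Xhat = G
for n in range(3):
    Xhat = mode_n_product(Xhat, U[n], n)
```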
With the notation $\Delta_n^2(\mathcal{X}) \triangleq \sum_{i=r_n+1}^{I_n} \sigma_i^2(X_{(n)})$ and $\Delta_n^2(\mathcal{X}) \le \|\mathcal{X} - \hat{\mathcal{X}}_{\mathrm{opt}}\|_F^2$ [14], the error bound for Algorithm 1 can be stated as the following Theorem 1.
Theorem 1 ([11], Theorem 5.1) Let X̂ = G ×1 U^{(1)} ×2 U^{(2)} ··· ×N U^{(N)} be the low multilinear rank-(r1, r2, ..., rN) approximation of a tensor X ∈ R^{I1×I2×...×IN} by THOSVD. Then
$$\|\mathcal{X} - \hat{\mathcal{X}}\|_F^2 \le \sum_{n=1}^{N} \|\mathcal{X} \times_n (I_{I_n} - U^{(n)} U^{(n)\top})\|_F^2 = \sum_{n=1}^{N} \sum_{i=r_n+1}^{I_n} \sigma_i^2(X_{(n)}) = \sum_{n=1}^{N} \Delta_n^2(\mathcal{X}) \le N \|\mathcal{X} - \hat{\mathcal{X}}_{\mathrm{opt}}\|_F^2.$$
2.3 Sequentially truncated higher-order SVD

Vannieuwenhoven et al. [11] proposed a more efficient and less computationally complex approach for computing an approximate Tucker decomposition of tensors, called STHOSVD. Unlike the THOSVD algorithm, STHOSVD updates the core tensor immediately after each factor matrix is computed.

Given the target rank (r1, r2, ..., rN) and a processing order sp : {1, 2, ..., N}, the minimization problem (1) can be formulated as the following optimization problem
$$\begin{aligned}
&\min_{U^{(1)},\ldots,U^{(N)}} \|\mathcal{X} - \mathcal{X} \times_1 (U^{(1)}U^{(1)\top}) \times_2 (U^{(2)}U^{(2)\top}) \cdots \times_N (U^{(N)}U^{(N)\top})\|_F^2 \\
&= \min_{U^{(1)},\ldots,U^{(N)}} \Bigl( \|\mathcal{X} \times_1 (I_1 - U^{(1)}U^{(1)\top})\|_F^2 + \|\hat{\mathcal{X}}^{(1)} \times_2 (I_2 - U^{(2)}U^{(2)\top})\|_F^2 + \cdots + \|\hat{\mathcal{X}}^{(N-1)} \times_N (I_N - U^{(N)}U^{(N)\top})\|_F^2 \Bigr) \\
&= \min_{U^{(1)}} \Bigl( \|\mathcal{X} \times_1 (I_1 - U^{(1)}U^{(1)\top})\|_F^2 + \min_{U^{(2)}} \bigl( \|\hat{\mathcal{X}}^{(1)} \times_2 (I_2 - U^{(2)}U^{(2)\top})\|_F^2 + \min_{U^{(3)}} \bigl( \cdots + \min_{U^{(N)}} \|\hat{\mathcal{X}}^{(N-1)} \times_N (I_N - U^{(N)}U^{(N)\top})\|_F^2 \bigr) \bigr) \Bigr), \qquad (3)
\end{aligned}$$
where X̂^{(n)} = X ×1 (U^{(1)}U^{(1)⊤}) ×2 (U^{(2)}U^{(2)⊤}) ··· ×n (U^{(n)}U^{(n)⊤}), n = 1, 2, ..., N − 1, denote the intermediate approximation tensors.
- (3)
348
- where
349
- ˆ
350
- X (n) = X ×1 (U (1)U (1)⊤) ×2 (U (2)U (2)⊤) · · · ×n (U (n)U (n)⊤), n =
351
- 1, 2, ..., N − 1, denote the intermediate approximation tensors.
352
- Algorithm 2 STHOSVD
353
- Require: tensor X ∈ RI1×I2×...×IN , target rank (r1, r2, . . . , rN), and process-
354
- ing order sp : {i1, i2, . . . , iN}
355
- Ensure: Tucker approximation ˆ
356
- X = G ×1 U (1) ×2 U (2) . . . ×N U (N)
357
- 1: G ← X
358
- 2: for n = i1, i2, . . . , iN do
359
- 3:
360
- (U (n), S(n), V (n)⊤) ← truncatedSVD(G(n), rn)
361
- 4:
362
- G ← foldn(S(n)V (n)⊤) (% forming the updated tensor from its mode-n
363
- unfolding)
364
- 5: end for
365
In Algorithm 2, the solution U^{(n)} of problem (3) can be obtained via truncatedSVD(G(n), rn), where G(n) is the mode-n unfolding matrix of the (n−1)-th intermediate core tensor G = X ×_{i=1}^{n−1} U^{(i)⊤} ∈ R^{r1×r2×...×rn−1×In×...×IN}, i.e.,
$$G_{(n)} = \begin{bmatrix} U^{(n)} & \tilde{U}^{(n)} \end{bmatrix} \begin{bmatrix} S^{(n)} & \\ & \tilde{S}^{(n)} \end{bmatrix} \begin{bmatrix} V^{(n)\top} \\ \tilde{V}^{(n)\top} \end{bmatrix} \cong U^{(n)} S^{(n)} V^{(n)\top},$$
where the orthogonal matrix U^{(n)} is the mode-n factor matrix, and S^{(n)} V^{(n)⊤} ∈ R^{rn×r1...rn−1In+1...IN} is used to update the n-th intermediate core tensor G. The function foldn(S^{(n)} V^{(n)⊤}) tensorizes the matrix S^{(n)} V^{(n)⊤} into the tensor G ∈ R^{r1×r2×...×rn×In+1×...×IN}. When the target rank rn is much smaller than In, the updated intermediate core tensor G is much smaller than the original tensor, which can significantly improve computational performance. The STHOSVD algorithm possesses the following error bound.
Theorem 2 ([11], Theorem 6.5) Let X̂ = G ×1 U^{(1)} ×2 U^{(2)} ··· ×N U^{(N)} be the low multilinear rank-(r1, r2, ..., rN) approximation of a tensor X ∈ R^{I1×I2×...×IN} by STHOSVD with processing order sp : {1, 2, ..., N}. Then
$$\|\mathcal{X} - \hat{\mathcal{X}}\|_F^2 = \sum_{n=1}^{N} \|\hat{\mathcal{X}}^{(n-1)} - \hat{\mathcal{X}}^{(n)}\|_F^2 \le \sum_{n=1}^{N} \|\mathcal{X} \times_n (I_{I_n} - U^{(n)} U^{(n)\top})\|_F^2 = \sum_{n=1}^{N} \Delta_n^2(\mathcal{X}) \le N \|\mathcal{X} - \hat{\mathcal{X}}_{\mathrm{opt}}\|_F^2.$$
- F .
421
- Although STHOSVD has the same error-bound as THOSVD, it is less com-
422
- putationally complex and requires less storage. As shown in Section 5 for the
423
- numerical experiment, the running (CPU) time of the STHOSVD algorithm
424
- is significantly reduced, and the approximation error has slightly better than
425
- that of THOSVD in some cases.
426
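The following NumPy sketch of Algorithm 2 (again reusing the unfold and mode_n_product helpers from Section 2.1) makes the sequential truncation explicit: after each mode is processed, the shrunken core replaces the working tensor. It is an illustrative rendering, not the authors' implementation.

```python
import numpy as np

def sthosvd(X, ranks, order=None):
    """Minimal STHOSVD (Algorithm 2): truncate one mode at a time and
    immediately shrink the working core tensor."""
    N = X.ndim
    order = range(N) if order is None else order
    G = X.copy()
    U = [None] * N
    for n in order:
        Un, _, _ = np.linalg.svd(unfold(G, n), full_matrices=False)
        U[n] = Un[:, :ranks[n]]
        # fold_n(S V^T) is equivalent to projecting mode n of the working
        # core onto U[n], shrinking that mode from its current size to r_n.
        G = mode_n_product(G, U[n].T, n)
    return G, U
```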
2.4 Randomized STHOSVD

When the dimensions of a data tensor are enormous, the computational cost of the classical deterministic truncated SVD for finding a low-rank approximation of each mode-n unfolding matrix can be expensive. Randomized low-rank matrix algorithms replace the original large-scale matrix with a new one through a preprocessing step. The new matrix contains as much information as possible about the rows or columns of the original data matrix, while its size is smaller than that of the original matrix, allowing the data to be processed efficiently and thus reducing the memory requirements for computing low-rank approximations of large matrices.
Algorithm 3 R-SVD
Require: matrix A ∈ R^{m×n}, target rank r, and oversampling parameter p ≥ 0
Ensure: low-rank approximation Â = Û Ŝ V̂⊤ of A
1: Ω ← randn(n, r + p)
2: Y ← AΩ
3: (Q, ∼) ← thinQR(Y)
4: B ← Q⊤A
5: (U, S, V⊤) ← thinSVD(B)
6: Û ← QU(:, 1 : r)
7: Ŝ ← S(1 : r, 1 : r), V̂ ← V(:, 1 : r)
N. Halko et al. [21] proposed a randomized SVD (R-SVD) for matrices. The preprocessing stage of the algorithm right-multiplies the original data matrix A ∈ R^{m×n} with a random Gaussian matrix Ω ∈ R^{n×r}. Each column of the resulting new matrix Y = AΩ ∈ R^{m×r} is a linear combination of the columns of the original data matrix. When r < n, the matrix Y is smaller than A. The oversampling technique can improve the accuracy of the solution. The subsequent computations are summarized in Algorithm 3, where randn generates a Gaussian random matrix, thinQR produces an economy-size QR decomposition, and thinSVD is the thin SVD. When A is dense, the arithmetic cost of Algorithm 3 is O(2(r + p)mn + r²(m + n)) flops, where p > 0 is the oversampling parameter satisfying r + p ≤ min{m, n}.
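A direct NumPy transcription of Algorithm 3 is short; the sketch below follows the steps literally (thinQR and thinSVD become the economy-size np.linalg.qr and np.linalg.svd) and is meant purely as an illustration of the procedure.

```python
import numpy as np

def r_svd(A, r, p=5, rng=None):
    """Randomized SVD (Algorithm 3): range finding with an (r+p)-column
    Gaussian test matrix, then a small deterministic SVD."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[1]
    Omega = rng.standard_normal((n, r + p))           # step 1
    Y = A @ Omega                                     # step 2: sample the range of A
    Q, _ = np.linalg.qr(Y)                            # step 3: thin QR
    B = Q.T @ A                                       # step 4: small (r+p) x n matrix
    U, s, Vt = np.linalg.svd(B, full_matrices=False)  # step 5: thin SVD
    return Q @ U[:, :r], s[:r], Vt[:r, :]             # steps 6-7: truncate to rank r
```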
Algorithm 3 is an efficient randomized algorithm for computing rank-r approximations of matrices. Minster et al. [14] applied Algorithm 3 directly within the STHOSVD algorithm and thereby presented a randomized version of STHOSVD (i.e., R-STHOSVD); see Algorithm 4.
Algorithm 4 R-STHOSVD
Require: tensor X ∈ R^{I1×I2×...×IN}, target rank (r1, r2, ..., rN), processing order sp : {i1, i2, ..., iN}, and oversampling parameter p ≥ 0
Ensure: Tucker approximation X̂ = G ×1 U^{(1)} ×2 U^{(2)} ··· ×N U^{(N)}
1: G ← X
2: for n = i1, i2, ..., iN do
3:   (Û, Ŝ, V̂⊤) ← R-SVD(G(n), rn, p) (cf. Algorithm 3)
4:   U^{(n)} ← Û
5:   G ← foldn(Ŝ V̂⊤)
6: end for
3 Sketching algorithm for STHOSVD

A drawback of the R-SVD algorithm is that when both dimensions of the intermediate matrices are enormous, the computational cost can still be high. To resolve this problem, we resort to the two-sided sketching algorithm for low-rank matrix approximation proposed by Tropp et al. [22]. The preprocessing of the sketching algorithm needs two sketch matrices that contain information about the rows and columns of the input matrix A ∈ R^{m×n}. Thus we choose two sketch size parameters k and l such that r ≤ k ≤ min{l, n} and 0 < l ≤ m. The random matrices Ω ∈ R^{n×k} and Ψ ∈ R^{l×m} are fixed independent standard normal matrices. Multiplying the matrix A on the right and left, respectively, yields the random sketch matrices Y ∈ R^{m×k} and W ∈ R^{l×n}, which collect sufficient information about the input matrix to compute the low-rank approximation. The dimensions and distribution of the random sketch matrices determine the approximation's potential accuracy, with larger values of k and l resulting in better approximations but also requiring more storage and computational cost.
The sketching algorithm for low-rank approximation is given in Algorithm 5. The function orth(A) in Step 2 produces an orthonormal basis of A. Using orthogonalized matrices achieves smaller errors and better numerical stability than directly using the randomly generated Gaussian matrices. In particular, when A is dense, the arithmetic cost of Algorithm 5 is O((k + l)mn + kl(m + n)) flops. Algorithm 5 is simple, practical, and possesses the sub-optimal error bound stated in Theorem 3 below, in which the function f(s, t) := s/(t − s − 1) (t > s + 1 > 1). The minimum in Theorem 3 reveals that the low-rank approximation of a given matrix A automatically exploits the decay of the tail energy.

Algorithm 5 Sketch for low-rank approximation
Require: matrix A ∈ R^{m×n}, and sketch size parameters k, l
Ensure: rank-k approximation Â = QX of A
1: Ω ← randn(n, k), Ψ ← randn(l, m)
2: Ω ← orth(Ω), Ψ⊤ ← orth(Ψ⊤)
3: Y ← AΩ
4: W ← ΨA
5: (Q, ∼) ← thinQR(Y)
6: X ← (ΨQ)†W
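Algorithm 5 can be rendered in a few lines of NumPy; the sketch below is an illustrative transcription in which orth is realized by a thin QR factorization and the pseudo-inverse solve (ΨQ)†W is performed with np.linalg.lstsq for numerical stability.

```python
import numpy as np

def two_sided_sketch(A, k, l, rng=None):
    """Two-sided sketch (Algorithm 5): returns Q (m x k) and X (k x n)
    such that A is approximated by Q @ X."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    Omega, _ = np.linalg.qr(rng.standard_normal((n, k)))  # orthonormal test matrix
    Psi = np.linalg.qr(rng.standard_normal((m, l)))[0].T  # orth(Psi^T), then transpose
    Y = A @ Omega                # range sketch, m x k
    W = Psi @ A                  # co-range sketch, l x n
    Q, _ = np.linalg.qr(Y)       # orthonormal basis of the range sketch
    X, *_ = np.linalg.lstsq(Psi @ Q, W, rcond=None)  # X = (Psi Q)^dagger W
    return Q, X
```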
Theorem 3 ([22], Theorem 4.3) Assume that the sketch size parameters satisfy l > k + 1, and draw random test matrices Ω ∈ R^{n×k} and Ψ ∈ R^{l×m} independently from the standard normal distribution. Then the rank-k approximation Â obtained from Algorithm 5 satisfies
$$\mathbb{E}\,\|A - \hat{A}\|_F^2 \le (1 + f(k, l)) \cdot \min_{\varrho < k-1} (1 + f(\varrho, k)) \cdot \tau_{\varrho+1}^2(A) = \frac{k}{l-k-1} \cdot \min_{\varrho < k-1} \frac{k}{k-\varrho-1} \cdot \tau_{\varrho+1}^2(A).$$
Using the two-sided sketching algorithm to improve the STHOSVD algorithm, we propose a practical sketching algorithm for STHOSVD named Sketch-STHOSVD. We summarize the procedure of the Sketch-STHOSVD algorithm in Algorithm 6, with its error analysis stated in Theorem 4.
Algorithm 6 Sketch-STHOSVD
Require: tensor X ∈ R^{I1×I2×...×IN}, target rank (r1, r2, ..., rN), processing order sp : {i1, i2, ..., iN}, and sketch size parameters {l1, l2, ..., lN}
Ensure: Tucker approximation X̂ = G ×1 U^{(1)} ×2 U^{(2)} ··· ×N U^{(N)}
1: G ← X
2: for n = i1, i2, ..., iN do
3:   (Q, X) ← Sketch(G(n), rn, ln) (cf. Algorithm 5)
4:   U^{(n)} ← Q
5:   G ← foldn(X)
6: end for
Theorem 4 Let X̂ = G ×1 U^{(1)} ×2 U^{(2)} ··· ×N U^{(N)} be the Tucker approximation of a tensor X ∈ R^{I1×I2×...×IN} by the Sketch-STHOSVD algorithm (i.e., Algorithm 6) with target ranks rn < In, n = 1, 2, ..., N, sketch size parameters {l1, l2, ..., lN}, and processing order sp : {1, 2, ..., N}. Then
$$\mathbb{E}_{\{\Omega_j\}_{j=1}^N} \|\mathcal{X} - \tilde{\mathcal{X}}\|_F^2 \le \sum_{n=1}^{N} \frac{r_n}{l_n - r_n - 1} \min_{\varrho_n < r_n - 1} \frac{r_n}{r_n - \varrho_n - 1}\, \Delta_n^2(\mathcal{X}) \le \sum_{n=1}^{N} \frac{r_n}{l_n - r_n - 1} \min_{\varrho_n < r_n - 1} \frac{r_n}{r_n - \varrho_n - 1}\, \|\mathcal{X} - \hat{\mathcal{X}}_{\mathrm{opt}}\|_F^2.$$
Proof Combining Theorem 2 and Theorem 3, we have
$$\begin{aligned}
\mathbb{E}_{\{\Omega_j\}_{j=1}^N} \|\mathcal{X} - \tilde{\mathcal{X}}\|_F^2
&= \sum_{n=1}^{N} \mathbb{E}_{\{\Omega_j\}_{j=1}^N} \|\hat{\mathcal{X}}^{(n-1)} - \hat{\mathcal{X}}^{(n)}\|_F^2 \\
&= \sum_{n=1}^{N} \mathbb{E}_{\{\Omega_j\}_{j=1}^{n-1}} \Bigl[ \mathbb{E}_{\Omega_n} \|\hat{\mathcal{X}}^{(n-1)} - \hat{\mathcal{X}}^{(n)}\|_F^2 \Bigr] \\
&= \sum_{n=1}^{N} \mathbb{E}_{\{\Omega_j\}_{j=1}^{n-1}} \Bigl[ \mathbb{E}_{\Omega_n} \|\mathcal{G}^{(n-1)} \times_{i=1}^{n-1} U^{(i)} \times_n (I - U^{(n)} U^{(n)\top})\|_F^2 \Bigr] \\
&\le \sum_{n=1}^{N} \mathbb{E}_{\{\Omega_j\}_{j=1}^{n-1}} \Bigl[ \mathbb{E}_{\Omega_n} \|(I - U^{(n)} U^{(n)\top})\, G^{(n-1)}_{(n)}\|_F^2 \Bigr] \\
&\le \sum_{n=1}^{N} \mathbb{E}_{\{\Omega_j\}_{j=1}^{n-1}} \frac{r_n}{l_n - r_n - 1} \min_{\varrho_n < r_n - 1} \frac{r_n}{r_n - \varrho_n - 1} \sum_{i=r_n+1}^{I_n} \sigma_i^2\bigl(G^{(n-1)}_{(n)}\bigr) \\
&\le \sum_{n=1}^{N} \mathbb{E}_{\{\Omega_j\}_{j=1}^{n-1}} \frac{r_n}{l_n - r_n - 1} \min_{\varrho_n < r_n - 1} \frac{r_n}{r_n - \varrho_n - 1}\, \Delta_n^2(\mathcal{X}) \\
&= \sum_{n=1}^{N} \frac{r_n}{l_n - r_n - 1} \min_{\varrho_n < r_n - 1} \frac{r_n}{r_n - \varrho_n - 1}\, \Delta_n^2(\mathcal{X}) \\
&\le \sum_{n=1}^{N} \frac{r_n}{l_n - r_n - 1} \min_{\varrho_n < r_n - 1} \frac{r_n}{r_n - \varrho_n - 1}\, \|\mathcal{X} - \hat{\mathcal{X}}_{\mathrm{opt}}\|_F^2. \qquad \square
\end{aligned}$$
We assume the processing order for the STHOSVD, R-STHOSVD, and Sketch-STHOSVD algorithms is sp : {1, 2, ..., N}. Table 2 summarizes the arithmetic cost of the different algorithms for a general higher-order tensor X ∈ R^{I1×I2×...×IN} with target rank (r1, r2, ..., rN) and for the special cubic tensor X ∈ R^{I×I×...×I} with target rank (r, r, ..., r). Here the tensors are dense and the target ranks satisfy rj ≪ Ij, j = 1, 2, ..., N.
Table 2 Arithmetic cost of the algorithms THOSVD, STHOSVD, R-STHOSVD, and the proposed Sketch-STHOSVD, where $r_{a:b} = \prod_{i=a}^{b} r_i$ and $I_{a:b} = \prod_{i=a}^{b} I_i$.

    Algorithm         X ∈ R^{I1×I2×...×IN}                                                          X ∈ R^{I×I×...×I}
    THOSVD            $O(\sum_{j=1}^{N} I_j I_{1:N} + \sum_{j=1}^{N} r_{1:j} I_{j:N})$               $O(N I^{N+1} + \sum_{j=1}^{N} r^j I^{N-j+1})$
    STHOSVD           $O(\sum_{j=1}^{N} I_j r_{1:j-1} I_{j:N} + \sum_{j=1}^{N} r_{1:j} I_{j+1:N})$   $O(\sum_{j=1}^{N} (r^{j-1} I^{N-j+2} + r^j I^{N-j}))$
    R-STHOSVD         $O(\sum_{j=1}^{N} r_{1:j} I_{j:N} + \sum_{j=1}^{N} r_{1:j} I_{j+1:N})$         $O(\sum_{j=1}^{N} (r^j I^{N-j+1} + r^j I^{N-j}))$
    Sketch-STHOSVD    $O(\sum_{j=1}^{N} r_j l_j (I_j + r_{1:j-1} I_{j+1:N}) + \sum_{j=1}^{N} r_{1:j} I_{j+1:N})$   $O(\sum_{j=1}^{N} (r l (I + r^{j-1} I^{N-j}) + r^j I^{N-j}))$
4 Sketching algorithm with subspace power iteration

When the size of the original matrix is very large or the singular spectrum of the original matrix decays slowly, Algorithm 5 may produce a poor basis in many applications. Inspired by [23], we suggest using the power iteration technique to enhance the sketching algorithm by replacing A with (AA⊤)^q A, where q is a positive integer. From the SVD of the matrix A, i.e., A = USV⊤, we know that (AA⊤)^q A = U S^{2q+1} V⊤. It can be seen that A and (AA⊤)^q A have the same left and right singular vectors, but the latter has a faster decay of singular values, making its tail energy much smaller.
Algorithm 7 Sketching algorithm with subspace power iteration (sub-Sketch)
Require: matrix A ∈ R^{m×n}, sketch size parameters k, l, and integer q > 0
Ensure: rank-k approximation Â = QX of A
1: Ω ← randn(n, k), Ψ ← randn(l, m)
2: Ω ← orth(Ω), Ψ⊤ ← orth(Ψ⊤)
3: Y = AΩ, W = ΨA
4: Q0 ← thinQR(Y)
5: for j = 1, ..., q do
6:   Ŷj = A⊤ Q_{j−1}
7:   (Q̂j, ∼) ← thinQR(Ŷj)
8:   Yj = A Q̂j
9:   (Qj, ∼) ← thinQR(Yj)
10: end for
11: Q = Qq
12: X ← (ΨQ)†W
Although power iteration can improve the accuracy of Algorithm 5 to some extent, it still suffers from a problem: during the execution of the power iteration, rounding errors eliminate the information about the singular modes associated with the small singular values. To address this issue, we propose an improved sketching algorithm that orthonormalizes the columns of the sample matrix between each application of A and A⊤; see Algorithm 7. When A is dense, the arithmetic cost of Algorithm 7 is O((q + 1)(k + l)mn + kl(m + n)) flops. Numerical experiments show that a good approximation can be achieved with a choice of 1 or 2 for the subspace power iteration parameter [21].
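A NumPy sketch of Algorithm 7 follows; it extends the hypothetical two_sided_sketch routine above with the orthonormalized power iteration (Steps 5-10), and is again illustrative rather than the authors' code.

```python
import numpy as np

def sub_sketch(A, k, l, q=1, rng=None):
    """Two-sided sketch with subspace power iteration (Algorithm 7)."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    Omega, _ = np.linalg.qr(rng.standard_normal((n, k)))
    Psi = np.linalg.qr(rng.standard_normal((m, l)))[0].T
    Y, W = A @ Omega, Psi @ A
    Q, _ = np.linalg.qr(Y)
    for _ in range(q):
        # Re-orthonormalize between each application of A^T and A so that
        # rounding errors do not wash out the small singular modes.
        Qhat, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Qhat)
    X, *_ = np.linalg.lstsq(Psi @ Q, W, rcond=None)  # X = (Psi Q)^dagger W
    return Q, X
```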
Algorithm 8 sub-Sketch-STHOSVD
Require: tensor X ∈ R^{I1×I2×...×IN}, target rank (r1, r2, ..., rN), processing order sp : {i1, i2, ..., iN}, sketch size parameters {l1, l2, ..., lN}, and integer q > 0
Ensure: Tucker approximation X̂ = G ×1 U^{(1)} ×2 U^{(2)} ··· ×N U^{(N)}
1: G ← X
2: for n = i1, i2, ..., iN do
3:   (Q, X) ← sub-Sketch(G(n), rn, ln, q) (cf. Algorithm 7)
4:   U^{(n)} ← Q
5:   G ← foldn(X)
6: end for
Using Algorithm 7 to compute the low-rank approximations of the intermediate matrices, we obtain an improved sketching algorithm for STHOSVD, called sub-Sketch-STHOSVD; see Algorithm 8. The error bound for Algorithm 8 is stated in the following Theorem 5. Its proof is deferred to the Appendix.
Theorem 5 Let X̂ = G ×1 U^{(1)} ×2 U^{(2)} ··· ×N U^{(N)} be the Tucker approximation of a tensor X ∈ R^{I1×I2×...×IN} obtained by the sub-Sketch-STHOSVD algorithm (i.e., Algorithm 8) with target ranks rn < In, n = 1, 2, ..., N, sketch size parameters {l1, l2, ..., lN}, and processing order sp : {1, 2, ..., N}. Let $\varpi_k \equiv \sigma_{k+1}/\sigma_k$ denote the singular value gap. Then
$$\mathbb{E}_{\{\Omega_j\}_{j=1}^N} \|\mathcal{X} - \tilde{\mathcal{X}}\|_F^2 \le \sum_{n=1}^{N} (1 + f(r_n, l_n)) \cdot \min_{\varrho_n < r_n - 1} \bigl(1 + f(\varrho_n, r_n)\, \varpi_{r_n}^{4q}\bigr) \cdot \tau_{\varrho_n+1}^2(X_{(n)}) \le \sum_{n=1}^{N} (1 + f(r_n, l_n)) \cdot \min_{\varrho_n < r_n - 1} \bigl(1 + f(\varrho_n, r_n)\, \varpi_{r_n}^{4q}\bigr) \|\mathcal{X} - \hat{\mathcal{X}}_{\mathrm{opt}}\|_F^2.$$

Proof See Appendix.
5 Numerical experiments

This section conducts numerical experiments with synthetic and real-world data, including comparisons between the traditional THOSVD and STHOSVD algorithms, the R-STHOSVD algorithm proposed in [14], and our proposed algorithms Sketch-STHOSVD and sub-Sketch-STHOSVD. Regarding the numerical settings, the oversampling parameter p = 5 is used in Algorithm 3, the sketch parameters ln = rn + 2, n = 1, 2, ..., N, are used in Algorithms 6 and 8, and the power iteration parameter q = 1 is used in Algorithm 8.
5.1 Hilbert tensor

The Hilbert tensor is a synthetic and supersymmetric tensor, with each entry defined as
$$\mathcal{X}_{i_1 i_2 \ldots i_N} = \frac{1}{i_1 + i_2 + \cdots + i_N}, \quad 1 \le i_n \le I_n,\; n = 1, 2, \ldots, N.$$
In the first experiment, we set N = 5 and In = 25, n = 1, 2, ..., N. The target rank is chosen as (r, r, r, r, r), where r ∈ [1, 25]. Due to the supersymmetry of the Hilbert tensor, the processing order in the algorithms does not affect the final experimental results, and thus the processing order can be directly chosen as sp : {1, 2, 3, 4, 5}.
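For reference, the order-5 Hilbert test tensor can be generated in a few lines; the sketch below uses NumPy broadcasting and is only meant to document the construction.

```python
import numpy as np

def hilbert_tensor(I, N):
    """Entries 1 / (i_1 + ... + i_N) with 1-based indices i_n in {1, ..., I}."""
    idx = np.arange(1, I + 1)
    grids = np.meshgrid(*([idx] * N), indexing="ij")
    return 1.0 / sum(grids)

X = hilbert_tensor(25, 5)  # 25 x 25 x 25 x 25 x 25 test tensor
```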
Fig. 2 Results comparison on the Hilbert tensor with a size of 25 × 25 × 25 × 25 × 25 in terms of numerical error (left) and CPU time (right), plotted against the target rank for THOSVD, STHOSVD, R-STHOSVD, Sketch-STHOSVD, and sub-Sketch-STHOSVD.
The results of the different algorithms are given in Figure 2. It shows that our proposed algorithms (i.e., Sketch-STHOSVD and sub-Sketch-STHOSVD) and the R-STHOSVD algorithm outperform THOSVD and STHOSVD. In particular, the error of the proposed Sketch-STHOSVD and sub-Sketch-STHOSVD algorithms is comparable to that of R-STHOSVD (see the left plot in Figure 2), while both use less CPU time than R-STHOSVD (see the right plot in Figure 2). This result demonstrates the excellent performance of the proposed algorithms and indicates that the two-sided sketching method and the subspace power iteration used in our algorithms can indeed improve the performance of the STHOSVD algorithm.
For a large-scale test, we use a Hilbert tensor with a size of 500 × 500 × 500 and conduct experiments with ten different approximate multilinear ranks. We perform the tests ten times and report the algorithms' average running time and relative error in Table 3 and Table 4, respectively. The results show that the randomized algorithms can achieve higher accuracy than the deterministic algorithms. The proposed Sketch-STHOSVD algorithm is the fastest, and the sub-Sketch-STHOSVD algorithm achieves the highest accuracy efficiently.
Table 3 Results comparison in terms of the CPU time (in seconds) on the Hilbert tensor with a size of 500 × 500 × 500 as the target rank increases.

    Target rank      THOSVD   STHOSVD   R-STHOSVD   Sketch-STHOSVD   sub-Sketch-STHOSVD
    (10,10,10)       17.18    7.49      0.92        0.86             0.98
    (20,20,20)       23.13    8.87      1.25        1.05             1.48
    (30,30,30)       24.91    9.35      1.66        1.53             2.16
    (40,40,40)       28.05    10.41     1.94        1.44             2.11
    (50,50,50)       29.44    11.39     2.07        1.67             2.43
    (60,60,60)       30.14    11.07     2.37        1.90             2.77
    (70,70,70)       29.44    11.18     2.57        2.10             3.02
    (80,80,80)       29.65    12.30     3.05        2.54             3.75
    (90,90,90)       31.11    12.80     3.80        2.80             4.33
    (100,100,100)    32.22    13.51     4.04        3.07             4.61
Table 4 Results comparison in terms of the relative error on the Hilbert tensor with a size of 500 × 500 × 500 as the target rank increases.

    Target rank      THOSVD       STHOSVD      R-STHOSVD    Sketch-STHOSVD   sub-Sketch-STHOSVD
    (10,10,10)       2.7354e-06   2.7347e-06   2.7347e-06   1.1178e-05       2.7568e-06
    (20,20,20)       1.1794e-12   1.1793e-12   1.1794e-12   7.1408e-12       1.2677e-12
    (30,30,30)       4.6574e-15   3.2739e-15   3.2201e-15   4.0641e-15       2.0182e-15
    (40,40,40)       4.4282e-15   3.4249e-15   2.8212e-15   2.1562e-15       1.7860e-15
    (50,50,50)       4.1628e-15   3.2342e-15   2.6823e-15   2.3205e-15       1.8625e-15
    (60,60,60)       4.1214e-15   3.1271e-15   2.3652e-15   2.2920e-15       1.7472e-15
    (70,70,70)       4.1085e-15   3.0000e-15   2.1761e-15   2.0499e-15       1.6370e-15
    (80,80,80)       4.0956e-15   3.1350e-15   1.8382e-15   1.8209e-15       1.6424e-15
    (90,90,90)       4.0792e-15   3.3742e-15   1.8102e-15   1.7193e-15       1.5264e-15
    (100,100,100)    4.0390e-15   3.0571e-15   1.7323e-15   1.6304e-15       1.4957e-15
5.2 Sparse tensor

In this experiment, we test the performance of the different algorithms on a sparse tensor X ∈ R^{200×200×200}, i.e.,
$$\mathcal{X} = \sum_{i=1}^{10} \frac{\gamma}{i^2}\, x_i \circ y_i \circ z_i + \sum_{i=11}^{200} \frac{1}{i^2}\, x_i \circ y_i \circ z_i,$$
where xi, yi, zi ∈ R^n are sparse vectors, all generated using the sprand command in MATLAB with 5% nonzeros each, and γ is a user-defined parameter that determines the strength of the gap between the first ten terms and the remaining terms.
- Sketching Algorithms for Low-Rank Tucker Approximation
1087
- 20
1088
- 40
1089
- 60
1090
- 80
1091
- 100
1092
- Target rank
1093
- 10-3
1094
- 10-2
1095
- Relative Error
1096
- THOSVD
1097
- STHOSVD
1098
- R-STHOSVD
1099
- Sketch-STHOSVD
1100
- sub-Sketch-STHOSVD
1101
- 20
1102
- 40
1103
- 60
1104
- 80
1105
- 100
1106
- Target rank
1107
- 10-4
1108
- 10-3
1109
- Relative Error
1110
- THOSVD
1111
- STHOSVD
1112
- R-STHOSVD
1113
- Sketch-STHOSVD
1114
- sub-Sketch-STHOSVD
1115
- 20
1116
- 40
1117
- 60
1118
- 80
1119
- 100
1120
- Target rank
1121
- 10-6
1122
- 10-5
1123
- Relative Error
1124
- THOSVD
1125
- STHOSVD
1126
- R-STHOSVD
1127
- Sketch-STHOSVD
1128
- sub-Sketch-STHOSVD
1129
- 20
1130
- 40
1131
- 60
1132
- 80
1133
- 100
1134
- Target rank
1135
- 10-1
1136
- 100
1137
- Running Time
1138
- THOSVD
1139
- STHOSVD
1140
- R-STHOSVD
1141
- Sketch-STHOSVD
1142
- sub-Sketch-STHOSVD
1143
- 20
1144
- 40
1145
- 60
1146
- 80
1147
- 100
1148
- Target rank
1149
- 10-1
1150
- 100
1151
- Running Time
1152
- THOSVD
1153
- STHOSVD
1154
- R-STHOSVD
1155
- Sketch-STHOSVD
1156
- sub-Sketch-STHOSVD
1157
- 20
1158
- 40
1159
- 60
1160
- 80
1161
- 100
1162
- Target rank
1163
- 10-1
1164
- 100
1165
- Running Time
1166
- THOSVD
1167
- STHOSVD
1168
- R-STHOSVD
1169
- Sketch-STHOSVD
1170
- sub-Sketch-STHOSVD
1171
- Fig. 3 Results comparison on a sparse tensor with a size of 200 × 200 × 200 in terms of
1172
- numerical error (first row) and CPU time (second row).
1173
The target rank is chosen as (r, r, r), where r ∈ [20, 100]. The experimental results are shown in Figure 3, in which three different values γ = 2, 10, 200 are tested. An increase of the gap means that the tail energy is reduced and the accuracy of the algorithms improves; our numerical experiments also verify this.

Figure 3 demonstrates the superiority of the proposed sketching algorithms. In particular, we see that the proposed Sketch-STHOSVD is the fastest algorithm, with an error comparable to that of R-STHOSVD; the proposed sub-Sketch-STHOSVD reaches the same accuracy as the STHOSVD algorithm but in much less CPU time; and the proposed sub-Sketch-STHOSVD achieves a much better low-rank approximation than R-STHOSVD with similar CPU time.
Fig. 4 Results comparison on a 200 × 200 × 200 sparse tensor with noise in terms of numerical error (first row) and CPU time (second row).

Now we consider the influence of noise on the algorithms' performance. Specifically, the sparse tensor X with noise is designed in the same manner as in [24], i.e.,
$$\hat{\mathcal{X}} = \mathcal{X} + \delta \mathcal{K},$$
where K is a standard Gaussian tensor and δ is used to control the noise level. Let δ = 10⁻³ and keep the rest of the parameters the same as the settings
- level. Let δ = 10−3 and keep the rest parameters the same as the settings
1300
- in the previous experiment. The relative error and running time of different
1301
- algorithms are shown in Figure 4. In Figure 4, we see that noise indeed affects
1302
- the accuracy of the low-rank approximation, especially when the gap is small.
1303
- However, the influence of noise does not change the conclusion obtained on
1304
- the case without noise. The accuracy of our sub-Sketch-STHOSVD algorithm
1305
- is the highest among the randomized algorithms. As γ increases, sub-Sketch-
1306
- STHOSVD can achieve almost the same accuracy as that of THOSVD and
1307
5.3 Real-world data tensor

In this experiment, we test the performance of the different algorithms on a colour image, called the HDU picture¹, with a size of 1200 × 1800 × 3. We also evaluate the proposed sketching algorithms on the widely used YUV Video Sequences². Taking the ‘hall monitor’ video as an example and using the first 30 frames, a third-order tensor with a size of 144 × 176 × 30 is formed for this test.

Firstly, we conduct an experiment on the HDU picture with target rank (500, 500, 3) and compare the PSNR and CPU time of the different algorithms. The experimental result is shown in Figure 5, which shows that the PSNR of sub-Sketch-STHOSVD, THOSVD and STHOSVD is very similar (i.e., ∼ 40) and that sub-Sketch-STHOSVD is more efficient in terms of CPU time. R-STHOSVD and Sketch-STHOSVD are also very efficient compared to sub-Sketch-STHOSVD; however, the PSNR they achieve is 5 dB less than that of sub-Sketch-STHOSVD. We then conduct separate numerical experiments on the HDU picture and the ‘hall monitor’ video clip as the target rank increases, and compare these algorithms in terms of the relative error, CPU time and PSNR; see Figure 6 and Figure 7. These experimental results again demonstrate the superiority (i.e., low error and good approximation with high efficiency) of the proposed sub-Sketch-STHOSVD algorithm in computing the Tucker decomposition approximation.
Fig. 5 Results comparison on the HDU picture with a size of 1200 × 1800 × 3 in terms of PSNR (i.e., peak signal-to-noise ratio) and CPU time. The target rank is (500, 500, 3). The two values in, e.g., (2.62; 40.61) represent the CPU time and the PSNR, respectively. Panels: Original; THOSVD (2.62; 40.61); STHOSVD (1.89; 40.65); R-STHOSVD (0.61; 34.72); Sketch-STHOSVD (0.55; 34.63); sub-Sketch-STHOSVD (0.84; 39.97).
In the last experiment, larger-scale real-world tensor data is used. We choose a color image (called the LONDON picture) with a size of 4775 × 7155 × 3 as the test image and consider the influence of noise.

¹ https://www.hdu.edu.cn/landscape
² http://trace.eas.asu.edu/yuv/index.html
Fig. 6 Results comparison on the HDU picture with a size of 1200 × 1800 × 3 in terms of numerical error (left), CPU time (middle) and PSNR (right). The HDU picture is with target rank (r, r, 3), r ∈ [50, 1000].
Fig. 7 Results comparison on the ‘hall monitor’ grey video with a size of 144 × 176 × 30 in terms of numerical error (left), CPU time (middle) and PSNR (right). The ‘hall monitor’ grey video is with target rank (r, r, 10), r ∈ [5, 100].
The LONDON picture with white Gaussian noise is generated using the awgn(X,SNR) built-in function in MATLAB. We set the target rank to (50, 50, 3) and the SNR to 20. The results comparisons without and with white Gaussian noise are respectively shown in Figure 8 and Figure 9 in terms of the CPU time and PSNR. Moreover, we also test the algorithms on the LONDON picture as the target rank increases. The results regarding the relative error, the CPU time and the PSNR are reported in Tables 5, 6 and 7, respectively. On the whole, the results again show the consistent performance of the proposed methods.
Fig. 8 Results comparison on the LONDON picture with a size of 4775 × 7155 × 3 in terms of CPU time and PSNR. The target rank is (50, 50, 3). Panels: Original; THOSVD (154.95; 24.07); STHOSVD (49.34; 24.09); R-STHOSVD (1.29; 21.27); Sketch-STHOSVD (1.17; 21.09); sub-Sketch-STHOSVD (1.29; 23.65).
Fig. 9 Results comparison on the LONDON picture with a size of 4775 × 7155 × 3 and white Gaussian noise in terms of CPU time and PSNR. The target rank is (50, 50, 3). Panels: Noisy picture (PSNR = 16.92); THOSVD (160.59; 20.54); STHOSVD (50.16; 20.54); R-STHOSVD (1.25; 19.37); Sketch-STHOSVD (1.15; 19.25); sub-Sketch-STHOSVD (1.45; 20.45).
In summary, the numerical results show the superiority of the sub-Sketch-STHOSVD algorithm for large-scale tensors with or without noise. We can see that sub-Sketch-STHOSVD achieves approximations close to those of the deterministic algorithms in a time similar to that of the other randomized algorithms.
Table 5 Results comparison in terms of the relative error on the LONDON picture with a size of 4775 × 7155 × 3 as the target rank increases.

    Target rank      THOSVD     STHOSVD    R-STHOSVD   Sketch-STHOSVD   sub-Sketch-STHOSVD
    (10,10,10)       0.019037   0.019025   0.031000    0.040006         0.020756
    (20,20,20)       0.012669   0.012644   0.023467    0.027398         0.013703
    (30,30,30)       0.010168   0.010124   0.018354    0.020451         0.010965
    (40,40,40)       0.008630   0.008599   0.015792    0.017029         0.009443
    (50,50,50)       0.007576   0.007532   0.013917    0.015333         0.008286
    (60,60,60)       0.006778   0.006710   0.012967    0.013589         0.007359
    (70,70,70)       0.006119   0.006049   0.011813    0.011886         0.006687
    (80,80,80)       0.005532   0.005491   0.010658    0.011148         0.006123
    (90,90,90)       0.005076   0.005023   0.010018    0.010378         0.005602
    (100,100,100)    0.004669   0.004619   0.009249    0.009578         0.005172
Table 6 Results comparison in terms of the CPU time (in seconds) on the LONDON picture with a size of 4775 × 7155 × 3 as the target rank increases.

    Target rank      THOSVD   STHOSVD   R-STHOSVD   Sketch-STHOSVD   sub-Sketch-STHOSVD
    (10,10,10)       156.13   49.22     0.94        0.99             1.12
    (20,20,20)       165.22   77.64     1.24        1.48             1.56
    (30,30,30)       241.11   76.57     1.69        1.39             1.69
    (40,40,40)       242.08   74.25     1.57        1.45             1.68
    (50,50,50)       268.71   72.85     1.51        1.45             1.80
    (60,60,60)       265.52   77.80     1.75        1.51             2.26
    (70,70,70)       241.95   77.82     1.93        1.78             2.24
    (80,80,80)       264.86   73.53     1.86        1.74             2.31
    (90,90,90)       274.73   72.67     1.93        1.83             2.16
    (100,100,100)    283.88   86.42     2.24        2.20             2.46
Table 7 Results comparison in terms of the PSNR on the LONDON picture with a size of 4775 × 7155 × 3 as the target rank increases.

    Target rank      THOSVD   STHOSVD   R-STHOSVD   Sketch-STHOSVD   sub-Sketch-STHOSVD
    (10,10,10)       20.06    20.07     17.96       16.86            19.70
    (20,20,20)       21.84    21.84     19.18       18.51            21.50
    (30,30,30)       22.79    22.81     20.25       19.78            22.46
    (40,40,40)       23.50    23.52     20.90       20.57            23.11
    (50,50,50)       24.07    24.09     21.45       21.03            23.68
    (60,60,60)       24.55    24.60     21.76       21.55            24.20
    (70,70,70)       25.00    25.05     22.16       22.13            24.61
    (80,80,80)       25.43    25.47     22.61       22.41            25.00
    (90,90,90)       25.81    25.85     22.87       22.72            25.38
    (100,100,100)    26.17    26.22     23.22       23.07            25.73
6 Conclusion

In this paper we proposed efficient sketching algorithms, i.e., Sketch-STHOSVD and sub-Sketch-STHOSVD, to calculate the low-rank Tucker approximation of tensors by combining the two-sided sketching technique with the STHOSVD algorithm and using subspace power iteration. A detailed error analysis is also conducted. Numerical results on both synthetic and real-world data tensors demonstrate the competitive performance of the proposed algorithms in comparison to the state-of-the-art algorithms.

Acknowledgements

We would like to thank the anonymous referees for their comments and suggestions on our paper, which led to great improvements of the presentation. G. Yu's work was supported in part by the National Natural Science Foundation of China (No. 12071104) and the Natural Science Foundation of Zhejiang Province (No. LD19A010002).
Appendix

Lemma 1 ([25], Theorem 2) Let ϱ < k − 1 be a positive natural number and Ω ∈ R^{k×n} be a Gaussian random matrix. Suppose Q is obtained from Algorithm 7. Then for all A ∈ R^{m×n}, we have
$$\mathbb{E}_{\Omega} \|A - QQ^{\top}A\|_F^2 \le \bigl(1 + f(\varrho, k)\, \varpi_k^{4q}\bigr) \cdot \tau_{\varrho+1}^2(A). \qquad (4)$$

Lemma 2 ([22], Lemma A.3) Let A ∈ R^{m×n} be an input matrix and Â = QX be the approximation obtained from Algorithm 7. The approximation error can be decomposed as
$$\|A - \hat{A}\|_F^2 = \|A - QQ^{\top}A\|_F^2 + \|X - Q^{\top}A\|_F^2. \qquad (5)$$

Lemma 3 ([22], Lemma A.5) Assume Ψ ∈ R^{l×n} is a standard normal matrix independent of Ω. Then
$$\mathbb{E}_{\Psi} \|X - Q^{\top}A\|_F^2 = f(k, l) \cdot \|A - QQ^{\top}A\|_F^2. \qquad (6)$$

The error bound for Algorithm 7 is given in Lemma 4 below.

Lemma 4 Assume the sketch size parameters satisfy l > k + 1. Draw random test matrices Ω ∈ R^{n×k} and Ψ ∈ R^{l×m} independently from the standard normal distribution. Then the rank-k approximation Â obtained from Algorithm 7 satisfies
$$\mathbb{E}\,\|A - \hat{A}\|_F^2 \le (1 + f(k, l)) \cdot \min_{\varrho < k-1} \bigl(1 + f(\varrho, k)\, \varpi_k^{4q}\bigr) \cdot \tau_{\varrho+1}^2(A).$$

Proof Using equations (4), (5) and (6), we have
$$\begin{aligned}
\mathbb{E}\,\|A - \hat{A}\|_F^2 &= \mathbb{E}_{\Omega}\|A - QQ^{\top}A\|_F^2 + \mathbb{E}_{\Omega}\mathbb{E}_{\Psi}\|X - Q^{\top}A\|_F^2 \\
&= (1 + f(k, l)) \cdot \mathbb{E}_{\Omega}\|A - QQ^{\top}A\|_F^2 \\
&\le (1 + f(k, l)) \cdot \bigl(1 + f(\varrho, k)\, \varpi_k^{4q}\bigr) \cdot \tau_{\varrho+1}^2(A).
\end{aligned}$$
After minimizing over the eligible index ϱ < k − 1, the proof is completed. □
We are now in a position to prove Theorem 5. Combining Theorem 2 and Lemma 4, we have
$$\begin{aligned}
\mathbb{E}_{\{\Omega_j\}_{j=1}^N} \|\mathcal{X} - \tilde{\mathcal{X}}\|_F^2
&= \sum_{n=1}^{N} \mathbb{E}_{\{\Omega_j\}_{j=1}^N} \|\hat{\mathcal{X}}^{(n-1)} - \hat{\mathcal{X}}^{(n)}\|_F^2 \\
&= \sum_{n=1}^{N} \mathbb{E}_{\{\Omega_j\}_{j=1}^{n-1}} \Bigl[ \mathbb{E}_{\Omega_n} \|\hat{\mathcal{X}}^{(n-1)} - \hat{\mathcal{X}}^{(n)}\|_F^2 \Bigr] \\
&= \sum_{n=1}^{N} \mathbb{E}_{\{\Omega_j\}_{j=1}^{n-1}} \Bigl[ \mathbb{E}_{\Omega_n} \|\mathcal{G}^{(n-1)} \times_{i=1}^{n-1} U^{(i)} \times_n (I - U^{(n)} U^{(n)\top})\|_F^2 \Bigr] \\
&\le \sum_{n=1}^{N} \mathbb{E}_{\{\Omega_j\}_{j=1}^{n-1}} \Bigl[ \mathbb{E}_{\Omega_n} \|(I - U^{(n)} U^{(n)\top})\, G^{(n-1)}_{(n)}\|_F^2 \Bigr] \\
&\le \sum_{n=1}^{N} \mathbb{E}_{\{\Omega_j\}_{j=1}^{n-1}} (1 + f(r_n, l_n)) \cdot \min_{\varrho_n < r_n - 1} \bigl(1 + f(\varrho_n, r_n)\, \varpi_{r_n}^{4q}\bigr) \sum_{i=r_n+1}^{I_n} \sigma_i^2\bigl(G^{(n-1)}_{(n)}\bigr) \\
&\le \sum_{n=1}^{N} \mathbb{E}_{\{\Omega_j\}_{j=1}^{n-1}} (1 + f(r_n, l_n)) \cdot \min_{\varrho_n < r_n - 1} \bigl(1 + f(\varrho_n, r_n)\, \varpi_{r_n}^{4q}\bigr) \Delta_n^2(\mathcal{X}) \\
&= \sum_{n=1}^{N} (1 + f(r_n, l_n)) \cdot \min_{\varrho_n < r_n - 1} \bigl(1 + f(\varrho_n, r_n)\, \varpi_{r_n}^{4q}\bigr) \Delta_n^2(\mathcal{X}) \\
&\le \sum_{n=1}^{N} (1 + f(r_n, l_n)) \cdot \min_{\varrho_n < r_n - 1} \bigl(1 + f(\varrho_n, r_n)\, \varpi_{r_n}^{4q}\bigr) \|\mathcal{X} - \hat{\mathcal{X}}_{\mathrm{opt}}\|_F^2,
\end{aligned}$$
which completes the proof of Theorem 5.
References

[1] Comon, P.: Tensors: a brief introduction. IEEE Signal Processing Magazine. 31(3), 44-53 (2014)
[2] Hitchcock, F. L.: Multiple invariants and generalized rank of a p-way matrix or tensor. Journal of Mathematics and Physics. 7(1-4), 39-79 (1928)
[3] Kiers, H. A. L.: Towards a standardized notation and terminology in multiway analysis. Journal of Chemometrics. 14(3), 105-122 (2000)
[4] Tucker, L. R.: Implications of factor analysis of three-way matrices for measurement of change. Problems in Measuring Change. 15, 122-137 (1963)
[5] Tucker, L. R.: Some mathematical notes on three-mode factor analysis. Psychometrika. 31(3), 279-311 (1966)
[6] De Lathauwer, L., De Moor, B., Vandewalle, J.: A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications. 21(4), 1253-1278 (2000)
[7] Hackbusch, W., Kühn, S.: A new scheme for the tensor representation. Journal of Fourier Analysis and Applications. 15(5), 706-722 (2009)
[8] Grasedyck, L.: Hierarchical singular value decomposition of tensors. SIAM Journal on Matrix Analysis and Applications. 31(4), 2029-2054 (2010)
[9] Oseledets, I. V.: Tensor-train decomposition. SIAM Journal on Scientific Computing. 33(5), 2295-2317 (2011)
[10] De Lathauwer, L., De Moor, B., Vandewalle, J.: On the best rank-1 and rank-(r1, r2, ..., rN) approximation of higher-order tensors. SIAM Journal on Matrix Analysis and Applications. 21(4), 1324-1342 (2000)
[11] Vannieuwenhoven, N., Vandebril, R., Meerbergen, K.: A new truncation strategy for the higher-order singular value decomposition. SIAM Journal on Scientific Computing. 34(2), A1027-A1052 (2012)
[12] Zhou, G., Cichocki, A., Xie, S.: Decomposition of big tensors with low multilinear rank. arXiv preprint, arXiv:1412.1885 (2014)
[13] Che, M., Wei, Y.: Randomized algorithms for the approximations of Tucker and the tensor train decompositions. Advances in Computational Mathematics. 45(1), 395-428 (2019)
[14] Minster, R., Saibaba, A. K., Kilmer, M. E.: Randomized algorithms for low-rank tensor decompositions in the Tucker format. SIAM Journal on Mathematics of Data Science. 2(1), 189-215 (2020)
[15] Che, M., Wei, Y., Yan, H.: The computation of low multilinear rank approximations of tensors via power scheme and random projection. SIAM Journal on Matrix Analysis and Applications. 41(2), 605-636 (2020)
[16] Che, M., Wei, Y., Yan, H.: Randomized algorithms for the low multilinear rank approximations of tensors. Journal of Computational and Applied Mathematics. 390(2), 113380 (2021)
[17] Sun, Y., Guo, Y., Luo, C., Tropp, J., Udell, M.: Low-rank Tucker approximation of a tensor from streaming data. SIAM Journal on Mathematics of Data Science. 2(4), 1123-1150 (2020)
[18] Tropp, J. A., Yurtsever, A., Udell, M., Cevher, V.: Streaming low-rank matrix approximation with an application to scientific simulation. SIAM Journal on Scientific Computing. 41(4), A2430-A2463 (2019)
[19] Malik, O. A., Becker, S.: Low-rank Tucker decomposition of large tensors using TensorSketch. Advances in Neural Information Processing Systems. 31, 10116-10126 (2018)
[20] Ahmadi-Asl, S., Abukhovich, S., Asante-Mensah, M. G., Cichocki, A., Phan, A. H., Tanaka, T.: Randomized algorithms for computation of Tucker decomposition and higher order SVD (HOSVD). IEEE Access. 9, 28684-28706 (2021)
[21] Halko, N., Martinsson, P.-G., Tropp, J. A.: Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review. 53(2), 217-288 (2011)
[22] Tropp, J. A., Yurtsever, A., Udell, M., Cevher, V.: Practical sketching algorithms for low-rank matrix approximation. SIAM Journal on Matrix Analysis and Applications. 38(4), 1454-1485 (2017)
[23] Rokhlin, V., Szlam, A., Tygert, M.: A randomized algorithm for principal component analysis. SIAM Journal on Matrix Analysis and Applications. 31(3), 1100-1124 (2009)
[24] Xiao, C., Yang, C., Li, M.: Efficient alternating least squares algorithms for low multilinear rank approximation of tensors. Journal of Scientific Computing. 87(3), 1-25 (2021)
[25] Zhang, J., Saibaba, A. K., Kilmer, M. E., Aeron, S.: A randomized tensor singular value decomposition based on the t-product. Numerical Linear Algebra with Applications. 25(5), e2179 (2018)
knowledge_base/99FJT4oBgHgl3EQfpCw2/content/tmp_files/load_file.txt DELETED
The diff for this file is too large to render. See raw diff
 
knowledge_base/99FJT4oBgHgl3EQfpCw2/vector_store/index.faiss DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:1aa21446e57d0451ddd4a93e627a48cf9e24d88a8141f5ca37cc350c98f48a5f
3
- size 5308461
 
 
 
 
knowledge_base/99FJT4oBgHgl3EQfpCw2/vector_store/index.pkl DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:99160d4abd7469d43569b83852673022f7f391d8e721189d99b48392a28ab396
3
- size 170524
 
 
 
 
knowledge_base/9tAyT4oBgHgl3EQfQ_bX/content/2301.00059v1.pdf DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:82602c2890bcfbdfcfb55514090756f9afa436c88ceca408e5d03bab71f225cd
3
- size 695070
 
 
 
 
knowledge_base/9tAyT4oBgHgl3EQfQ_bX/content/tmp_files/2301.00059v1.pdf.txt DELETED
@@ -1,1576 +0,0 @@
1
- 1
2
-
3
- Describing NMR chemical exchange by effective phase diffusion approach
4
- Guoxing Lin*
5
- Carlson School of Chemistry and Biochemistry, Clark University, Worcester, MA 01610, USA
6
-
7
- *Email: [email protected]
8
-
9
- Abstract
10
- This paper proposes an effective phase diffusion method to analyze chemical exchange in nuclear
11
- magnetic resonance (NMR). The chemical exchange involves spin jumps around different sites where the
12
- spin angular frequencies vary, which leads to a random phase walk viewed from the rotating frame
13
- reference. Therefore, the random walk in phase space can be treated by the effective phase diffusion
14
- method. Both the coupled and uncoupled phase diffusions are considered; additionally, it includes normal
15
- diffusion as well as fractional diffusion. Based on these phase diffusion equations, the line shape of NMR
16
- exchange spectrum can be analyzed. By comparing these theoretical results with the conventional theory,
17
- this phase diffusion approach works for fast exchange, ranging from slightly faster than intermediate
18
- exchange to very fast exchange. For normal diffusion models, the theoretically predicted curves agree
19
- with those predicted from traditional models in the literature, and the characteristic exchange time
20
- obtained from phase diffusion with a fixed jump time is the same as that obtained from the conventional
21
- model. However, the phase diffusion with a monoexponential time distribution gives a characteristic
22
- exchange time constant which is half of that obtained from the traditional model. Additionally, the
23
- fractional diffusion obtains a significantly different line shape than that predicted based on normal
24
- diffusion.
25
- Keywords: NMR, chemical exchange, Mittag-Leffler function, phase diffusion
26
-
27
- 1. Introduction
- Chemical exchange is a powerful nuclear magnetic resonance (NMR) technique for detecting dynamic behavior in biological and polymer systems at the atomic level [1,2,3,4]. Chemical exchange NMR monitors spins jumping among different environmental sites due to changes in conformational or chemical states. In chemical exchange, the angular frequency of the spin precession changes, which results in observable line shape changes in the NMR spectrum. Although chemical exchange is an established tool, theoretical developments are still needed to better understand chemical exchange NMR, particularly for complex systems.
- The characteristic exchange time could follow a complicated distribution. Many theoretical models have been developed to analyze chemical exchange in NMR [3,5,6]. The two-site exchange model based on the modified Bloch equation successfully interprets many NMR line shapes [5]; there, the jump time in the exchange equation is a fixed constant. However, in a real system the exchange time could follow a distribution, such as the exponential function in the Gaussian exchange model reported in [7]. Additionally, a complex exchange time distribution could arise from complicated conformational changes or diffusion-induced exchange, such as xenon diffusing in a heterogeneous system [8]. In a complex system, a monoexponential time distribution may not be sufficient to explain the dynamic behavior. For a complicated system, the time distribution function could be the Mittag-Leffler function (MLF) $E_\alpha(-(t/\tau)^\alpha)$ [9,10], or a stretched exponential function (SEF) $\exp(-(t/\tau)^\alpha)$, where $\alpha$ is the time-fractional derivative order and $\tau$ is the characteristic time. The MLF $E_\alpha(-t^\alpha) = \sum_{n=0}^{\infty} \frac{(-t^\alpha)^n}{\Gamma(n\alpha+1)}$ reduces to an SEF, $\exp\!\left(-\frac{t^\alpha}{\Gamma(1+\alpha)}\right)$, when $t$ is small. The SEF $\exp(-(t/\tau)^\alpha)$ is the same as the Kohlrausch-Williams-Watts (KWW) function [11,12,13,14], a well-known time correlation function in macromolecular systems. The Mittag-Leffler function based distribution is heavy-tailed. The Mittag-Leffler function has been employed to analyze anomalous NMR dynamic processes such as PFG anomalous diffusion [15,16] and anomalous NMR relaxation [17,18,19,20]. Current chemical exchange theories still have difficulty handling these complex distributions.
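- As a concrete illustration of the two time correlation functions discussed above, the following Python sketch (not from the paper; a minimal numerical aid) evaluates the Mittag-Leffler function by its power series and compares it with its small-t stretched-exponential limit. The plain series is only numerically reliable for moderate arguments; production work would use a dedicated MLF algorithm.
- # Mittag-Leffler vs stretched-exponential correlation functions (illustrative sketch).
- import math
-
- def mittag_leffler(x, alpha, n_terms=80):
-     """Approximate E_alpha(-x) by its power series sum_n (-x)^n / Gamma(n*alpha + 1)."""
-     total = 0.0
-     for n in range(n_terms):
-         total += (-x) ** n / math.gamma(n * alpha + 1.0)
-     return total
-
- alpha, tau = 0.75, 1.0   # hypothetical parameter values for illustration
- for t in [0.1, 0.5, 1.0, 2.0]:
-     x = (t / tau) ** alpha
-     mlf = mittag_leffler(x, alpha)
-     sef = math.exp(-x / math.gamma(1.0 + alpha))  # small-t limit of the MLF
-     print(f"t={t:4.1f}  E_alpha={mlf:8.5f}  SEF(small-t)={sef:8.5f}")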
- A phase diffusion method is proposed in this paper to explain NMR chemical exchange. Most current methods are real-space approaches, such as the modified Bloch exchange equations [3,4,6], while the Gaussian exchange model is a phase-space method based on evaluating the accumulated phase variance of the random phase process during the exchange. As the spin phase in chemical exchange undergoes a random walk in the rotating frame of reference, an effective phase diffusion method is employed in this paper to analyze the exchange. Effective phase diffusion has been applied to analyze PFG diffusion and NMR relaxation. It has advantages over traditional methods: it can provide the exact phase distribution, which cannot be obtained by conventional real-space theoretical methods; additionally, the NMR signal can be obtained directly as a vector sum via Fourier transform in phase space, which makes the analysis intuitive and often simplifies the solution process; furthermore, the phase diffusion method can be applied straightforwardly to anomalous dynamic processes based on fractional diffusion [18,21,22,23,24,25]. Both normal and fractional phase diffusion are considered here, where the exchange time can be a simple constant or follow a certain type of distribution.
- Additionally, each individual phase jump length is proportional to the jump time and the angular frequency. Because both the jump time and the angular frequency fluctuate and obey certain types of distributions, the distribution of phase jump lengths can be either strongly or weakly correlated with the jump time distribution. A phase walk with weak phase-time correlation can be treated by uncoupled diffusion, while a strong correlation may require a coupled diffusion model [26]. Traditional uncoupled diffusion has been successful in explaining many transport phenomena; however, it is insufficient to account for the divergence of the second moment of Lévy flight processes [26], for which a coupled diffusion is needed.
- The rest of the paper is organized as follows. Section 2.1 treats the simplest normal diffusion, with a fixed jump time: the obtained exchange time agrees well with the traditional two-site exchange. Phase diffusion with a jump time distribution is presented in Section 2.2. First, the general expressions for phase evolution in chemical exchange are derived in Section 2.2.1; second, normal diffusion is presented in Section 2.2.2, with both uncoupled and coupled diffusion, and it is found that the exchange time constant is half of that of the traditional model and of the fixed-jump-time diffusion result (i.e., the inferred exchange is twice as fast); third, fractional diffusion with an MLF-based jump time distribution is derived in Section 2.2.3, where the uncoupled case is handled by a time-fractional diffusion equation and the coupled case by a coupled random walk model [27]. The results give additional insights into NMR chemical exchange, which could improve the analysis of NMR and magnetic resonance imaging (MRI) experiments, particularly in complicated systems.
- 2. Theory
- Chemical exchange occurs when a spin jumps among different sites where the spin precession frequencies are different [1,2,5]. The precession frequency of the spin moment is proportional to the intensity of the local magnetic field, which is affected by the surrounding electron cloud and nearby spin moments [2]. For simplicity, we consider in this paper only the basic exchange between two sites with equal populations [3,5], neglecting relaxation effects. The average precession angular frequencies of the two sites are arbitrarily set as $\omega_1$ and $\omega_2$, with $\omega_1 < \omega_2$ and
- $\Delta\omega = \omega_2 - \omega_1$.
- If the angular frequency of the rotating frame of reference is set to $(\omega_1+\omega_2)/2$, the two sites have relative angular frequencies $-\omega_0$ and $\omega_0$, respectively, where $\omega_0 = (\omega_2-\omega_1)/2$.
- The phase of a spin undergoing chemical exchange changes during a time interval $\tau$ either by $\omega_0\tau$ or $-\omega_0\tau$, depending on the site. In the traditional exchange equations, the spin always jumps to a site with a different angular frequency after a time interval $\tau$. Here, the choice of the next site is assumed to be random; the angular frequency of the next site could be either the same or different. This assumption may be more realistic: for instance, after a time interval $\tau$, a spin may successfully jump to another site or return to the original site; or, in a heterogeneous system, the spin may move to a similar environment with the same frequency or to a different environment with a different frequency. The random phase jump can be viewed as a random walk process in phase space and analyzed by phase diffusion [16,17]. Both normal and fractional phase diffusion are considered in the following.
- 2.1 Simple normal diffusion with a fixed jump time
- If the jump time interval $\tau$ is a constant, the random phase jumps have jump length $\Delta\phi$ equal to $-\omega_0\tau$ or $\omega_0\tau$. The effective phase diffusion constant $D_{\phi,s}$ for such a simple normal diffusion is obtained by [16]
- $D_{\phi,s} = \frac{\langle(\Delta\phi)^2\rangle}{2\tau} = \frac{(\omega_0\tau)^2}{2\tau} = \frac{\omega_0^2}{2}\tau$,   (1)
- and the normal phase diffusion equation can be written as [16,17]
- $\frac{\partial P(\phi,t)}{\partial t} = D_{\phi,s}\,\Delta P(\phi,t)$,   (2)
- where $\phi$ is the phase and $P(\phi,t)$ is the probability density function of a spin having phase $\phi$ at time $t$. The solution of Eq. (2) is [16]
- $P(\phi,t) = \frac{1}{\sqrt{4\pi D_{\phi,s}t}}\exp\left[-\frac{\phi^2}{4D_{\phi,s}t}\right]$.   (3)
- The total magnetization $M(t)$ is obtained as
- $M(t) = \int_{-\infty}^{\infty} d\phi\, e^{i\phi}P(\phi,t) = \exp(-D_{\phi,s}t)$,   (4)
- which is a time-domain signal. By Fourier transform, we have the frequency-domain signal
- $S(\omega) = \frac{D_{\phi,s}}{D_{\phi,s}^2+\omega^2} = \frac{\frac{\omega_0^2}{2}\tau}{\left(\frac{\omega_0^2}{2}\tau\right)^2+\omega^2}$.   (5)
- Note that both $\omega$ and $\omega_0$ are angular frequencies in the rotating frame of reference.
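- The line shape in Eq. (5) is a Lorentzian whose width is set by the effective phase diffusion constant. The short sketch below (not part of the paper; the parameter values are arbitrary) evaluates Eq. (5) on a frequency grid and illustrates motional narrowing: a smaller jump time gives a smaller $D_{\phi,s}$ and hence a taller, narrower line.
- # Lorentzian exchange line shape of Eq. (5): S(w) = D / (D^2 + w^2), D = (w0^2/2) * tau.
- import numpy as np
-
- def lineshape_fixed_jump(omega, omega0, tau):
-     d_phi_s = 0.5 * omega0**2 * tau  # Eq. (1), effective phase diffusion constant
-     return d_phi_s / (d_phi_s**2 + omega**2)
-
- omega0 = 2 * np.pi * 50.0                      # half the site splitting, rad/s (arbitrary)
- omega = np.linspace(-2*np.pi*150, 2*np.pi*150, 1001)
- for tau in [1e-4, 1e-3, 1e-2]:                 # faster exchange (smaller tau) -> narrower line
-     s = lineshape_fixed_jump(omega, omega0, tau)
-     print(f"tau={tau:g} s  peak height={s.max():.4g}")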
- 2.2 Diffusion with waiting time distribution
- 2.2.1 General expressions for phase evolution in chemical exchange
- A more realistic exchange time should follow a certain type of time distribution function $\varphi(t)$, which is often related to the time correlation function $G(t)$ by $\varphi(t) = -\frac{dG(t)}{dt}$. A commonly used simple time correlation function is the monoexponential distribution; in contrast, in a complicated system it can be a Mittag-Leffler function [20] or a stretched exponential function such as the KWW function [11-14].
- For a spin that starts jumping at a time $t'$ from the site with frequency $\omega_{i,0}$, the probability of acquiring phase $\omega_{i,0}t' + \phi$ at time $t$ is
- $P_{\omega_{i,0}}(\phi,t) = \varphi(t')P(\phi,t-t')$,   (6)
- where the phase $\omega_{i,0}t'$ is accumulated from time 0 to time $t'$ while the spin stays immobile at its initial site, and $P(\phi,t-t')$ is the phase PDF resulting from the diffusion, or random walk, in phase space during $t-t'$. Summing all possible magnetization vectors with phase $\omega_{i,0}t' + \phi$ at time $t$, the net magnetization $M_{\omega_{i,0}}(t',t)$ contributed by the spins that begin to jump randomly at time $t'$ is
- $M_{\omega_{i,0}}(t',t) = \int_{-\infty}^{\infty} d\phi\, e^{i(\omega_{i,0}t'+\phi)}P_{\omega_{i,0}}(\phi,t-t') = e^{i\omega_{i,0}t'}\varphi(t')\,p(k,t-t')|_{k=1}$,   (7)
- where
- $p(k,t-t')|_{k=1} = \int_{-\infty}^{\infty} d\phi\, e^{ik\phi}P(\phi,t-t')$.   (8)
- The total magnetization from all spins in the system at time $t$ is
- $M(t) = \int_0^t dt' \sum_i p_i M_{\omega_{i,0}}(t',t)$,   (9)
- where $p_i$ is the population of spins at site $i$. For simplicity, only exchange between two sites with equal populations will be considered here; let $\omega_{1,0} = -\omega_0$ and $\omega_{2,0} = \omega_0$, and drop the subindex $i$ throughout the rest of the paper. For a two-site system with equal populations $p_1 = p_2 = \frac{1}{2}$,
- $M(t) = \int_0^t dt'\,\frac{1}{2}\left[M_{-\omega_0}(t',t) + M_{\omega_0}(t',t)\right] = \int_0^t dt'\,\frac{1}{2}\left[e^{-i\omega_0 t'} + e^{i\omega_0 t'}\right]\varphi(t')\,p(k,t-t')|_{k=1} = \int_0^t dt'\,B(t')\,p(k,t-t')|_{k=1}$,   (10a)
- where
- $B(t') = \frac{1}{2}\left[e^{-i\omega_0 t'} + e^{i\omega_0 t'}\right]\varphi(t')$.   (10b)
- Eq. (10a) is the convolution of $B(t')$ and $p(k,t-t')|_{k=1}$. In Laplace representation [25],
- $M(s) = B(s)\,p(k,s)|_{k=1}$;   (11)
- in many cases (the coupled diffusion in this paper), the frequency-domain signal can be obtained by the Fourier transform of $M(t)$:
- $S(\omega) = \int_0^{\infty} e^{i\omega t}M(t)\,dt = B(\omega)\,p(k,\omega)|_{k=1}$.   (12)
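- Before specializing $\varphi(t)$, it may help to see Eqs. (10)-(12) numerically. The sketch below (illustrative only; it assumes a monoexponential $\varphi$ and a simple exponential phase-decay factor, anticipating Section 2.2.2) builds M(t) as the discrete convolution of B(t') with the phase diffusion factor and then Fourier-transforms it.
- # Numerical realization of Eqs. (10)-(12): M(t) = conv(B, p), S(w) = FT[M](w).
- import numpy as np
-
- w0, tau = 2*np.pi*50.0, 1e-3                 # hypothetical parameters
- d_phi = w0**2 * tau                          # anticipating Eq. (19)
- dt = 1e-5
- t = np.arange(0.0, 0.2, dt)
- B = np.cos(w0 * t) * np.exp(-t / tau) / tau  # Eq. (10b) with phi(t) = exp(-t/tau)/tau
- p = np.exp(-d_phi * t)                       # p(k=1, t), anticipating Eq. (22)
- M = np.convolve(B, p)[: t.size] * dt         # Eq. (10a) as a discrete convolution
- S = np.fft.rfft(M).real * dt                 # one-sided transform, cf. Eq. (12)
- freqs = np.fft.rfftfreq(t.size, dt)          # frequency axis in Hz
- print("S(0) =", S[0], " half max near", freqs[np.argmax(S < S[0] / 2)], "Hz")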
- 2.2.2 Normal diffusion with monoexponential distribution function
- Now let us consider the case where the jump time follows a monoexponential distribution $\varphi(t)$ described by [25]
- $\varphi(t) = \frac{1}{\tau}\exp\left(-\frac{t}{\tau}\right)$,   (13)
- whose Laplace representation is
- $\varphi(s) = \frac{1}{s\tau+1}$.   (14)
- According to Eq. (10b),
- $B(t') = \frac{1}{2}\left[e^{-i\omega_0 t'} + e^{i\omega_0 t'}\right]\frac{1}{\tau}\exp\left(-\frac{t'}{\tau}\right)$,   (15a)
- whose Laplace representation is [25]
- $B(s) = \frac{1}{2}\left[\frac{1}{\tau(s-i\omega_0)+1} + \frac{1}{\tau(s+i\omega_0)+1}\right] \approx \frac{1}{1+\omega_0^2\tau^2+\tau s(1-\omega_0^2\tau^2)} = \frac{\frac{1}{1+\omega_0^2\tau^2}}{1+s\,\frac{\tau(1-\omega_0^2\tau^2)}{1+\omega_0^2\tau^2}}$.   (15b)
- I. Uncoupled normal diffusion
- The spin angular frequency is often affected by a randomly fluctuating magnetic field produced by surrounding spins undergoing thermal motion [**]; additionally, the angular frequency could be affected by changes in the electron cloud during the exchange process; further, chemical exchange may take place because the spin moves among different domains in a heterogeneous system, where the frequency fluctuates around positive or negative $\omega_0$. This angular frequency can be denoted as $\omega$, and the average of its absolute value is $\langle|\omega|\rangle = \omega_0$. Because $\omega$ fluctuates randomly, the individual random phase jump $\Delta\phi = \omega\tau_{jump}$ fluctuates randomly for each jump time $\tau_{jump}$; space-time uncoupled phase diffusion can then be applied to treat the phase random walk, while the more complicated coupled diffusion is considered subsequently. For an uncoupled diffusion,
- $\langle\tau_{jump}\rangle = \int_0^{\infty}\frac{t}{\tau}\exp\left(-\frac{t}{\tau}\right)dt = \tau$,   (16)
- $\langle\tau_{jump}^2\rangle = \int_0^{\infty}\frac{t^2}{\tau}\exp\left(-\frac{t}{\tau}\right)dt = 2\tau^2$.   (17)
- The mean squared phase jump length $\langle(\Delta\phi)^2\rangle$ is
- $\langle(\Delta\phi)^2\rangle = \langle(|\omega|\tau_{jump})^2\rangle = \langle\omega^2\rangle\langle\tau_{jump}^2\rangle = \omega_0^2\cdot 2\tau^2 = 2\omega_0^2\tau^2$.   (18)
- Such an uncoupled random walk has an effective phase diffusion constant
- $D_\phi = \frac{\langle(\Delta\phi)^2\rangle}{2\langle\tau_{jump}\rangle} = \frac{\langle\omega^2\rangle\langle\tau_{jump}^2\rangle}{2\tau} = \frac{2\omega_0^2\tau^2}{2\tau} = \omega_0^2\tau$.   (19)
- With $D_\phi$, the normal phase diffusion equation can be written as [18]
- $\frac{\partial P(\phi,t)}{\partial t} = D_\phi\,\Delta P(\phi,t)$.   (20)
- From Eq. (20), the probability density function is
- $P(\phi,t) = \frac{1}{\sqrt{4\pi D_\phi t}}\exp\left[-\frac{\phi^2}{4D_\phi t}\right]$.   (21)
- Substituting Eq. (21) into Eq. (8), we have
- $p(k,t-t')|_{k=1} = \int_{-\infty}^{\infty} d\phi\, e^{i\phi}P(\phi,t-t') = \exp[-D_\phi(t-t')]$,   (22)
- whose Laplace representation is
- $p(k,s)|_{k=1} = \frac{1}{s+D_\phi}$.   (23)
- Substituting Eqs. (15b) and (23) into Eq. (11) yields
- $M(s) = B(s)\,p(k,s)|_{k=1} = \frac{\frac{1}{1+\omega_0^2\tau^2}}{1+s\,\frac{\tau(1-\omega_0^2\tau^2)}{1+\omega_0^2\tau^2}}\cdot\frac{1}{s+D_\phi}$.   (24)
- From $M(s)$, the inverse Laplace transform gives
- $M(t) = \int_0^t dt'\,\frac{1}{\tau(1-\omega_0^2\tau^2)}\exp\left(-\frac{t'(1+\omega_0^2\tau^2)}{\tau(1-\omega_0^2\tau^2)}\right)\exp[-D_\phi(t-t')]$.   (25)
- Eq. (25) is the convolution of two parts: the factor $\frac{1}{\tau(1-\omega_0^2\tau^2)}\exp\left(-\frac{t'(1+\omega_0^2\tau^2)}{\tau(1-\omega_0^2\tau^2)}\right)$ comes from the distribution $\varphi(t)$, while $\exp[-D_\phi(t-t')]$ results from the phase diffusion. The frequency-domain signal is obtained from the Fourier transform of expression (25) as
- $S(\omega) = \frac{\frac{1}{1+\omega_0^2\tau^2}}{1+\left[\frac{\tau(1-\omega_0^2\tau^2)}{1+\omega_0^2\tau^2}\right]^2\omega^2}\cdot\frac{D_\phi}{D_\phi^2+\omega^2}$.   (26)
- II. Coupled normal diffusion with monoexponential distribution function
- A coupled random phase walk has a joint probability function $\psi(\phi,t)$ expressed by [25]
- $\psi(\phi,t) = \varphi(t)\,\Phi(\phi|t)$,   (27)
- where $\varphi(t)$ is the waiting time function and $\Phi(\phi|t)$ is the conditional probability that a phase jump of length $\phi$ requires time $t$. In Fourier-Laplace representation, the probability density function of a coupled random walk has been derived in Ref. [25] as
- $P(k,s) = \frac{\Psi(k,s)}{1-\psi(k,s)}$,   (28)
- where $\psi(k,s)$ is the joint probability and $\Psi(\phi,t)$ is the PDF for the phase displacement of the last, incomplete walk, which is [25]
- $\Psi(\phi,t) = \delta(\phi)\int_t^{\infty}\varphi(t')\,dt'$,   (29a)
- and
- $\Psi(k,s) = \frac{1-\varphi(s)}{s}$.   (29b)
- Substituting Eq. (29b) into Eq. (28) gives [25]
- $P(k,s) = \frac{\Psi(k,s)}{1-\psi(k,s)} = \frac{1-\varphi(s)}{s}\cdot\frac{1}{1-\psi(k,s)}$.   (30)
- In chemical exchange, the joint probability can be described by
- $\Phi(\phi|t) = \frac{1}{2}\delta(|\phi| - \omega_0 t)$,   (31a)
- $\psi(\phi,t) = \frac{1}{2}\varphi(t)\,\delta(|\phi| - \omega_0 t)$,   (31b)
- and
- $\psi(k,s) = \int e^{ik\phi - st}\psi(\phi,t)\,d\phi\,dt = \frac{1}{2}\left[\frac{1}{\tau(s-ik\omega_0)+1} + \frac{1}{\tau(s+ik\omega_0)+1}\right] \approx \frac{1+\tau s}{1+k^2\omega_0^2\tau^2+2\tau s}$.   (31c)
- Substituting Eq. (31c) into Eq. (30), we get
- $P(k,s) = \frac{\Psi(k,s)}{1-\psi(k,s)} = \frac{1-\varphi(s)}{s}\cdot\frac{1}{1-\psi(k,s)} = \frac{\tau}{1-\frac{1+\tau s}{1+k^2\omega_0^2\tau^2+2\tau s}}$,   (32)
- and
- $p(k,s)|_{k=1} = \frac{\tau}{1-\frac{1+\tau s}{1+\omega_0^2\tau^2+2\tau s}}$.   (33)
- Substituting Eqs. (15b) and (33) into Eq. (11) yields
- $M(s) = \frac{1+\tau s}{1+\omega_0^2\tau^2+2\tau s}\cdot\frac{\tau}{1-\frac{1+\tau s}{1+\omega_0^2\tau^2+2\tau s}} = \frac{\tau(1+\tau s)}{\omega_0^2\tau^2+\tau s} \approx \frac{\tau}{\tau s(1-\omega_0^2\tau^2)+\omega_0^2\tau^2} = \frac{\tau/(\omega_0^2\tau^2)}{1+s\,\tau(1-\omega_0^2\tau^2)/(\omega_0^2\tau^2)}$.   (34)
- From $M(s)$, the inverse Laplace transform gives
- $M(t) = \frac{1}{1-\omega_0^2\tau^2}\exp\left(-\frac{t\,\omega_0^2\tau^2}{\tau(1-\omega_0^2\tau^2)}\right)$.   (35)
- The frequency-domain NMR signal obtained from the Fourier transform of $M(t)$ in Eq. (35) is
- $S(\omega) = \frac{\tau}{\tau^2(1-\omega_0^2\tau^2)^2\omega^2+\omega_0^2\tau^2}$.   (36)
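- To see how close the uncoupled and coupled normal-diffusion line shapes are in the fast exchange regime, the sketch below (illustrative only, not from the paper; parameter values are invented) evaluates Eqs. (26) and (36) on the same frequency grid.
- # Uncoupled (Eq. 26) vs coupled (Eq. 36) normal phase diffusion line shapes.
- import numpy as np
-
- def s_uncoupled(w, w0, tau):
-     d_phi = w0**2 * tau                          # Eq. (19)
-     a = 1.0 / (1.0 + w0**2 * tau**2)             # amplitude factor from B(s), Eq. (15b)
-     t_eff = tau * (1.0 - w0**2 * tau**2) / (1.0 + w0**2 * tau**2)
-     return a / (1.0 + (t_eff * w)**2) * d_phi / (d_phi**2 + w**2)
-
- def s_coupled(w, w0, tau):
-     return tau / (tau**2 * (1.0 - w0**2 * tau**2)**2 * w**2 + w0**2 * tau**2)
-
- w0 = 2 * np.pi * 50.0
- tau = 0.1 / w0                                   # fast exchange: w0 * tau = 0.1
- w = np.linspace(-2*np.pi*150, 2*np.pi*150, 2001)
- su, sc = s_uncoupled(w, w0, tau), s_coupled(w, w0, tau)
- print("peak ratio uncoupled/coupled:", su.max() / sc.max())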
- 2.2.3 Fractional phase diffusion
- For a complicated system, the time correlation function may not be a simple monoexponential function; it may instead be a Kohlrausch-Williams-Watts (KWW) function or a Mittag-Leffler function, and the corresponding phase diffusion could be an anomalous diffusion [21-24]. Time-fractional phase diffusion is investigated here, and the time correlation function is assumed to be an MLF,
- $G(t) = E_\alpha\left(-\left(\frac{t}{\tau}\right)^\alpha\right)$,   (37)
- so that the waiting time distribution function is the heavy-tailed distribution [27]
- $\varphi_f(t) = -\frac{d}{dt}E_\alpha\left(-\left(\frac{t}{\tau}\right)^\alpha\right)$,   (38)
- whose Laplace transform is [25,27]
- $\varphi_f(s) = \frac{1}{s^\alpha\tau^\alpha+1}$.   (39)
- Based on Eqs. (10b), (38) and (39), $B(s) = \mathrm{Re}\,\varphi_f(s+i\omega_0)$; expanding to first order in $s$ gives
- $B(s) \approx \frac{c}{1+s\tau'}$,   (40a)
- where
- $c = \frac{\omega_0^\alpha\tau^\alpha\left(\cos\frac{\pi}{2}\alpha+\frac{1}{\omega_0^\alpha\tau^\alpha}\right)}{1+\omega_0^{2\alpha}\tau^{2\alpha}+2\omega_0^\alpha\tau^\alpha\cos\frac{\pi}{2}\alpha}$,   (40b)
- and
- $\tau' = \frac{\alpha\,\omega_0^{\alpha-1}\tau^\alpha\sin\frac{\pi}{2}\alpha\;\frac{1-\omega_0^{2\alpha}\tau^{2\alpha}}{\omega_0^\alpha\tau^\alpha\cos\frac{\pi}{2}\alpha+1}}{1+\omega_0^{2\alpha}\tau^{2\alpha}+2\omega_0^\alpha\tau^\alpha\cos\frac{\pi}{2}\alpha}$.   (40c)
- I. Uncoupled fractional diffusion
- For such a time distribution function, the phase diffusion constant can be calculated according to Refs. [16,22,23] as
- $D_{\phi f} = \frac{\langle(\Delta\phi)^2\rangle}{2\Gamma(1+\alpha)\tau^\alpha}$.   (41a)
- The mean squared phase jump may be assumed to be $\langle(\Delta\phi)^2\rangle = 2\omega_0^2\tau^2$; then
- $D_{\phi f} = \frac{\omega_0^2\tau^2}{\Gamma(1+\alpha)\tau^\alpha} = \frac{\omega_0^2\tau^{2-\alpha}}{\Gamma(1+\alpha)}$.   (41b)
- With $D_{\phi f}$, the fractional phase diffusion equation can be written as [16,17,21,23,24]
- ${}_tD_*^\alpha P_f = D_{\phi f}\,\Delta P_f(\phi,t)$,   (42)
- where $0 < \alpha \le 1$ and ${}_tD_*^\alpha$ is the Caputo fractional derivative defined by [22,23]
- ${}_tD_*^\alpha f(t) := \begin{cases} \frac{1}{\Gamma(m-\alpha)}\int_0^t \frac{f^{(m)}(u)\,du}{(t-u)^{\alpha+1-m}}, & m-1 < \alpha < m,\\[4pt] \frac{d^m}{dt^m}f(t), & \alpha = m.\end{cases}$
- Fourier transform of Eq. (42) gives
- ${}_tD_*^\alpha p(k,t) = -D_{\phi f}\,k^2\,p(k,t)$.   (43)
- The solution of Eq. (43) is $p(k,t) = E_\alpha[-D_{\phi f}k^2 t^\alpha]$ [16,17], whose Laplace representation is [22,23,25]
- $p(k,s) = \frac{s^{\alpha-1}}{s^\alpha+D_{\phi f}k^2}$,   (44)
- so that
- $p(k,s)|_{k=1} = \frac{s^{\alpha-1}}{s^\alpha+D_{\phi f}}$.   (45)
- Substituting Eqs. (40) and (45) into Eq. (11), we get
- $M(s) = \frac{c}{1+s\tau'}\cdot\frac{s^{\alpha-1}}{s^\alpha+D_{\phi f}}$,   (46)
- whose inverse Laplace transform gives
- $M(t) = \int_0^t dt'\,\frac{c}{\tau'}\exp\left(-\frac{t'}{\tau'}\right)E_\alpha[-D_{\phi f}(t-t')^\alpha]$.   (47)
- The NMR signal is obtained from the Fourier transform of $M(t)$, which is
- $S(\omega) = B(\omega)\cdot E(\omega) = \frac{c}{1+\tau'^2\omega^2}\cdot\frac{\omega^{\alpha-1}\left(\frac{1}{D_{\phi f}}\right)\sin\left(\frac{\pi}{2}\alpha\right)}{\omega^{2\alpha}\left(\frac{1}{D_{\phi f}}\right)^2+2\omega^\alpha\left(\frac{1}{D_{\phi f}}\right)\cos\left(\frac{\pi}{2}\alpha\right)+1}$.   (48)
- II. Coupled fractional diffusion
- Similarly to the coupled normal diffusion, the joint probability can be described by [25]
- $\Phi(\phi|t) = \frac{1}{2}\delta(|\phi|-\omega_0 t)$,   (49a)
- $\psi(\phi,t) = \frac{1}{2}\varphi(t)\,\delta(|\phi|-\omega_0 t)$,   (49b)
- $\psi(k,s) = \int e^{ik\phi-st}\psi(\phi,t)\,d\phi\,dt = \frac{1}{2}\left[\frac{1}{\tau^\alpha(s-ik\omega_0)^\alpha+1}+\frac{1}{\tau^\alpha(s+ik\omega_0)^\alpha+1}\right]$,   (49c)
- and
- $\psi(1,s) \approx \frac{1}{2}\left[\frac{1}{\tau^\alpha(s-i\omega_0)^\alpha+1}+\frac{1}{\tau^\alpha(s+i\omega_0)^\alpha+1}\right]$.   (50)
- Compared with Eq. (40a), it is obvious that
- $\psi(1,s) = B(s) = \frac{c}{1+s\tau'}$.   (51)
- Substituting Eq. (51) into Eq. (30), we get
- $p(k,s)|_{k=1} = \frac{\Psi(k,s)}{1-\psi(1,s)} = \frac{1-\varphi(s)}{s}\cdot\frac{1}{1-\psi(1,s)} = \frac{\tau^\alpha s^{\alpha-1}}{1-\frac{c}{1+s\tau'}}$.   (52)
- Eq. (52) can be substituted into Eq. (11) to give
- $M(s) = B(s)\,p(k,s)|_{k=1} = \frac{c}{1+s\tau'}\cdot\frac{\tau^\alpha s^{\alpha-1}}{1-\frac{c}{1+s\tau'}} = \frac{c\tau^\alpha s^{\alpha-1}}{1+s\tau'-c} = \frac{1}{s^{1-\alpha}}\cdot\frac{\frac{c\tau^\alpha}{1-c}}{1+s\,\frac{\tau'}{1-c}}$,   (53)
- whose inverse Laplace transform yields
- $M(t) = \int_0^t dt'\,\frac{1}{\Gamma(1-\alpha)}\,t'^{-\alpha}\,\frac{c\tau^\alpha}{\tau'}\exp\left(-\frac{(t-t')(1-c)}{\tau'}\right)$.   (54)
- The Fourier transform of $M(t)$ gives the NMR frequency-domain signal
- $S(\omega) = \sin\left(\frac{\pi}{2}\alpha\right)|\omega|^{\alpha-1}\,\frac{\frac{c\tau^\alpha}{1-c}}{1+\left(\frac{\tau'}{1-c}\right)^2\omega^2}$.   (55)
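- The sketch below (illustrative, not from the paper) evaluates the coupled fractional line shape of Eq. (55), with c and tau' computed from Eqs. (40b) and (40c). The original rendering of Eq. (40c) is partially garbled, so the tau' expression here is a best-effort reading and should be treated as approximate.
- # Coupled fractional phase diffusion line shape, Eq. (55).
- import numpy as np
-
- def c_and_tau_prime(w0, tau, alpha):
-     x = (w0 * tau) ** alpha                      # (w0*tau)^alpha appears throughout Eq. (40)
-     cos_a, sin_a = np.cos(np.pi*alpha/2), np.sin(np.pi*alpha/2)
-     denom = 1.0 + x**2 + 2.0 * x * cos_a
-     c = x * (cos_a + 1.0/x) / denom              # Eq. (40b)
-     tau_p = (alpha * w0**(alpha-1) * tau**alpha * sin_a
-              * (1.0 - x**2) / (x*cos_a + 1.0)) / denom   # approximate reading of Eq. (40c)
-     return c, tau_p
-
- alpha, w0 = 0.75, 2*np.pi*50.0
- tau = 1.0 / w0
- c, tau_p = c_and_tau_prime(w0, tau, alpha)
- w = np.linspace(-2*np.pi*150, 2*np.pi*150, 2001)
- w_safe = np.where(w == 0, 1e-12, np.abs(w))      # |w|^(alpha-1) diverges at w = 0 for alpha < 1
- s = np.sin(np.pi*alpha/2) * w_safe**(alpha-1) * (c*tau**alpha/(1-c)) / (1 + (tau_p/(1-c))**2 * w**2)
- print("c =", c, " tau' =", tau_p, " peak =", s.max())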
- 3. Results
- A phase diffusion equation method is proposed to describe the effect of chemical exchange on the NMR spectrum, based on uncoupled and coupled normal and fractional diffusion. Exchange between two sites with equal populations is considered, and the theoretical expressions are collected in Table 1.
- Table 1. Comparison of theoretical NMR line shape expressions from the phase diffusion method with the traditional result, for chemical exchange between two sites with equal populations.
- Frequency-domain signal expressions from the phase diffusion method:
- Simple phase diffusion with a constant jump time: $S(\omega) = \frac{D_{\phi,s}}{D_{\phi,s}^2+\omega^2}$, with $D_{\phi,s} = \frac{\omega_0^2}{2}\tau$.
- Normal phase diffusion with monoexponential function, uncoupled: $S(\omega) = \frac{\frac{1}{1+\omega_0^2\tau^2}}{1+\left[\frac{\tau(1-\omega_0^2\tau^2)}{1+\omega_0^2\tau^2}\right]^2\omega^2}\cdot\frac{D_\phi}{D_\phi^2+\omega^2}$, with $D_\phi = \omega_0^2\tau$.
- Normal phase diffusion with monoexponential function, coupled: $S(\omega) = \frac{\tau}{\tau^2(1-\omega_0^2\tau^2)^2\omega^2+\omega_0^2\tau^2}$.
- Fractional phase diffusion with heavy-tailed time distribution, with $c$ and $\tau'$ given by Eqs. (40b) and (40c):
- Uncoupled: $S(\omega) = \frac{c}{1+\tau'^2\omega^2}\cdot\frac{\omega^{\alpha-1}\left(\frac{1}{D_{\phi f}}\right)\sin\left(\frac{\pi}{2}\alpha\right)}{\omega^{2\alpha}\left(\frac{1}{D_{\phi f}}\right)^2+2\omega^\alpha\left(\frac{1}{D_{\phi f}}\right)\cos\left(\frac{\pi}{2}\alpha\right)+1}$, with $D_{\phi f} = \frac{\omega_0^2\tau^2}{\Gamma(1+\alpha)\tau^\alpha}$.
- Coupled: $S(\omega) = \sin\left(\frac{\pi}{2}\alpha\right)|\omega|^{\alpha-1}\,\frac{\frac{c\tau^\alpha}{1-c}}{1+\left(\frac{\tau'}{1-c}\right)^2\omega^2}$.
- Frequency-domain signal expression from the traditional method [3,4,5]: $S(\omega) = \frac{\omega_0^2\,\frac{\tau}{2}}{\left[\frac{\tau}{2}(\omega_0^2-\omega^2)\right]^2+\omega^2}$.
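- The following sketch (not from the paper; it only re-implements a subset of the Table 1 formulas) can be used to overlay the line shapes in the spirit of Fig. 1 below. The parameter choice tau = 2/k_ex with k_ex a multiple of the site splitting is inferred from the embedded figure filenames and is an assumption.
- # Overlay of Table 1 line shapes (phase diffusion models vs traditional two-site exchange).
- import numpy as np
-
- def s_traditional(w, w0, tau):
-     return w0**2 * (tau/2) / ((tau/2 * (w0**2 - w**2))**2 + w**2)
-
- def s_fixed_jump(w, w0, tau):
-     d = 0.5 * w0**2 * tau
-     return d / (d**2 + w**2)
-
- def s_coupled_normal(w, w0, tau):
-     return tau / (tau**2 * (1 - w0**2*tau**2)**2 * w**2 + w0**2 * tau**2)
-
- w0 = 2*np.pi*50.0                       # Delta_omega/2 for Delta_omega/(2*pi) = 100 Hz
- w = np.linspace(-2*np.pi*150, 2*np.pi*150, 2001)
- tau = 2.0 / (30.0 * 2*w0)               # tau = 2/k_ex with k_ex = 30*Delta_omega (fast exchange)
- for name, f in [("traditional", s_traditional), ("fixed jump", s_fixed_jump),
-                 ("coupled normal", s_coupled_normal)]:
-     print(f"{name:15s} peak={f(w, w0, tau).max():.4g}")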
- [Figure 1: six panels (a-f) plotting S(omega) versus omega/2pi (Hz, -150 to 150) for the traditional model, fixed-jump-time diffusion, uncoupled and coupled normal diffusion, and uncoupled and coupled fractional diffusion; per the embedded filenames, the panels correspond to k_ex = 30, 5, 2, 1, 0.6, and 0.1 times Delta_omega with tau = 2/k_ex = 2*tau'; plot data not recoverable from the extracted text.]
- Fig. 1 Comparison of the various theoretical results obtained from the phase diffusion models with those obtained from the traditional two-site exchange model, for all equations listed in Table 1, with $\Delta\omega/2\pi = 100$ Hz and $\alpha = 0.75$ for the fractional phase diffusion.
- 4. Discussion
- In the rotating frame of reference, the spin phase in chemical exchange undergoes random phase jumps, which can be described intrinsically by either an uncoupled effective phase diffusion equation or a coupled random walk.
- Figure 1 compares the various theoretical results obtained from the phase diffusion models with those obtained from the traditional two-site exchange model, using the equations listed in Table 1. From Figure 1, when the exchange is sufficiently fast, $\tau = 2\tau' \le 1/\Delta\omega$, the theoretical curves from diffusion with a fixed jump time and from uncoupled and coupled normal diffusion almost overlap with the curve predicted by the traditional model. However, the exchange time constant $\tau$ of the traditional model and the fixed-jump-time diffusion is twice the $\tau'$ of the coupled and uncoupled normal diffusion with the monoexponential distribution. This difference in exchange time can be explained as follows: the effective phase diffusion constant is $\frac{\omega_0^2}{2}\tau$ for diffusion with a fixed jump time $\tau$, whereas it is $\omega_0^2\tau$ for uncoupled diffusion with a monoexponential time distribution. The factor-of-two difference in diffusion coefficients results from $\langle\tau_{jump}^2\rangle = \int_0^{\infty}\frac{t^2}{\tau}\exp\left(-\frac{t}{\tau}\right)dt = 2\tau^2$ for the monoexponential distribution, whereas for the fixed jump time $\langle\tau_{jump}^2\rangle = \tau^2$. The same phase diffusion coefficient $\omega_0^2\tau$ was used in Ref. [17] to obtain NMR relaxation expressions that replicate the traditional NMR relaxation theories, and those relaxation expressions have been verified by numerous experimental results. Although this theoretical and experimental confirmation comes from relaxation NMR, it still provides strong support for selecting $\omega_0^2\tau$ rather than $\frac{\omega_0^2}{2}\tau$ as the phase diffusion coefficient, considering that both exchange and relaxation are random phase walk processes. Therefore, in the analysis of NMR chemical exchange line shapes, the inferred exchange time constant can differ by a factor of two depending on the model employed.
- Additionally, in Figure 1 the exchange line shapes from normal diffusion and fractional diffusion are significantly different. The spectral line from coupled fractional diffusion is broader than that from uncoupled fractional diffusion, which may be reasonable because the heavy-tailed time distribution affects the phase jump length more directly in the coupled case than in the uncoupled case. Meanwhile, the effect of coupling on the NMR line shape differs between normal and fractional diffusion: the difference between the coupled and uncoupled results is negligible in normal diffusion but significant in fractional diffusion.
- For both the coupled and uncoupled fractional phase diffusion, the central peak of the theoretical curves becomes narrower as the fractional derivative parameter $\alpha$ decreases, while the overall line broadens. The overlapping curves in Figure 2 show that the fractional diffusion results reduce to the normal diffusion results when $\alpha = 1$, while Figure 3 shows how the coupled and uncoupled fractional diffusion curves change across different fractional derivative orders, $\alpha$ = 1, 0.9, 0.75 and 0.5. The smaller $\alpha$ is, the broader the NMR peak. Additionally, the middle part and the wings of the fractional diffusion curves have different features: the middle part is a narrow peak, while the wings are broad shoulders. The narrow peak could come from fast exchange times, while the broad shoulders come from slow exchange. From the viewpoint of the traditional model, this could be interpreted as a bimodal exchange; however, both the fast and slow exchange times come from the same heavy-tailed time distribution.
- The diffusion method proposed here gives excellent results in the fast exchange range, but it encounters challenges in slow exchange. The difficulty in slow exchange arises because the diffusion limit is not met: the experimental time window in NMR is not infinite. Further effort is required to overcome this hurdle. The current method could also be combined with other anomalous diffusion models, such as the fractal derivative [28,29,30]. Further research is needed to understand and apply the models, particularly the fractional diffusion model, and to extend the current method to multiple sites and unequal-population exchange.
-
- [Figure 2: two panels (a, b) plotting S(omega) versus omega/2pi (Hz, -150 to 150), overlaying uncoupled normal with uncoupled fractional diffusion (a) and coupled normal with coupled fractional diffusion (b) for alpha = 1 and tau' = 1/omega_0; the curves coincide; plot data not recoverable from the extracted text.]
- Fig. 2 The fractional diffusion results reduce to the normal diffusion results when $\alpha = 1$, with $\Delta\omega/2\pi = 100$ Hz.
- [Figure 3: two panels (a, b) plotting S(omega) versus omega/2pi (Hz, -100 to 100), showing uncoupled (a) and coupled (b) fractional diffusion for alpha = 1, 0.9, 0.75 and 0.5 at tau' = 1/omega_0; plot data not recoverable from the extracted text.]
- Fig. 3 The changes in coupled and uncoupled fractional diffusion for different fractional derivative orders, $\alpha$ = 1, 0.9, 0.75 and 0.5, with $\Delta\omega/2\pi = 100$ Hz.
-
- 5. Conclusion
- This paper proposes a phase diffusion method to describe the chemical exchange NMR spectrum. The major conclusions are summarized as follows:
- 1. The method directly analyzes the spin system evolution in phase space, rather than in the real space used by most traditional models.
- 2. The line shape difference between coupled and uncoupled phase diffusion is not obvious in normal diffusion but is significant in fractional diffusion.
- 3. There is a significant difference in line shape between the normal and fractional diffusions.
- 4. Unlike in the traditional method, the exchange time can follow certain types of distributions. Additionally, the characteristic exchange time constant obtained with the monoexponential time distribution is half of that obtained by the traditional model; the inferred exchange is twice as fast.
- 5. The method could be extended to multiple-site and unequal-population chemical exchange. Furthermore, this phase diffusion method could be combined with the phase diffusion equations for relaxation and PFG diffusion to handle more complicated scenarios.
-
- References
- 1. A. Abragam, Principles of Nuclear Magnetism, Clarendon Press, Oxford, 1961.
- 2. C. P. Slichter, Principles of Magnetic Resonance, Springer Series in Solid-State Sciences, Vol. 1, Ed. by M. Cardona, P. Fulde and H. J. Queisser, Springer-Verlag, Berlin, 1978.
- 3. A. G. Palmer, H. Koss, Chapter Six - Chemical Exchange, in: A. J. Wand (Ed.), Methods in Enzymology, Vol. 615, Academic Press, 2019, pp. 177-236.
- 4. J. I. Kaplan, G. Fraenkel, NMR of Chemically Exchanging Systems, Academic Press, New York, 1980.
- 5. C. S. Johnson, Chemical rate processes and magnetic resonance, Adv. Magn. Reson. 1 (1965) 33.
- 6. N. Daffern, C. Nordyke, M. Zhang, A. G. Palmer, J. E. Straub, Dynamical models of chemical exchange in nuclear magnetic resonance spectroscopy, The Biophysicist 3(1) (2022) 13-34.
- 7. J. M. Schurr, B. S. Fujimoto, R. Diaz, B. H. Robinson, Manifestations of slow site exchange processes in solution NMR: a continuous Gaussian exchange model, J. Magn. Reson. 140 (1999) 404-431.
- 8. G. Lin, A. A. Jones, A lattice model for the simulation of one and two dimensional 129Xe exchange spectra produced by translational diffusion, Solid State Nucl. Magn. Reson. 26(2) (2004) 87-98.
- 9. R. Gorenflo, A. A. Kilbas, F. Mainardi, S. V. Rogosin, Mittag-Leffler Functions, Related Topics and Applications, Springer, Berlin, 2014.
- 10. T. Sandev, Z. Tomovsky, Fractional Equations and Models: Theory and Applications, Springer Nature Switzerland AG, Cham, Switzerland, 2019.
- 11. R. Kohlrausch, Theorie des elektrischen Rückstandes in der Leidener Flasche, Annalen der Physik und Chemie 91 (1854) 179-213.
- 12. G. Williams, D. C. Watts, Non-symmetrical dielectric relaxation behaviour arising from a simple empirical decay function, Trans. Faraday Soc. 66 (1970) 80-85.
- 13. T. R. Lutz, Y. He, M. D. Ediger, H. Cao, G. Lin, A. A. Jones, Macromolecules 36 (2003) 1724.
- 14. E. Krygier, G. Lin, J. Mendes, G. Mukandela, D. Azar, A. A. Jones, J. A. Pathak, R. H. Colby, S. K. Kumar, G. Floudas, R. Krishnamoorti, R. Faust, Macromolecules 38 (2005) 7721.
- 15. G. Lin, General pulsed-field gradient signal attenuation expression based on a fractional integral modified-Bloch equation, Commun. Nonlinear Sci. Numer. Simul. 63 (2018) 404-420.
- 16. G. Lin, An effective phase shift diffusion equation method for analysis of PFG normal and fractional diffusions, J. Magn. Reson. 259 (2015) 232-240.
- 17. G. Lin, Describing NMR relaxation by effective phase diffusion equation, Commun. Nonlinear Sci. Numer. Simul. 99 (2021) 105825.
- 18. R. L. Magin, W. Li, M. P. Velasco, J. Trujillo, D. A. Reiter, A. Morgenstern, R. G. Spencer, Anomalous NMR relaxation in cartilage matrix components and native cartilage: fractional-order models, J. Magn. Reson. 210 (2011) 184-191.
- 19. T. Zavada, N. Südland, R. Kimmich, T. F. Nonnenmacher, Propagator representation of anomalous diffusion: the orientational structure factor formalism in NMR, Phys. Rev. E 60 (1999) 1292-1298.
- 20. G. Lin, Describe NMR relaxation by anomalous rotational or translational diffusion, Commun. Nonlinear Sci. Numer. Simul. 72 (2019) 232.
- 21. W. Wyss, J. Math. Phys. 27 (1986) 2782-2785.
- 22. A. I. Saichev, G. M. Zaslavsky, Fractional kinetic equations: solutions and applications, Chaos 7 (1997) 753-764.
- 23. R. Gorenflo, F. Mainardi, Fractional diffusion processes: probability distributions and continuous time random walk, in: Lecture Notes in Physics, No. 621, Springer-Verlag, Berlin, 2003, pp. 148-166.
- 24. F. Mainardi, Yu. Luchko, G. Pagnini, The fundamental solution of the space-time fractional diffusion equation, Fract. Calc. Appl. Anal. 4 (2001) 153-192.
- 25. Y. Povstenko, Linear Fractional Diffusion-Wave Equation for Scientists and Engineers, Birkhäuser, New York, 2015.
- 26. R. Metzler, J. Klafter, The random walk's guide to anomalous diffusion: a fractional dynamics approach, Phys. Rep. 339 (2000) 1-77.
- 27. G. Germano, M. Politi, E. Scalas, R. L. Schilling, Phys. Rev. E 79 (2009) 066102.
- 28. W. Chen, Time-space fabric underlying anomalous diffusion, Chaos Solitons Fractals 28 (2006) 923.
- 29. W. Chen, H. Sun, X. Zhang, D. Korošak, Anomalous diffusion modeling by fractal and fractional derivatives, Comput. Math. Appl. 59(5) (2010) 1754-1758.
- 30. H. Sun, M. M. Meerschaert, Y. Zhang, J. Zhu, W. Chen, A fractal Richards' equation to capture the non-Boltzmann scaling of water transport in unsaturated media, Adv. Water Resour. 52 (2013) 292-295.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
knowledge_base/9tAyT4oBgHgl3EQfQ_bX/content/tmp_files/load_file.txt DELETED
The diff for this file is too large to render. See raw diff
 
knowledge_base/9tAyT4oBgHgl3EQfQ_bX/vector_store/index.faiss DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:c7eb7a72a58bafbd924635995474970de10c91cc57e9d9efa045a7368cbd67f5
3
- size 2162733
 
 
 
 
knowledge_base/9tAyT4oBgHgl3EQfQ_bX/vector_store/index.pkl DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:82f23c8799553b97c5d6764899c1e1e3bcb66525bb6ae55026b5574a82c95580
3
- size 88227
 
 
 
 
knowledge_base/B9FQT4oBgHgl3EQfNza6/content/2301.13273v1.pdf DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:5c095e2eec9a0acab55a0fce87404221faff6bb1201c383ef634897691828ffd
3
- size 999109
 
 
 
 
knowledge_base/B9FQT4oBgHgl3EQfNza6/content/tmp_files/2301.13273v1.pdf.txt DELETED
The diff for this file is too large to render. See raw diff
 
knowledge_base/B9FQT4oBgHgl3EQfNza6/content/tmp_files/load_file.txt DELETED
The diff for this file is too large to render. See raw diff
 
knowledge_base/B9FQT4oBgHgl3EQfNza6/vector_store/index.faiss DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:08f73147ed1bb8763476fb27a22e1d56dd37919c72c072e09e1cc8d9b84eef60
3
- size 8323117
 
 
 
 
knowledge_base/B9FQT4oBgHgl3EQfNza6/vector_store/index.pkl DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:23bf556e08c24a06de86c6004ca0fa54745b612266a65149236336be39051469
3
- size 309174
 
 
 
 
knowledge_base/BdE2T4oBgHgl3EQfRge6/content/2301.03782v1.pdf DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:4639a5b13564004063eea0d925b0087fedc4c3c5e5d6f0fec950722266e5d78b
3
- size 840351
 
 
 
 
knowledge_base/BdE2T4oBgHgl3EQfRge6/content/tmp_files/2301.03782v1.pdf.txt DELETED
@@ -1,1366 +0,0 @@
1
- Multiple phenotypes in HL60 leukemia cell population
2
- Yue Wang1,2, Joseph X. Zhou3, Edoardo Pedrini3, Irit Rubin3, May Khalil3, Hong
3
- Qian2, and Sui Huang3
4
- 1Department of Computational Medicine, University of California, Los Angeles,
5
- California, United States of America
6
- 2Department of Applied Mathematics, University of Washington, Seattle,
7
- Washington, United States of America
8
- 3Institute for Systems Biology, Seattle, Washington, United States of America
9
- Abstract
10
- Recent studies at individual cell resolution have revealed phenotypic heterogene-
11
- ity in nominally clonal tumor cell populations. The heterogeneity affects cell growth
12
- behaviors, which can result in departure from the idealized exponential growth. Here
13
- we measured the stochastic time courses of growth of an ensemble of populations of
14
- HL60 leukemia cells in cultures, starting with distinct initial cell numbers to capture
15
- the departure from the exponential growth model in the initial growth phase. De-
16
- spite being derived from the same cell clone, we observed significant variations in the
17
- early growth patterns of individual cultures with statistically significant differences in
18
- growth kinetics and the presence of subpopulations with different growth rates that
19
- endured for many generations. Based on the hypothesis of existence of multiple inter-
20
- converting subpopulations, we developed a branching process model that captures the
21
- experimental observations.
22
- 1
23
- Introduction
24
- Cancer has long been considered a genetic disease caused by oncogenic mutations in so-
25
- matic cells that confer a proliferation advantage. According to the clonal evolution theory,
26
- accumulation of random genetic mutations produces cell clones with cancerous cell phe-
27
- notype. Specifically, cells with the novel genotype(s) may display increased proliferative
28
- fitness and gradually out-grow the normal cells, break down tissue homeostasis and gain
29
- other cancer hallmarks [15]. In this view, a genetically distinct clone of cells dominates the
30
- cancer cell population and is presumed to be uniform in terms of the phenotype of indi-
31
- vidual cells within an isogenic clone. In this traditional paradigm, non-genetic phenotypic
32
- variation within one clone is not taken into account.
33
- 1
34
- arXiv:2301.03782v1 [q-bio.PE] 10 Jan 2023
35
-
36
- With the advent of systematic single-cell resolution analysis, however, non-genetic cell
37
- heterogeneity within clonal (cancer) cell populations is found to be universal [33]. This
38
- feature led to the consideration of the possibility of biologically (qualitatively) distinct
39
- (meta)stable cell subpopulations due to gene expression noise, representing intra-clonal
40
- variability of features beyond the rapid random micro-fluctuations.
41
- Hence, transitions
42
- between the subpopulations, as well as heterotypic interactions among them may influence
43
- cell growth, migration, drug resistance, etc. [39, 13, 9]. Thus, an emerging view is that
44
- cancer is more akin to an evolving ecosystem [11] in which cells form distinct subpopulations
45
- with persistent characteristic features that determine their mode of interaction, directly
46
- or indirectly via competition for resources [10, 36]. However, once non-genetic dynamics
47
- is considered, cell “ecology” differs fundamentally from the classic ecological system in
48
- macroscopic biology: the subpopulations can reversibly switch between each other whereas
49
- species in an ecological population do not convert between each other [7]. This affords
50
- cancer cell populations a remarkable heterogeneity, plasticity and evolvability, which may
51
- play important roles in their growth and in the development of resistance to treatment
52
- [30].
53
- Many new questions arise following the hypothesis that phenotypic heterogeneity and
54
- transitions between phenotypes within one genetic clone are important factors in cancer.
55
- Can tumors arise, as theoretical considerations indicate, because of a state conversion
56
- (within one clone) to a phenotype capable of faster, more autonomous growth as opposed
57
- to acquisition of a new genetic mutation that confers such a selectable phenotype [55,
58
- 1, 18, 34, 33, 56, 23, 41]? Is the macroscopic, apparently sudden outgrowth of a tumor
59
- driven by a new fastest-growing clone (or subpopulation) taking off exponentially, or due
60
- to the cell population reaching a critical mass that permits positive feedback between its
61
- subpopulations that stimulates outgrowth, akin to a collectively autocatalytic set [17]?
62
- Should therapy target the fastest growing subpopulations, or target the interactions and
63
- interconversions of cancer cells?
64
- At the core of these deliberations is the fundamental question on the mode of tumor
65
- cell population growth that now must consider the influence of inherent phenotypic hetero-
66
- geneity of cells and the non-genetic (hence potentially reversible) inter-conversion of cells
67
- between the phenotypes that manifest various growth behaviors and the interplay between
68
- these two modalities.
69
- Traditionally tumor growth has been described as following an exponential growth law,
70
- motivated by the notion of uniform cell division rate for each cell, i.e. a first order growth
71
- kinetics [29]. But departure from the exponential model has long been noted. To better fit
72
- experimental data, two major modifications have been developed, namely the Gompertz
73
- model and the West law model [53]. While no one specific model can adequately describe
74
- any one tumor, each model highlights certain aspects of macroscopic tumor kinetics, mainly
75
- the maximum size and the change in growth rate at different stages. These models however
76
- are not specifically motivated by cellular heterogeneity. Assuming non-genetic heterogene-
77
- ity with transitions between the cell states, the population behavior is influenced by many
78
- 2
79
-
80
- intrinsic and extrinsic factors that are both variable and unpredictable at the single-cell
81
- level. Thus, unlike macroscopic population dynamics [43], tumor growth cannot be ad-
82
- equately captured by a deterministic model, but a stochastic cell and population level
83
- kinetic model is more realistic.
84
- Using stochastic processes in modeling cell growth via clonal expansion has a long
85
history [54]. An early work is the Luria-Delbrück model, which assumes cells grow deterministically, with wildtype cells randomly mutating and becoming, due to rare and quasi-irreversible mutations, cells with a different phenotype [28]. Since then, there have been many further developments that incorporate stochastic elements into the model, such as those proposed by Lea and Coulson [25], Koch [22], Moolgavkar and Luebeck [27], and Dewanji et al. [8]. Various stochastic processes appear in this literature: Poisson processes [2], Markov chains [14], branching processes [19], and even random sums of birth-death processes [8], all playing key roles in the mathematical theory of cellular clonal growth and evolution. These models have been applied to clinical data on lung cancer [31], breast cancer [37], and cancer treatment [38].

At single-cell resolution, another cause of departure from exponential growth is the presence of positive (growth-promoting) cell-cell interactions (Allee effect) in the early phase of population growth, such that cell density stimulates division, giving rise to critical-mass dynamics [20, 24].

To understand intrinsic tumor growth behavior (the change of tumor volume over time), it is therefore essential to study tumor cell populations in culture, which affords detailed quantitative analysis of cell numbers over time, unaffected by the tumor microenvironment, and to measure the departure from exponential growth. This paper focuses on the stochastic growth of clonal but phenotypically heterogeneous HL60 leukemia cells with near single-cell sensitivity in the early phase of growth, that is, in sparse cultures. We and others have noted in past years that at the level of single cells, each cell behaves akin to an individual, differently from any other, which can be explained by slow, correlated transcriptome-wide fluctuations of gene expression [4, 26]. Given this phenotypic heterogeneity and its anticipated functional consequences, grouping of cells is necessary. Such classification would ordinarily require molecular cell markers for the functional property of interest, but such markers are often difficult to determine a priori. Here, since it is most pertinent to cancer biology, we directly use a functional marker of central relevance for cancer: cell division, which maps into cell population growth potential (in brief, "cell growth").

Therefore, we monitored longitudinally the growth of cancer cell populations seeded at very small numbers of cells (1, 4, or 10 cells) in statistical ensembles of microcultures (wells on a multi-well plate). We found evidence that clonal HL60 leukemia cell populations contain subpopulations that exhibit diverse growth patterns. Based on statistical analysis, we propose the existence of three distinct cell phenotypic states with respect to cell growth. We show that a branching process model captures the population growth kinetics of a population with distinct cell subpopulations. Our results suggest that the initial phase of cell growth (the "take-off" of a cell culture) in HL60 leukemic cells is predominantly driven by the fast-growing cell subpopulation. Reseeding experiments revealed that the fast-growing subpopulation could maintain its growth rate over several cell generations, even after placement in a new environment. Our observations underscore the need to target not only the fast-growing cells but also the transition to them from the other cell subpopulations.
2 Results

2.1 Experiments on cell population growth from distinct initial cell numbers.
To expose the variability of growth kinetics as a function of initial cell density N0 (the "initial seed number"), HL60 cells were sorted into wells of a 384-well plate (0.084 cm² area) to obtain "statistical ensembles" of replicate microcultures (wells) of the same condition, distinct only in N0. Based on prior titration experiments to determine ranges of interest for N0 and statistical power, for this experiment we plated 80 wells with N0 = 10 cells (the N0 = 10-cell group), 80 wells with N0 = 4 cells (the N0 = 4-cell group), and 80 wells with N0 = 1 cell (the N0 = 1-cell group). Cells were grown under the same conditions for 23 days (for details of cell culture and sorting, see the Methods section). Digital images were taken every 24 hours for each well from Day 4 on, and the area occupied by cells in each well was determined by computational image analysis. We had previously determined that one area unit equals approximately 500 cells. This conversion is consistent and readily measurable because the relatively rigid and uniformly spherical HL60 cells grow as a non-adherent, "packed" monolayer at the bottom of the well. Note that we are interested in the initial exponential growth (and the departure from it), not in the later phases when the culture becomes saturated, which has been the historical focus of analysis (see Introduction).

Wells that reached at least 5 area units were considered for the characterization of early-phase (pre-plateau) growth kinetics by plotting the areas on a logarithmic scale as a function of time (Fig. 1). All the N0 = 10-cell wells required 3.6-4.6 days to grow from 5 area units to 50 area units (mean = 4.05, standard deviation = 0.23). For the N0 = 1-cell wells, we observed a diversity of behaviors: while some of the cultures took only 3.5-5 days to grow from 5 area units to 50 area units, others needed 6-7.2 days (mean = 5.02, standard deviation = 0.75). The N0 = 4-cell wells took a mean of 4.50 days (standard deviation = 0.44) to traverse the same population size range.

To examine the exponential growth model, in Fig. 2 (left panel) we plotted the per capita growth rate versus cell population size, where each point represents one well (population) at one time point. As expected, as the population became crowded, the growth rate decreased toward zero. But in the earlier phase, many populations in the N0 = 1-cell group had a lower per capita growth rate than those in the N0 = 10-cell group, even at the same population size, thus departing from the expected behavior of exponential growth. The weighted Welch's t-test showed that the difference in these growth rates was significant (see the Methods section).
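The per capita growth rates plotted in Fig. 2 are, in effect, day-to-day slopes of the log-transformed growth curves. The following is a minimal Python sketch of this estimate, assuming daily measurements and the roughly 500 cells per area unit conversion quoted above; the exact averaging behind the figure is not spelled out beyond this level of detail.

```python
import numpy as np

def per_capita_growth_rate(area):
    """Daily per capita growth rate from a time series of cell-covered area.

    For exponential growth N(t) = N(0) * exp(g * t), the day-t estimate
    ln(area[t+1] / area[t]) recovers g; since area is proportional to cell
    number (one area unit ~ 500 cells), the conversion constant cancels.
    """
    area = np.asarray(area, dtype=float)
    return np.log(area[1:] / area[:-1])

# Example: a well growing at ~0.7/day starting from 5 area units
days = np.arange(6)
area = 5.0 * np.exp(0.7 * days)
print(per_capita_growth_rate(area))  # approximately [0.7, 0.7, 0.7, 0.7, 0.7]
```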
While qualitative differences in the behaviors of cultures with different initial seeding cell numbers N0 can be expected for biological reasons (see below), in the elementary exponential growth model the difference in growth rate should disappear when populations with distinct seeding numbers are aligned at the same population size, as in Fig. 2. A simple possibility is that the deviations from expected growth rates emanate from differences in cell-intrinsic properties. Some cells grew faster, with a per capita growth rate of 0.6 ∼ 0.9 (all N0 = 10-cell wells and some N0 = 1-cell wells), while some cells grew more slowly, with a per capita growth rate of 0.3 ∼ 0.5 (some of the N0 = 1-cell wells). In other words, there is intrinsic heterogeneity in the cell population that is not "averaged out" in cultures with low N0, and the sampling process exposes these differences between cells, which appear to be relatively stable.

To illustrate the inherent diversity of initial growth rates, in Fig. 3 (left panel) we display the daily cell-occupied areas plotted on a linear scale starting from Day 4. All wells seeded with N0 = 10 or N0 = 4 cells grew exponentially. Among the N0 = 1-cell wells, 14 populations died out. Four wells in the N0 = 1-cell group had more than 10 cells on Day 8 but never grew exponentially, still having fewer than 1000 cells 15 days later (on Day 23). For these non-growing or slow-growing N0 = 1-cell wells, the per capita growth rate was 0 ∼ 0.2. In comparison, all the N0 = 10-cell wells needed at most 15 days to reach the carrying capacity (around 80 area units, or 40000 cells). See Table 1 for a summary of the N0 = 1-cell group's growth patterns. This behavior is not idiosyncratic to the culture system, because it recapitulates a pilot experiment performed in the larger-scale format of 96-well plates (not shown).

From the above experimental observations, we posited that there might be at least three stable cell growth phenotypes in a population: a fast type, with growth rate 0.6 ∼ 0.9/day under non-crowded conditions; a moderate type, with growth rate 0.3 ∼ 0.5/day under non-crowded conditions; and a slow type, with growth rate 0 ∼ 0.2/day under non-crowded conditions.

The graphs of Fig. 3 also revealed other phenomena of growth kinetics: (1) Most N0 = 4-cell wells plateaued by Day 14 to Day 17, but some lagged significantly behind. (2) Similarly, four wells in the N0 = 1-cell group exhibited longer lag times before the exponential growth phase and never reached half-maximal cell numbers by Day 23. These outliers reveal intrinsic variability and were taken into account in the parameter scan (see the Methods section).
2.2 Reseeding experiments revealing enduring intrinsic growth patterns.

When a well in the N0 = 1-cell group had grown to 10 cells, its population behavior was still different from that of the N0 = 10-cell group at the outset. In view of the spate of recent results revealing phenotypic heterogeneity, we hypothesized that the difference was cell-intrinsic, as opposed to being a consequence of the environment (e.g., the culture medium in N0 = 1-cell vs. N0 = 10-cell wells).
| Growth pattern | Well label | Day 1 | Day 8 | Day 14 | Day 19 | Day 23 |
|---|---|---|---|---|---|---|
| No growth, extinction | 162, 167, 170, 176, 177, 179, 182, 183, 186, 201, 234, 236, 239, 240 | 1 | <10 | <10 | ∼0 | Empty |
| Slow growth, no exponential growth | 165 | 1 | 89 | ∼300 | ∼350 | ∼500 |
| | 166 | 1 | 36 | ∼110 | ∼120 | ∼150 |
| | 178 | 1 | 43 | ∼140 | ∼170 | ∼200 |
| | 211 | 1 | 16 | ∼90 | ∼200 | ∼400 |
| Delayed exponential growth | 163 | 1 | 12 | ∼130 | ∼300 | ∼5000 |
| | 181 | 1 | 44 | ∼270 | ∼550 | ∼5500 |
| | 193 | 1 | 25 | ∼200 | ∼800 | ∼9000 |
| | 204 | 1 | 21 | ∼100 | ∼600 | ∼6000 |
| Normal exponential growth | 200 and many others | 1 | ∼130 | ∼20000 | ∼40000 (full) | ∼40000 (full) |

Table 1: The population sizes of some wells in the N0 = 1-cell group in the growth experiment with different initial cell numbers, where ∼ means approximate cell number. These wells illustrated different growth patterns from those of wells starting with N0 = 10 or N0 = 4 cells. Such differences implied that cells from wells with different initial cell numbers were essentially different.
| Time (days) to reach one-half area | 11 | 12 | 13 | 14 | 15 | 16–20 | >20 |
|---|---|---|---|---|---|---|---|
| Faster wells | 26 | 2 | 1 | 2 | 1 | 0 | 0 |
| Slower wells | 0 | 0 | 0 | 1 | 1 | 25 | 5 |

Table 2: The distribution of the time needed for each well to reach the "half area" population size in the reseeding experiment. We reseeded equal numbers of cells that grew faster (from a full well) and cells that grew slower (from a half-full well), and cultivated them in the same fresh-medium environment to compare their intrinsic growth rates. The results showed that the faster-growing cells, even after reseeding, still grew faster.
To test our hypothesis and exclude differences in the culture environment as determinants of growth behavior, we reseeded the cells that exhibited the different growth rates into fresh cultures. We started with a number of N0 = 1-cell wells. After a period of almost 3 weeks, again some wells showed rapid proliferation, with cells covering the well, while others were half full and yet others were almost empty. We collected cells from the full and half-full wells and reseeded them into 32 wells each (at about N0 = 78 cells per well). These 64 wells were monitored for another 20 days. We found that most wells reseeded from the full well took around 11 days to reach the population size of a half-full well, while most wells reseeded from the half-full well required around 16 ∼ 20 days to reach that same half-full-well population size. Five wells reseeded from the half-full wells were far from reaching the half-full-well population size by Day 20 (see Table 2). A permutation test showed that this difference in growth rate was significant (see the Methods section).

This reseeding experiment shows that the difference in growth rate was maintained over multiple generations, even after slowing down in the plateau phase (full well), and that it was maintained when restarting a microculture at low density in fresh medium devoid of secreted cell products. Therefore, it is plausible that endogenous heterogeneity of growth phenotypes exists in the clonal HL60 cell line and that these distinct growth phenotypes are stable for at least 15 ∼ 20 cell generations.
2.3 Quantitative analysis of experimental results.

In the experiments with different initial cell numbers N0, we observed at least three patterns with different growth rates, and the reseeding experiment showed that these growth patterns were endogenous to the cells. Therefore, we propose that each growth pattern discussed above corresponds to a cell phenotype that dominated the population: fast, moderate, or slow. In the initial seeding that varied N0, the cells were randomly chosen (by FACS); thus, their intrinsic growth phenotypes were randomly distributed. During growth, the population of a well would be dominated by the fastest type present among the seeding cells; qualitatively, we therefore have the following scenarios: (1) A well in the N0 = 10-cell group almost certainly had at least one initial cell of the fast type, and the population would be dominated by fast type cells. Different wells had almost the same growth rate, reaching saturation at almost the same time. (2) For an N0 = 1-cell well, if the only initial cell was of the fast type, then the population had only the fast type, and the growth pattern was close to that of the N0 = 10-cell wells. If the only initial cell was of the moderate type, then the population could still grow exponentially, but with a slower growth rate. This explains why, after reaching 5 area units, many but not all N0 = 1-cell wells were slower than N0 = 10-cell wells. (3) Moreover, in such an N0 = 1-cell well with a moderate type initial cell, the cell might not divide often during the first few days, due to the randomness of entering the cell cycle. This would lead to a considerable delay in entering the exponential growth phase. (4) By contrast, for an N0 = 1-cell well with a slow type initial cell, the growth rate could be too small, and the population might die out or survive without ever entering the exponential growth phase within the duration of the experiment. (5) Most N0 = 4-cell wells had at least one fast type initial cell, and their growth pattern was the same as that of the N0 = 10-cell wells; a few N0 = 4-cell wells had only moderate and slow cells, and thus showed slower growth patterns. (A quick calculation of the corresponding seeding probabilities is sketched below.)
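To put numbers on scenario (1): if each seeded cell is independently of the fast type with probability pF, a well seeded with N0 cells contains at least one fast cell with probability 1 − (1 − pF)^N0. The sketch below uses pF = 0.4, the value adopted later in the Methods section; the true stock proportion is not measured, so these numbers are illustrative only.

```python
# Probability that a well seeded with n cells contains at least one fast cell,
# assuming independent seeding with P(fast) = p_fast (model value, not measured).
p_fast = 0.4
for n in (1, 4, 10):
    print(f"N0 = {n:2d}: P(at least one fast cell) = {1 - (1 - p_fast) ** n:.3f}")
# N0 =  1: 0.400;  N0 =  4: 0.870;  N0 = 10: 0.994
```

With these values, an N0 = 10-cell well almost certainly (probability ≈ 0.99) carries a fast cell, consistent with the uniform behavior of that group.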
The above verbal argument is illustrated in Fig. 4. It entails mathematical modeling with appropriate parameters relating the relative frequencies of these cell types in the original population and their associated growth and transition rates, to examine whether it explains the data.
2.4 Branching process model.

To construct a quantitative dynamical model that recapitulates the differences in growth dynamics between cell populations with distinct initial seed cell numbers N0 and three intrinsic types of proliferation behavior, we used a multi-type discrete-time branching process. The traditional approach to population dynamics based on ordinary differential equations (ODEs), which is deterministic and has continuous variables, is not suited to cases where the cell population is small, as in the earliest stage of proliferation from a few cells studied in our experiments. Deterministic models are also unfit because, with such small populations and measurements at single-cell resolution, stochasticity in cell activity does not average out. The nuanced differences between individual cells cannot be captured by a different deterministic mechanism for each individual cell, and the only information available is the initial cell number. Thus, the unobservable nuances between cells are taken care of by a stochastic model.

Given the small populations, our model should be purely stochastic, without a deterministic growth component. The focus is the concrete population size of a finite number (three) of types, so Poisson processes are not suitable. Markov chains can partially describe the proportions under some conditions [47], but the population sizes themselves are known, not just their ratios; therefore Markov chains are not necessary. Even lifted Markov chains [48] and random dynamical systems [52] are not applicable in this situation, since the population should be non-negative. Branching processes can describe the population sizes of multiple types with symmetric and asymmetric division, transition, and death [19]. Moreover, the parameters can be temporally and spatially inhomogeneous, which is convenient. Therefore, we utilized a branching process in our model.
In the branching process, during each time interval each cell independently and randomly chooses a behavior: division, death, or stagnation in the quiescent state, with probabilities that depend on the cell growth type. Denoting the growth rate and death rate of the fast type by gF and dF respectively, and the population size of fast type cells on Day n by F(n), the population on Day n + 1 is

$$F(n+1) = \sum_{i=1}^{F(n)} A_i,$$

where the $A_i$ for different $i$ are independent. $A_i$ represents the number of descendants of fast type cell $i$ after one day: it equals 2 with probability gF, 0 with probability dF, and 1 with probability 1 − gF − dF. Therefore, given F(n), the distribution of F(n + 1) is

$$P[F(n+1) = N] = \sum_{2a+b=N} \frac{F(n)!}{a!\,b!\,[F(n)-a-b]!}\; g_F^{\,a}\, d_F^{\,F(n)-a-b}\, (1-g_F-d_F)^{b},$$

where the summation is taken over all non-negative integer pairs (a, b) with 2a + b = N. The moderate and slow types evolve similarly, with their corresponding growth rates gM, gS and death rates dM, dS.

As shown in Fig. 2, the growth rates gF, gM, and gS should be decreasing functions of the total population; in our model, we adopted a quadratic dependence.
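Computationally, one time step of this process for a single type amounts to a multinomial draw over the three per-cell outcomes (division, death, quiescence), which is exactly how the distribution of F(n + 1) above arises. A minimal Python sketch (the rate values shown are the ones from the Methods section, used purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def branching_step(count, g, d, rng):
    """One time step of the branching process for a single cell type.

    Each of `count` cells independently divides (2 descendants, prob. g),
    dies (0 descendants, prob. d), or remains quiescent (1 descendant);
    the new population is 2 * (#divided) + (#quiescent).
    """
    divided, died, quiescent = rng.multinomial(count, [g, d, 1.0 - g - d])
    return 2 * divided + quiescent

print(branching_step(100, 0.5, 0.01, rng))  # e.g. ~149 cells after one step
```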
We performed a parameter scan to show that our model could reproduce the experimental phenomena for a wide range of model parameters (see details in Table 3).

The simulation results are shown in the right panels of Figs. 1–3, in comparison with the experimental data on the left. Our model qualitatively captured the growth patterns of the groups with different initial seeding cell numbers. For example, in Fig. 2, when wells were less than half full (cell number < 20000), most wells in the N0 = 10-cell group grew faster than wells in the N0 = 1-cell group, even at the same cell number. In Fig. 3, all wells in the N0 = 10-cell group in our model grew quickly until saturation. As in the experiment, some wells in the N0 = 1-cell group in our model never grew, while others began to take off very late.

In our model, the high extinction rate in the N0 = 1-cell group (14/80) was explained as "bad luck" at the early stage: since the birth rate and death rate were close, a single cell could easily die before dividing. Another possible explanation for such a difference in growth rates was geometric: a population started from 10 initial cells would form 10 small colonies, while one started from 1 initial cell would form 1 large colony, and for the same total area, 10 small colonies have a larger total perimeter, hence more growth space and a larger growth rate, than 1 large colony. However, we carefully checked the photos and found that almost all wells produced 1 large colony of nearly the same shape, and there was no significant relationship between colony perimeter and growth rate.

| pF | pM | pS | d | g0 | r | Feature 1 | Feature 2 | Feature 3 | Feature 4 |
|----|----|----|------|------|------|-----|-----|-----|-----|
| 0.4 | 0.4 | 0.2 | 0.01 | 0.5 | 0.1 | Yes | Yes | Yes | Yes |
| 0.4 | 0.4 | 0.2 | 0 | 0.5 | 0.1 | Yes | Yes | Yes | Yes |
| 0.4 | 0.4 | 0.2 | 0.05 | 0.5 | 0.1 | Yes | Yes | Yes | Yes |
| 0.4 | 0.4 | 0.2 | 0.1 | 0.5 | 0.1 | No | Yes | Yes | No |
| 0.4 | 0.4 | 0.2 | 0.01 | 0.45 | 0.1 | Yes | Yes | Yes | Yes |
| 0.4 | 0.4 | 0.2 | 0.01 | 0.6 | 0.1 | Yes | Yes | Yes | Yes |
| 0.4 | 0.4 | 0.2 | 0.01 | 0.4 | 0.1 | Yes | Yes | Yes | No |
| 0.4 | 0.4 | 0.2 | 0.01 | 0.5 | 0.05 | Yes | Yes | Yes | Yes |
| 0.4 | 0.4 | 0.2 | 0.01 | 0.5 | 0 | Yes | Yes | Yes | Yes |
| 0.4 | 0.4 | 0.2 | 0.01 | 0.5 | 0.15 | Yes | Yes | Yes | No |
| 0.4 | 0.4 | 0.2 | 0.01 | 0.5 | 0.2 | No | Yes | Yes | No |
| 0.3 | 0.5 | 0.2 | 0.01 | 0.5 | 0.1 | Yes | Yes | Yes | Yes |
| 0.5 | 0.3 | 0.2 | 0.01 | 0.5 | 0.1 | Yes | Yes | Yes | Yes |
| 0.4 | 0.5 | 0.1 | 0.01 | 0.5 | 0.1 | Yes | Yes | Yes | Yes |
| 0.4 | 0.3 | 0.3 | 0.01 | 0.5 | 0.1 | Yes | Yes | Yes | Yes |
| 0.5 | 0.4 | 0.1 | 0.01 | 0.5 | 0.1 | Yes | Yes | Yes | Yes |
| 0.3 | 0.4 | 0.3 | 0.01 | 0.5 | 0.1 | Yes | Yes | Yes | Yes |
| 0.1 | 0.1 | 0.8 | 0.01 | 0.5 | 0.1 | No | Yes | Yes | No |
| 0.5 | 0.5 | 0 | 0.01 | 0.5 | 0.1 | Yes | Yes | No | Yes |
| 0 | 0.5 | 0.5 | 0.01 | 0.5 | 0.1 | No | Yes | Yes | Yes |
| 0.5 | 0 | 0.5 | 0.01 | 0.5 | 0.1 | Yes | No | Yes | No |
| 1 | 0 | 0 | 0.01 | 0.5 | 0.1 | Yes | No | No | No |

Table 3: Performance of our model with different parameters. We adjusted the parameters of our model over a wide range and observed whether the model could still reproduce four important "features" of the experiment; this parameter scan showed that our model is robust under parameter perturbations. Here pF, pM, pS are the probabilities that an initial cell is of fast, moderate, or slow type; d is the death rate; g0 is the growth factor; and r is the range of the random modifier (see the Methods section for explanations of these parameters). Feature 1: all wells in the N0 = 10-cell group reach saturation. Feature 2: presence of late-growing wells in the N0 = 1-cell group. Feature 3: presence of non-growing wells in the N0 = 1-cell group. Feature 4: different growth rates at the same population size between the N0 = 10-cell group and the N0 = 1-cell group.
3 Discussion

As many recent single-cell-level datasets have shown, a tumor can contain multiple distinct subpopulations engaging in interconversions and interactions that can influence cancer cell proliferation, death, migration, and other features that contribute to malignancy [33, 55, 1, 18, 34, 56, 20, 24, 5, 32, 6]. The presence of these two intra-population behaviors (interconversion and interaction) can manifest as a departure from the elementary model of exponential growth [35] in the early phase of population growth, far from the carrying capacity of the culture environment (near which growth is trivially non-exponential). The exponential growth model assumes uniformity of cell division rates across all cells (hence a population doubling rate proportional to the population size N(t)) and the absence of cell-cell interactions that affect cell division and death rates. Investigating the "non-genetic heterogeneity" hypothesis of cancer cells quantitatively is therefore paramount not only for understanding cancer biology but also for elementary principles of cell population growth.

As an example, here we showed that clonal cell populations of the leukemia HL60 cell line are heterogeneous with regard to the growth behaviors of individual cells, which can be summarized as subpopulations characterized by distinct intrinsic growth rates; these were revealed by analysis of early population growth starting from microcultures seeded with varying (low) cell numbers N0.

Since we have noted only a very weak effect of cell-cell interactions on cell growth behaviors (Allee effect) in this cell line (as opposed to another tumor cell line, in which we found that the departure from exponential growth could be explained by the Allee effect [20]), we focused on the very presence among HL60 cells of subpopulations with distinct proliferative capacities as a mechanism for the departure of the early population growth curve from exponential growth.

The reseeding experiment demonstrated that the characteristic growth behaviors of subpopulations could be inherited across cell generations and after moving to a new environment (fresh culture), consistent with long-enduring endogenous properties of the cells. This result might be explained by cells occupying distinct stable cell states (in a multistable system). Thus, we introduced multiple cell types with different growth rates into our stochastic model. Specifically, in a branching process model, we assumed the existence of three types: fast, moderate, and slow cells. The model could replicate the key features of the experimental data, such as the different growth rates at the same population size between the N0 = 10-cell group and the N0 = 1-cell group, and the presence of late-growing and non-growing wells in the N0 = 1-cell group.

While we were able to fit the observed behaviors, in which the growth rate depended not only on N(t) but also on N0, the existence of the three (or even more) cell types still needs to be verified experimentally. For instance, statistical cluster analysis of the transcriptomes of individual cells by single-cell RNA-seq [3] across the population may identify transcriptomically distinct subpopulations that could be isolated (e.g., after association with cell surface markers) and evaluated separately for their growth behaviors. We might apply inference methods to such sequencing data to determine the gene regulatory relations that lead to multiple phenotypes [50, 44], although the causal relationships might not always be determinable [49]. Moreover, since the existence of transposons might affect the growth rates, a corresponding analysis should be conducted [21, 40].

The central assumption of the coexistence of multiple subpopulations in the cell line stock must be accompanied by a second assumption: that there are transitions between these distinct cell populations. Otherwise, in the stock population, the fastest-growing cells would eventually outgrow the slow-growing cells. Furthermore, one has to assume a steady state in which the population of slow-growing cells is continuously replenished from the population of fast-growing cells. Finally, we must assume that the steady-state proportions of the subpopulations are such that in low-seeding wells with N0 = 1 cell, there is a sizable probability that a microculture receives cells from each of the (three) presumed subtypes. The number of wells in the ensemble of replicate microcultures for each N0 condition was sufficiently large for us to make these observations and inform the model, but a larger ensemble would be required to determine the relative proportions of the cell types in the parental stock population with satisfactory accuracy.

Transitions might also have occurred during our experiment. For example, the late-growing wells in the N0 = 1-cell group could be explained by such a transition: initially, only slow type cells were present, but once one of these slow-growing cells switched to the moderate type, exponential growth ensued at the rate intrinsic to moderate cells.

If there are transitions, what is the transition rate? Our reseeding experiments are compatible with a relatively slow rate of interconversion of growth behaviors, in that the same growth type was maintained across 30 generations. An alternative to the principle of transitions at a constant rate intrinsic to each cell type is that transitions are extrinsically determined. Specifically, seeding in the "lone" condition of N0 = 1 may induce a dormant state, that is, a transition to a slower growth mode that is then maintained, on average, over 30+ generations, with occasional returns to the faster types that account for the delayed exponential growth. The lack of experimental data might be partially made up for by inference methods [51].

This model, however, would bring back the notion of "environment awareness", or the principle of a "critical density" for growth implemented by cell-cell interaction (Allee effect), which we had deliberately not considered (see above) since it was not necessary. We do not exclude this possibility, which could be experimentally tested as follows: cultivate N0 = 1-cell wells for 20 days, by which time delayed exponential growth has occurred in some wells, then use the cells of the wells with fast-growing populations (which should contain the fast type) to restart the experiment, seeded at N0 = 10, 4, or 1 cells. If wells with different seeding numbers exhibit the same growth rates, then the growth difference in the original experiment is solely due to preexisting (slowly interconverting) cell phenotypes. If, instead, the N0 = 1-cell wells resume the typical slow growth, this would indicate a density-induced transition to the slow growth type. If cell-cell interactions need to be taken into account, certain results in developmental biology might help, since that field studies the emergence of patterns through strong cell-cell interactions [46, 45, 42].

In the spirit of Occam's razor, and given the technical difficulty of demonstrating cell-cell interactions in HL60 cells in separate experiments, we were able to model the observed behaviors with the simplest assumption of cell-autonomous properties, including the existence of multiple states (growth behaviors) and slow transitions between them, without cell density dependence or interactions.

Taken together, we showed that one manifestation of the burgeoning awareness of ubiquitous cell phenotype heterogeneity in isogenic cell populations is the presence of distinct intrinsic cell types that slowly interconvert, resulting in a stationary population composition. The differing growth rates of the subtypes and their stable proportions may be an elementary characteristic of a given population that by itself can account for the departure of early population growth kinetics from the basic exponential growth model.
4 Methods

4.1 Setup of the growth experiment with different initial cell numbers.

HL60 cells were maintained in IMDM wGln with 20% FBS (heat-inactivated) and 1% P/S (GIBCO), at a cell density between 3 × 10^5 and 2.5 × 10^6 cells/ml. Cells were always handled and maintained under sterile conditions (tissue culture hood; 37 °C, 5% CO2, humidified incubator). At the beginning of the experiment, cells were collected, washed two times in PBS, and stained for vitality (Trypan blue, GIBCO). The population of cells was first gated for morphology and then for vitality staining. Only Trypan-negative cells were sorted (BD FACSAria II). The cells were sorted into a 384-well plate with IMDM wGln, 20% FBS (heat-inactivated), and 1% P/S (GIBCO).

Cell population growth was monitored using a Leica microscope (heated environmental chamber and CO2 level control) with a motorized tray. Starting from Day 4, the 384-well plate was placed inside the environmental chamber every 24 hours. Images were acquired in a 3 × 3 grid for each well; after acquisition, the 9 fields were stitched into a single image. The software ImageJ was applied to identify and estimate the area occupied by "entities" in each image. The area (proportional to cell number) was used to follow the cell growth.
4.2 Setup of the reseeding experiment for growth pattern inheritance.

HL60 cells were cultivated for 3 weeks, after which we chose one full well and one half-full well. We supposed that the full well was dominated by fast type cells and the half-full well by moderate type cells, which have lower growth rates. We reseeded cells from these two wells and cultivated them in two 96-well plates (rows A-H, columns 1-12). In each plate, wells B2-B11, D2-D11, and F2-F11 started with 78 fast cells, while wells C2-C11, E2-E11, and G2-G11 started with 78 moderate cells. Rows A and H and columns 1 and 12 had no cells and no media, and we found that the wells in rows B and G and columns 2 and 11, the outermost non-empty wells, evaporated much faster than the inner wells; the growth of cells in those wells was therefore much slower. Hence we considered only the inner wells, where D3-D10 and F3-F10 started with fast cells and C3-C10 and E3-E10 started with moderate cells, namely 32 fast wells and 32 moderate wells in total. During the experiment, no media was added. Each day, we observed the wells to check whether their areas exceeded one-half of the whole well. The experiment was terminated after 20 days.
4.3 Weighted Welch's t-test.

The weighted Welch's t-test is used to test the hypothesis that two populations have equal means when the sample values carry different weights [12]. Assume that for group i (i = 1, 2) the sample size is N_i, and that the jth sample is the average of c_i^j independent and identically distributed variables. Let X_i^j be the observed average for the jth sample. Set ν1 = N1 − 1, ν2 = N2 − 1. Define

$$\bar{X}^i_W = \frac{\sum_{j=1}^{N_i} X_i^j c_i^j}{\sum_{j=1}^{N_i} c_i^j}, \qquad s^2_{i,W} = \frac{N_i \left[\sum_{j=1}^{N_i} (X_i^j)^2 c_i^j\right] \big/ \sum_{j=1}^{N_i} c_i^j \;-\; N_i \left(\bar{X}^i_W\right)^2}{N_i - 1},$$

$$t = \frac{\bar{X}^1_W - \bar{X}^2_W}{\sqrt{\dfrac{s^2_{1,W}}{N_1} + \dfrac{s^2_{2,W}}{N_2}}}, \qquad \nu = \frac{\left(\dfrac{s^2_{1,W}}{N_1} + \dfrac{s^2_{2,W}}{N_2}\right)^2}{\dfrac{s^4_{1,W}}{N_1^2\,\nu_1} + \dfrac{s^4_{2,W}}{N_2^2\,\nu_2}}.$$

If the two populations have equal means, then t follows the t-distribution with ν degrees of freedom.

The weighted Welch's t-test was applied to the growth experiment with different initial cell numbers, in order to determine whether the growth rates during the exponential phase (5–50 area units) differed between groups. Here X_i^j corresponded to growth rate and c_i^j to cell area. The p-value for the N0 = 10-cell group vs. the N0 = 4-cell group was 2.12 × 10^−8; the p-value for the N0 = 10-cell group vs. the N0 = 1-cell group was smaller than 10^−12; and the p-value for the N0 = 4-cell group vs. the N0 = 1-cell group was 5.35 × 10^−5. Therefore, the growth rate difference between any two groups was statistically significant.
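A direct implementation of the formulas above is sketched below (SciPy is used only for the t-distribution tail; the inputs are the per-well growth rates X and weights c, as in the application just described):

```python
import numpy as np
from scipy import stats

def weighted_welch_t(x1, c1, x2, c2):
    """Weighted Welch's t-test following the formulas above.

    x1, x2: observed per-sample averages (here, growth rates);
    c1, c2: the corresponding weights (here, cell areas).
    Returns the statistic t, degrees of freedom nu, and two-sided p-value.
    """
    def group_moments(x, c):
        x, c = np.asarray(x, float), np.asarray(c, float)
        n = len(x)
        xbar = np.sum(x * c) / np.sum(c)
        s2 = (n * np.sum(x ** 2 * c) / np.sum(c) - n * xbar ** 2) / (n - 1)
        return n, xbar, s2

    n1, m1, s2_1 = group_moments(x1, c1)
    n2, m2, s2_2 = group_moments(x2, c2)
    t = (m1 - m2) / np.sqrt(s2_1 / n1 + s2_2 / n2)
    nu = (s2_1 / n1 + s2_2 / n2) ** 2 / (
        s2_1 ** 2 / (n1 ** 2 * (n1 - 1)) + s2_2 ** 2 / (n2 ** 2 * (n2 - 1))
    )
    p = 2 * stats.t.sf(abs(t), df=nu)
    return t, nu, p
```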
4.4 Permutation test.

The permutation test is a non-parametric method to test whether two samples differ significantly with respect to a statistic (e.g., the sample mean) [16]. It is easy to calculate and fits our situation, so we adopted it rather than more complicated tests, such as the Mann–Whitney test. For two samples {x_1, ..., x_m} and {y_1, ..., y_n}, consider the null hypothesis that the means of x and y are the same. For these samples, calculate the mean of the first sample, µ0 = (1/m) Σ_i x_i. Then randomly divide the m + n values into two groups of sizes m and n, {x'_1, ..., x'_m} and {y'_1, ..., y'_n}, such that each permutation has equal probability, and calculate the mean of the new first sample, µ'0 = (1/m) Σ_i x'_i. The two-sided p-value is then defined as

$$p = 2 \min\{P(\mu_0 \le \mu'_0),\; 1 - P(\mu_0 \le \mu'_0)\}.$$

If µ0 is an extreme value in the distribution of µ'0, then the two sample means are different.

In the reseeding experiment, the mean time to exceed half a well for the fast group was 11.4375 days. Of all $\binom{64}{32}$ possible regroupings, only 7 had an equal or smaller mean time, so the p-value was $2 \times 7 / \binom{64}{32} = 7.6 \times 10^{-18}$. This indicates that the growth rate difference between the fast group and the moderate group was significant.
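The exact count above is tractable only because the day values are heavily tied; in general, the permutation distribution is approximated by random sampling. A Monte Carlo sketch of the same two-sided test (an approximation, not the exact enumeration used in the text):

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=100_000, seed=0):
    """Two-sided Monte Carlo permutation test for a difference in means."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([np.asarray(x, float), np.asarray(y, float)])
    m = len(x)
    mu0 = np.mean(x)
    mus = np.empty(n_perm)
    for k in range(n_perm):
        rng.shuffle(pooled)           # random regrouping of the pooled values
        mus[k] = pooled[:m].mean()    # mean of the new "first" sample
    p_le = np.mean(mu0 <= mus)
    return 2 * min(p_le, 1 - p_le)
```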
4.5 Model details.

The simulation time interval was half a day, but we only used the results at full days. For each initial cell, the probabilities of being of fast, moderate, or slow type, pF, pM, pS, were 0.4, 0.4, and 0.2, respectively.

Each half day, a fast type cell had probability d of dying and probability gF of dividing; a division produced two fast cells, capturing the intrinsic growth behavior that is, to some extent, heritable. Denoting the total cell number on the previous day by N,

gF = g0 (1 − N^2/C^2) + δ,

where δ is a random variable uniformly distributed on [−r, r] and is constant for all cells in the same well. If gF < 0, we set gF = 0; if gF > 1 − d, we set gF = 1 − d. In the simulations displayed, the death rate was d = 0.01, the carrying capacity C = 40000, the growth factor g0 = 0.5, and the range of the random modifier r = 0.1.

Each half day, a moderate type cell had probability d of dying and probability gM of dividing; a division produced two moderate cells, with gM = gF/1.5. Similarly, each half day a slow type cell had probability d of dying and probability gS of dividing; a division produced two slow cells, with gS = gF/3.
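Putting this section together, a compact re-implementation of the simulation is sketched below. Parameter values follow the text. One detail the text leaves open is exactly when δ is redrawn; here it is drawn once per well and reused at every step (an assumption, flagged in the comments).

```python
import numpy as np

rng = np.random.default_rng(1)
d, C, g0, r = 0.01, 40000, 0.5, 0.1      # death rate, capacity, growth factor, modifier range
SPEED = np.array([1.0, 1 / 1.5, 1 / 3])  # gM = gF/1.5, gS = gF/3

def simulate_well(n0, days=23):
    """Simulate one well for `days` days; returns the daily total cell counts."""
    counts = rng.multinomial(n0, [0.4, 0.4, 0.2]).astype(np.int64)  # fast/moderate/slow seeds
    delta = rng.uniform(-r, r)   # assumption: one draw per well, reused at each step
    totals = [int(counts.sum())]
    n_prev_day = totals[0]
    for step in range(1, 2 * days + 1):          # half-day steps
        gF = np.clip(g0 * (1 - n_prev_day ** 2 / C ** 2) + delta, 0.0, 1.0 - d)
        g = gF * SPEED
        for k in range(3):                       # per-type multinomial update
            divided, died, stayed = rng.multinomial(counts[k], [g[k], d, 1 - g[k] - d])
            counts[k] = 2 * divided + stayed
        if step % 2 == 0:                        # record at full days
            n_prev_day = int(counts.sum())
            totals.append(n_prev_day)
    return totals

print(simulate_well(1)[:8])  # first week of one N0 = 1 well
```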
4.6 Parameter scan.

Since growth is measured by the area covered by cells, we could not experimentally verify most assumptions of our model or determine the values of its parameters. Therefore, we performed a parameter scan, evaluating the performance of our model for different sets of parameters. We adjusted 6 parameters: the initial type probabilities pF, pM, pS, the death rate d, the growth factor g0, and the random modifier r. We checked whether 4 features observable in the experiment could be reproduced: growth of all wells in the N0 = 10-cell group to saturation; the existence of late-growing wells in the N0 = 1-cell group; the existence of non-growing wells in the N0 = 1-cell group; and a difference in growth rates between the N0 = 10-cell group and the N0 = 1-cell group at the same population size. Table 3 shows the performance of simulations with the various parameter sets. Within a wide range of parameters, our model is able to replicate the experimental results shown in Figs. 1–3, indicating that it is robust under perturbations.
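The scan itself is then a loop over parameter sets: for each set, simulate an ensemble of wells per seeding condition and test the features. The sketch below builds on the simulate_well sketch above, varies only d, g0, and r, and uses simplified stand-in criteria for Features 1–3; the actual checks in the paper are qualitative comparisons with Figs. 1–3.

```python
def scan(param_sets, wells=80):
    """Toy scan loop: rebinds the module-level parameters read by simulate_well."""
    global d, g0, r
    results = []
    for d, g0, r in param_sets:
        ens = {n0: [simulate_well(n0) for _ in range(wells)] for n0 in (1, 10)}
        f1 = all(t[-1] > 0.9 * C for t in ens[10])     # Feature 1: all N0=10 wells saturate
        f2 = any(0 < t[-1] < 0.5 * C for t in ens[1])  # Feature 2 proxy: late/partial growers
        f3 = any(t[-1] == 0 for t in ens[1])           # Feature 3: non-growing wells exist
        results.append(((d, g0, r), f1, f2, f3))
    return results

print(scan([(0.01, 0.5, 0.1), (0.1, 0.5, 0.1)]))
```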
Acknowledgements

We would like to thank Ivana Bozic, Yifei Liu, Georg Luebeck, Weili Wang, Yuting Wei and Lingxue Zhu for helpful advice and discussions.
References

[1] Angelini, E., Wang, Y., Zhou, J. X., Qian, H., and Huang, S. A model for the intrinsic limit of cancer therapy: Duality of treatment-induced cell death and treatment-induced stemness. PLOS Comput. Biol. 18, 7 (2022), e1010319.
[2] Bartoszynski, R., Brown, B. W., McBride, C. M., and Thompson, J. R. Some nonparametric techniques for estimating the intensity function of a cancer related nonstationary Poisson process. Ann. Stat. (1981), 1050–1060.
[3] Bhartiya, D., Kausik, A., Singh, P., and Sharma, D. Will single-cell RNAseq decipher stem cells biology in normal and cancerous tissues? Hum. Reprod. Update 27, 2 (2021), 421–421.
[4] Chang, H. H., Hemberg, M., Barahona, M., Ingber, D. E., and Huang, S. Transcriptome-wide noise controls lineage choice in mammalian progenitor cells. Nature 453, 7194 (2008), 544–547.
[5] Chapman, A., del Ama, L. F., Ferguson, J., Kamarashev, J., Wellbrock, C., and Hurlstone, A. Heterogeneous tumor subpopulations cooperate to drive invasion. Cell Rep. 8, 3 (2014), 688–695.
[6] Chen, X., Wang, Y., Feng, T., Yi, M., Zhang, X., and Zhou, D. The overshoot and phenotypic equilibrium in characterizing cancer dynamics of reversible phenotypic plasticity. J. Theor. Biol. 390 (2016), 40–49.
[7] Clark, W. H. Tumour progression and the nature of cancer. Br. J. Cancer 64, 4 (1991), 631.
[8] Dewanji, A., Luebeck, E. G., and Moolgavkar, S. H. A generalized Luria–Delbrück model. Math. Biosci. 197, 2 (2005), 140–152.
[9] Durrett, R., Foo, J., Leder, K., Mayberry, J., and Michor, F. Intratumor heterogeneity in evolutionary models of tumor progression. Genetics 188, 2 (2011), 461–477.
[10] Egeblad, M., Nakasone, E. S., and Werb, Z. Tumors as organs: Complex tissues that interface with the entire organism. Dev. Cell 18, 6 (2010), 884–901.
[11] Gatenby, R. A., Cunningham, J. J., and Brown, J. S. Evolutionary triage governs fitness in driver and passenger mutations and suggests targeting never mutations. Nat. Commun. 5 (2014), 5499.
[12] Goldberg, L. R., Kercheval, A. N., and Lee, K. T-statistics for weighted means in credit risk modeling. J. Risk Finance 6, 4 (2005), 349–365.
[13] Gunnarsson, E. B., De, S., Leder, K., and Foo, J. Understanding the role of phenotypic switching in cancer drug resistance. J. Theor. Biol. 490 (2020), 110162.
[14] Gupta, P. B., Fillmore, C. M., Jiang, G., Shapira, S. D., Tao, K., Kuperwasser, C., and Lander, E. S. Stochastic state transitions give rise to phenotypic equilibrium in populations of cancer cells. Cell 146, 4 (2011), 633–644.
[15] Hanahan, D., and Weinberg, R. A. Hallmarks of cancer: The next generation. Cell 144, 5 (2011), 646–674.
[16] Hastie, T., Tibshirani, R., and Friedman, J. The Elements of Statistical Learning, 2nd ed. Springer, New York, 2016.
[17] Hordijk, W., Steel, M., and Dittrich, P. Autocatalytic sets and chemical organizations: modeling self-sustaining reaction networks at the origin of life. New J. Phys. 20, 1 (2018), 015011.
[18] Howard, G. R., Johnson, K. E., Rodriguez Ayala, A., Yankeelov, T. E., and Brock, A. A multi-state model of chemoresistance to characterize phenotypic dynamics in breast cancer. Sci. Rep. 8, 1 (2018), 1–11.
[19] Jiang, D.-Q., Wang, Y., and Zhou, D. Phenotypic equilibrium as probabilistic convergence in multi-phenotype cell population dynamics. PLOS ONE 12, 2 (2017), e0170916.
[20] Johnson, K. E., Howard, G., Mo, W., Strasser, M. K., Lima, E. A., Huang, S., and Brock, A. Cancer cell population growth kinetics at low densities deviate from the exponential growth model and suggest an Allee effect. PLOS Biol. 17, 8 (2019), e3000399.
[21] Kang, Y., Gu, C., Yuan, L., Wang, Y., Zhu, Y., Li, X., Luo, Q., Xiao, J., Jiang, D., Qian, M., et al. Flexibility and symmetry of prokaryotic genome rearrangement reveal lineage-associated core-gene-defined genome organizational frameworks. mBio 5, 6 (2014), e01867–14.
[22] Koch, A. L. Mutation and growth rates from Luria-Delbrück fluctuation tests. Mutat. Res.-Fund. Mol. M. 95, 2-3 (1982), 129–143.
[23] Kochanowski, K., Sander, T., Link, H., Chang, J., Altschuler, S. J., and Wu, L. F. Systematic alteration of in vitro metabolic environments reveals empirical growth relationships in cancer cell phenotypes. Cell Rep. 34, 3 (2021), 108647.
[24] Korolev, K. S., Xavier, J. B., and Gore, J. Turning ecology and evolution against cancer. Nat. Rev. Cancer 14, 5 (2014), 371–380.
[25] Lea, D. E., and Coulson, C. A. The distribution of the numbers of mutants in bacterial populations. J. Genet. 49, 3 (1949), 264–285.
[26] Li, Q., Wennborg, A., Aurell, E., Dekel, E., Zou, J.-Z., Xu, Y., Huang, S., and Ernberg, I. Dynamics inside the cancer cell attractor reveal cell heterogeneity, limits of stability, and escape. Proc. Natl. Acad. Sci. U.S.A. 113, 10 (2016), 2672–2677.
[27] Luebeck, G., and Moolgavkar, S. H. Multistage carcinogenesis and the incidence of colorectal cancer. Proc. Natl. Acad. Sci. 99, 23 (2002), 15095–15100.
[28] Luria, S. E., and Delbrück, M. Mutations of bacteria from virus sensitivity to virus resistance. Genetics 28, 6 (1943), 491.
[29] Mackillop, W. J. The growth kinetics of human tumours. Clin. Phys. Physiol. M. 11, 4A (1990), 121.
[30] Meacham, C. E., and Morrison, S. J. Tumour heterogeneity and cancer cell plasticity. Nature 501, 7467 (2013), 328–337.
[31] Newton, P. K., Mason, J., Bethel, K., Bazhenova, L. A., Nieva, J., and Kuhn, P. A stochastic Markov chain model to describe lung cancer growth and metastasis. PLOS ONE 7, 4 (2012), e34637.
[32] Niu, Y., Wang, Y., and Zhou, D. The phenotypic equilibrium of cancer cells: From average-level stability to path-wise convergence. J. Theor. Biol. 386 (2015), 7–17.
[33] Pisco, A. O., and Huang, S. Non-genetic cancer cell plasticity and therapy-induced stemness in tumour relapse: 'What does not kill me strengthens me'. Br. J. Cancer 112, 11 (2015), 1725–1732.
[34] Sahoo, S., Mishra, A., Kaur, H., Hari, K., Muralidharan, S., Mandal, S., and Jolly, M. K. A mechanistic model captures the emergence and implications of non-genetic heterogeneity and reversible drug resistance in ER+ breast cancer cells. NAR Cancer 3, 3 (2021), zcab027.
[35] Skehan, P., and Friedman, S. J. Non-exponential growth by mammalian cells in culture. Cell Prolif. 17, 4 (1984), 335–343.
[36] Sonnenschein, C., and Soto, A. M. Somatic mutation theory of carcinogenesis: Why it should be dropped and replaced. Mol. Carcinog. 29, 4 (2000), 205–211.
[37] Speer, J. F., Petrosky, V. E., Retsky, M. W., and Wardwell, R. H. A stochastic numerical model of breast cancer growth that simulates clinical data. Cancer Res. 44, 9 (1984), 4124–4130.
[38] Spina, S., Giorno, V., Román-Román, P., and Torres-Ruiz, F. A stochastic model of cancer growth subject to an intermittent treatment with combined effects: Reduction in tumor size and rise in growth rate. Bull. Math. Biol. 76, 11 (2014), 2711–2736.
[39] Tabassum, D. P., and Polyak, K. Tumorigenesis: It takes a village. Nat. Rev. Cancer 15, 8 (2015), 473–483.
[40] Wang, Y. Algorithms for finding transposons in gene sequences. arXiv preprint arXiv:1506.02424 (2015).
[41] Wang, Y. Some Problems in Stochastic Dynamics and Statistical Analysis of Single-Cell Biology of Cancer. Ph.D. thesis, University of Washington, 2018.
[42] Wang, Y. Two metrics on rooted unordered trees with labels. Algorithms Mol. Biol. 17, 1 (2022), 1–17.
[43] Wang, Y., Dessalles, R., and Chou, T. Modelling the impact of birth control policies on China's population and age: effects of delayed births and minimum birth age constraints. R. Soc. Open Sci. 9, 6 (2022), 211619.
[44] Wang, Y., and He, S. Inference on autoregulation in gene expression. arXiv preprint arXiv:2201.03164 (2022).
[45] Wang, Y., Kropp, J., and Morozova, N. Biological notion of positional information/value in morphogenesis theory. Int. J. Dev. Biol. 64, 10-11-12 (2020), 453–463.
[46] Wang, Y., Minarsky, A., Penner, R., Soulé, C., and Morozova, N. Model of morphogenesis. J. Comput. Biol. 27, 9 (2020), 1373–1383.
[47] Wang, Y., Mistry, B. A., and Chou, T. Discrete stochastic models of SELEX: Aptamer capture probabilities and protocol optimization. J. Chem. Phys. 156, 24 (2022), 244103.
[48] Wang, Y., and Qian, H. Mathematical representation of Clausius' and Kelvin's statements of the second law and irreversibility. J. Stat. Phys. 179, 3 (2020), 808–837.
[49] Wang, Y., and Wang, L. Causal inference in degenerate systems: An impossibility result. In International Conference on Artificial Intelligence and Statistics (2020), PMLR, pp. 3383–3392.
[50] Wang, Y., and Wang, Z. Inference on the structure of gene regulatory networks. J. Theor. Biol. 539 (2022), 111055.
[51] Wang, Y., Zhang, B., Kropp, J., and Morozova, N. Inference on tissue transplantation experiments. J. Theor. Biol. 520 (2021), 110645.
[52] Ye, F. X.-F., Wang, Y., and Qian, H. Stochastic dynamics: Markov chains and random transformations. Discrete Contin. Dyn. Syst. B 21, 7 (2016), 2337.
[53] Yorke, E. D., Fuks, Z., Norton, L., Whitmore, W., and Ling, C. C. Modeling the development of metastases from primary and locally recurrent tumors: Comparison with a clinical data base for prostatic cancer. Cancer Res. 53, 13 (1993), 2987–2993.
[54] Zheng, Q. Progress of a half century in the study of the Luria–Delbrück distribution. Math. Biosci. 162, 1 (1999), 1–32.
[55] Zhou, D., Wang, Y., and Wu, B. A multi-phenotypic cancer model with cell plasticity. J. Theor. Biol. 357 (2014), 35–45.
[56] Zhou, J. X., Pisco, A. O., Qian, H., and Huang, S. Nonequilibrium population dynamics of phenotype conversion of cancer cells. PLOS ONE 9, 12 (2014), e110714.
Figure 1: Growth curves of the experiment (left) and simulation (right), starting from the time of reaching 5 area units (experiment) or 2500 cells (simulation), with a logarithmic scale on the y-axis. The time required to reach 5 area units was determined by exponential extrapolation, as reliable imaging started at > 5 area units. The x-axis is the time from reaching 5 area units (experiment) or 2500 cells (simulation). Red, green, or blue curves correspond to 10, 4, or 1 initial cell(s). Although starting from the same population level, the patterns differ between distinct initial cell numbers; the N0 = 1-cell group shows higher diversity.
[Figure 1 image: log-scale growth curves; left panel "experimental" (cell area, 5–80 units, vs. time in days), right panel "simulation" (cell number, 2500–40000, vs. time in days); legend: 10-cell, 4-cell, and 1-cell groups.]

[Figure 2 image: per capita growth rate (0–1.5) vs. population size; left panel "experimental" (cell area 0–80), right panel "simulation" (cell number 0–4 × 10^4).]
Figure 2: Per capita growth rate (averaged within one day) vs. cell population size for the experiment (left) and simulation (right). Each point represents one well on one day. Red, green, or blue points correspond to 10, 4, or 1 initial cell(s).
[Figure 3 image: linear-scale growth curves; left panel "experimental" (cell area 0–80 vs. time, Days 0–20), right panel "simulation" (cell number 2500–40000 vs. time).]
Figure 3: Growth curves of the experiments with different initial cell numbers N0 (left) and growth curves of the corresponding simulation (right). Each curve describes the change in the cell population of one well (measured by area or number) over time. Red, green, or blue curves correspond to N0 = 10, 4, or 1 initial cell(s).
Figure 4: Schematic illustration of the qualitative argument: three cell types and growth patterns (three colors) with different seeding numbers. An N0 = 10-cell well will contain at least one fast type cell with high probability, which will dominate the population. An N0 = 1-cell well can only contain one cell type; thus, in the microculture ensemble of replicate wells, three possible growth patterns can be observed.
knowledge_base/BdE2T4oBgHgl3EQfRge6/content/tmp_files/load_file.txt DELETED

knowledge_base/BdE2T4oBgHgl3EQfRge6/vector_store/index.faiss DELETED

knowledge_base/BdE2T4oBgHgl3EQfRge6/vector_store/index.pkl DELETED

knowledge_base/BdE4T4oBgHgl3EQf5Q5A/content/2301.05321v1.pdf DELETED

knowledge_base/BdE4T4oBgHgl3EQf5Q5A/content/tmp_files/2301.05321v1.pdf.txt DELETED
@@ -1,2198 +0,0 @@
Homeostatic regulation of renewing tissue cell populations via crowding control: stability, robustness and quasi-dedifferentiation

Cristina Parigini (1,2,3) and Philip Greulich (1,2,*)

(1) School of Mathematical Sciences, University of Southampton, Southampton, United Kingdom.
(2) Institute for Life Sciences, University of Southampton, Southampton, United Kingdom.
(3) Te Pūnaha Ātea - Space Institute, University of Auckland, Auckland, New Zealand.
(*) Corresponding author(s). E-mail(s): [email protected]; Contributing authors: [email protected]

Abstract

To maintain renewing epithelial tissues in a healthy, homeostatic state, (stem) cell divisions and differentiation need to be tightly regulated. Mechanisms of homeostatic control often rely on crowding control: cells are able to sense the cell density in their environment (via various molecular and mechanosensing pathways) and respond by adjusting division, differentiation, and cell state transitions appropriately. Here we determine, via a mathematically rigorous framework, which general conditions for the crowding feedback regulation (i) must be minimally met, and (ii) are sufficient, to allow the maintenance of homeostasis in renewing tissues. We show that those conditions naturally allow for a degree of robustness toward disruption of regulation. Furthermore, intrinsic to this feedback regulation is that stem cell identity is established collectively by the cell population, not by individual cells, which implies the possibility of 'quasi-dedifferentiation', in which cells committed to differentiation may reacquire stem cell properties upon depletion of the stem cell pool. These findings can guide future experimental campaigns to identify specific crowding feedback mechanisms.

Keywords: keyword1, Keyword2, Keyword3, Keyword4

arXiv:2301.05321v1 [q-bio.TO] 12 Jan 2023
1 Introduction

Many adult tissues are renewing, that is, terminally differentiated cells are steadily removed and replaced by new cells produced by the division of cycling cells (stem cells and progenitor cells), which then differentiate. In order to maintain those tissues in a healthy, homeostatic state, (stem) cell divisions and differentiation must be tightly balanced. Adult stem cells are the key players in maintaining and renewing such tissues due to their ability to produce cells through cell division and differentiation persistently [1]. However, the underlying cell-intrinsic and extrinsic factors that regulate a homeostatic state are complex and not always well understood.

Several experimental studies have identified mechanisms and pathways that regulate homeostasis. For example, cell crowding can trigger delamination and thus loss of cells in the Drosophila back [2], and differentiation in cultured human colon, zebrafish epidermis, and canine kidney cells [3, 4]. On the other hand, cell crowding can affect cell proliferation: overcrowding can inhibit proliferation [5], whereas a reduction in the cell density, obtained, for example, by stretching a tissue [6], causes an increase in proliferative activity (both shown in cultured canine kidney cells). Although the mechanisms mediating this regulation are not always clear, experimental studies on mechanosensing showed that cell overcrowding reduces cell motility and consequently produces a compression on cells that inhibits cell proliferation [5, 7]. Another mechanism utilising crowding feedback is the competition for limited growth signalling factors [8]. More specifically, in the mouse germ line, cells in the niche respond to a growth factor (FGF5) that promotes proliferation over differentiation, which they deplete upon being exposed to it. Therefore, the more cells are in the niche, the less FGF5 is available per cell, and the less proliferation (or more differentiation) occurs.

Despite differing in the involved molecular pathways and many other details, all these regulatory mechanisms are, in essence, sensing the cell density in their environment and responding by adjusting their propensities to divide, differentiate, die, or emigrate from the tissue. This class of mechanisms, for which cell fate propensities depend on the cell density, can be classified as crowding feedback regulation: the cell density determines the cells' proliferation and differentiation, which affects their population dynamics and thus the cell density. However, the crowding response to changes in cell density cannot be arbitrary if homeostasis is to be maintained. It must provide a (negative) feedback, in the sense that cells sense the cell density and adjust proliferation, differentiation, and cell loss, such that the cell density is decreased if it is too high and increased if it is too low. For simple tissues consisting of a single cell type with a unique cell state, it is relatively straightforward to give the conditions for crowding feedback to maintain homeostasis successfully. In this case, when the cell division rate decreases with cell density and the differentiation and/or death rate increase with cell density, a homeostatic state is maintained. However, such conclusions are not as simple to make when a tissue consists of a complex lineage hierarchy and a multitude of underlying cellular states. In the latter, more realistic case, conditions for successful homeostatic regulation – in which case we speak of crowding control – may take more complex forms.

Previous studies based on mathematical modelling have shed some light on quantitative mechanisms for homeostatic control [9–13]. In particular, in [13], a mathematical assessment of crowding feedback modelling shows that a (dynamic) homeostatic state exists under reasonable biological conditions. Nevertheless, the case of dynamic homeostasis considered there may not necessarily be a steady state but could also exhibit oscillations in cell numbers (as does realistically happen in the uterus during the menstrual cycle). While the criterion presented in [13] provides a valid sufficient condition for dynamic homeostasis, it relies on a rather abstract mathematical quantity – the dominant eigenvalue of the dynamical matrix – that is difficult, if not impossible, to measure in reality.

Here, we wish to generalise previous findings and seek to identify general conditions for successful homeostatic control if propensities for cell division, differentiation, and loss are responsive to variations in cell density. More precisely, we derive conditions that must be minimally fulfilled (necessary conditions) and conditions which are sufficient, to ensure that homeostasis prevails. To identify and formulate those conditions, we note that homeostasis is a property of the tissue cell population dynamics, which can be mathematically expressed as a dynamical system. Even if a numerically exact formulation of the dynamics may not be possible, one can formulate generic yet mathematically rigorous conditions by referring to the criteria for the existence of stable steady states in the cell population dynamics of renewing tissues. We will derive those conditions by mathematical, analytical means, augmented by a numerical analysis testing the limits of those conditions.

We will also show that homeostatic control by crowding feedback possesses inherent robustness to failures and perturbations of the regulatory pathways, which may occur through external influences (e.g. wide-spread biochemical factors) and genetic mutations. Finally, we will assess the response of cells when the pool of stem cells is depleted. Crucially, we find that inherent to crowding feedback control is that formerly committed progenitor cells reacquire self-renewal capacity without substantial changes in their internal states. Dedifferentiation has been widely reported under conditions of tissue regeneration [14, 15] or when stem cells are depleted [16–19], which is usually thought to involve a substantial reprogramming of the cell-intrinsic states towards a stem cell type. On the other hand, our analysis suggests the possibility of "quasi"-dedifferentiation, the reversion from a committed cell to a stem cell by a mere quantitative adjustment of the pacing of proliferation and differentiation, without a substantial qualitative change in its expression profiles.
- Homeostatic regulation of renewing tissue cell populations via crowding control
132
- 2 Modelling of tissue cell dynamics under
133
- crowding feedback
134
- We seek to assess the conditions for homeostasis in renewing tissue cell pop-
135
- ulations, that is, either a steady state of the tissue cell population (strict
136
- homeostasis) or long-term, bounded oscillations or fluctuations (dynamic
137
- homeostasis), which represent well-defined constraints on the dynamics of the
138
- tissue cell population. To this end, we will here derive a formal, mathematical
139
- representation of the tissue cell dynamics under crowding feedback.
140
- The cell population is fully defined by (i) the number of cells, (ii) the
141
- internal (biochemical and mechanical) states of each cell, and (iii) the spatial
142
- position of cells. We assume that a cell’s behaviour can depend on the cell
143
- density and the states of cells in its close cellular environment. As we examine
144
- a situation close to a homeostatic state, we assume that the cell density is
145
- homogeneous over the range of interaction between cells, which expands over
146
- a volume V . Hence, the cell density ρ is proportional to the average number
147
- of cells, ¯n, in that volume, ρ =
148
- ¯n
149
- V . Similarly, we define the number of cells
150
- in internal state i as ni, and the cell density of cells in internal state i as
151
- ρi = ¯ni
152
- V , where ¯ni is the expected value of ni. As we consider only the crowding
153
- feedback response of cells, which only accounts for the cell densities ρi but
154
- not the explicit position of cells, the spatial configuration (iii) is not relevant
155
- to our considerations. Thus, the configuration of the cell population and its
156
- time evolution is entirely determined by the average number of cells in each
157
- state i, as a function of time t, ¯ni(t). The configuration of cell numbers ni
158
- can change only through three processes: (1) cell division, whereby it must be
159
- distinguished between the cell state of daughter cells, (2) the transition from
160
- one cell state to another, (3) loss of a cell, through cell death or emigration
161
- out of the tissue. Following the lines of Refs. [13, 20] and denoting as Xi,j,k a
162
- cell in internal states i, j, k, respectively, we can formalise these events as:
163
- cell division: Xi
164
- λirjk
165
- i
166
- −−−→ Xj + Xk
167
- (1)
168
- cell state transition: Xi
169
- ωij
170
- −−→ Xj
171
- (2)
172
- cell loss: Xi
173
- γi
174
- −→ ∅ ,
175
- (3)
176
- where the symbols above the arrows denote the dynamical rates of the transi-
177
- tions, i.e. the average frequency at which such events occur. In particular, γi
178
- is the rate at which a cell in state i is lost, ωij the rate at which a cell changes
179
- its state from i to j and λirjk
180
- i
181
- denotes the rate at which a cell i divides to pro-
182
- duce two daughter cells, one in state j and one in state k (i = j, j = k, k = i
183
- are possible). For later convenience, we distinguish here the overall rate of cell
184
- division in state i, λi and the probability rjk
185
- i
186
- that such a division produces
187
- daughter cells in states j and k.
188
- Since we consider a situation where cells can respond to the cell densities
189
- ρi via crowding feedback, all the rates and probabilities (λi, γi, ωij, rjk
190
- i ) may
191
-
192
- Springer Nature 2021 LATEX template
193
- Homeostatic regulation of renewing tissue cell populations via crowding control
194
- 5
195
- depend on the cell densities of either state j, ρj. For convenience, we discretise
196
- the number of states in case the state space is a continuum and only distinguish
197
- states which have substantially different propensities (λi, γi, ωij, rjk
198
- i ). Without
199
- loss of generality, we assume that there are m states, that is, i, j, k = 1, ..., m
200
- (for a rigorous argument for the discretisation of the state space, see [13]).
201
- The rates given above denote the average number of events happening per
202
- time unit. Thus, we can express the total rate of change of the average (i.e.
203
- expected) number of cells ¯ni(t), that is, the derivative ˙¯ni = d¯ni
204
- dt , in terms of
205
- the rates of those events. This defines a set of ordinary differential equations.
206
- Following the lines of Refs. [13, 20], we can write ˙ni as,
207
- ˙¯ni =
208
- ��
209
- j
210
- ωji¯nj + λj
211
- ��
212
- k
213
- rik
214
- j + rki
215
- j
216
-
217
- ¯nj
218
-
219
- − ¯ni
220
-
221
- λi + γi +
222
-
223
- j
224
- ωij
225
-
226
- ,
227
- (4)
228
- where for convenience, we did not write the time dependence explicitly, i.e.
229
- ni = ni(t), and all parameters may depend on the cell densities ρj. Since V is
230
- constant, we can divide by V to equivalently express this in terms of the cell
231
- state densities, ρi = ¯ni
232
- V , and then write Eq. (4) compactly as,
233
- d
234
- dtρ(t) = A(ρ(t)) ρ(t)
235
- (5)
236
- where ρ = (ρ1, ρ2, ...) is the vector of cell state densities and A(ρ) is the matrix,
237
- A =
238
-
239
-
240
- λ1 − �
241
- j̸=1 κ1j − γ1
242
- κ21
243
- κ31
244
- · · ·
245
- κ12
246
- λ2 − �
247
- j̸=2 κ2j − γ2 κ32
248
- · · ·
249
- κ1m
250
- κ2m
251
- · · · λm − �
252
- j̸=m κmj − γm
253
-
254
- � ,
255
- (6)
256
- in which κij = λi2rj
257
- i + ωij, with rj
258
- i = �
259
- k(rjk
260
- i
261
- + rkj
262
- i )/2, is the total transition
263
- rate, that combines all transitions from Xi to Xj by cell divisions and direct
264
- state transitions (again, all parameters may depend on ρ, as therefore also
265
- does A). We can thus generally write the elements of the matrix A, aij with
266
- i, j = 1, ..., m as
267
- aij =
268
- � λi − γi − �
269
- k̸=i κik
270
- for i = j
271
- κji
272
- for i ̸= j
273
- (7)
274
- We now make the mild assumption that divisions of the form Xi → Xj+Xk
275
- are effectively three events, namely, cell duplication, Xi → Xi + Xi coupled to
276
- cell state changes, Xi → Xj and Xi → Xk, if j ̸= i or k ̸= i. In this view, the
277
- parameters relevant for crowding feedback are the total cell state transition
278
- propensities κij and the cell division rate λi, as in (6), instead of ωij and rjk
279
- i .
280
- These equations describe a dynamical system which, for given initial con-
281
- ditions, determines the time evolution of the cell densities, ρi(t). Crucially,
282
-
283
- Springer Nature 2021 LATEX template
284
- 6
285
- Homeostatic regulation of renewing tissue cell populations via crowding control
286
- this description allows for a rigorous mathematical definition of what a home-
287
- ostatic state is, and to apply tools of dynamical systems analysis to determine
288
- the circumstances under which a homeostatic state prevails. In particular, we
289
- define a strict homeostatic state as a steady state of the system, (5), when the
290
- cell numbers – and thus cell densities, given that V is fixed – in each state
291
- do not change, mathematically expressed as dρ
292
- dt = 0 (a fixed point of the sys-
293
- tem). A dynamic homeostatic state is when cell densities may also oscillate
294
- or fluctuate but remain bounded and thus possess a finite long-term average
295
- cell population (in which case the system either approaches a steady state or
296
- limit cycles – that is, oscillations – or chaotic but bounded behaviour). Based
297
- on these definitions, we can now analyse under which circumstances crowd-
298
- ing feedback can maintain those states, which in the case of strict homeostasis
299
- requires, in addition, that the corresponding steady state is stable.
300
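To make Eqs. (5)-(7) concrete, the following minimal sketch (ours, not from the paper; the two-state model, rates, and feedback function are invented for illustration) builds A(ρ) with a density-dependent division rate and integrates dρ/dt = A(ρ)ρ:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical two-state cell type: state 1 divides with a rate that
    # decreases with total density (crowding feedback), states 1 and 2
    # interconvert, and state 2 differentiates/is lost. Numbers illustrative.
    def A(rho_vec):
        rho = rho_vec.sum()               # total density of the cell type
        lam1 = 1.0 / (1.0 + rho)          # lambda_1'(rho) < 0
        gam2 = 0.8                        # loss/differentiation from state 2
        k12, k21 = 0.5, 0.1               # state transitions (no feedback)
        # Elements follow Eq. (7): a_ii = lambda_i - gamma_i - sum_k kappa_ik,
        # and a_ij = kappa_ji for i != j.
        return np.array([[lam1 - k12, k21],
                         [k12, -gam2 - k21]])

    def rhs(t, rho_vec):
        return A(rho_vec) @ rho_vec       # Eq. (5)

    sol = solve_ivp(rhs, (0.0, 100.0), [0.2, 0.1])
    print(sol.y[:, -1])                   # densities settle at a steady state

Because the division rate falls with density, the dominant eigenvalue of A decreases with ρ and the trajectory relaxes to the fixed point with µ(ρ*) = 0, anticipating the conditions derived below.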
2.1 Cell types and lineage hierarchies

According to [13], cell population dynamics of the type (5) can be associated with a cell state network, in which each state is a node, and the nodes are connected through cell state transitions (direct transitions and cell divisions). Furthermore, by decomposing this network into strongly connected components (SCCs), the cell fate model can be viewed as a directed acyclic network [21], generally called the condensed network. Here, we follow the definitions of [13] and define a cell type as an SCC of the cell state network, so that any cell states connected via cyclic cell state trajectories (sequences of cell state transitions) are of the same type, and the condensed network of cell types represents the cell lineage hierarchy. This definition ensures that cells of the same type have the same lineage potential (outgoing cell state trajectories) and that the stages of the cell cycle are associated with the same cell type. In this context, we will in the following also speak of differentiation when a cell state transition between different cell types occurs.

Each cell type can be classified as self-renewing, declining, or hyper-proliferating, depending on the dominant eigenvalue µ (called the growth parameter) of the dynamical matrix A (from Eq. (5) ff.) reduced to that SCC. This is µ = 0 for self-renewing cell types, when cell numbers of that type remain constant over time, and µ < 0 (µ > 0) for declining (hyperproliferating) types, when cell numbers decline (increase) in the long term [13]. Importantly, for the population dynamics to be strictly homeostatic, which means that a steady state of model (5) exists, the cell type network must fulfil strict rules. These are: (i) at least one self-renewing cell type (with µ = 0) must exist; (ii) self-renewing cell types must stay at an apex of the condensed network; (iii) all the other cells must be of declining types. This means that the critical task of homeostatic control is to ensure that the kinetic parameters of the cell type at the apex of the cell lineage hierarchy are fine-tuned to maintain exactly µ = 0.

Therefore, we can restrict our analysis to finding conditions for the cell type at the lineage hierarchy's apex to be self-renewing, which we will do in the following. Other cell types simply need that differentiation (transition towards another cell type) or loss is faster than proliferation, so that they become declining cell types, µ < 0; those rates do not require fine-tuning and are thus trivially regulated. We note that when we consider only cell states of the type at the apex of the cell lineage hierarchy, any differentiation event is – according to this restricted model – a cell loss event and is included as an event occurring with rates γ_i. Given that cell loss from a cell type at the lineage apex is rare, we will therefore in the following also denote the rates γ_i simply as differentiation rates.
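The type decomposition is mechanical to compute. The sketch below is ours (the toy three-state network and all names are invented, not taken from the paper); it uses networkx to extract the SCCs (cell types), the condensed lineage network, and the growth parameter of each type:

    import networkx as nx
    import numpy as np

    # Hypothetical 3-state model: states 0 and 1 form a cycle (one cell type);
    # state 2 is reached from state 1 and has no outgoing transitions (a
    # downstream type). Rates are illustrative only.
    A = np.array([[-0.4, 0.3, 0.0],
                  [0.5, -0.6, 0.0],
                  [0.0, 0.3, -0.7]])           # a_ij = kappa_ji off-diagonal
    G = nx.DiGraph([(i, j) for i in range(3) for j in range(3)
                    if i != j and A[j, i] > 0]) # edge i -> j if kappa_ij > 0
    types = list(nx.strongly_connected_components(G))   # the cell types
    C = nx.condensation(G)                      # condensed network: lineage DAG
    for node in nx.topological_sort(C):
        states = sorted(C.nodes[node]["members"])
        sub = A[np.ix_(states, states)]         # matrix reduced to this SCC
        mu = np.max(np.linalg.eigvals(sub).real)  # growth parameter
        apex = C.in_degree(node) == 0
        print(states, round(mu, 3), "apex" if apex else "downstream")

In this toy network the apex type {0, 1} has µ < 0, i.e. it is declining rather than self-renewing; tuning it to µ = 0 is exactly the fine-tuning task that crowding control must solve.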
3 Results

We will now determine necessary and sufficient conditions for the establishment of strict and dynamic homeostasis when subject to crowding feedback, which we here define through the derivatives of the dynamical parameters λ_i, r_i^{jk}, ω_ij, γ_i as functions of the cell densities. As argued before, we only need to consider cell types at an apex of the cell type network, which, for homeostasis to prevail, must have a growth parameter (i.e. dominant eigenvalue of matrix A in Eq. (6)) µ = 0. Furthermore, we assume that the apex cell type resides in a separate stem cell niche. Therefore, the parameters only depend on cell densities ρ_i of states associated with that cell type, i.e. we can write A = A(ρ), where ρ = Σ_{i∈S} ρ_i comprises only cell states of the apex cell type S. Given that, the matrix elements are functions of ρ, and therefore µ is also a function of ρ. Thus, self-renewal corresponds to a non-trivial fixed point, ρ*, of Eq. (5), restricted to cell type S, for which the dominant eigenvalue of A is zero, that is, µ(ρ*) = 0 (with ρ* = Σ_{i∈S} ρ*_i).

For convenience, we will often generally refer to parameters as α_i, i = 1, ..., 2m + m², where α_i stands for any of the parameters {λ_i, γ_i, κ_ij | i, j = 1, ..., m}, respectively [Footnote 1]. Hence, we study which conditions the functions α_i(ρ) must meet to maintain homeostasis. In particular, we study how those parameters qualitatively change with the cell density – increase or decrease – that is, we study how the sign and magnitude of the derivatives α'_i := dα_i/dρ affect homeostasis.

Footnote 1: More precisely, α_i|_{i=1,..,m} := λ_i, α_i|_{i=m+1,..,2m} := γ_{i−m}, α_i|_{i=2m+1,..,2m+m²} := κ_{⌊(i−2m)/m⌋, i−⌊(i−2m)/m⌋m}.

A crucial property of the matrix A(ρ) is that it is always a Metzler matrix, since all its off-diagonal elements κ_ij ≥ 0. Since the cell state network of a cell type is strongly connected, we can further state that A(ρ) is irreducible. Notably, the Perron-Frobenius theorem [22] holds for irreducible Metzler matrices, and thus A(ρ) possesses a simple, real dominant eigenvalue µ. Besides, it has left and right eigenvectors associated with µ, respectively indicated as v and w, which are strictly positive, that is, all their entries are v_i > 0, w_i > 0. From this follows that the partial derivative of the dominant eigenvalue µ by the i, j-th element of A, a_ij = [A]_ij, is always positive:

$$\frac{\partial \mu}{\partial a_{ij}} = \frac{v_i w_j}{v w} > 0 \quad (8)$$

where the left equality is according to [23] and is generally valid for simple eigenvalues. Here, v is assumed to be in row form, and vw thus corresponds to a scalar product.
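Equation (8) is easy to verify numerically. In this sketch (ours; the matrix is an arbitrary irreducible Metzler example), a finite-difference derivative of the dominant eigenvalue is compared against v_i w_j / (vw):

    import numpy as np

    def dominant_triplet(M):
        """Dominant eigenvalue with its right (w) and left (v) eigenvectors."""
        evals, R = np.linalg.eig(M)
        k = np.argmax(evals.real)
        w = np.real(R[:, k]); w *= np.sign(w.sum())    # make entries positive
        evals_l, L = np.linalg.eig(M.T)
        kl = np.argmax(evals_l.real)
        v = np.real(L[:, kl]); v *= np.sign(v.sum())
        return evals.real[k], v, w

    A = np.array([[-0.2, 0.3], [0.4, -0.5]])   # irreducible Metzler matrix
    mu, v, w = dominant_triplet(A)

    i, j, h = 0, 1, 1e-6
    Ah = A.copy(); Ah[i, j] += h               # perturb a single element
    mu_h, _, _ = dominant_triplet(Ah)
    # Finite difference vs. the closed form v_i w_j / (v w) of Eq. (8):
    print((mu_h - mu) / h, v[i] * w[j] / (v @ w))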
3.1 Sufficient condition for dynamic homeostasis

In [13], it was shown that a dynamic homeostatic state, where cell numbers may change over time but stay bounded, is assured if [Footnote 2]

$$\mu'(\rho) < 0 \quad \text{for all } \rho > 0. \quad (9)$$

This sufficient condition requires that the dominant eigenvalue of A as a function of the cell density, µ(ρ), is a strictly decreasing function of cell density. Also, the range of this function must be sufficiently large so that it has a root, i.e. a value ρ* with µ(ρ*) = 0 must exist for the function µ(ρ).

Footnote 2: In [13], this condition, defined through dependency on cell number, can be directly translated into a condition on the cell density derivative if the volume is assumed constant.

Assuming that a non-trivial steady state, ρ* > 0, exists, we now translate the sufficient condition for a dynamic homeostatic state, Eq. (9), into conditions on the parameters as functions of the cell density, α_i(ρ). In particular, we can write,

$$\mu'(\rho) = \sum_{ij} \frac{\partial\mu}{\partial a_{ij}} \frac{\partial a_{ij}}{\partial\rho} = \sum_{ij} \frac{v_i w_j}{vw}\, a'_{ij} = \sum_i \frac{v_i w_i}{vw}\, a'_{ii} + \sum_{i,\,j\neq i} \frac{v_i w_j}{vw}\, a'_{ij} = \sum_i \frac{v_i w_i}{vw}\left(\lambda'_i - \gamma'_i - \sum_{j\neq i}\kappa'_{ij}\right) + \sum_{i,\,j\neq i} \frac{v_j w_i}{vw}\,\kappa'_{ij}\,, \quad (10)$$

where we used Eq. (8) and the explicit forms of a_ij, the elements of the matrix A, according to Eq. (7). Provided that all the parameters depend on ρ, condition (9) results in:

$$0 > \mu' \implies 0 > \sum_i \left[ v_i w_i (\lambda'_i - \gamma'_i) + w_i \sum_{j \neq i}(v_j - v_i)\,\kappa'_{ij} \right] \quad \text{for all } \rho > 0. \quad (11)$$

While we cannot give an explicit general expression for the dominant eigenvectors v, w, this condition is sufficiently fulfilled if each term of the sum on the right-hand side of Eq. (11) is negative. More restrictively, Eq. (11) is sufficiently fulfilled if

$$\begin{cases} \lambda'_i \le 0,\ \gamma'_i \ge 0 & \text{for all } i \\ \lambda'_i < 0 \text{ or } \gamma'_i > 0 & \text{for at least one } i \\ \kappa'_{ij} = 0 & \text{for all } i, j \end{cases} \quad \text{for } \rho > 0 \quad (12)$$

This means that, excluding rates that are zero, which are biologically meaningless, if no direct state transitions within a cell type are subject to crowding feedback (κ'_ij = 0), while all (non-zero) cell division rates depend negatively on ρ (λ'_i < 0) and differentiation rates depend positively (γ'_i > 0), for all attainable levels of ρ, then dynamical homeostasis is ensured.

Alternatively, we can rewrite Eq. (11) as

$$0 > \sum_i \frac{v_i w_i}{vw} \left[ \lambda'_i - \gamma'_i - \sum_{j\neq i}\kappa'_{ij} + \sum_{j\neq i} \frac{v_j}{v_i}\,\kappa'_{ij} \right] \quad \text{for all } \rho > 0, \quad (13)$$

which, due to v_j/v_i > 0, implies another sufficient condition for dynamic homeostasis:

$$\begin{cases} \lambda'_i \le 0,\ \gamma'_i \ge 0 & \text{for all } i \\ \lambda'_i < 0 \text{ or } \gamma'_i > 0 & \text{for at least one } i \\ \kappa'_{ij} \le 0 \text{ with } \left|\sum_j \kappa'_{ij}\right| \le \gamma'_i - \lambda'_i & \text{for all } i, j \end{cases} \quad (14)$$

The above condition is less restrictive than Eq. (12), allowing for some non-zero crowding feedback dependency of the state transition rates κ_ij, as long as the crowding feedback strength of the total outgoing transition rate of each state does not outweigh the feedback on the proliferation and differentiation rate of that state (if there is any).
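For a concrete feedback model, condition (9) can be probed directly by scanning µ(ρ). The following sketch (ours, reusing the hypothetical two-state model from the earlier snippet) checks that the dominant eigenvalue decreases with density and locates the self-renewal point µ(ρ*) = 0:

    import numpy as np
    from scipy.optimize import brentq

    def A(rho):                       # same invented feedback model as above
        lam1 = 1.0 / (1.0 + rho)      # lambda_1' < 0 (crowding feedback)
        return np.array([[lam1 - 0.5, 0.1],
                         [0.5, -0.9]])

    def mu(rho):
        """Dominant eigenvalue (growth parameter) at density rho."""
        return np.max(np.linalg.eigvals(A(rho)).real)

    rhos = np.linspace(0.0, 5.0, 51)
    assert all(np.diff([mu(r) for r in rhos]) < 0)   # mu'(rho) < 0, Eq. (9)
    rho_star = brentq(mu, 1e-6, 5.0)                 # root: mu(rho*) = 0
    print(rho_star)

Here only λ_1 carries feedback, so the model sits in the simplest corner of condition (12): one division rate decreases with density and all κ'_ij = 0.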
3.2 Necessary condition for strict homeostasis

We now consider the circumstances under which a strict homeostatic state is maintained, that is, when a steady state of the cell population exists and is asymptotically stable.

A necessary condition for the existence of a steady state ρ* (irrespective of stability) has been given in [13], namely, that the cell type at the apex of the lineage hierarchy is self-renewing, i.e. its dynamical matrix A has µ = 0. µ depends on the cell density ρ of the apex cell type, since the dynamical parameters α_i, and thus A, depend on ρ. As before, it is required that µ(ρ) has sufficient range so that a value ρ* with µ(ρ*) = 0 exists. This condition is fulfilled if the range of the feedback parameters α_i(ρ) is sufficiently large. In that case, there exists an eigenvector ρ* with A(ρ*)ρ* = 0, which can be chosen by normalisation to fulfil Σ_{i∈S} ρ*_i = ρ*. Thus, ρ* is a fixed point (steady state) of the cell population system (5). Hence, we need to establish what is required for this state to be asymptotically stable.

To start with, we give the Jacobian matrix of the system (5) at the fixed point ρ*:

$$[J]_{ij} = \left.\frac{\partial [A(\rho)\rho]_i}{\partial \rho_j}\right|_{\rho=\rho^*} = a^*_{ij} + \eta_i\,, \quad (15)$$

where

$$\eta_i = \sum_k a'_{ik}\,\rho^*_k\,. \quad (16)$$

Here and in the following, we assume the derivatives to be taken at the steady state, i.e. a'_ij := da_ij/dρ |_{ρ=ρ*}. The eigenvalues of the Jacobian matrix J at ρ* determine the stability of the steady state ρ*: it is asymptotically stable if and only if the real parts of all eigenvalues of J(ρ*) are negative.

The Routh-Hurwitz theorem [24] states that for a polynomial to have only roots with negative real part, all its coefficients must necessarily be positive. Given that the eigenvalues of the Jacobian matrix J are the roots of its characteristic polynomial, a necessary condition for ρ* to be asymptotically stable is that the coefficients of the characteristic polynomial of J are all positive.

Let us start by considering a self-renewing cell type with exactly two cell states at the apex of a lineage hierarchy. This system has a 2 × 2 dynamical matrix A and Jacobian J, whereby A is irreducible and has dominant eigenvalue µ_A = 0. The characteristic polynomial of a generic 2 × 2 matrix, M, is

$$P^M(s) = s^2 + p^M_1 s + p^M_0\,, \quad (17)$$

with p^M_1 = −tr(M) and p^M_0 = det(M). In particular, since A has an eigenvalue zero,

$$p^A_0 = \det(A) = a_{11}a_{22} - a_{12}a_{21} = 0\,. \quad (18)$$

From this follows that the right and left eigenvectors of the matrix A associated with the dominant eigenvalue µ_A = 0, w and v, are:

$$w = \begin{pmatrix} -a_{22} \\ a_{21} \end{pmatrix} \quad \text{and} \quad v = \begin{pmatrix} -a_{22} & a_{12} \end{pmatrix}. \quad (19)$$

From the Jacobian matrix J, we get equivalently,

$$p^J_0 = \det(J) = \frac{(a_{21} - a_{22})(-a_{22}\eta_1 + a_{12}\eta_2)}{a_{22}} = \frac{v\eta\, |w|}{a_{22}}\,, \quad (20)$$

with the L1-norm |w| = w_1 + w_2 = −a_{22} + a_{21} [Footnote 3]. Here we used the form of J in Eq. (15) with η = (η_1, η_2) from (16), as well as the relations (18) and (19), and we factorised the determinant.

Footnote 3: Note that a_ii is always negative or zero.

From Eq. (10), we can further establish:

$$\mu' = \sum_{ij} \frac{v_i w_j}{vw}\, a'_{ij} = \sum_{ij} \frac{|w|}{\rho^*} \frac{v_i \rho^*_j}{vw}\, a'_{ij} = \frac{|w|}{\rho^*} \frac{v\eta}{vw} \quad (21)$$
$$= -\frac{p^J_0}{\rho^*\, p^A_1}\,. \quad (22)$$

Here, we used that ρ* is a dominant right eigenvector, and thus ρ* = (ρ*/|w|) w; furthermore, we used the definition η_i = Σ_j a'_ij ρ*_j, we substituted Eq. (20), and used that vw = a²_22 + a_12 a_21 = −p^A_1 a_22. Finally, we get:

$$p^J_0 = -\mu'\,\rho^*\, p^A_1\,. \quad (23)$$

Notably, we can show that this relation also holds in higher dimensions by explicitly computing the coefficients of the characteristic polynomials p^{A,J}_i, the eigenvalues and eigenvectors, and then evaluating both sides of the equation. For systems with three states, this can be done analytically. For systems with 4, 5, and 6 states, we tested relation (23) numerically by generating N = 1000 random matrices with entries chosen from a uniform distribution [Footnote 4]. In each case, this relation was fulfilled. Hence we are confident that this relation holds up to 6 states, and it is reasonable to expect it to hold also for larger systems.

Footnote 4: The diagonal elements of the random matrix are tuned using a local optimiser (the fmincon function of Matlab) so that the matrix has a zero dominant eigenvalue.
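A sketch of such a numerical test follows (our own reimplementation of the idea, not the authors' Matlab code; instead of an optimiser we tune the diagonal exactly by subtracting µI, and all derivative values a'_ij are drawn at random):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5                                        # number of cell states

    # Random irreducible Metzler matrix, shifted so its dominant eigenvalue is 0.
    A = rng.uniform(0.0, 1.0, (n, n))
    A -= np.max(np.linalg.eigvals(A).real) * np.eye(n)

    def dom(M):
        """Positive left (v) and right (w) dominant eigenvectors of M."""
        evals, R = np.linalg.eig(M)
        w = np.real(R[:, np.argmax(evals.real)]); w *= np.sign(w.sum())
        evl, L = np.linalg.eig(M.T)
        v = np.real(L[:, np.argmax(evl.real)]); v *= np.sign(v.sum())
        return v, w

    v, w = dom(A)
    rho_vec = w                                  # steady-state densities; rho* = sum
    Ap = rng.normal(0.0, 1.0, (n, n))            # arbitrary derivatives a'_ij
    mu_p = v @ Ap @ w / (v @ w)                  # mu'(rho*), cf. Eq. (10)
    eta = Ap @ rho_vec                           # Eq. (16)
    J = A + np.outer(eta, np.ones(n))            # Eq. (15): row i shifted by eta_i

    p0_J = np.poly(J)[-1]                        # constant coeff. of char. poly of J
    p1_A = np.poly(A)[-2]                        # s^1 coeff. of char. poly of A
    print(p0_J, -mu_p * rho_vec.sum() * p1_A)    # Eq. (23): both sides agree

Repeating this over many random draws (and over n = 4, 5, 6) mirrors the test described above.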
Since A has a simple dominant eigenvalue µ_A = 0, we can factorise one term from the characteristic polynomial, P(s) = sQ(s), knowing that all roots of Q(s) are negative. Applying the Routh-Hurwitz necessary condition to Q(s), it follows that the coefficients of the polynomial Q satisfy p^Q_i > 0, where i = 1, 2, ..., n−1. Thus, p^A_1 > 0, and considering that ρ* > 0 by definition, for p^J_0 > 0 we must require µ' < 0. Therefore, a necessary condition for a stable, strict homeostatic state is

$$0 > \mu' \implies 0 > \sum_i \left[ v_i w_i (\lambda'_i - \gamma'_i) + w_i \sum_{j\neq i} (v_j - v_i)\,\kappa'_{ij} \right]_{\rho=\rho^*}, \quad (24)$$

where on the right-hand side, we used Eq. (11). This condition is bound to the validity of Eq. (23), that is, we can show it analytically for up to three states and numerically for up to 6 states. Nonetheless, we also expect it to be true for larger systems.

One way to satisfy this necessary condition is if at ρ = ρ*

$$\begin{cases} \lambda'_i \le 0,\ \gamma'_i \ge 0 & \text{for all } i \\ \lambda'_i < 0 \text{ or } \gamma'_i > 0 & \text{for at least one } i \\ \kappa'_{ij} = 0 & \end{cases} \quad (25)$$

Notably, the necessary conditions (24) and (25) only differ from the sufficient conditions for dynamic homeostasis, Eqs. (11) and (12), by needing to be fulfilled only at the steady-state cell density ρ*, whereas to ensure dynamic homeostasis, those should be valid for a sufficiently large range of ρ.
- Now we assess under which circumstances a strict homeostatic state is assured
759
- to prevail.
760
- First of all, the necessary conditions from above need to be fulfilled. In
761
- particular, the parameter functions αi(ρ) must have a sufficient range so that
762
- µ(ρ) has a root, i.e. ρ∗ with µ(ρ∗) = 0 exists, from which the existence of
763
- a steady state follows. The question now is whether we can find sufficient
764
- conditions assuring that the fixed point ρ∗ with �
765
- i ρ∗
766
- i = ρ∗ is (asymptotically)
767
- stable.
768
- Let us define a matrix B(x), x = (x1, ..., xm) with bij(x) = [B]ij(x) =
769
- a∗
770
- ij +xi. Hence, B(xi = 0) = A(ρ∗) and B(xi = ηi) = J, where J, the Jacobian
771
- matrix, and ηi are defined as in (15) and (16), respectively. We consider now the
772
- dominant eigenvalue as function of the entries of B, µ[B] := µ({bij}|i,j=1,...,m)
773
- (the square brackets are chosen to denote the difference from the function
774
- µ(ρ)). For sufficiently small ηi, we can then express the dominant eigenvalue
775
- of the Jacobian matrix J, µ[J], relative to the dominant eigenvalues of A∗ :=
776
- A(ρ∗) as,
777
- µ[J] = µ[A∗] +
778
-
779
- i
780
- ∂µ
781
- ∂xi
782
- |xi=0 ηi + O(η2) ,
783
- (26)
784
- with,
785
- ∂µ
786
- ∂xi
787
- |xi=0 =
788
-
789
- ij
790
- ∂µ
791
- ∂bij
792
- ∂bij
793
- ∂xi
794
- |xi=0 =
795
-
796
- ij
797
- ∂µ
798
- ∂aij
799
- |B=A∗ ,
800
- (27)
801
- since for x = 0, bij = aij for all i, j. It follows that for sufficiently small5 ηi,
802
- and if all ηi < 0, we have
803
- µJ = µ[A∗](ρ∗) +
804
-
805
- i
806
- ∂µB
807
- ∂xi
808
- |xi=0ηi + O(η2
809
- i ) ≈
810
-
811
- i
812
- ∂µA
813
- ∂aij
814
- ηi < 0
815
- (28)
816
- since all ∂µA
817
- ∂aij > 0 (according to Eq. (8)) and µA(ρ∗) = 0. Hence, since µJ < 0,
818
- the steady state ρ∗ is asymptotically stable if all ηi < 0. Thus, we get a
819
- sufficient condition for asymptotic stability of the steady state ρ∗:
820
- 0 > ηi = ρ∗
821
- i (λ′
822
- i − γ′
823
- i) +
824
-
825
- k̸=i
826
- (κ′
827
- kiρ∗
828
- k − κ′
829
- ikρ∗
830
- i ) > −ϵi for all i
831
- (29)
832
- 5That is, there exist ϵi > 0 so that this is valid for any |ηi| < ϵi
833
-
834
- Springer Nature 2021 LATEX template
835
- Homeostatic regulation of renewing tissue cell populations via crowding control
836
- 13
837
- where ϵi > 0 is sufficiently small. As this is an asymptotically stable steady
838
- state, it corresponds to a controlled strict homeostatic state. In this case,
839
- even if the cell numbers are disturbed (to some degree), the cell population is
840
- regulated to return to the strict homeostatic state.
841
- Notably, condition (29) is fulfilled if,
842
-
843
-
844
-
845
-
846
-
847
-
848
-
849
-
850
-
851
- λ′
852
- i ≤ 0, γ′
853
- i ≥ 0 for all i
854
- λ′
855
- i < 0 or γ′
856
- i > 0 at for least one i
857
- κ′
858
- ij = 0
859
- and |λ′
860
- i|, |γ′
861
- i|, < ϵi
862
- (30)
863
- Furthermore, we may soften the condition on κij to
864
- κ′
865
- ij
866
- κ′
867
- ji <
868
- ρ∗
869
- j
870
- ρ∗
871
- i to allow also
872
- some degree of feedback for the κij.
873
- The conditions (30) are very similar to the ones for dynamic homeostasis,
874
- (12), but here these conditions only need to be fulfilled at ρ = ρ∗, whereas
875
- for dynamic homeostasis they need to be fulfilled for a sufficient range of ρ.
876
- Moreover, in addition to the qualitative nature of the feedback (related to
877
- the signs of λ′
878
- i, γ′
879
- i), the ‘strength’ of the crowding feedback, i.e. the absolute
880
- values of λ′
881
- i, γ′
882
- i must not be ‘too large’, that is, smaller than ϵi. We cannot, in
883
- general and for all system sizes, give a definite value for the feedback strength
884
- bound ϵi below which strict homeostasis is assured. Nevertheless, by using the
885
- sufficient stability criterion based on the Routh-Hurwitz criterion [24] we can
886
- identify those bounds for systems with up to three cell states, which guides
887
- expectations for larger systems. The details of this criterion and the necessary
888
- derivations are shown in Appendix A. There, we show that for systems with one
889
- or two cell states, ϵi = ∞, which means that asymptotic stability is ensured for
890
- arbitrary feedback strengths. For systems with three cell states, we can assure
891
- that ϵi = ∞ if certain further conditions are met (see Eq. (A13)). Otherwise,
892
- ϵi can be determined implicitly from the roots of a quadratic form (Eq. (A14)),
893
- and thus stability may depend on the magnitude of the feedback. In principle,
894
- such bounds can be found for larger systems too, but the algebraic complexity
895
- of this process renders it unfeasible to do this in practical terms.
896
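The quantities in condition (29) can be evaluated mechanically for a concrete model. This sketch (ours, with the same invented two-state model as above) computes η_i at the steady state and confirms stability directly from the Jacobian of Eq. (15):

    import numpy as np
    from scipy.optimize import brentq

    def A(rho):
        lam1 = 1.0 / (1.0 + rho)      # only lambda_1 carries feedback here
        return np.array([[lam1 - 0.5, 0.1],
                         [0.5, -0.9]])

    def A_prime(rho, h=1e-6):
        return (A(rho + h) - A(rho - h)) / (2.0 * h)   # a'_ij at density rho

    mu = lambda rho: np.max(np.linalg.eigvals(A(rho)).real)
    rho_star = brentq(mu, 1e-6, 5.0)                   # mu(rho*) = 0

    # Steady-state density vector: dominant right eigenvector scaled to rho*.
    evals, R = np.linalg.eig(A(rho_star))
    w = np.real(R[:, np.argmax(evals.real)]); w *= np.sign(w.sum())
    rho_vec = rho_star * w / w.sum()

    eta = A_prime(rho_star) @ rho_vec                  # Eq. (16)
    J = A(rho_star) + np.outer(eta, np.ones(2))        # Eq. (15)
    print(eta)                                         # here eta_1 < 0, eta_2 = 0
    print(np.max(np.linalg.eigvals(J).real) < 0)       # asymptotically stable

Note that with feedback on λ_1 only, η_2 = 0 sits exactly on the boundary of (29); for two-state systems this is harmless, consistent with the result above that ε_i = ∞ for one or two cell states.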
3.4 Robustness to perturbations and failures

Now, we wish to assess the robustness of the above crowding control mechanism, i.e. what occurs if it is disrupted, for example, by the action of toxins, other environmental cues, or by cell mutations. More precisely, we will study what happens if one or more feedback pathways, here characterised as a parameter α_i with α'_i ≠ 0 fulfilling the conditions for (dynamic or strict) homeostatic control, fail, that is, become α'_i = 0. We will first address the case of tissue-extrinsic factors, i.e. those affecting all the cells in the tissue, and then the case of single-cell mutations. In the latter case, only a single cell would initially show dysregulated behaviour; yet, if this confers a proliferative advantage, it can lead to hyperplasia and possibly cancer [25–27].

First, we note that the sufficient condition for strict homeostasis, given by Eq. (30), is overly restrictive. In a tissue cell type under crowding feedback control with λ'_i < 0 and γ'_i > 0 for more than one i, there is a degree of redundancy. That is, if the feedback is removed for one or more of these parameters (changing to λ'_i = 0 and/or γ'_i = 0), then the sufficient condition for a strict homeostatic state remains fulfilled as long as at least one λ'_i or γ'_i remains non-zero. This possible redundancy confers a degree of robustness, meaning that feedback pathways can be removed – setting α'_i = 0 – without losing homeostatic control. Since the necessary conditions, Eqs. (24), are even less restrictive, tissue homeostasis may even tolerate more severe disruptions that reverse some feedback pathways, e.g. switching from λ'_i < 0 to λ'_i > 0, as long as other terms in the sum on the right-hand side of (24) compensate for this changed sign, ensuring that the sum as a whole is negative. In any case, it is important to recall the underlying assumption that a non-trivial steady state exists. In case the variability of the kinetic parameters is not enough to assure the condition µ(ρ*) = 0, the tissue will degenerate, either shrinking and eventually disappearing or growing indefinitely.

From the above considerations, we conclude that if crowding control applies to more than one parameter α_i, that is, α'_i ≠ 0 with appropriate sign and magnitude, homeostasis is potentially robust to feedback disruption. This may include a simple variation of the feedback function α'_i, but also perturbations of the feedback function's shape and complete feedback failure, α'_i = 0.

An illustrative example of this situation is shown in Figure 1. Here, the time evolution of the cell density is shown for a three-state cell fate model, which has been computed by integration of the dynamical system (5) (the details of this model are given in Appendix B as Eq. (B15) and illustrated in Figure B1). Four kinetic parameters are regulated via crowding control satisfying the sufficient condition for strict homeostasis, (30). Then, starting from this homeostatic configuration, feedback disruption is introduced at time zero. In one case ("Single failure"), a single kinetic parameter suffers a complete failure of the type α'_i = 0. In this case, the remaining feedback functions compensate for this failure, and a new homeostatic condition is achieved. Instead, in the second case ("Multiple failures"), failures are applied so that three of the four kinetic parameters initially regulated do not adjust with cell density [Footnote 6]. Notably, the only feedback function left satisfies the condition for asymptotic stability, (30). Nevertheless, the variability of this kinetic parameter is not enough to assure the existence of a steady state, since in this case the function µ(ρ) does not possess any root. Hence µ > 0 for all ρ, leading to indefinite growth of the cell population. Additional test cases are presented in Appendix B.2.

Footnote 6: Only in this example does feedback control fail upon multiple failures; in general, multiple failures may still be compensated to maintain homeostatic control.

Fig. 1: Cell dynamics in terms of cell density, scaled by the steady state in the homeostatic case, as a function of time (left) and the corresponding variation of the dominant eigenvalue µ (right). Time is scaled by the inverse of ᾱ = min_i α*_i. The homeostatic model is perturbed at time zero to include feedback failure of the type α'_i = 0. In the case where only one feedback function fails ("Single failure"), the system is able to achieve and maintain a new homeostatic state, characterised by a constant cell density and a zero dominant eigenvalue. In case more than one feedback fails ("Multiple failures"), the cell dynamics are unstable since a steady state does not exist and µ > 0 for all ρ. The cell fate model corresponds to model (B15) with parameters given in Table B1 and Table B2.

So far, we modelled the feedback dysregulation as acting on a global scale, thus changing the dynamical behaviour of the whole tissue. This situation represents a feedback mechanism affected by cell-extrinsic signals, in which any dysregulation applies to all the cells in the same way. However, dysregulation can also act at the single-cell level, for example, when DNA mutations occur. In this case, the impact of the dysregulation is slightly different, as explained in the following.

Suppose, upon disruption of crowding control in a single cell, for example, by DNA mutations, a sufficient number of crowding feedback pathways remain so that there is a steady state and the sufficient condition (30) is still fulfilled. In that case, homeostasis is retained, just as when this occurs in a tissue-wide disruption. However, if the homeostatic control of that single cell fails such that the cell becomes hyperproliferative, µ > 0, or declining, µ < 0, the tissue may still remain homeostatic. If µ < 0, the single mutated cell will be lost, upon which only a population of crowding-controlled cells remains, which stays in homeostasis. If µ > 0 in a single cell, hyper-proliferation is not ensured either: while the probability for mutated cells to grow in numbers is larger than to decline, due to the low numbers, mere randomness can lead to the loss of the mutated cell with a non-zero probability, which results in the extinction of the dysregulated mutant [Footnote 7]. In that case, the mutant cells go extinct and the tissue remains homeostatic despite the disruption of homeostatic control in the mutated cells; a stark contrast to disruption on the tissue level. Otherwise, if the mutant clone (randomly) survives, it will continue to hyper-proliferate and eventually dominate the tissue, which is thus rendered non-homeostatic. However, the tissue divergence time scale may be much longer than in the case where the same dysregulation occurs in all cells.

Footnote 7: For example, in the case of a single state with cell division rate λ and loss rate γ – a simple branching process – the probability for a mutant with µ > 0, that is, λ > γ, to establish is 1 − γ/λ, which is less than certainty.

The deterministic cell population model (5) is suitable for describing the average cell numbers. Nevertheless, it fails to describe the stochastic nature of single-cell fate choice. Thus, assessing a single cell's impact on tissue dynamics requires stochastic modelling. To that end, we implemented this situation as a Markov process with the same rates as the tissue cell population dynamics model [Footnote 8] (see Appendix B.3 for more details).

Footnote 8: While a Markov process is an approximation which does not necessarily reflect the probability distribution of subsequent event times realistically, it is often sufficient to assess the qualitative behaviour of a system with low numbers, subject to random influences from the environment.
In Figure 2, we show numerical simulation results of a stochastic version of the model used for the previous results in Figure 1, depicted in terms of tissue cell density as a function of time. Here, two possible realisations of the same stochastic process are presented. We note that the initially homeostatic tissue exhibits stochastic fluctuations of the cell density, which remain, on average, constant. At time zero, a single cell in this tissue switches behaviour, presenting multiple failures which, if applied to all the cells, would determine the growth of the tissue (corresponding to the "Multiple failures" curve in Figure 1). In one instance of the stochastic simulation, however, the mutated clone goes extinct after some time, leaving a tissue globally unaffected by the mutation. On the other hand, in another instance, the mutated clone prevails, leading to the growth of the tissue cell population. The fact that vastly different outcomes can occur with the same parameters and starting conditions demonstrates the impact of stochasticity in the case of a single-cell mutation.

Fig. 2: Numerical simulation results of a stochastic version of the model used in Figure 1 upon disruption of crowding control in a single cell, mimicking a DNA mutation. At time 0, the initially homeostatic model is disrupted with a single cell presenting multiple failures in the feedback control, as in Figure 1. Two instances of simulations run with identical parameters are presented. The rescaled cell density ρ/ρ* is shown as a function of time, scaled by the inverse of ᾱ = min_i α*_i. Whilst the mutated cell and its progeny go extinct in one instance (#1), in the other (#2), mutated cells prevail and hyper-proliferate so that tissue homeostasis is lost. The simulation stops when the clone goes extinct or when instability is detected. Full details of the simulation are provided in Appendix B.3.
- 3.5 Quasi-dedifferentiation
1091
- In the previous section, we addressed the case where external or cell-intrinsic
1092
- factors disrupt homeostatic control in self-renewing cells of a tissue. However,
1093
- 8While a Markov process is an approximation which not necessarily reflects the probability
1094
- distribution of subsequent event times realistically, it is often sufficient to assess the qualitative
1095
- behaviour of a system with low numbers, subject to random influences from the environment.
1096
-
1097
- Springer Nature 2021 LATEX template
1098
- Homeostatic regulation of renewing tissue cell populations via crowding control
1099
- 17
1100
- situations such as injury, poisoning, or cell radiation might also affect home-
1101
- ostasis in other ways. An example is when stem cells are completely depleted
1102
- from the tissue. In this context, many studies about tissue regeneration after
1103
- injury report evidence of cell plasticity [17, 18], when committed cells regain
1104
- the potential of the previously depleted stem cells. Cell dedifferentiation is just
1105
- an example where differentiated cells return to an undifferentiated state as a
1106
- response to tissue damage. Lineage tracing experiments confirmed this feature
1107
- in vivo in several cases [16, 28–30].
1108
- In the following, we assess how committed progenitor cells respond to the
1109
- depletion of the stem cell pool if they are under crowding feedback control.
1110
- Without loss of generality, let us consider an initially homeostatic scenario
1111
- where there is a self-renewing (i.e. stem) cell type (S) – with growth param-
1112
- eter µ = 0 – at the apex of a lineage hierarchy, and a committed progenitor
1113
- cell type (C) – with µ < 0, but with at least one state that has a non-zero
1114
- cell division rate – below type S in the hierarchy, as depicted in Figure 3.
1115
- Based on this cell fate model, S-cells proliferate and differentiate into C-cells
1116
- while maintaining the S-cell population. The C-cells also proliferate and dif-
1117
- ferentiate into other downstream cell types which we do not explicitly consider
1118
- here. C-cells do not maintain their own population; only the steady influx of
1119
- new cells of that type via differentiation of S-cells into C-cells maintains the
1120
- latter population (see [13]). We further assume that both S- and C-cells are
1121
- under appropriate crowding control, fulfilling both the sufficient conditions for
1122
- dynamic homeostasis, (12), and for stable, strict homeostasis, (30).
1123
- Based on the above modelling, we can write the dynamics of the cell
1124
- densities belonging to the committed progenitor type as,
1125
- d
1126
- dtρc = Ac(ρc)ρc + u ,
1127
- (31)
1128
- where ρc = (ρms+1, ρms+2, .., ρms+mc) are the cell densities in the committed
1129
- C-type, with ms being the number of states of the self-renewing S-type. Ac is
1130
- the dynamical matrix restricted to states in the C-type and ui = �ms
1131
- j=1 κjiρj
1132
- is a constant vector quantifying the influx of cells into the C-type.
1133
- First, we note that the Jacobian matrix of a committed cell type, described
1134
- by (31), J =
1135
-
1136
- ∂A(ρc)ρc
1137
- ∂ρj
1138
-
1139
- j=ms+1,...,ms+mc
1140
- , has the same form as a cell type at the
1141
- apex of the hierarchy, since u does not depend on the densities ρms+1,...,ms+mc.
1142
- From this follows that if C-cells are regulated by crowding control, fulfilling the
1143
- conditions (30), then also the population of C-cells is stable around a steady
1144
- state ρ∗
1145
- c, albeit with a growth parameter µc(ρ∗
1146
- c) < 09.
1147
- We now consider the scenario where all stem cells are depleted at some
1148
- point, as was experimentally done in [16, 18]. This would stop any replen-
1149
- ishment of C-cells through differentiation of S-cells, corresponding to setting
1150
- 9This can be seen when multiplying the steady state condition for (31), Ac(ρs, ρc)ρc + u = 0
1151
- with a positive left dominant eigenvector v, giving, µcvρ∗
1152
- c + vu = 0. Since ρ∗ and v have all
1153
- positive entries and u is non-negative, this equation can only be fulfilled for µc < 0.
1154
-
1155
- Springer Nature 2021 LATEX template
1156
- 18
1157
- Homeostatic regulation of renewing tissue cell populations via crowding control
1158
- Fig. 3 Sketch representative of the quasi-dedifferentiation scenario. A homeostatic system
1159
- enclosed in the black box comprises two cell types: a stem cell type, S, (blue) and a com-
1160
- mitted cell type, C, (green). In the unperturbed homeostatic scenario, S is self-renewing,
1161
- characterised by a growth parameter at the steady state µ∗ = 0, and C is transient, with
1162
- a growth parameter at the steady state µ∗ < 0. Both cell types are subject to crowding
1163
- control, fulfilling both conditions (12), and (30). By removing the stem cell type XS, the
1164
- committed cell type acquires self-renewing property through crowding control, effectively
1165
- becoming a stem cell type (see Figure 4).
1166
Hence, we end up with the dynamics \dot\rho_c = A(\rho_c)\rho_c. Now, assuming that the function \mu(\rho) has sufficient range, so that \mu(\rho_c^{**}) = 0 for some \rho_c^{**}, and provided that A(\rho_c) is under crowding control fulfilling the sufficient conditions for asymptotic stability of a steady state, then, following our arguments from section 3.3, the population of C-cells will attain a stable steady state. In other words, those previously committed cells become self-renewing cells. Also, since they now reside at the apex of the lineage hierarchy (given that S-cells are absent), they effectively become stem cells.

Hence, under crowding control, previously committed progenitor cells (committed cells that can divide) will automatically become stem cells if the original stem cells are depleted. Commonly, such a reversion of a committed cell type to a stem cell type would be called 'dedifferentiation' or 'reprogramming'. However, in this case, no genuine reversion of cell states occurs; previously committed cells do not transition back to states associated with the stem cell type. Instead, they respond via crowding feedback and adjust their dynamical rates so that \mu becomes zero, hence attaining a self-renewing cell type. Crucially, this new stem cell type is fundamentally different from the original one and remains most similar to the original committed type. We call this process quasi-dedifferentiation: it involves the same reversion of proliferative potential as 'genuine' dedifferentiation, but without an explicit reversion of the cell state trajectories.
The following numerical example illustrates this situation. We focus on the cell dynamics of a single C-type regulated via crowding feedback (details of the model are provided in Appendix B.4). The cell density as a function of time, shown in Figure 4, is obtained by integrating the corresponding cell population model according to Eq. (5). The system is initially in a homeostatic condition, meaning that there is a constant influx of cells from some upstream self-renewing types. These upstream types are assumed to be properly regulated, such that the cell influx is constant over time. At time zero, the cell influx suddenly becomes zero, representing an instantaneous removal of all the self-renewing cells from the tissue.
Fig. 4 Cell dynamics of an initially committed cell type C (\mu < 0) upon removal of all stem cells. (Left) Cell density, scaled by the steady-state density, as a function of time. (Right) Corresponding variation of the dominant eigenvalue \mu_c. Time is scaled by the inverse of \bar\alpha = \min_i \alpha_i^*. It is assumed that a stem cell type, S, initially resides in the lineage hierarchy above the committed cell type (as in Figure 3). S cells differentiate into C cells, which is modelled as a constant influx of C-cells (S is not explicitly simulated). At time zero, a sudden depletion of S cells is modelled by stopping the cell influx. After a transitory phase, the cell population stabilises around a new steady state and becomes self-renewing with \mu_c = 0. The full description of the dynamical model, which corresponds to model (B15) with parameters given in Table B1, is reported in Appendix B.4.
A new homeostatic condition is achieved after a transitory phase, thanks to the crowding feedback acting on the C-type. This example demonstrates how an initially committed cell type, i.e. one with \mu_c < 0, regulated via crowding feedback, can switch upon disruption to a self-renewing behaviour with \mu_c = 0.
4 Discussion

To maintain healthy adult tissue, the tissue cell population must be kept in a homeostatic state. Here, we assessed one of the most common generalised regulation mechanisms of homeostasis, which we refer to as crowding feedback: progenitor cells (stem cells and committed progenitors) adjust their propensities to divide, differentiate, and die according to the surrounding density of cells, which they sense via biochemical or mechanical signals. For this purpose, we used a generic mathematical model introduced previously in Refs. [13, 20], which describes tissue cell population dynamics in the most generic way, including cell divisions, cell state transitions, and cell loss / differentiation. Based on this model, we rigorously define what is meant by a 'homeostatic state', introducing two notions: strict homeostasis is a steady state of the tissue cell population dynamics, while dynamical homeostasis allows, in addition to strict homeostasis, for oscillations and fluctuations, as long as a finite long-term average cell population is maintained (such as in the endometrium during the menstrual cycle).
By analysing this dynamical system, we find several sufficient and necessary conditions for homeostasis. These conditions are formulated in terms of how the propensities of cell division, differentiation, and cell state changes of cells, whose type is at the apex of an adult cell lineage hierarchy, may depend on their cell density. We find that when, for a wide range of cell density values, the cell division propensity of at least one state decreases with cell density, or the differentiation propensity increases with it, while the other propensities (e.g. of cell state transitions) are not affected by the cell density, then dynamic homeostasis prevails (12). For strict homeostasis to prevail, this only needs to be fulfilled at the steady state itself; in addition, however, the magnitude of the feedback strength must not be too large (30). We can derive explicit and implicit expressions for the bound on the feedback strength for systems of two and three cell states, but cannot do so for arbitrary systems. Furthermore, we find that a necessary condition for strict homeostasis is that the conditions for dynamic homeostasis are met at least at the steady-state cell density.
A direct consequence of the conditions we found is that they allow for a considerable degree of redundancy when more than one propensity depends appropriately on the cell density. Hence, feedback pathways, that is, cell dynamics parameters depending on the cell density, may serve as 'back-ups' for each other if one fails. We demonstrate that this confers robustness on the homeostatic system: one or more crowding feedback pathways may fail, yet the tissue remains in homeostasis.
- the tissue remains in homeostasis.
1286
- Finally, we assess how crowding feedback regulation affects the response of
1287
- committed progenitor cells to a complete depletion of all stem cells. We showed
1288
- that committed cells which can divide and are under appropriate crowding
1289
- feedback control (that is, meeting the sufficient conditions (12) and (30)), will
1290
- necessarily, without additional mechanisms or assumptions, reacquire stem cell
1291
- identity, that is, become self-renewing and are at the apex of the lineage hierar-
1292
- chy. Notably, while this process resembles that of dedifferentiation, it does not
1293
- involve explicit reprogramming, in that the cell state transitions are reversed.
1294
- Instead, only the cell fate propensities adjust to the changing environment by
1295
- balancing proliferation and differentiation as is required for self-renewal. While
1296
- these are purely theoretical considerations, and such a process has not yet
1297
- been experimentally found, we predict that it must necessarily occur under the
1298
- appropriate conditions. This can be measured by assessing the gene expression
1299
- profiles (e.g. via single-cell RNA sequencing) of cells that ‘dedifferentiate’, i.e.
1300
- reacquire stemness after depletion of stem cells. Moreover, those considerations
1301
- yield further, more general insights:
1302
• Stem cell identity is neither the property of individual cells, nor is it strictly associated with particular cell types or states. Any cell that can divide and differentiate, committed or not, may become a stem cell under appropriate environmental control.
• From the latter it follows that stemness is a property determined by the environment, not by the cell itself.
• 'Cell plasticity' might need to be seen in a wider context. Usually, cell plasticity is associated with a change of a cell's type when subjected to environmental cues, which involves a substantial remodelling of the cell's morphology and biochemical state. However, we see that a committed cell may turn into a stem cell simply by adjusting the pace of the cell cycle and differentiation processes to the environment. This may not require substantial changes in the cell's state.
This exemplifies that homeostatic control through crowding feedback is not only a way to render homeostasis stable and robust, but also a way to create stem cell identities as a collective property of the tissue cell population.

Acknowledgments. We thank Ben MacArthur and Ruben Sanchez-Garcia for valuable discussions.

Declarations

PG is supported by an MRC New Investigator Award, grant number MR/R026610/1. The code generated for the numerical computations in the current study is available on GitHub, https://github.com/cp4u17/Feedback. No other data were generated for this work.

Contributions are as follows: C.P. and P.G. conceptualised the paper, C.P. and P.G. did the mathematical analysis, C.P. did the numerical analysis, and P.G. supervised the work.

The authors have no competing interests to declare that are relevant to the content of this article.
Appendix A Asymptotic stability assessment based on Routh-Hurwitz

A.1 Background

In control system theory, a commonly used method for assessing the stability of a linear system is the Routh-Hurwitz (RH) criterion [24]. It is an algebraic criterion providing a necessary and sufficient condition on the parameters of a dynamical system of arbitrary order for the dynamics to be asymptotically stable. In particular, the criterion defines a set of conditions on the coefficients, p_i, of the characteristic polynomial, P(s), written as

P(s) = s^n + \sum_{i=1}^{n} p_i s^{n-i} ,    (A1)

in which n corresponds to the dimension of the system. Note that the notation used in this section, based on that of [24], differs from that of the main text; here, p_i is the polynomial coefficient of ith order.

A first result of the RH criterion is that a necessary condition for the dynamical system to be asymptotically stable is that all the coefficients are positive, that is,

p_i > 0, for all i .    (A2)
Additional conditions on the polynomial coefficients are added to obtain a necessary and sufficient condition. These conditions are based on Routh's array, written as

\begin{pmatrix} 1 & p_2 & p_4 & \dots & 0 \\ p_1 & p_3 & \dots & & \\ b_1 & b_2 & \dots & & \\ c_1 & \dots & & & \end{pmatrix} ,    (A3)

in which the first two rows contain all the coefficients of the characteristic polynomial, and the following ones are recursively computed as

b_i = - \det\begin{pmatrix} 1 & p_{2i} \\ p_1 & p_{2i+1} \end{pmatrix} / p_1 ,    (A4)

c_i = - \det\begin{pmatrix} p_1 & p_{2i+1} \\ b_1 & b_i \end{pmatrix} / b_1 ,    (A5)

and so on, until a zero is encountered. The RH criterion states that the system is asymptotically stable if and only if the elements in the first column of Routh's array are positive.

Based on this, it can easily be shown that for a second-order polynomial the necessary condition (A2) is also sufficient for asymptotic stability (a.s.), since b_1 = p_1 p_2, which means that

The system is a.s.  <=>  p_i > 0, for i = 1, 2 .    (A6)

For a polynomial of order three, the necessary and sufficient condition instead reads

The system is a.s.  <=>  p_i > 0, for i = 1, 2, 3, and p_1 p_2 - p_3 > 0 .    (A7)

The same reasoning can be applied to higher-order dynamics to derive additional conditions on the coefficients p_i.
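For concreteness, the RH test is easy to implement programmatically. The following Python sketch (our own illustration, not part of the paper's Matlab code; all names are ours) builds Routh's array for a monic polynomial and checks that its first column is positive; for n = 2, 3 it reproduces exactly the conditions (A6) and (A7). The division-based recursion assumes the regular case, i.e. that no zero appears in the first column before the array is complete.

```python
import numpy as np

def routh_hurwitz_stable(p):
    """Asymptotic stability test for P(s) = s^n + p[0] s^(n-1) + ... + p[n-1].

    Builds Routh's array (A3)-(A5) and checks that all first-column
    elements are positive. Assumes no zero is encountered in the first
    column before the array terminates (the regular case)."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    coeffs = np.concatenate(([1.0], p))
    if np.any(coeffs <= 0):            # necessary condition (A2)
        return False
    rows = (n + 2) // 2                # number of columns in the array
    table = np.zeros((n + 1, rows))
    table[0, :] = coeffs[0::2]
    table[1, :len(coeffs[1::2])] = coeffs[1::2]
    for i in range(2, n + 1):
        for j in range(rows - 1):
            table[i, j] = (table[i - 1, 0] * table[i - 2, j + 1]
                           - table[i - 2, 0] * table[i - 1, j + 1]) / table[i - 1, 0]
        if table[i, 0] <= 0:
            return False
    return True

# n = 3: stable iff p1, p2, p3 > 0 and p1*p2 - p3 > 0, as in (A7)
print(routh_hurwitz_stable([2.0, 3.0, 1.0]))   # True  (2*3 - 1 > 0)
print(routh_hurwitz_stable([1.0, 1.0, 2.0]))   # False (1*1 - 2 < 0)
```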
- A.2
1417
- Verification of the necessary condition for
1418
- asymptotic stability
1419
- The Matlab code for verifying (23) is provided in https://github.com/
1420
- cp4u17/Feedback.git.
1421
- The strategy used is to evaluate each term in Eq. (23) and simply compare
1422
- the left and right-hand sides of the equation. We followed a symbolic approach
1423
- (based on the Matlab symbolic toolbox) for an arbitrary three-state model. A
1424
- numerical approach was used instead for higher-order dynamics, specifically
1425
- 4, 5 and 6 state cell fate models. To do so, we randomly defined the cell
1426
- dynamical matrix at the steady state, A(ρ∗), and its derivative with respect
1427
-
1428
- Springer Nature 2021 LATEX template
1429
- Homeostatic regulation of renewing tissue cell populations via crowding control
1430
- 23
1431
- to ρ. Entries were chosen from a uniform distribution and, for assuring a zero
1432
- dominant eigenvalue for A(ρ∗), a local optimiser (fmincon function of Matlab)
1433
- was used to find appropriate diagonal elements. For each dimension of the cell
1434
- fate model, we tested up to 1000 random cases.
1435
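As a side note, random test matrices with a zero dominant eigenvalue can also be produced without a local optimiser. The Python sketch below (our own alternative construction, not the paper's fmincon-based routine) draws a non-negative random matrix and shifts its diagonal by the Perron root, which yields a matrix whose dominant eigenvalue is zero by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_steady_state_matrix(m):
    """Draw a random cell dynamical matrix A(rho*) with non-negative
    off-diagonal entries and dominant eigenvalue exactly zero. Instead of
    tuning the diagonal with a local optimiser, we shift a random
    non-negative matrix by its Perron root, which gives mu = 0 directly."""
    B = rng.uniform(0.0, 2.0, size=(m, m))    # non-negative entries
    r = max(np.linalg.eigvals(B).real)        # Perron root of B
    return B - r * np.eye(m)

A4 = random_steady_state_matrix(4)
print(max(np.linalg.eigvals(A4).real))        # ~0 up to round-off
```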
A.3 Sufficient condition for asymptotic stability

In this section, we denote by the superscripts A and J the coefficients of the characteristic polynomial, expressed as in Eq. (A1), of the matrix of the dynamical system, Eq. (6), and of the Jacobian matrix, Eq. (15), respectively.

For a two- and a three-state system, the following relation can be derived algebraically:

p_1^J = p_1^A - \sum_i \eta_i ,    (A8)

where \eta_i is defined according to Eq. (16). Since p_1^A > 0, if all \eta_i <= 0, then p_1^J > 0.

Hence, the above relation implies that in a two-state system the RH criterion given by Eq. (A6) is fulfilled when \eta <= 0 with at least one negative component (otherwise J = A), and therefore the system is asymptotically stable. We recall that requiring \eta_i <= 0 without further constraints is equivalent to the previously derived condition (30) with \epsilon_i = \infty.
- the previously derived condition (30) with ϵi = ∞.
1458
- For applying the RH criterion to a three-state cell dynamic system, given
1459
- by Eq. (A7), we need to evaluate the sign of pJ
1460
- 2 and then that of pJ
1461
- 1 pJ
1462
- 2 − pJ
1463
- 3 .
1464
- To do so, we first write
1465
- pJ
1466
- 2 = pA
1467
- 2 −
1468
-
1469
- i
1470
- fiηi ,
1471
- (A9)
1472
- in which fi = �
1473
- j aji − Tr(A) for i = 1, 2, 3. Since the off-diagonal elements
1474
- are non-negative, and the trace of A is negative, then fi > 0 for i = 1, 2, 3.
1475
- That means that if all ηi ≤ 0 then pJ
1476
- 2 > 0. Concerning the term pJ
1477
- 1 pJ
1478
- 2 − pJ
1479
- 3 ,
1480
- this can be written as a quadratic form in η =
1481
-
1482
- η1, η2, η3
1483
-
1484
- as
1485
- pJ
1486
- 1 pJ
1487
- 2 − pJ
1488
- 3 = Q(η) = ηT AQη + bT
1489
- Qη + cQ ,
1490
- (A10)
1491
- in which
1492
- AQ =
1493
-
1494
-
1495
- f1 f1 f1
1496
- f2 f2 f2
1497
- f3 f3 f3
1498
-
1499
- � ,
1500
- (A11)
1501
- bQ = −pA
1502
- 1
1503
-
1504
-
1505
- f1
1506
- f2
1507
- f3
1508
-
1509
- � − pA
1510
- 2
1511
- vw
1512
-
1513
-
1514
- v3(w3 − w1) + v2(w2 − w1)
1515
- v3(w3 − w2) + v1(w1 − w2)
1516
- v2(w2 − w3) + v1(w1 − w3)
1517
-
1518
- � ,
1519
- (A12)
1520
- and cQ = pA
1521
- 1 pA
1522
- 2 . Here, v = (v1, v2, v3) is a left dominant eigenvector and
1523
- w = (w1, w2, w3) a right dominant eigenvector.
1524
We now note that the matrix A_Q is positive semidefinite, since two of its eigenvalues are zero (the rows are two-fold degenerate) and one is positive, equal to Tr(A_Q) = \sum_i f_i; moreover, c_Q > 0. We now distinguish two cases, depending on the sign of the elements of b_Q. First, if b_Q <= 0, then Q(\eta) > 0 for any \eta <= 0. Since f_i, p_1^A, p_2^A, vw > 0, we get a sufficient condition for b_Q <= 0, namely,

0 <= v_3(w_3 - w_1) + v_2(w_2 - w_1)
0 <= v_3(w_3 - w_2) + v_1(w_1 - w_2)    (A13)
0 <= v_2(w_2 - w_3) + v_1(w_1 - w_3)

In that case, asymptotic stability, and thus crowding feedback control, is assured for any \eta < 0, and the bound on the feedback strength is \epsilon_i = \infty for i = 1, 2, 3.

Otherwise, if there is at least one positive element in b_Q, then Q(\eta) > 0 only if |\eta_i| < \epsilon_i, where \epsilon = (\epsilon_1, \epsilon_2, \epsilon_3) are the absolute values of the solutions to the equation Q(\eta) = 0, that is, given that the \eta_i are negative, the solutions to

0 = \epsilon^T A_Q \epsilon - b_Q^T \epsilon + c_Q .    (A14)

Importantly, we note that the elements of b_Q depend uniquely on the properties of the dynamical system; therefore, they can be determined without knowledge of the parameter derivatives, i.e. of the specific crowding feedback dependencies.

The Matlab code for verifying (A8), (A9) and (A10) is provided at https://github.com/cp4u17/Feedback.git.
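In practice, the bound (A14) can be evaluated numerically. The sketch below is our own illustration (not the paper's Matlab code): it assembles A_Q, b_Q and c_Q from Eqs. (A11)-(A12), reading the scalar vw as the product v.w (an assumption on the notation), and returns the feedback bound along one chosen non-negative direction d of \eta, i.e. the smallest t > 0 with Q(-t d) = 0, or infinity if Q never vanishes along that ray.

```python
import numpy as np

def feedback_bound(f, p1A, p2A, v, w, d):
    """Largest feedback magnitude t with Q(-t*d) > 0 along a non-negative
    direction d (d != 0), using Eqs. (A11)-(A14); returns np.inf if Q
    never vanishes along that ray."""
    f, v, w, d = map(np.asarray, (f, v, w, d))
    AQ = np.outer(f, np.ones(3))              # rows (f_i, f_i, f_i), Eq. (A11)
    g = np.array([v[2]*(w[2]-w[0]) + v[1]*(w[1]-w[0]),
                  v[2]*(w[2]-w[1]) + v[0]*(w[0]-w[1]),
                  v[1]*(w[1]-w[2]) + v[0]*(w[0]-w[2])])
    bQ = -p1A * f - p2A / (v @ w) * g         # Eq. (A12), reading vw as v.w
    cQ = p1A * p2A
    a, b, c = d @ AQ @ d, bQ @ d, cQ          # Q(-t*d) = a t^2 - b t + c
    disc = b**2 - 4*a*c
    if b <= 0 or disc < 0:
        return np.inf                         # Q > 0 along the whole ray
    return (b - np.sqrt(disc)) / (2*a)        # smallest positive root of (A14)
```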
- https://github.com/cp4u17/Feedback.git.
1555
- Appendix B
1556
- Test case
1557
- B.1
1558
- Asymptotic stability
1559
- This section reports the details of the model used for numerical examples
1560
- presented in the main text. The cell dynamics correspond to the following
1561
- three-state cell fate model
1562
- X1
1563
- λ1
1564
- −→ X1 + X1,
1565
- X1
1566
- ω13
1567
- −−→ X3,
1568
- X1
1569
- γ1
1570
- −→ ∅
1571
- X2
1572
- ω21
1573
- −−→ X1,
1574
- X2
1575
- ω23
1576
- −−→ X3,
1577
- X2
1578
- γ2
1579
- −→ ∅
1580
- X3
1581
- λ3
1582
- −→ X3 + X3,
1583
- X3
1584
- ω31
1585
- −−→ X1,
1586
- X3
1587
- ω32
1588
- −−→ X2,
1589
- (B15)
1590
- whose network is shown in Figure B1. In such a model, for simplicity, we only
1591
- consider symmetric self-renewing divisions so that κij = ωij. Also, we apply
1592
- the crowding feedback to division rates, λi, and differentiation rates γi. In this
1593
- way, it is straightforward to apply the sufficient condition (30) for asymptotic
1594
- stability since κ′
1595
- ij = 0 for all i, j.
1596
Hence, each kinetic parameter of the type \alpha_i \in {\lambda_j, \gamma_j}_{j=1,...,3} is expressed as a function of \rho, whilst those of the type \alpha_i \in {\kappa_{jk}}_{j,k=1,...,3} are constant. In particular, we chose Hill functions [31]: \alpha_i(\rho) = c_i + k_i \rho^{n_i}/(K_i^{n_i} + \rho^{n_i}) in case \alpha_i is a differentiation rate, so that \alpha'_i = \partial\alpha_i/\partial\rho > 0, and \alpha_i(\rho) = c_i + k_i/(K_i^{n_i} + \rho^{n_i}) in case it is a proliferation rate, so that \alpha'_i < 0. According to (30), this choice assures that if there is a value \rho = \rho^* for which \mu(\rho^*) = 0, this corresponds to an asymptotically stable steady state.

The parameter values used in our example are reported in Table B1, and the profiles of the proliferation and differentiation rates as functions of \rho are shown in Figure B2. Based on these values, the steady state corresponds to \rho^* = 1 (arbitrary units). As expected, the dominant eigenvalue of the Jacobian at the steady state is negative (\mu_J = -1.21).
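For illustration, the two Hill-function forms can be written directly as code. The following Python sketch (our own transcription; the paper's code is in Matlab) defines the increasing and decreasing forms and evaluates \lambda_1 and \gamma_1 at the steady state \rho^* = 1, recovering \alpha^* \approx 0.61 and 1.28 from Table B1.

```python
import numpy as np

def hill_up(rho, k, K, n, c=0.05):
    """Differentiation-rate form gamma(rho) = c + k*rho^n/(K^n + rho^n); alpha' > 0."""
    return c + k * rho**n / (K**n + rho**n)

def hill_down(rho, k, K, n, c=0.05):
    """Division-rate form lambda(rho) = c + k/(K^n + rho^n); alpha' < 0."""
    return c + k / (K**n + rho**n)

# parameters of lambda_1 and gamma_1 from Table B1 (n = 2, c = 0.05)
print(hill_down(1.0, k=0.74, K=0.57, n=2))  # ~0.61 = lambda_1(rho*)
print(hill_up(1.0, k=3.07, K=1.22, n=2))    # ~1.28 = gamma_1(rho*)
```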
To test the dynamical behaviour of the tissue cell population, we numerically solved the system of ODEs (5) for different initial conditions, using the explicit Runge-Kutta Dormand-Prince method (Matlab ode45 function). The results are shown in Figure B3 as the time evolution of \rho, normalised by the steady state \rho^* (left panels), and of the dominant eigenvalue, \mu (right panels). The label H indicates an initial condition corresponding to the self-renewing state \rho^*; that is, the system is initially in homeostasis. In the simulations labelled P- and P+, we perturbed the initial state \rho^* = (\rho_1^*, \rho_2^*, \rho_3^*) to (0.8\rho_1^*, 0.75\rho_2^*, 0.85\rho_3^*) and (1.5\rho_1^*, 1.1\rho_2^*, 1.2\rho_3^*), respectively. As expected, in all these cases the effect of the feedback is to stabilise the system, so that it returns to the steady state upon perturbation, \rho -> \rho^* (asymptotic stability), and thus regains the self-renewal property, \mu -> 0, over time.
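A minimal Python transcription of this experiment is sketched below (the paper's computations use Matlab's ode45; scipy's RK45 is the same Dormand-Prince pair). The rate constants are those of Table B1; the split of the steady-state density \rho^* = 1 among the three states is not listed in this appendix, so the equal split used for the initial condition is an illustrative assumption, and the table values are rounded, so the recovered steady state is only approximately \rho^* = 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

c, n = 0.05, 2.0                                     # Table B1: common offset, Hill exponent
lam1 = lambda r: c + 0.74 / (0.57**n + r**n)         # division rates: decreasing in rho
lam3 = lambda r: c + 7.79 / (2.07**n + r**n)
gam1 = lambda r: c + 3.07 * r**n / (1.22**n + r**n)  # differentiation rates: increasing
gam2 = lambda r: c + 2.28 * r**n / (0.43**n + r**n)
k13, k21, k23, k31, k32 = 0.95, 1.44, 1.71, 2.03, 1.35  # constant transition rates

def A(rho):
    """Dynamical matrix of model (B15); rho is the total cell density."""
    return np.array([
        [lam1(rho) - k13 - gam1(rho), k21,                    k31],
        [0.0,                         -k21 - k23 - gam2(rho), k32],
        [k13,                         k23,                    lam3(rho) - k31 - k32]])

def rhs(t, x):
    return A(x.sum()) @ x

x0 = 0.8 * np.ones(3) / 3            # perturbed initial condition (assumed equal split)
sol = solve_ivp(rhs, (0.0, 15.0), x0, method="RK45")
print(sol.y.sum(axis=0)[-1])         # total density relaxes back towards rho* ~ 1
```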
Fig. B1 Cell state network representing a cell type composed of three states. The links represent direct transitions, \omega_{ij}; symmetric divisions occur with rates \lambda_i and differentiation with rates \gamma_i, where the subscripts i, j = 1, 2, 3 indicate the corresponding cell state, as per model (B15).
B.2 Failure of feedback function

Based on the cell fate model regulated via crowding feedback described in the previous section, we assess the impact of failure of one or more feedback functions. In particular, failure of the crowding regulation is modelled by assuming that one or more kinetic parameters are constant. For this, we chose \alpha_i = (1 + C)\alpha_i^*, constant instead of depending on \rho, in which \alpha^* is the value at the steady state when there are no failures (reported in Table B1) and C is a constant (reported in Table B2). Five test cases, denoted F1-F5, are assessed.
          k      K      n      α*      α′
 λ1      0.74   0.57   2.00   0.61   -0.84
 λ3      7.79   2.07   2.00   1.53   -0.56
 γ1      3.07   1.22   2.00   1.28    1.48
 γ2      2.28   0.43   2.00   1.97    0.61
 κ13      -      -      -     0.95    0.00
 κ21      -      -      -     1.44    0.00
 κ23      -      -      -     1.71    0.00
 κ31      -      -      -     2.03    0.00
 κ32      -      -      -     1.35    0.00

Table B1 Values of the Hill function parameters describing the kinetic parameters in the case of homeostasis regulation via crowding feedback for the cell fate model (B15). The generic kinetic parameters (denoted \alpha_i) are functions of the total cell density, \rho, and are given by \gamma_i(\rho) = c + k\rho^n/(K^n + \rho^n) and \lambda_i(\rho) = c + k/(K^n + \rho^n), with i = 1, 2, 3. A common value c = 0.05 is assumed. The state transition rates \omega_{ij} are constant and equal to \kappa_{ij}. For these cell fate dynamics, the steady state is \rho^* = 1. The units of the kinetic parameters are arbitrary and therefore omitted. Unless specified otherwise, these values apply to all the numerical examples presented in this work.
- specified otherwise, these values apply to all the numerical examples presented in this work.
1735
- 0
1736
- 0.5
1737
- 1
1738
- 1.5
1739
- 2
1740
- 0
1741
- 0.5
1742
- 1
1743
- 1.5
1744
- 2
1745
- 2.5
1746
- 0
1747
- 0.5
1748
- 1
1749
- 1.5
1750
- 2
1751
- -4
1752
- -2
1753
- 0
1754
- 2
1755
- 4
1756
- Fig. B2
1757
- Proliferation and differentiation rates (left panels, with α as a generic placeholder
1758
- for parameters), and their derivative with respect to ρ (right panels) as functions of cell
1759
- density normalised by the steady-state ρ∗ for the cell fate model (B15) schematised in
1760
- Figure B1. The profiles in the left panel correspond to Hill functions defined in Table B1.
1761
In test case F1, only one feedback fails. Three of the four kinetic parameters fail in cases F2-F4. Finally, F5 represents a case where all the feedback functions fail. The corresponding variation of the dominant eigenvalue, \mu, as a function of the cell density is shown in Figure B4. Clearly, whilst cases F1-F4 satisfy the sufficient condition for strict homeostasis, (30), in test case F5 the dominant eigenvalue is constant, meaning that there is no homeostatic regulation. Importantly, there is no steady state in test cases F2 and F4, since the dominant eigenvalue is always positive in one case and always negative in the other.

Based on these assumptions, we numerically solved the system of ODEs (5) using the explicit Runge-Kutta Dormand-Prince method (Matlab ode45 function). The failure test cases start at time zero from an initially homeostatic condition, H. The results are shown in Figure B5 as the time evolution of \rho, normalised by the homeostatic steady state, \rho^* (left panels), and of the dominant eigenvalue, \mu (right panels).
Fig. B3 Effect of perturbation of homeostasis under crowding control, with feedback parameters according to Table B1. (Left) Cell density \rho, scaled by the steady state \rho^*, as a function of time. (Right) Corresponding variation of the dominant eigenvalue \mu. Time is scaled by the inverse of \bar\alpha = \min_i \alpha_i^*. Three different initial conditions are tested: H corresponds to the steady state \rho^* = (\rho_1^*, \rho_2^*, \rho_3^*), P- to (0.8\rho_1^*, 0.75\rho_2^*, 0.85\rho_3^*), and P+ to (1.5\rho_1^*, 1.1\rho_2^*, 1.2\rho_3^*). Since the steady state is asymptotically stable, thanks to crowding control, the cell population remains in, or returns to, a homeostatic state characterised by \mu = 0.
 Parameter    F1     F2     F3     F4     F5
 λ1          +5%    +5%    +5%   -20%    -5%
 λ3           -     +5%    +5%   -20%    -5%
 γ1           -     -5%     -    +20%    -5%
 γ2           -      -     -5%     -     -5%

Table B2 Values of the constant C in the feedback failure test cases. Whenever a failure in the feedback of a kinetic parameter \alpha occurs, that parameter is modelled as a constant, \alpha = (1 + C)\alpha^*, in which the steady-state value, \alpha^*, is reported in Table B1. Test cases F1 and F2 correspond to those presented in the main text (Figure 1).
Note that cases F1 and F2 correspond respectively to the 'Single failure' and 'Multiple failures' scenarios reported in the main text (Figure 1).

In two cases, F1 and F3, despite a single or multiple feedback functions failing, a new homeostatic condition is reached after some time, where \mu = 0. If, however, a different set of feedbacks fails, as in F2 and F4, such that the dominant eigenvalue is respectively positive or negative for any \rho, then no steady state can be attained, and the tissue cell population will hyper-proliferate or decline in the long term. Hence, even if the condition for asymptotic stability is met, there may be no steady state. Finally, if homeostasis is not regulated at all, as in F5, the population dynamics depend only on the value of the dominant eigenvalue (the cell dynamical model (5) becomes linear). In the case shown, \mu > 0, and therefore the cell population diverges.
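The failure cases are straightforward to reproduce on top of the sketch from Appendix B.1: a failed feedback is simply a rate frozen at (1 + C) times its steady-state value. A hypothetical override for case F1 (only \lambda_1 fails, C = +5%), reusing A, lam1, gam1 and k13 from that sketch, might look as follows.

```python
lam1_star = lam1(1.0)                 # steady-state value of lambda_1 (~0.61, Table B1)

def A_F1(rho):
    """Dynamical matrix with the lambda_1 feedback frozen (failure case F1)."""
    M = A(rho)
    M[0, 0] = 1.05 * lam1_star - k13 - gam1(rho)   # constant lambda_1, C = +5%
    return M
```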
B.3 Single cell mutation scenario

To assess the tissue dynamics in the presence of a single-cell mutation, as presented in the main text, we modelled the clonal dynamics, namely the dynamics of single cells and their progeny. To do so, we considered the model (B15) as a Markov process with the same numerical rates as before, but now treating events as stochastic.
- F5
1901
- Fig. B4
1902
- Variation of the dominant eigenvalue µ as a function of the cell density, ρ,
1903
- normalised by the reference homeostatic state value, ρ∗. The curve H corresponds to the
1904
- reference homeostatic model presented in Appendix B.1. The other curves, F1−5, represent
1905
- different sets of feedback failure, as reported in Table B2.
1906
Fig. B5 Failure of feedback control. (Left) Cell density, scaled by the steady state of the homeostatic case, as a function of time. (Right) Corresponding variation of the dominant eigenvalue \mu. Time is scaled by the inverse of \bar\alpha = \min_i \alpha_i^*. The homeostatic model, H, is perturbed at time zero to include the feedback failures reported in Table B2. Whilst in F1 and F3 the regulation is able to achieve and maintain a new homeostatic state (\mu = 0), the remaining cases fail to regulate the cell population, leading to indefinite growth or shrinkage of the tissue.
We then ran numerical simulations using the Gillespie algorithm [32] to evaluate this model. In particular, the results presented in this work are based on 100 independent instances, each a possible realisation of the stochastic process. We chose a total cell number N0 = 5000 as the initial condition (the cell density is based on a unit volume). In real tissues, the number of cells could be a few orders of magnitude larger; however, this number is sufficiently large to avoid extinction of the process on the time scale analysed, so, once rescaled, these dynamics are representative of those in the tissue. All simulations are stopped when the mutated clone goes extinct or a divergence of the dynamics is detected, defined as reaching N = 5N0.
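The Gillespie step itself is simple. The Python sketch below is our own single-state reduction (one clone with division rate \lambda and loss rate \gamma), not the full two-network model used in the paper; it illustrates the event loop used to propagate one clone.

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie_clone(lam, gam, n0=1, t_max=50.0):
    """Gillespie simulation of one clone: each of the n cells divides with
    rate lam and is lost with rate gam; returns times and clone sizes."""
    t, n = 0.0, n0
    ts, ns = [t], [n]
    while n > 0 and t < t_max:
        total = (lam + gam) * n              # total propensity of all events
        t += rng.exponential(1.0 / total)    # exponential waiting time to next event
        n += 1 if rng.random() < lam / (lam + gam) else -1
        ts.append(t)
        ns.append(n)
    return np.array(ts), np.array(ns)
```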
From an implementation point of view, to model the tissue dynamics including the mutated cell, we consider a cell fate model represented by two disconnected cell state networks. One network corresponds to the unperturbed test case H, and the other to the dysregulated one, F2 (both described in Appendix B.2). The simulation starts with N0 cells in the H network, distributed across the states proportionally to the expected steady-state distribution in the tissue, and no cells in the F2 network. Thus, since the two networks are disconnected, F2 remains empty, and the simulation represents the tissue dynamics before the dysregulation. At time zero, we move one cell from a random state in the H network to the corresponding state in the F2 network. This simulation represents the tissue dynamics including the single mutated cell.

In Figure B6 (left), all the trajectories where the mutated clones go extinct are shown. In these cases, the tissue dynamics remain globally unaffected by the mutation. Due to the stochastic nature of the process, mutant clones can go extinct even if the growth parameter is positive, that is, even in cases where divergence would be observed for a tissue-wide disruption. However, this does not occur in all instances. The right panel of the same figure shows those instances where the mutated clone does not go extinct and eventually prevails, resulting in diverging cell population dynamics. For the chosen parameters, this divergence of the mutated clone is detected in 6% of all cases. Perhaps surprisingly, only a few clones survive despite their proliferative advantage, but this is plausible for a small fitness advantage. (For example, in the case of a single state with cell division rate \lambda and loss rate \gamma, a simple branching process [33], the probability for a mutant with \mu > 0, that is, \lambda > \gamma, to establish is 1 - \gamma/\lambda, which can be very low for \lambda \approx \gamma.)
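Using the single-clone sketch above, this establishment probability is easy to verify numerically: for \lambda = 1.05 and \gamma = 1, the predicted value 1 - \gamma/\lambda \approx 0.048 matches the fraction of simulated clones that survive (approximately, since a finite time horizon is used).

```python
lam, gam = 1.05, 1.0
runs = 2000
# a clone counts as established if it is still alive at t_max
survived = sum(gillespie_clone(lam, gam)[1][-1] > 0 for _ in range(runs))
print(survived / runs, "vs", 1 - gam / lam)   # both close to 0.048
```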
In the main text (Figure 2), only one profile is shown for each scenario; these correspond to instance #24 for the homeostatic case and instance #43 for the diverging case.
B.4 Quasi-dedifferentiation

The numerical example presented in the main text is based on the same cell fate model described in Appendix B.1. To model the dynamics of a committed cell type, we choose a constant non-negative u = (0.02, 0.07, 0.06)^T to model the cell influx. For this model, the steady state, \rho_c^*, is asymptotically stable.

The figures presented in the main text are based on numerical integration of the system of ordinary differential equations (31). In particular, we used the explicit Runge-Kutta Dormand-Prince method (Matlab ode45 function).
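A sketch of this protocol in Python, reusing A(\rho) from the Appendix B.1 sketch above: the influx u is kept on while the committed population relaxes to its steady state \rho_c^*, and switched off at t = 0 to mimic the depletion of all stem cells (the pre-relaxation starting point is an arbitrary assumption).

```python
import numpy as np
from scipy.integrate import solve_ivp

u = np.array([0.02, 0.07, 0.06])          # constant influx from the (removed) S type

def rhs_influx(t, x):
    return A(x.sum()) @ x + u             # committed-type dynamics, Eq. (31)

def rhs_no_influx(t, x):
    return A(x.sum()) @ x                 # after stem cell depletion (u = 0)

pre = solve_ivp(rhs_influx, (-30.0, 0.0), np.ones(3) / 3, method="RK45")
post = solve_ivp(rhs_no_influx, (0.0, 40.0), pre.y[:, -1], method="RK45")
# pre.y[:, -1] approximates rho_c*; post relaxes to a new self-renewing state
```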
References

[1] National Institutes of Health: Stem Cell Basics (2016). https://stemcells.nih.gov/info/basics
Fig. B6 Results of numerical simulations of the stochastic process representing the cell dynamics, according to section B.3. The cell density, scaled by the steady state of the homeostatic case, is shown as a function of time for 100 random instances. Each trajectory shown is the result of a different instance of the stochastic process. At time zero, the cell mutation is modelled as a switch of a single random cell from the homeostatic H cell dynamics to the F2 model assessed in Appendix B.2. The left panel shows only the trajectories for which the mutated clone goes extinct. The right panel shows the trajectories in which the mutated clone prevails. Dynamics are scaled by \bar\alpha = \min_i \alpha_i^*.
- trajectories in which the mutated clone prevails. Dynamics are scaled by ¯α = mini{α∗
2050
- i }.
2051
- [2] Marinari, E., Mehonic, A., Curran, S., Gale, J., Duke, T., Baum, B.:
2052
- Live-cell delamination counterbalances epithelial growth to limit tis-
2053
- sue overcrowding. Nature 484(7395), 542–545 (2012). https://doi.org/10.
2054
- 1038/nature10984
2055
- [3] Eisenhoffer, G.T., Loftus, P.D., Yoshigi, M., Otsuna, H., Chien, C.-B.,
2056
- Morcos, P.A., Rosenblatt, J.: Crowding induces live cell extrusion to main-
2057
- tain homeostatic cell numbers in epithelia. Nature 484(7395), 546–549
2058
- (2012). https://doi.org/10.1038/nature10999
2059
- [4] Eisenhoffer, G.T., Rosenblatt, J.: Bringing balance by force: Live cell
2060
- extrusion controls epithelial cell numbers. Trends in Cell Biology 23(4),
2061
- 185–192 (2013). https://doi.org/10.1016/j.tcb.2012.11.006
2062
- [5] Puliafito, A., Hufnagel, L., Neveu, P., Streichan, S., Sigal, A., Fygenson,
2063
- D.K., Shraiman, B.I.: Collective and single cell behavior in epithelial con-
2064
- tact inhibition. Proceedings of the National Academy of Sciences 109(3),
2065
- 739–744 (2012). https://doi.org/10.1016/j.juro.2012.06.073
2066
- [6] Gudipaty, S.A., Lindblom, J., Loftus, P.D., Redd, M.J., Edes, K., Davey,
2067
- C.F., Krishnegowda, V., Rosenblatt, J.: Mechanical stretch triggers rapid
2068
- epithelial cell division through Piezo1. Nature 543(7643), 118–121 (2017).
2069
- https://doi.org/10.1038/nature21407
2070
- [7] Shraiman, B.I.: Mechanical feedback as a possible regulator of tis-
2071
- sue growth. Proceedings of the National Academy of Sciences 102(9),
2072
- 3318–3323 (2005). https://doi.org/10.1073/pnas.0404782102
2073
- [8] Kitadate, Y., J¨org, D.J., Tokue, M., Maruyama, A., Ichikawa, R.,
2074
-
2075
- Springer Nature 2021 LATEX template
2076
- Homeostatic regulation of renewing tissue cell populations via crowding control
2077
- 31
2078
- Tsuchiya, S., Segi-Nishida, E., Nakagawa, T., Uchida, A., Kimura-
2079
- Yoshida, C., Mizuno, S., Sugiyama, F., Azami, T., Ema, M., Noda, C.,
2080
- Kobayashi, S., Matsuo, I., Kanai, Y., Nagasawa, T., Sugimoto, Y., Taka-
2081
- hashi, S., Simons, B.D., Yoshida, S.: Competition for Mitogens Regulates
2082
- Spermatogenic Stem Cell Homeostasis in an Open Niche. Cell Stem Cell
2083
- 24(1), 79–92 (2019). https://doi.org/10.1016/j.stem.2018.11.013
2084
- [9] Johnston, M.D., Edwards, C.M., Bodmer, W.F., Maini, P.K., Chapman,
2085
- S.J.: Mathematical modeling of cell population dynamics in the colonic
2086
- crypt and in colorectal cancer. Proceedings of the National Academy
2087
- of Sciences 104(10), 4008–4013 (2007). https://doi.org/10.1073/pnas.
2088
- 0611179104
2089
- [10] Sun, Z., Komarova, N.L.: Stochastic modeling of stem-cell dynamics with
2090
- control Zheng. Math Bioscience 240(2), 231–240 (2012). https://doi.org/
2091
- 10.1016/j.mbs.2012.08.004
2092
- [11] Bocharov, G., Quiel, J., Luzyanina, T., Alon, H., Chiglintsev, E., Cheresh-
2093
- nev, V., Meier-Schellersheim, M., Paul, W.E., Grossman, Z.: Feedback
2094
- regulation of proliferation vs. differentiation rates explains the depen-
2095
- dence of CD4 T-cell expansion on precursor number. Proceedings of the
2096
- National Academy of Sciences of the United States of America 108(8),
2097
- 3318–3323 (2011). https://doi.org/10.1073/pnas.1019706108
2098
- [12] Greulich, P., Simons, B.D.: Dynamic heterogeneity as a strategy of
2099
- stem cell self-renewal. Proceedings of the National Academy of Sciences
2100
- 113(27), 7509–7514 (2016). https://doi.org/10.1073/pnas.1602779113
2101
- [13] Greulich, P., MacArthur, B.D., Parigini, C., S´anchez-Garc´ıa, R.J.: Uni-
2102
- versal principles of lineage architecture and stem cell identity in renewing
2103
- tissues. Development 148, 194399 (2021)
2104
- [14] Donati, G., Rognoni, E., Hiratsuka, T., Liakath-Ali, K., Hoste, E., Kar,
2105
- G., Kayikci, M., Russell, R., Kretzschmar, K., Mulder, K.W., Teich-
2106
- mann, S.A., Watt, F.M.: Wounding induces dedifferentiation of epidermal
2107
- Gata6+ cells and acquisition of stem cell properties. Nature Cell Biology
2108
- 19, 603–613 (2017). https://doi.org/10.1038/ncb3532
2109
- [15] Jopling, C., Boue, S., Belmonte, J.C.I.: Dedifferentiation, transdifferenti-
2110
- ation and reprogramming: three routes to regeneration. Nature Reviews
2111
- Molecular Cell Biology 12, 79 (2011)
2112
- [16] Tata, P.R., Mou, H., Pardo-Saganta, A., Zhao, R., Prabhu, M., Law, B.M.,
2113
- Vinarsky, V., Cho, J.L., Breton, S., Sahay, A., Medoff, B.D., Rajagopal,
2114
- J.: Dedifferentiation of committed epithelial cells into stem cells in vivo.
2115
- Nature 503(7475), 218–223 (2013). https://doi.org/10.1038/nature12777
2116
-
2117
- Springer Nature 2021 LATEX template
2118
- 32
2119
- Homeostatic regulation of renewing tissue cell populations via crowding control
2120
- [17] Tetteh, P.W., Farin, H.F., Clevers, H.: Plasticity within stem cell hier-
2121
- archies in mammalian epithelia. Trends in Cell Biology 25(2), 100–108
2122
- (2015). https://doi.org/10.1016/j.tcb.2014.09.003
2123
- [18] Tetteh, P.W., Basak, O., Farin, H.F., Wiebrands, K., Kretzschmar, K.,
2124
- Begthel, H., van den Born, M., Korving, J., De Sauvage, F.J., van Es, J.H.,
2125
- Van Oudenaarden, A., Clevers, H.: Replacement of Lost Lgr5-Positive
2126
- Stem Cells through Plasticity of Their Enterocyte-Lineage Daughters.
2127
- Cell Stem Cell 18(2), 203–213 (2016). https://doi.org/10.1016/j.stem.
2128
- 2016.01.001
2129
- [19] Murata, K., Jadhav, U., Madha, S., van Es, J., Dean, J., Cavazza, A.,
2130
- Wucherpfennig, K., Michor, F., Clevers, H., Shivdasani, R.A.: Ascl2-
2131
- Dependent Cell Dedifferentiation Drives Regeneration of Ablated Intesti-
2132
- nal Stem Cells. Cell Stem Cell 26(3), 377–3906 (2020). https://doi.org/
2133
- 10.1016/j.stem.2019.12.011
2134
- [20] Parigini, C., Greulich, P.: Universality of clonal dynamics poses funda-
2135
- mental limits to identify stem cell self-renewal strategies. eLife 9, 1–44
2136
- (2020). https://doi.org/10.7554/eLife.56532
2137
- [21] Bollob´as, B.: Modern Graph Theory, 1st edn. Graduate Texts in Mathe-
2138
- matics 184. Springer, New York (1998)
2139
- [22] MacCluer, C.R.: The Many Proofs and Applications of Perron’s Theorem.
2140
- SIAM Review 42(3), 487–498 (2000)
2141
- [23] Horn,
2142
- R.A.,
2143
- Johnson,
2144
- C.R.:
2145
- Matrix
2146
- Analysis,
2147
- 2nd
2148
- edn.
2149
- Cam-
2150
- bridge University Press, Cambridge (1985). https://doi.org/10.1017/
2151
- cbo9780511810817
2152
- [24] Franklin, G.F., Powell, J.D., Emami-Naeini, A.: Feedback Control of
2153
- Dynamic Systems, 7th edn. Prentice Hall Press, USA (2014)
2154
- [25] Tomasetti, C., Vogelstein, B., Parmigiani, G.: Half or more of the somatic
2155
- mutations in cancers of self-renewing tissues originate prior to tumor
2156
- initiation. Proceedings of the National Academy of Sciences 110(6),
2157
- 1999–2004 (2013). https://doi.org/10.1073/pnas.1221068110
2158
- [26] Colom, B., Jones, P.H.: Clonal analysis of stem cells in differentiation
2159
- and disease. Current Opinion in Cell Biology 43, 14–21 (2016). https:
2160
- //doi.org/10.1016/j.ceb.2016.07.002
2161
- [27] Rodilla, V., Fre, S.: Cellular plasticity of mammary epithelial cells under-
2162
- lies heterogeneity of breast cancer. Biomedicines 6(4), 9–12 (2018). https:
2163
- //doi.org/10.3390/biomedicines6040103
2164
-
2165
- Springer Nature 2021 LATEX template
2166
- Homeostatic regulation of renewing tissue cell populations via crowding control
2167
- 33
2168
- [28] Tata, P.R., Rajagopal, J.: Cellular plasticity: 1712 to the present day.
2169
- Current Opinion in Cell Biology 43, 46–54 (2016). https://doi.org/10.
2170
- 1016/j.ceb.2016.07.005
2171
- [29] Merrell,
2172
- A.J.,
2173
- Stanger,
2174
- B.Z.:
2175
- Adult
2176
- cell
2177
- plasticity
2178
- in
2179
- vivo:
2180
- De-
2181
- differentiation and transdifferentiation are back in style. Nature Reviews
2182
- Molecular Cell Biology 17(7), 413–425 (2016). https://doi.org/10.1038/
2183
- nrm.2016.24
2184
- [30] Puri, S., Folias, A.E., Hebrok, M.: Plasticity and dedifferentiation within
2185
- the pancreas: Development, homeostasis, and disease. Cell Stem Cell
2186
- 16(1), 18–31 (2015). https://doi.org/10.1016/j.stem.2014.11.001
2187
- [31] Lei, J., Levin, S.A., Nie, Q.: Mathematical model of adult stem cell
2188
- regeneration with cross-talk between genetic and epigenetic regulation.
2189
- Proceedings of the National Academy of Sciences 111(10) (2014). https:
2190
- //doi.org/10.1073/pnas.1324267111
2191
- [32] Gillespie, D.T.: Exact stochastic simulation of coupled chemical reactions.
2192
- The Journal of Physical Chemistry 81(25), 2340–2361 (1977). https://
2193
- doi.org/10.1021/j100540a008
2194
- [33] Haccou, P., Jagers, P., Vatutin, V.A.: Branching Processes: Varia-
2195
- tion, Growth, and Extinction of Populations. Cambridge University
2196
- Press, Cambridge (2005). https://doi.org/10.2277/0521832209. http://
2197
- pure.iiasa.ac.at/7598/
2198
-
knowledge_base/BdE4T4oBgHgl3EQf5Q5A/content/tmp_files/load_file.txt DELETED
The diff for this file is too large to render. See raw diff
 
knowledge_base/BdE4T4oBgHgl3EQf5Q5A/vector_store/index.faiss DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:e6121d6bbb380f46d0246ce1a0bc996af565a8ceb5d78a6de64d1c6724fe5d64
3
- size 4587565
 
 
 
 
knowledge_base/BdE4T4oBgHgl3EQf5Q5A/vector_store/index.pkl DELETED
@@ -1,3 +0,0 @@
1
- version https://git-lfs.github.com/spec/v1
2
- oid sha256:d32a4ea40e522757e60d681544db9a465613159dea5e610cf94df6b37ad2c1d1
3
- size 194487