Improved Differential-neural Cryptanalysis for Round-reduced Simeck32/64∗

Liu Zhang1,3[0000−0001−6106−3767], Jinyu Lu2(✉)[0000−0002−7299−0934],
Zilong Wang1,3[0000−0002−1525−3356], and Chao Li2,3[0000−0001−7467−7573]

1 School of Cyber Engineering, Xidian University, Xi'an 710126, China
{liuzhang@stu., zlwang@}xidian.edu.cn
2 College of Sciences, National University of Defense Technology, Hunan, Changsha
3 State Key Laboratory of Cryptology, P.O.Box 5159, Beijing 100878, China

Abstract. In CRYPTO 2019, Gohr presented differential-neural cryptanalysis, building a differential distinguisher with a neural network and achieving practical 11-, and 12-round key recovery attacks on Speck32/64. Inspired by this framework, we develop an Inception neural network that is compatible with the round function of Simeck, improving the accuracy of the (9-12)-round neural distinguishers for Simeck32/64. To provide solid baselines for the neural distinguishers, we compute the full distribution of differences induced by one specific input difference up to 13-round Simeck32/64. Moreover, the performance of the DDT-based distinguishers on multiple ciphertext pairs is evaluated. Compared with the DDT-based distinguishers, the 9-, and 10-round neural distinguishers achieve better accuracy. Also, an in-depth analysis of the wrong key response profile reveals that the 12-th and 13-th bits of the subkey have little effect on the score of the neural distinguisher, thereby accelerating key recovery attacks. Finally, an enhanced 15-round and the first practical 16-, and 17-round attacks are implemented for Simeck32/64, and the success rate of both the 15-, and 16-round attacks is almost 100%.

Keywords: Neural Distinguisher, Wrong Key Response Profile, Key Recovery Attack, Simeck32/64
+ 1
31
+ Introduction
32
+ Lightweight block ciphers present trade-offs between appropriate security and
33
+ small resource-constrained devices, which is an essential foundation for data con-
34
+ fidentiality in resource-constrained environments. Therefore, the design require-
35
+ ments and security analysis of lightweight block ciphers are of great importance.
36
+ Combining traditional analysis methods with “machine speed” to efficiently and
37
+ ∗ Supported by organization x.
38
+ First Author and Second Author contribute equally to this work.
39
+ arXiv:2301.11601v1 [cs.CR] 27 Jan 2023
40
+
41
+ intelligently evaluate the security of cryptographic algorithm components, is one
42
+ of the critical points and trends of current research. The development of Artificial
43
+ Intelligence (AI) provides new opportunities for cryptanalysis.
44
+ In CRYPTO 2019 [8], Gohr creatively combines deep learning with differ-
45
+ ential cryptanalysis and applies it to the Speck32/64, gaining the neural dis-
46
+ tinguisher (ND) can surpass the DDT-based distinguisher (DD). Then, a hy-
47
+ brid distinguisher (HD) consisting of a ND and a classical differential (CD)
48
+ with highly selective key search strategies result in forceful practical 11-, and
49
+ 12-round key recovery attacks. In EUROCRYPT 2021 [7], Benamira et al. pro-
50
+ posed a thorough analysis of Gohr’s neural network. They discovered that these
51
+ distinguishers are basing their decisions on the ciphertext pair difference and the
52
+ internal state difference in penultimate and antepenultimate rounds.
53
+ To attack more rounds, the component CD or ND must be extended. In
54
+ ASIACRYPT 2022
55
+ [4], Bao et al. devised the first practical 13-round and an
56
+ improved 12-round ND-based key recovery attacks for Speck32/64 by enhanc-
57
+ ing the CDs, which they deeply explored more generalized neutral bits of dif-
58
+ ferentials, i.e., conditional (simultaneous) neutral bit/bit-sets. In addition, they
59
+ obtained NDs up to 11-round Simon32/64 by using DenseNet and SENet, thus
60
+ launching the practical 16-round key recovery attack. Zhang et al. [16] focused
61
+ on improving the accuracy of ND and added the Inception composed of the
62
+ multiple-parallel convolutional layers before the Residual network to capture
63
+ information on multiple dimensions. Under the combined effect of multiple im-
64
+ provements, they reduced the time complexity of key recovery attacks for 12-,
65
+ and 13-round Speck32/64 and 16-round Simon32/64. They also devised the
66
+ first practical 17-round key recovery for Simon32/64.
67
+ The Simeck algorithm [15], which combines the good design components
68
+ from both Simon and Speck [5] designed by National Security Agency (NSA),
69
+ has received a lot of attention for its security. In 2022, Lyu et al. [13] improved
70
+ Gohr’s framework and applied it to Simeck32/64. They obtained (8-10)-round
71
+ NDs for Simeck32/64 and successfully accomplished attacks for (13-15)-round
72
+ Simeck32/64 with low data complexity and time complexity. In the same year,
73
+ Lu et al. [12] adopted the multiple ciphertext pairs (8 ciphertext pairs) to train
74
+ the SE-ResNet neural network fed with a new data format for Simon and
75
+ Simeck. Finally, they obtained (9-12)-round NDs for Simeck32/64. This raises
76
+ the question of whether the key recovery attack for Simeck can be enhanced.
77
Our Contribution. The contributions of this work are summarized as follows.

• We improve the Inception neural network proposed by Zhang et al. [16] according to the number of cyclic rotations in the round function of Simeck32/64. Meanwhile, to capture the connections between ciphertext pairs, we use multiple ciphertext pairs forming one sample as the input of the neural network. As a result, we improve the accuracy of the (9-12)-round NDs using the basic training method and the staged training method. The results can be seen in Table 3.

• To provide solid baselines for the NDs, the full distribution of differences induced by the input difference (0x0000, 0x0040) is computed up to 13 rounds of Simeck32/64. Also, to make a fair comparison with the NDs, the accuracy of the DDs with multiple ciphertext pairs under the independence assumption is investigated. The comparison shows that the 9-, and 10-round NDs achieve higher accuracy than the DDs, i.e., the NDs contain more information than the DDs (see Table 3).

• Based on the wrong key randomization hypothesis, we compute the score of the ND for ciphertexts decrypted with different wrong keys and derive the wrong key response profile (see Figure 3). A thorough study of the wrong key response profile shows that the 12-th and 13-th bits of the subkey have little effect on the score of the ND, while the ND is extremely sensitive to the 14-th and 15-th bits. This allows us to optimize the Bayesian key search algorithm (see Algorithm 3) and accelerate the key recovery attack.

• We enhance the 15-round attack and launch the first practical 16-, and 17-round key recovery attacks on Simeck32/64 based on the ND. Table 1 provides a summary of these results.
Table 1. Summary of key recovery attacks on Simeck32/64

Attacks  R   Configure  Data   Time          Success Rate  Ref.
ND       13  1+2+9+1    2^16   2^(27.95+5)⋆  88%           [13]
         14  1+3+9+1    2^23   2^(32.99+5)⋆  88%           [13]
         15  1+3+10+1   2^24   2^(33.90+5)⋆  88%           [13]
         15  1+3+10+1   2^22   2^35.309      99.17%        Sect. 5
         16  1+3+11+1   2^24   2^38.189      100%          Sect. 5
         17  1+3+12+1   2^26   2^45.037      30%           Sect. 5

1. ⋆: Time complexity in [13] is calculated in terms of the number of full rounds of Simeck32/64 encryption per second, namely 2^23.304. For a fair comparison, we convert the time complexity to be calculated in terms of the number of 1-round decryptions performed per second. These two benchmarks differ by about 2^5.
2. Time complexity in this paper is calculated based on one second equaling 2^26.693 1-round decryptions per second. Also, 2^21.762 full rounds of Simeck32/64 encryption per second can be performed on our device.
Organization. The rest of the paper is organized as follows. Section 2 introduces the design of Simeck and gives the preliminaries on the ND model. Section 3 gives the data format, network structure, training method, and results of the NDs for Simeck32/64. Section 4 describes the neutral bits and wrong key response profiles used for the key recovery attacks. Section 5 exhibits the details of the (15-17)-round key recovery attacks. Section 6 concludes this paper.

The experiments are conducted with Python 3.7.15 and TensorFlow 2.5.0 on Ubuntu 20.04. The device is an Intel Xeon E5-2680V4*2 at 2.40GHz with 256GB RAM and NVIDIA RTX3080Ti 12GB*6. The source code is available on GitHub at https://github.com/CryptAnalystDesigner/Differential-Neural-Cryptanalysis-Simeck32.git.
2 Preliminary

In this paper, we denote an n-bit binary vector by x = (x_{n−1}, . . . , x_0), where x_i is the bit in position i, with x_0 the least significant one. ⊕ and ⊙ denote the eXclusive-OR operation and the bitwise AND operation, respectively. x ≪ γ or S^γ(x) represents a circular left shift of x by γ bits. x ≫ γ or S^{−γ}(x) represents a circular right shift of x by γ bits. x ∥ y represents the concatenation of the bit strings x and y.
2.1 A Brief Description of Simeck

The Simeck family of lightweight block ciphers was designed by Yang et al. at CHES 2015 [15]. To develop even more compact and efficient block ciphers, it incorporates good design components from both Simon and Speck, designed by the NSA. A standardization effort for lightweight cryptography was initiated by the National Institute of Standards and Technology (NIST) in 2019. Some submissions to this project use a modified Simeck as a fundamental module, such as ACE [1], SPOC [2], and SPIX [3], which suggests that Simeck has considerable practical promise.

Simeck adopts the Feistel structure to perform encryption or decryption on 2n-bit message blocks using a 4n-bit key, where n is the word size. The round function of Simeck is defined as f_{5,0,1}(x) = (S^5(x) ⊙ x) ⊕ S^1(x). The designers reuse the round function in the key schedule to generate the subkeys, as Speck does. The encryption algorithm of Simeck32/64 is listed in Algorithm 1.

Algorithm 1: Encryption of Simeck32/64.
Input: P = (x0, y0): the plaintext, (k0, k1, · · · , k31): the round keys.
Output: C = (x32, y32): the ciphertext.
1 for r = 0 to 31 do
2     xr+1 ← ((xr ≪ 5) ⊙ xr) ⊕ (xr ≪ 1) ⊕ yr ⊕ kr
3     yr+1 ← xr
4 end
2.2 Overview of the Neural Distinguisher Model

The ND is a supervised model which distinguishes whether ciphertexts are encrypted from plaintexts satisfying a specific input difference or from random numbers. Given m plaintext pairs {(Pi,0, Pi,1), i ∈ [0, m − 1]} and a target cipher, the resulting ciphertext pairs {(Ci,0, Ci,1), i ∈ [0, m − 1]} are regarded as one sample. Each sample is attached with a label Y:

Y = 1, if Pi,0 ⊕ Pi,1 = ∆, i ∈ [0, m − 1];
Y = 0, if Pi,0 ⊕ Pi,1 ≠ ∆, i ∈ [0, m − 1].

A large number of samples are fed into the neural network for training. The ND model can then be described as:

Pr(Y = 1 | X0, . . . , Xm−1) = F (f(X0), · · · , f(Xm−1), ϕ(f(X0), · · · , f(Xm−1))),
Xi = (Ci,0, Ci,1), i ∈ [0, m − 1],
Pr(Y = 1 | X0, · · · , Xm−1) ∈ [0, 1],

where f(Xi) represents the basic features of a ciphertext pair Xi, ϕ(·) is the derived features, and F(·) is the new posterior probability estimation function.
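The sample construction above can be sketched as follows. The helper name and its interface are our own (hypothetical), and the cipher is abstracted as an `encrypt` callback, so this is a sketch of the labeling rule rather than the authors' data pipeline.

```python
import random

# Illustrative sketch (our own code): build one sample of m ciphertext pairs.
# Label 1: every plaintext pair satisfies the input difference Delta.
# Label 0: the second plaintext of each pair is drawn uniformly at random.
def make_sample(encrypt, delta, m, label, rng=random):
    pairs = []
    for _ in range(m):
        p0 = (rng.getrandbits(16), rng.getrandbits(16))
        if label == 1:
            p1 = (p0[0] ^ delta[0], p0[1] ^ delta[1])
        else:
            p1 = (rng.getrandbits(16), rng.getrandbits(16))
        pairs.append((encrypt(p0), encrypt(p1)))
    return pairs, label
```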
3 Neural Distinguisher for Simeck32/64

It is crucial that a well-performing ND be obtained before a key recovery attack can be conducted. In this section, we provide the state-of-the-art NDs for Simeck32/64. More importantly, the DDs resulting from the input difference (0x0000, 0x0040) are computed up to 13 rounds of Simeck32/64. These DDs provide a solid baseline for the NDs.

3.1 Construction of the Dataset

Data quality is fundamentally the most important factor affecting the goodness of a model. Constructing a good dataset for NDs requires answering the following questions:

• How to select a good input difference?
• What data format is used for a sample?
• How many ciphertext pairs are contained in a sample?

Input Difference. Numerous experiments have shown that the input difference has a significant impact on the accuracy of the NDs/DDs [4,6,7,8,9,10,13,14]. At the same time, obtaining better results in the key recovery attack depends on whether the input difference of the NDs leads to better accuracy while also leading to prepended CDs with high probability. Therefore, it is also necessary to consider the number of rounds and the neutral bits of the prepended CDs.

The choice of the input difference of NDs varies depending on the block cipher. For Simeck32/64, Lyu et al. [13] presented two methods to select the input difference of the NDs. In the first method, the input difference for the NDs is selected from the input differences of the classical differential trails in the existing literature. In the second method, a MILP model was used to find input differences of classical differential transitions with high probabilities; NDs based on these input differences were then trained for short epochs, and the NDs whose input differences gave higher accuracy were selected for training over long epochs. However, they did not consider the effect of the Hamming weight of the input difference on the neural network. Lu et al. [12] studied the effect of input differences of Hamming weight less than or equal to 3 on the performance of HDs, and their experiments showed that the input difference (0, ei) is a good choice to obtain an HD for Simon-like ciphers. Eventually, they built NDs for Simeck32/64 up to 12 rounds with the input difference (0x0000, 0x0040).

In this paper, we further explore the neutral bits of the input difference (0x0000, 0x0040) (see Sect. 4.1) and, after a comprehensive comparison, choose this input difference.

Data Format. In the process of training an ND, the format of the sample needs to be specified in advance. This format is referred to as the ND's data format for convenience. The most intuitive data format is the ciphertext pair (C, C′) = (xr, yr, x′r, y′r), which is used in Gohr's network for Speck32/64 in [8,9]. As the research progressed, Benamira et al. [7] constructed a new data format (xr ⊕ x′r, xr ⊕ x′r ⊕ yr ⊕ y′r, xr ⊕ yr, x′r ⊕ y′r) from the output of the first convolution layer of Gohr's neural network for Speck32/64, where xr ⊕ x′r represents the left-branch difference of the ciphertext, xr ⊕ x′r ⊕ yr ⊕ y′r represents the right-branch difference after decrypting the ciphertexts one round without knowing the (r − 1)-th subkey according to the round function of Speck, and xr ⊕ yr / x′r ⊕ y′r represents the right-branch ciphertext C/C′ of the penultimate round. This shows that the data format is closely related to the structure of the cipher.

Bao et al. [4] adopted data of the form (xr−1, x′r−1, yr−1 ⊕ y′r−1) for Simon32/64, since when the output of the r-th round (C, C′) = (xr, yr, x′r, y′r) is known, one can directly compute (xr−1, x′r−1, yr−1 ⊕ y′r−1) without knowing the (r − 1)-th subkey, according to the round function of Simon-like ciphers. Lu et al. [12] further proposed a new data format (∆xr, ∆yr, xr, yr, x′r, y′r, ∆yr−1, p∆yr−2) and obtained better performance. The details are illustrated in Fig. 1; this data format is used in this paper due to its superiority.

Using Multiple Ciphertext Pairs. Gohr et al. [9] showed that for a single ciphertext pair, only the difference may provide information for Simon. One option to surpass the DDs is to use multiple ciphertext pairs simultaneously, exploiting dependencies between the pairs, especially if the key is fixed. Therefore, we use multiple ciphertext pairs for training, and the results (Section 3.4) confirm that multiple ciphertext pairs indeed help to surpass the DDs, albeit only in some rounds. One current trend in deep learning-assisted cryptanalysis is the employment of multiple ciphertext pairs per sample, and our results offer solid evidence in favor of this trend.

With the three questions above addressed, the dataset can be generated. Specifically, the training and test sets were generated by using the Linux random number generator to obtain uniformly distributed keys Ki and multiple plaintext pairs {(Pi,j,0, Pi,j,1), j ∈ [0, m − 1]} with the input difference (0x0000, 0x0040), as well as a vector of binary-valued labels Yi. During the production of the training or test sets for r-round Simeck32/64, the multiple plaintext pairs were encrypted for r rounds if Yi = 1; otherwise, the second plaintext of each pair was replaced with a freshly generated random plaintext and then encrypted for r rounds. The r-round ciphertext pairs are then used to generate samples with data of the form (∆xr, ∆yr, xr, yr, x′r, y′r, ∆yr−1, p∆yr−2).
[Figure 1 depicts the last two rounds of a Simon-like cipher (rotations S^a, S^b, S^c and subkeys kr−1, kr−2), showing how each component of the data format is derived from (∆xr, ∆yr), ∆xr−1 = ∆yr, ∆yr−1, ∆xr−2 = ∆yr−1, and p∆yr−2.]

Fig. 1. Notation of the data format for Simon-like ciphers, where yr−1 = S^a(yr) ⊙ S^b(yr) ⊕ S^c(yr) ⊕ xr ⊕ kr−1 ≜ A ⊕ kr−1, y′r−1 = S^a(y′r) ⊙ S^b(y′r) ⊕ S^c(y′r) ⊕ x′r ⊕ kr−1 ≜ A′ ⊕ kr−1, and p∆yr−2 = S^a(A) ⊙ S^b(A) ⊕ S^c(A) ⊕ yr ⊕ S^a(A′) ⊙ S^b(A′) ⊕ S^c(A′) ⊕ y′r.
3.2 Network Architecture

In CRYPTO 2019, Gohr [8] used a residual network to capture the differential information between ciphertext pairs, thus obtaining the ND for Speck32/64. To learn the XOR relation at the same position of the ciphertext, a one-dimensional convolution of kernel size 1 is used in Gohr's network architecture. Since there may be some intrinsic connection between several adjacent bits, Zhang et al. [16] added multiple one-dimensional convolutional layers with different kernel sizes in front of the residual block, according to the circular shift operations in the round functions of Speck32/64 and Simon32/64. In this paper, we improve Zhang et al.'s neural network to fit the round function of Simeck and thereby improve the accuracy of the NDs; the framework is shown in Fig. 2.

Initial Convolution (Module 1). The input layer is connected to the initial convolutional layer, which comprises two convolutional layers with Nf channels of kernel sizes 1 and 5. The two convolutional layers are concatenated along the channel dimension. Batch normalization is applied to the output of the concatenated layers. Finally, rectifier nonlinearity is applied to the output of batch normalization, and the resulting [m, ω, 2Nf] matrix is passed to the convolutional blocks layer, where m = 8, ω = 16, and Nf = 32.

Convolutional Blocks (Module 2). Each convolutional block consists of two layers of 2Nf filters. Each block first applies a convolution with kernel size ks, then batch normalization, and finally a rectifier layer. At the end of the convolutional block, a skip connection adds the output of the final rectifier layer of the block to the input of the convolutional block and transfers the result to the next block. After each convolutional block, the kernel size ks increases by 2, starting from ks = 3. The number of convolutional blocks is 5 in our model.

[Figure 2 shows the pipeline Input → Module 1 → Module 2 (repeated) → Module 3 → Output: Module 1 is Conv(1, Nf) ∥ Conv(5, Nf) → Concatenate(2Nf) → BN → ReLU; Module 2 is Conv(ks, 2Nf) → BN → ReLU → Conv(ks, 2Nf) → BN → ReLU, with ks = ks + 2 between blocks; Module 3 is FC(d1) → BN → ReLU → FC(d2) → BN → ReLU; Output is FC(1) → Sigmoid.]

Fig. 2. The network architecture for Simeck32/64

Prediction Head (Module 3 and Output). The prediction head consists of two hidden layers and one output unit. The fully connected layers comprise d1 and d2 units, followed by batch normalization and rectifier layers, where d1 = 512 and d2 = 64. The final layer consists of a single output unit using the Sigmoid activation function.
3.3 The Training Method of the Differential-Neural Distinguisher

Accuracy is the most critical indicator of the performance of a neural distinguisher. The following training methods were carried out to verify the performance of our NDs.

Basic Training Scheme. We run the training for 20 epochs on a dataset of size N = 2×10^7, with a test set of size M = 2×10^6. We set the batch size to 30000 and used the MirroredStrategy of TensorFlow to distribute it equally among the 6 GPUs. Optimization was performed against mean square error loss plus a small penalty based on an L2 weights regularization parameter c = 10^−5, using the Adam algorithm [11]. A cyclic learning rate schedule was applied, setting the learning rate li for epoch i to li = α + ((n − i) mod (n + 1))/n · (β − α), with α = 10^−4, β = 2×10^−3, and n = 9. The networks obtained at the end of each epoch were stored, and the best network by validation loss was evaluated against a test set.
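The cyclic schedule above is a one-liner; this sketch simply instantiates the formula with the paper's constants so its behavior can be checked (the rate starts at β, decays linearly to α over n + 1 epochs, then restarts):

```python
# Cyclic learning rate l_i = alpha + ((n - i) mod (n + 1)) / n * (beta - alpha),
# with the paper's constants alpha = 1e-4, beta = 2e-3, n = 9.
def cyclic_lr(i, alpha=1e-4, beta=2e-3, n=9):
    return alpha + ((n - i) % (n + 1)) / n * (beta - alpha)
```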
Training Using the Staged Training Method. We use several stages of pre-training to train an r-round ND for Simeck. First, we use our (r − 1)-round distinguisher to recognize (r − 3)-round Simeck with the input difference (0x0140, 0x0080) (the most likely difference to appear three rounds after the input difference (0x0000, 0x0040)). The training was done on 2×10^7 instances for 10 epochs with a cyclic learning rate schedule (2×10^−3, 10^−4). Then we trained the distinguisher to recognize r-round Simeck with the input difference (0x0000, 0x0040) by processing 2×10^7 freshly generated instances for 10 epochs with a cyclic learning rate schedule (10^−4, 10^−5). Finally, the learning rate was dropped to 10^−5 after processing another 2×10^7 new instances for 10 epochs.
3.4 Comparison of Results

We present the state-of-the-art NDs for Simeck32/64. Meanwhile, we calculate the DDs for Simeck32/64 induced by the input difference (0x0000, 0x0040) up to 13 rounds to give baselines for the NDs (see Table 2). This is accomplished using the frameworks of Gohr's implementation for Speck32/64 and Bao et al.'s implementation for Simon32/64. The calculation is feasible for Simeck32/64 but quite expensive: it took about 939 core-days of computation time and yielded about 34 gigabytes of distribution data for each round, which was saved on disk for further studies.

Table 2. Accuracy of the DDs for Simeck32/64 with input difference (0x0000, 0x0040). Combined means that the corresponding single-pair distinguisher was used by combining the scores under the independence assumption. For this, 2×10^6 samples, each consisting of the given number of pairs m, were used to evaluate the accuracy.

R \ m     1       2       4       8       16      32      64      128     256
 7     0.9040  0.9765  0.9936  0.9996  1.0     1.0     1.0     1.0     1.0
 8     0.7105  0.7921  0.8786  0.9518  0.9907  0.9995  1.0     1.0     1.0
 9     0.5738  0.6097  0.6590  0.7221  0.8011  0.8848  0.9554  0.9919  0.9998
10     0.5194  0.5299  0.5462  0.5677  0.5984  0.6403  0.6977  0.7690  0.8517
11     0.5044  0.5068  0.5109  0.5176  0.5247  0.5364  0.5530  0.5761  0.6085
12     0.5010  0.5017  0.5025  0.5039  0.5055  0.5083  0.5121  0.5176  0.5259
13     0.5002  0.5001  0.5007  0.5009  0.5012  0.5016  0.5032  0.5039  0.5086
It is important to note that when multiple ciphertext pairs are used as a single
sample for the NDs, it is not fair to compare against the accuracy of DDs computed
with only one ciphertext pair per sample. In fact, the accuracy of DDs with
multiple ciphertext pairs per sample can be calculated as well. This calculation
is implicitly used by Gohr in [8], and later Gohr et al. [9] explicitly proposed
rules for combining probabilities/distinguisher responses (see Corollary 2 in [9]).
One can use this rule to explicitly convert a distinguisher for one ciphertext
pair into one for an arbitrary number of ciphertext pairs. Algorithm 2 gives the
pseudo-code for computing this distinguisher, and the results are shown in Table 2.
Algorithm 2: Convert the DD for one ciphertext pair into one for m
ciphertext pairs.
Input: DDT: the R-round DDT table; N: the number of samples; m: the number
       of ciphertext pairs combined into one sample.
Output: the combined Acc, TPR, TNR with m ciphertext pairs.
1  Y ← {}
2  for i = 1 to N do
3      Y[i·m] ← random{0, 1}
4      for j = 1 to m − 1 do
5          Y[i·m − j] ← Y[i·m]
6      end
7  end
8  Randomly generate N·m samples [x_1, x_2, ..., x_{N·m}] according to Y
9  Z ← {}
10 for i = 1 to N·m do
11     Z[i] ← DDT[x_i]
12 end
13 Z ← Z / (Z + 2^{−32})
14 Z ← mean(Z.reshape(N, m), axis=1)
15 predict_Y ← {}
16 for i = 1 to N do
17     if Z[i] > 0.5 then
18         predict_Y[i] ← 1
19     else
20         predict_Y[i] ← 0
21     end
22 end
23 calculate Acc, TPR, TNR by comparing predict_Y[i] with Y[i·m]
24 return Acc, TPR, TNR
/* In our experiments, N takes 2^20 when m is no more than 2^10. */
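A rough NumPy sketch of this conversion is given below; the `ddt_prob` lookup and the two samplers are hypothetical stand-ins for the precomputed DDT table and the real/random sample generators, which are far too large to embed here:

```python
import numpy as np

def combined_dd_accuracy(ddt_prob, sample_real, sample_random, m, n=1 << 12):
    """Turn a one-pair DD into an m-pair DD and measure Acc/TPR/TNR.

    ddt_prob(x): DDT probability of each difference sample (stand-in).
    sample_real(n) / sample_random(n): draw n single-pair samples from
    the real and random distributions, respectively.
    """
    rates = {}
    for label, sampler in ((1, sample_real), (0, sample_random)):
        x = sampler(n * m)                 # n*m single-pair samples
        p = ddt_prob(x)                    # DDT probability per sample
        z = p / (p + 2.0 ** -32)           # likelihood vs. uniform noise
        z = z.reshape(n, m).mean(axis=1)   # combine m pairs per sample
        pred = (z > 0.5).astype(int)       # threshold the combined score
        rates[label] = float(np.mean(pred == label))
    tpr, tnr = rates[1], rates[0]
    return (tpr + tnr) / 2, tpr, tnr
```

The reshape-and-mean on line 14 of Algorithm 2 is exactly the `z.reshape(n, m).mean(axis=1)` step: m independent one-pair responses collapse into one m-pair response.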
In addition, an r-round ND should be compared with an (r − 1)-round DD: since
the data fed to the r-round ND is the ciphertext value, one can directly compute
the differences of the (r − 1)-round outputs without knowing the subkey. The
results are presented in Table 3, which shows that we improved the accuracy of
the NDs for Simeck32/64. More importantly, they surpass the accuracy of the
DDs for 9 and 10 rounds.
Table 3. Comparison of NDs on Simeck32/64 with 8 ciphertext pairs as a sample.
The input difference of the ND/DD is (0x0000, 0x0040). ⋆: the staged training
method is used to train the ND.

R   Attack  Network    Acc      TPR     TNR     Ref.
9   DD      DDT        0.9518   0.9604  0.9433  Sect. 3
    ND      SE-ResNet  0.9952   0.9989  0.9914  [12]
    ND      Inception  0.9954   0.9986  0.9920  Sect. 3
10  DD      DDT        0.7221   0.7126  0.7316  Sect. 3
    ND      SE-ResNet  0.7354   0.7207  0.7501  [12]
    ND      Inception  0.7371   0.7165  0.7525  Sect. 3
11  DD      DDT        0.5677   0.5416  0.5940  Sect. 3
    ND      SE-ResNet  0.5646   0.5356  0.5936  [12]
    ND      Inception  0.5657   0.5363  0.5954  Sect. 3
    ND      Inception  0.5666⋆  0.5441  0.5895  Sect. 3
12  DD      DDT        0.5176   0.4737  0.5615  Sect. 3
    ND      SE-ResNet  0.5146⋆  0.4770  0.5522  [12]
    ND      Inception  0.5161⋆  0.4807  0.5504  Sect. 3
4   Neutral Bits and Wrong Key Response Profile

In Sect. 3, we provided the state-of-the-art NDs for Simeck32/64, which we use
to perform better key recovery attacks in the following section. In [8], Gohr
provides a framework for a (1 + s + r + 1)-round key recovery attack (refer to
Appendix A.1) consisting of three techniques to increase the success rate and
speed up the attacks, where s is the length of the CD and r is the length of
the ND. These techniques are described below.
Neutral Bits. In the key recovery attack, multiple samples (formed into a
ciphertext structure) decrypted under a guessed subkey are predicted using the
distinguisher. The multiple scores are then combined according to the formula
v_k = Σ_{i=1}^{nb} log2( Z_i^k / (1 − Z_i^k) ) as the final score of that guessed
subkey, to reduce the misjudgment rate of the ND. Since the CD prepended in
front of the ND is probabilistic, the samples entering the distinguisher do not
all follow the same distribution. Multiple samples generated via neutral bits,
however, will have the same distribution. Also, the lower the accuracy of the
distinguisher, the more neutral bits are needed.
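The score combination is a one-liner (a minimal illustration; the responses Z_i here stand for the per-sample outputs of the distinguisher for one guessed subkey):

```python
import math

def combined_key_score(responses):
    """Combine the per-sample distinguisher responses Z_i for one guessed
    subkey into a single score via the log-likelihood-ratio sum
    v_k = sum_i log2(Z_i / (1 - Z_i))."""
    return sum(math.log2(z / (1.0 - z)) for z in responses)
```

Responses above 0.5 push the score up and responses below 0.5 push it down, so a few random-looking samples cancel out instead of drowning a correct key guess.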
Priority of Ciphertext Structures. Spending the same amount of computation
on every ciphertext structure is inefficient. Gohr used a generic method
(automatic exploitation-versus-exploration trade-off based on Upper Confidence
Bounds) to focus the key search on the most promising ciphertext structures.
The priority score of each ciphertext structure is
s_i = ω_max^i + √(n_c) · √(log2(j)/n_i),
where ω_max^i denotes the highest distinguisher score obtained so far for the
i-th ciphertext structure, n_i the number of previous iterations in which the
i-th ciphertext structure was selected, j the number of the current iteration,
and n_c the number of ciphertext structures available.
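Under the notation above, the priority computation can be sketched as:

```python
import math

def priorities(omega_max, n_visits, j):
    """UCB-style priority s_i = omega_max_i + sqrt(n_c) * sqrt(log2(j) / n_i),
    where n_c is the number of ciphertext structures."""
    alpha = math.sqrt(len(omega_max))  # alpha = sqrt(n_c)
    return [w + alpha * math.sqrt(math.log2(j) / n)
            for w, n in zip(omega_max, n_visits)]
```

The structure with the highest s_i is processed next; structures that have not been visited for a while regain priority through the √(log2(j)/n_i) bonus term.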
Wrong Key Response Profile. The key search policy based on Bayesian
optimization drastically reduces the number of trial decryptions. The basic
idea of this policy is the wrong key randomization hypothesis. This hypothesis
does not hold when only one round of trial decryption is performed, especially
for a lightweight cipher: the expected response of the ND upon wrong-key
decryption depends on the bitwise difference between the trial key and the real
key. This wrong-key response profile can be captured in a precomputation. Given
some trial decryptions, the optimization step then tries to come up with a new
set of candidate keys to try. These new candidate keys are chosen to maximize
the probability of the observed distinguisher responses.
4.1   Exploring Neutral Bits

To attack more rounds with the ND, a CD is generally prepended in front of the
ND. For the resulting HD used in the key recovery attack, it is not
straightforward to aggregate enough samples of the same distribution to feed
the ND, due to the prepended CD. To overcome this problem, Gohr [8] used the
neutral bits of the CD. The more neutral bits there are for the prepended CD,
the more samples of the same distribution can be generated for the ND. However,
the longer the CD, the fewer the neutral bits in general. Finding enough
neutral bits for prepending a long CD over a weak ND thus becomes a difficult
problem when devising a key recovery attack covering more rounds. To solve this
problem, Bao et al. exploited various generalized NBs to make weak NDs usable
again. In particular, they employed conditional simultaneous neutral bit-sets
(CSNBS) and switching bits for adjoining differentials (SBfAD), which are
essential for achieving efficient 12-round and practical 13-round attacks on
Speck32/64.

Thus, the first part of the key recovery attack focuses on finding various
types of neutral bits. Given a differential, finding the neutral bits generally
involves two steps: first, collect enough conforming pairs (correct pairs);
second, flip the target bit of each conforming pair, or flip all the bits
contained in the target set of bits, and check the probability that the new
plaintext pair is still a conforming pair.
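The second step can be sketched generically as follows (a hypothetical `encrypt` oracle for the reduced-round cipher is assumed; `pairs` are known conforming pairs of the differential):

```python
def neutrality_rate(encrypt, pairs, bit_set, out_diff):
    """Estimate the neutrality of a bit-set: the fraction of conforming
    pairs that remain conforming after flipping all bits in bit_set
    (bit indices into the plaintext) in both plaintexts."""
    mask = 0
    for b in bit_set:
        mask |= 1 << b
    hits = 0
    for p0, p1 in pairs:
        c0 = encrypt(p0 ^ mask)
        c1 = encrypt(p1 ^ mask)
        if c0 ^ c1 == out_diff:  # still satisfies the output difference?
            hits += 1
    return hits / len(pairs)
```

A rate of 1.0 over enough pairs marks the set as a (simultaneous-)neutral bit-set; conditional sets only reach 1.0 on pairs satisfying the extra plaintext conditions.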
Finding SNBSs for the 3-round Differential. For the prepended 3-round CD
(0x0140, 0x0200) → (0x0000, 0x0040) on top of the NDs, one can experimentally
obtain 14 deterministic NBs and 2 SNBSs (simultaneously complementing up to
4 bits) using an exhaustive search. Concretely, for the 3-round differential
(0x0140, 0x0200) → (0x0000, 0x0040), the (simultaneous-) neutral bits and
bit-sets are [3], [4], [5], [7], [8], [9], [13], [14], [15], [18], [20], [22],
[24], [30], [0, 31], [10, 25].

Finding SNBSs for the 4-round Differential. For the prepended 4-round CD
(0x0300, 0x0440) → (0x0000, 0x0040) on top of the NDs, there are 7 complete
NBs/SNBSs: [2], [4], [6], [8], [14], [9, 24], [9, 10, 25]. Still, these
NBs/SNBSs are not numerous enough for appending a weak neural network
distinguisher. Thus, conditional ones were searched for using Algorithm 3 in
paper [4], and the obtained CSNBSs and their conditions are summarized in
Table 4.
Table 4. CSNBS for the 4-round Classical Differential
(0x0300, 0x0440) → (0x0000, 0x0040) of Simeck32/64.

x[0, 10]            x[2, 12]
Bit-set      C.     Bit-set      C.
[21]         00     [23]         00
[21, 5]      10     [23, 12]     10
[21, 10]     01     [23, 7]      01
[21, 10, 5]  11     [23, 12, 7]  11

C.: Condition on x[i, j]; e.g., x[i, j] = 10 means x[i] = 1 and x[j] = 0.
4.2   Wrong Key Response Profile

To calculate the r-round wrong key response profile, we generated 3000 random
keys and multiple input pairs {(P_{i,0}, P_{i,1}), i ∈ [0, m − 1]} for each
difference δ ∈ (0, 2^16) and encrypted them for r + 1 rounds to obtain
ciphertexts {(C_{i,0}, C_{i,1}), i ∈ [0, m − 1]}, where P_{i,0} ⊕ P_{i,1} = ∆.
Denoting the final real subkey of each encryption operation by k, we then
performed a single-round decryption to get
E^{−1}_{k⊕δ}({C_{i,0}, i ∈ [0, m − 1]}) and
E^{−1}_{k⊕δ}({C_{i,1}, i ∈ [0, m − 1]}),
and had the resulting partially decrypted ciphertext pairs rated by an r-round
ND. µ_δ and σ_δ were then calculated as the empirical mean and standard
deviation over these 3000 trials. We call this the r-round wrong key response
profile, WKRP_r. From the wrong key response profile, we can derive some rules
to speed up the key recovery attack.
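A scaled-down sketch of this precomputation is given below. `nd_score` is a stand-in for a trained neural distinguisher; the one-round decryption assumes the standard Simeck Feistel round (L, R) → (R ⊕ f(L) ⊕ k, L) with f(x) = (x & (x ⋘ 5)) ⊕ (x ⋘ 1) [15]:

```python
import numpy as np

MASK16 = 0xFFFF

def rol16(x, r):
    """Rotate a 16-bit word left by r positions."""
    return ((x << r) | (x >> (16 - r))) & MASK16

def simeck_dec_round(l, r, k):
    """Invert one Simeck32/64 round (L, R) -> (R ^ f(L) ^ k, L)."""
    f = (r & rol16(r, 5)) ^ rol16(r, 1)
    return r, (l ^ f ^ k) & MASK16

def wkrp(nd_score, ct_pairs, real_key, num_deltas):
    """Empirical wrong-key response profile: mean and std of the ND score
    after one-round decryption with the key real_key ^ delta."""
    mu, sigma = [], []
    for delta in range(num_deltas):
        k = real_key ^ delta
        scores = [nd_score(simeck_dec_round(*c0, k), simeck_dec_round(*c1, k))
                  for c0, c1 in ct_pairs]
        mu.append(float(np.mean(scores)))
        sigma.append(float(np.std(scores)))
    return mu, sigma
```

In the paper's setting, `num_deltas` would be 2^16 and the means/deviations would be taken over the 3000 random-key trials.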
• Analysis of WKRP9. In Figure 3a, when the difference δ between the guessed
key and the real key is greater than 16384, the score of the distinguisher is
close to 0. This indicates that the score of the distinguisher is very low when
the 14-th or 15-th bit is guessed incorrectly. When
δ ∈ {2048, 4096, 8192, 10240, 12288, 14336}, the score of the distinguisher is
greater than 0.6. This indicates that when the 11-th, 12-th, and 13-th bits are
guessed incorrectly, this has little effect on the score of the distinguisher.

• Analysis of WKRP10 and WKRP11. It is clear from Figure 3b that when δ is
greater than 32768, the score of the distinguisher is less than 0.45, i.e., the
15-th bit has a greater impact on the distinguisher score. When
δ ∈ {4096, 8192, 12288}, the score of the distinguisher is close to 0.55. This
indicates that when the 12-th and 13-th bits are guessed incorrectly, this has
little effect on the score of the distinguisher. It can also be observed from
Figure 3c that the 12-th and 13-th bits have less influence on the score of the
distinguisher, while the 14-th and 15-th bits have more influence.

• Analysis of WKRP12. Despite the small differences in scores in Figure 3d,
we found that when only the 12-th and 13-th bits are wrongly guessed, the score
of the distinguisher is still higher than at the other positions.
(a) WKRP9   (b) WKRP10   (c) WKRP11   (d) WKRP12

Fig. 3. Wrong Key Response Profile for Simeck32/64.
+
882
+ 1.0
883
+ 0.50
884
+ 0.8
885
+ Meanresponse
886
+ 0.6
887
+ 0.4
888
+ 0.2
889
+ 0.0
890
+ 0
891
+ 4096
892
+ Differencetorealkey0.50
893
+ 0.65
894
+ 0.60
895
+ 0.55
896
+ response
897
+ 0.50
898
+ Mean
899
+ 0.45
900
+ 0.40
901
+ 0.35
902
+ 0
903
+ 4096
904
+ 81921228816384204802457628672327683686440960450564915253248573446144065536
905
+ Differenceto realkey0.520
906
+ 0.50
907
+ 0.515
908
+ 0.510
909
+ 0.505
910
+ 0.500
911
+ Mean
912
+ 0.495
913
+ 0.490
914
+ 0.485
915
+ 0.480
916
+ 0
917
+ Differenceto realkey0.50
918
+ 0.5015
919
+ 0.5010
920
+ 0.5005
921
+ response
922
+ 0.5000
923
+ Mean
924
+ 0.4995
925
+ 0.4990
926
+ 0.4985
927
+ 0.4980
928
+ 0.4975
929
+ 0
930
+ 4096
931
+ 81921228816384204802457628672327683686440960450564915253248573446144065536
932
+ Differencetoreal keyFrom the four wrong key response profiles, we can conclude that when the
933
+ 14-th and 15-th bit subkeys are guessed incorrectly, it has a greater impact on
934
+ the score of the distinguisher; when the 12-th and 13-th bit subkeys are guessed
935
+ incorrectly, it has a smaller impact on the score of the distinguisher. According
936
+ to these phenomena, we can speed up the key recovery attack.
937
• Guess the 14-th and 15-th bits of the subkey. Since the difference between
the scores of the distinguisher for correct and incorrect guesses of bits 14
and 15 is relatively large, we can determine the values of these two bits
first. Before performing a Bayesian key search, a random set of subkeys is
guessed, the 14-th and 15-th bits of the subkeys are traversed, and the
ciphertext is decrypted using these subkeys. The values of the 14-th and 15-th
bits can then be determined based on the score of the distinguisher. The
Bayesian key search algorithm can easily recover these two bits even if their
values are not determined in advance.

• Ignore the 12-th and 13-th bits of the subkey. Since the 12-th and 13-th
bits of the subkey have less influence on the score of the distinguisher, we
first set these two bits to 0 when generating the first batch of candidate
subkeys, and then randomize the values of these two bits after completing the
Bayesian key sorting and recommending the new candidate subkeys. Previous
researchers have exploited the analogous feature to accelerate key recovery
attacks: for Speck32/64 and Simon32/64, it is the 14-th and 15-th bits of the
subkey that have little impact on the score of the distinguisher when guessed
incorrectly [4,8,16]. The Bayesian key search algorithm considering insensitive
key bits is shown in Algorithm 3.
5   Practical Key Recovery Attack

When a fast graphics card is used, the performance of the implementation is not
limited by the speed of neural network evaluation but by the total number of
iterations on the ciphertext structures. We count a key guess as successful if
the sum of the Hamming weights of the differences between the returned last two
subkeys and the real two subkeys is at most two. The experimental parameters
for the key recovery attacks are denoted as follows.

1. ncts: the number of ciphertext structures.
2. nb: the number of ciphertext pairs in each ciphertext structure.
3. nit: the total number of iterations on the ciphertext structures.
4. c1 and c2: the cutoffs with respect to the scores of the recommended last
   subkey and second-to-last subkey, respectively.
5. nbyit1, ncand1 and nbyit2, ncand2: the number of iterations and the number
   of key candidates within each iteration in the BayesianKeySearch algorithm,
   for guessing the last and the second-to-last subkeys, respectively.
+
973
+ Algorithm 3: BayesianKeySearch Algorithm For Simeck32/64.
974
+ Input: Ciphertext structure C := {C0, · · · , Cnb−1}, a neural distinguisher
975
+ ND, and its wrong key response profile µ and σ, the number of
976
+ candidates to be generated within each iteration ncand, the number of
977
+ iterations nbyit
978
+ Output: The list L of tuples of recommended keys and their scores
979
+ 1 S := {k0, k1, · · · , kncand−1} ← choose ncand values at random without
980
+ replacement from the set of all subkey candidates
981
+ 2 S = S & 0xCFFF
982
+ 3 L ← {}
983
+ 4 for t = 1 to nbyit do
984
+ 5
985
+ for ∀ki ∈ S do
986
+ 6
987
+ for j = 0 to nb − 1 do
988
+ 7
989
+ C
990
+
991
+ j,ki = F −1
992
+ ki (Cj)
993
+ 8
994
+ vj,ki = ND(C
995
+
996
+ j,ki)
997
+ 9
998
+ sj,ki = log2(vj,ki/(1 − vj,ki))
999
+ 10
1000
+ end
1001
+ 11
1002
+ ski = �nb−1
1003
+ j=0 sj,ki; /* the combined score of ki using neutral
1004
+ bits.
1005
+ */
1006
+ 12
1007
+ L ← L∥(ki, ski);
1008
+ 13
1009
+ mki = �nb−1
1010
+ j=0 vj,ki/nb
1011
+ 14
1012
+ end
1013
+ 15
1014
+ for k ∈ {0, 1, · · · , 216 − 1} & 0xCFFF do
1015
+ 16
1016
+ λk = �ncand−1
1017
+ i=0
1018
+ (mki − µki⊕k)2/σ2
1019
+ ki⊕k; /* using wrong key response
1020
+ profile.
1021
+ */
1022
+ 17
1023
+ end
1024
+ 18
1025
+ S ← argsortk(λ)[0 : ncand − 1];
1026
+ 19
1027
+ r := {r0, r1, · · · , rncand−1} ← choose ncand values at (0, 4) at random
1028
+ 20
1029
+ r = r << 12; /* Randomize the 12-th and 13-th bit subkeys.
1030
+ */
1031
+ 21
1032
+ S = S ⊕ r
1033
+ 22 end
1034
+ 23 return L
1035
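The key-ranking core of Algorithm 3 (steps 15–18) can be sketched with NumPy as follows; this is a simplified illustration in which `mu` and `sigma` are the precomputed wrong-key response profile arrays indexed by the key difference:

```python
import numpy as np

def rank_keys(m_scores, mu, sigma, n_cand):
    """Score every key guess k (with bits 12 and 13 masked to 0) by
    lambda_k = sum_i (m_{k_i} - mu_{k_i ^ k})^2 / sigma_{k_i ^ k}^2
    and return the n_cand guesses with the smallest lambda."""
    keys = np.arange(1 << 16)
    keys = keys[(keys & 0x3000) == 0]  # keep only keys with k & 0xCFFF == k
    lam = np.zeros(len(keys))
    for ki, mki in m_scores.items():   # m_scores: {candidate k_i: mean response}
        d = keys ^ ki                  # key difference if k were the real key
        lam += (mki - mu[d]) ** 2 / sigma[d] ** 2
    order = np.argsort(lam)
    return keys[order[:n_cand]]
```

A small λ_k means the observed mean responses match the profile predicted under the hypothesis that k is the real key, which is exactly what the Bayesian search exploits.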
5.1   Complexity Calculation

Theoretical Data Complexity. The theoretical data complexity of an
experiment is calculated by the formula nb × ncts × m × 2. In the actual
experiments, when the accuracy of the ND is high, the key can be recovered
quickly and successfully; not all ciphertext structures are used, so the actual
data complexity is lower than the theoretical one.

Experimental Time Complexity. The time complexity formula in our
experiments is 2^26.693 × rt × log_{1−sr} 0.01, which is borrowed from [16].
Our device can perform 2^26.693 one-round decryptions per second. rt is the
average running time over multiple experiments. The success rate sr is the
number of successfully recovered subkeys divided by the number of experiments.
We calculate how many experiments need to be performed to ensure at least one
success: when the overall success rate is 99%, we consider the experiment
successful, and the number of experiments ne satisfies
1 − (1 − sr)^ne = 0.99, i.e., ne = log_{1−sr} 0.01.
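These formulas are easy to check numerically; the small helper below reproduces the log2 time complexities reported in the experiments (the constant 26.693 is the log2 decryption rate of the device stated above):

```python
import math

def log2_time_complexity(rt_seconds, sr, dec_rate_log2=26.693):
    """log2 of 2^26.693 * rt * log_{1-sr}(0.01): device speed, times the
    average runtime, times the number of experiments n_e needed so that
    1 - (1 - sr)^{n_e} = 0.99."""
    n_e = 1.0 if sr >= 1.0 else math.log(0.01) / math.log(1.0 - sr)
    return dec_rate_log2 + math.log2(rt_seconds * n_e)
```

For example, the 15-round attack's rt = 407.901 s and sr = 99.17% give about 2^35.31, matching the figure reported in Sect. 5.2.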
5.2   Key Recovery Attack on 15-round Simeck32/64

Experiment 1: The components of the key recovery attack ASimeck15R on
15-round Simeck32/64 are as follows.

1. 3-round CD (0x0140, 0x0200) → (0x0000, 0x0040).
2. Neutral bits for generating multiple ciphertext pairs: [3], [4], [5].
3. Neutral bits for the combined response of the neural distinguisher: [7],
   [8], [9], [13], [14], [15], [18], [20].
4. 10-round neural distinguisher NDSimeck10R and its wrong key response
   profiles NDSimeck10R·µ and NDSimeck10R·σ.
5. 9-round neural distinguisher NDSimeck9R and its wrong key response profiles
   NDSimeck9R·µ and NDSimeck9R·σ.

Concrete parameters used in our 15-round key recovery attack ASimeck15R are
listed as follows.

m = 8   nb = 2^8   ncts = 2^10   nit = 2^11   c1 = 10   c2 = 10
nbyit1 = nbyit2 = 5   ncand1 = ncand2 = 32

The theoretical data complexity is m × nb × ncts × 2 = 2^22 plaintexts. The
actual data complexity is 2^19.621. In total, 120 trials were run, of which 119
were successful; thus, the success rate sr is 99.17%. The average running time
of the experiments rt is 407.901s. The time complexity is
2^26.693 × rt × log_{1−sr} 0.01 = 2^35.309.
5.3   Key Recovery Attack on 16-round Simeck32/64

Experiment 2: The components of the key recovery attack ASimeck16R on
16-round Simeck32/64 are as follows.

1. 3-round CD (0x0140, 0x0200) → (0x0000, 0x0040).
2. Neutral bits for generating multiple ciphertext pairs: [3], [4], [5].
3. Neutral bits for the combined response of the neural distinguisher: [7],
   [8], [9], [13], [14], [15], [18], [20], [22], [24].
4. 11-round neural distinguisher NDSimeck11R and its wrong key response
   profiles NDSimeck11R·µ and NDSimeck11R·σ.
5. 10-round neural distinguisher NDSimeck10R and its wrong key response
   profiles NDSimeck10R·µ and NDSimeck10R·σ.

Concrete parameters used in our 16-round key recovery attack ASimeck16R are
listed as follows.

m = 8   nb = 2^10   ncts = 2^10   nit = 2^11   c1 = 10   c2 = 10
nbyit1 = nbyit2 = 5   ncand1 = ncand2 = 32

The theoretical data complexity is m × nb × ncts × 2 = 2^24 plaintexts. The
actual data complexity is 2^22.788. We used 6 processes, each running 20
experiments. Since the memory limit was exceeded during the experiment, one
process was killed, leaving 100 experiments, all 100 of which successfully
recovered the key; thus, the success rate sr is 100%. The average running time
of the experiments rt is 2889.648s. The time complexity is
2^26.693 × rt = 2^38.189.
5.4   Key Recovery Attack on 17-round Simeck32/64

Experiment 3: The components of the key recovery attack ASimeck17R on
17-round Simeck32/64 are as follows.

1. 3-round CD (0x0140, 0x0200) → (0x0000, 0x0040).
2. Neutral bits for generating multiple ciphertext pairs: [3], [4], [5].
3. Neutral bits for the combined response of the neural distinguisher: [7],
   [8], [9], [13], [14], [15], [18], [20], [22], [24], [30], [0, 31].
4. 12-round neural distinguisher NDSimeck12R and its wrong key response
   profiles NDSimeck12R·µ and NDSimeck12R·σ.
5. 11-round neural distinguisher NDSimeck11R and its wrong key response
   profiles NDSimeck11R·µ and NDSimeck11R·σ.

Concrete parameters used in our 17-round key recovery attack ASimeck17R are
listed as follows.

m = 8   nb = 2^12   ncts = 2^10   nit = 2^11   c1 = 20   c2 = −120
nbyit1 = nbyit2 = 5   ncand1 = ncand2 = 32

The theoretical data complexity is m × nb × ncts × 2 = 2^26 plaintexts. The
actual data complexity is 2^25.935. In total, 50 trials were run, of which 15
were successful; thus, the success rate sr is 30%. The average running time of
the experiments rt is 25774.822s. The time complexity is
2^26.693 × rt × log_{1−sr} 0.01 = 2^45.037.
Remark 1. There are two reasons why we do not launch a 17-round key recovery
attack using a 4-round CD and an 11-round ND. One is that the probability of
the 4-round CD (0x0300, 0x0440) → (0x0000, 0x0040) is only about 2^−12 (the
probability of the 3-round CD (0x0140, 0x0200) → (0x0000, 0x0040) is about
2^−8), resulting in too much data being required; the second is that there are
not enough neutral bits in the 4-round CD.
6   Conclusion

In this paper, we show practical key recovery attacks on up to 17 rounds of
Simeck32/64, raising the technical level of practical attacks by two rounds. We
design a neural network that fits the round function of Simeck to improve the
accuracy of the neural distinguishers; it is able to outperform the DDT-based
distinguisher in some rounds. To extend the key recovery attack to more rounds,
we make a concerted effort on both the classical differential and the neural
distinguisher. In addition, we optimize the key recovery attack process by
deeply analyzing the wrong key response profile, thus reducing the complexity
of the key recovery attack.
References

1. Aagaard, M., AlTawy, R., Gong, G., Mandal, K., Rohit, R.: ACE: An
   authenticated encryption and hash algorithm. Submission to NIST-LWC
   (announced as round 2 candidate on August 30, 2019) (2019)
2. AlTawy, R., Gong, G., He, M., Jha, A., Mandal, K., Nandi, M., Rohit, R.:
   SpoC: An authenticated cipher. Submission to the NIST LWC competition (2019)
3. AlTawy, R., Gong, G., He, M., Mandal, K., Rohit, R.: Spix: An authenticated
   cipher. Submission to the NIST Lightweight Standardization Process (2019)
4. Bao, Z., Guo, J., Liu, M., Ma, L., Tu, Y.: Enhancing differential-neural
   cryptanalysis. In: International Conference on the Theory and Application of
   Cryptology and Information Security. Springer (2022)
5. Beaulieu, R., Shors, D., Smith, J., Treatman-Clark, S., Weeks, B.,
   Wingers, L.: The Simon and Speck lightweight block ciphers. In: Proceedings
   of the 52nd Annual Design Automation Conference. pp. 1–6 (2015)
6. Bellini, E., Gerault, D., Hambitzer, A., Rossi, M.: A cipher-agnostic neural
   training pipeline with automated finding of good input differences.
   Cryptology ePrint Archive (2022)
7. Benamira, A., Gerault, D., Peyrin, T., Tan, Q.Q.: A deeper look at machine
   learning-based cryptanalysis. In: Annual International Conference on the
   Theory and Applications of Cryptographic Techniques. pp. 805–835. Springer
   (2021)
8. Gohr, A.: Improving attacks on round-reduced Speck32/64 using deep learning.
   In: Annual International Cryptology Conference. pp. 150–179. Springer (2019)
9. Gohr, A., Leander, G., Neumann, P.: An assessment of differential-neural
   distinguishers. Cryptology ePrint Archive (2022)
10. Hou, Z., Ren, J., Chen, S.: Improve neural distinguishers of Simon and
    Speck. Security and Communication Networks 2021 (2021)
11. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv
    preprint arXiv:1412.6980 (2014)
12. Lu, J., Liu, G., Liu, Y., Sun, B., Li, C., Liu, L.: Improved neural
    distinguishers with (related-key) differentials: Applications in Simon and
    Simeck. arXiv preprint arXiv:2201.03767 (2022)
13. Lyu, L., Tu, Y., Zhang, Y.: Deep learning assisted key recovery attack for
    round-reduced Simeck32/64. In: International Conference on Information
    Security. pp. 443–463. Springer (2022)
14. Yadav, T., Kumar, M.: Differential-ML distinguisher: Machine learning based
    generic extension for differential cryptanalysis. In: International
    Conference on Cryptology and Information Security in Latin America.
    pp. 191–212. Springer (2021)
15. Yang, G., Zhu, B., Suder, V., Aagaard, M.D., Gong, G.: The Simeck family of
    lightweight block ciphers. In: International Workshop on Cryptographic
    Hardware and Embedded Systems. pp. 307–329. Springer (2015)
16. Zhang, L., Wang, Z., Wang, B.: Improving differential-neural cryptanalysis
    with inception blocks. Cryptology ePrint Archive (2022)
A   Appendix

A.1   Procedure of the (1 + s + r + 1)-round key recovery attack

The attack procedure is as follows.

1. Initialize variables Gbestkey ← (None, None), Gbestscore ← −∞.
2. Generate ncts random plaintext pairs with difference ∆P.
3. From the ncts plaintext pairs, use log2(m) neutral bits with probability one
   to generate ncts multiple plaintext pairs; each multiple plaintext pair
   contains m plaintext pairs.
4. From the ncts multiple plaintext pairs, generate ncts plaintext structures
   using nb generalized neutral bits.
5. Decrypt one round using zero as the subkey for all multiple plaintext pairs
   in the structures and obtain ncts plaintext structures.
6. Query the ciphertexts under (1 + s + r + 1)-round Simeck32/64 for the
   ncts × nb × 2 plaintexts of the structures, thus obtaining ncts ciphertext
   structures, denoted by {C1, ..., Cncts}.
7. Initialize an array ωmax and an array nvisit to record, for each ciphertext
   structure, the highest distinguisher score obtained so far and the number of
   visits it has received in the last-subkey search.
8. Initialize variables bestscore ← −∞, bestkey ← (None, None),
   bestpos ← None to record the best score, the corresponding best recommended
   values of the two subkeys obtained among all ciphertext structures, and the
   index of that ciphertext structure.
9. For j from 1 to nit:
   (a) Compute the priority of each ciphertext structure as
       si = ωmaxi + α · √(log2(j)/nvisiti), for i ∈ {1, ..., ncts}, with
       α = √ncts. The priority formula follows a general method in
       reinforcement learning for achieving an automatic
       exploitation-versus-exploration trade-off based on Upper Confidence
       Bounds; it focuses the key search on the most promising ciphertext
       structures [8].
   (b) Pick the ciphertext structure with the highest priority score for
       further processing in this j-th iteration; denote it by C and its index
       by idx, and set nvisitidx ← nvisitidx + 1.
   (c) Run the BayesianKeySearch algorithm [8] with C, the r-round neural
       distinguisher NDr and its wrong key response profile NDr·µ and NDr·σ,
       ncand1, and nbyit1 as input parameters; obtain as output a list L1 of
       nbyit1 × ncand1 candidate values for the last subkey and their scores,
       i.e., L1 = {(g1i, v1i) : i ∈ {1, ..., nbyit1 × ncand1}}.
   (d) Find the maximum v1max among the v1i in L1; if v1max > ωmaxidx, set
       ωmaxidx ← v1max.
   (e) For each recommended last subkey g1i ∈ L1 with score v1i > c1:
       i. Decrypt the ciphertexts in C by one round using g1i and obtain the
          ciphertext structure C′ of (1 + s + r)-round Simeck32/64.
       ii. Run the BayesianKeySearch algorithm [8] with C′, the neural
           distinguisher NDr−1 and its wrong key response profile NDr−1·µ and
           NDr−1·σ, ncand2, and nbyit2 as input parameters; obtain as output a
           list L2 of nbyit2 × ncand2 candidate values for the second-to-last
           subkey and their scores, i.e.,
           L2 = {(g2i, v2i) : i ∈ {1, ..., nbyit2 × ncand2}}.
       iii. Find the maximum v2i and the corresponding g2i in L2; denote them
            by v2max and g2max.
       iv. If v2max > bestscore, update bestscore ← v2max,
           bestkey ← (g1i, g2max), bestpos ← idx.
   (f) If bestscore > c2, go to Step 10.
10. Make a final improvement using VerifierSearch [8] on the value of bestkey
    by examining whether the scores of a set of keys obtained by changing at
    most 2 bits on top of the incrementally updated bestkey can be improved,
    recursively, until no improvement is obtained; update bestscore to the best
    score found in this final improvement. If bestscore > Gbestscore, update
    Gbestscore ← bestscore and Gbestkey ← bestkey.
11. Return Gbestkey, Gbestscore.
+
+ AtAzT4oBgHgl3EQfTPzA/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1824
+ 1NE2T4oBgHgl3EQfigeU/content/2301.03959v1.pdf filter=lfs diff=lfs merge=lfs -text
1825
+ 39AzT4oBgHgl3EQfffxn/content/2301.01453v1.pdf filter=lfs diff=lfs merge=lfs -text
1826
+ bdFPT4oBgHgl3EQfAzS0/content/2301.12983v1.pdf filter=lfs diff=lfs merge=lfs -text
1827
+ UtE0T4oBgHgl3EQf2gK9/content/2301.02714v1.pdf filter=lfs diff=lfs merge=lfs -text
1828
+ 39AzT4oBgHgl3EQfffxn/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1829
+ 3dAyT4oBgHgl3EQfb_e6/content/2301.00275v1.pdf filter=lfs diff=lfs merge=lfs -text
1830
+ tdFJT4oBgHgl3EQfdSw-/content/2301.11547v1.pdf filter=lfs diff=lfs merge=lfs -text
1831
+ UtE0T4oBgHgl3EQf2gK9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1832
+ UdE1T4oBgHgl3EQfIgMu/content/2301.02939v1.pdf filter=lfs diff=lfs merge=lfs -text
1833
+ 3dAyT4oBgHgl3EQfb_e6/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1834
+ gtA0T4oBgHgl3EQfH_9T/content/2301.02068v1.pdf filter=lfs diff=lfs merge=lfs -text
1835
+ atE3T4oBgHgl3EQfdQoZ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1836
+ ptE2T4oBgHgl3EQf0QjM/content/2301.04140v1.pdf filter=lfs diff=lfs merge=lfs -text
1837
+ 1NE2T4oBgHgl3EQfigeU/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1838
+ VdE0T4oBgHgl3EQfVgAw/content/2301.02264v1.pdf filter=lfs diff=lfs merge=lfs -text
1839
+ lNE3T4oBgHgl3EQfhwqh/content/2301.04574v1.pdf filter=lfs diff=lfs merge=lfs -text
1840
+ UNA0T4oBgHgl3EQfEf9Z/content/2301.02018v1.pdf filter=lfs diff=lfs merge=lfs -text
1841
+ bdFPT4oBgHgl3EQfAzS0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1842
+ 79E0T4oBgHgl3EQffQDe/content/2301.02403v1.pdf filter=lfs diff=lfs merge=lfs -text
1843
+ ndE3T4oBgHgl3EQfiwq0/content/2301.04583v1.pdf filter=lfs diff=lfs merge=lfs -text
1844
+ ddAzT4oBgHgl3EQfLvvB/content/2301.01121v1.pdf filter=lfs diff=lfs merge=lfs -text
1845
+ N9FJT4oBgHgl3EQfHSyb/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1846
+ gtA0T4oBgHgl3EQfH_9T/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1847
+ UNA0T4oBgHgl3EQfEf9Z/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1848
+ lNE3T4oBgHgl3EQfhwqh/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1849
+ tdFJT4oBgHgl3EQfdSw-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1850
+ ndE3T4oBgHgl3EQfiwq0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1851
+ CNAyT4oBgHgl3EQfePiT/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1852
+ VdE0T4oBgHgl3EQfVgAw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1853
+ JtAzT4oBgHgl3EQfVPz7/content/2301.01283v1.pdf filter=lfs diff=lfs merge=lfs -text
1854
+ OtAzT4oBgHgl3EQfWfy0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1855
+ n9E2T4oBgHgl3EQfzwgf/content/2301.04133v1.pdf filter=lfs diff=lfs merge=lfs -text
1856
+ JtAzT4oBgHgl3EQfVPz7/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1857
+ _NE1T4oBgHgl3EQf8gXW/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1858
+ _NE1T4oBgHgl3EQf8gXW/content/2301.03547v1.pdf filter=lfs diff=lfs merge=lfs -text
1859
+ BdE1T4oBgHgl3EQf9gYX/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1860
+ 8tAzT4oBgHgl3EQf-v68/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1861
+ QtFRT4oBgHgl3EQf7Tgp/content/2301.13679v1.pdf filter=lfs diff=lfs merge=lfs -text
1862
+ A9E2T4oBgHgl3EQf8Qn_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1863
+ _NE0T4oBgHgl3EQfxQFJ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1864
+ XtAyT4oBgHgl3EQfWfc6/content/2301.00163v1.pdf filter=lfs diff=lfs merge=lfs -text
1865
+ 69FAT4oBgHgl3EQfnx3V/content/2301.08631v1.pdf filter=lfs diff=lfs merge=lfs -text
1866
+ EdFKT4oBgHgl3EQfZy6F/content/2301.11805v1.pdf filter=lfs diff=lfs merge=lfs -text
1867
+ 69FAT4oBgHgl3EQfnx3V/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1868
+ QtFRT4oBgHgl3EQf7Tgp/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
1869
+ _NE0T4oBgHgl3EQfxQFJ/content/2301.02643v1.pdf filter=lfs diff=lfs merge=lfs -text
1870
+ CNAyT4oBgHgl3EQfePiT/content/2301.00318v1.pdf filter=lfs diff=lfs merge=lfs -text
19FRT4oBgHgl3EQfmjee/content/tmp_files/2301.13602v1.pdf.txt ADDED
@@ -0,0 +1,1563 @@
1
+ arXiv:2301.13602v1 [physics.flu-dyn] 31 Jan 2023
2
+ Impact of an Arc-shaped Control Plate on Flow and
3
+ Heat Transfer around an Isothermally Heated Rotating
4
+ Circular Cylinder
5
+ Amarjit Hatya, Rajendra K. Ray*a
6
+ aSchool of Mathematical and Statistical Sciences, Indian Institute of Technology
7
+ Mandi, Mandi, 175005, Himachal Pradesh, India
8
+ Abstract
9
+ The main objective of this paper is to study the flow characteristics of a rotat-
10
+ ing, isothermally heated circular cylinder with a vertical arc-shaped control plate
11
+ placed downstream. Stream function-Vorticity (ψ − ω) formulation of two di-
12
+ mensional (2-D) Navier-Stokes (N-S) equations is considered as the governing
13
+ equation and the simulations are performed for different distances of the control
14
+ plate (0.5, 1, 2, 3), rotational rates (0.5, 1, 2.07, 3.25) at Prandtl number 0.7 and
15
+ Reynolds number 150. The governing equations are discretized using the Higher
16
+ Order Compact (HOC) scheme and the system of algebraic equations, arising
17
+ from HOC discretization, is solved using the Bi-Conjugate Gradient Stabilized
18
+ approach. Present computed results show that the vortex shedding plane is shifted
19
+ upward from the centerline of the flow domain by the cylinder’s rotational mo-
20
+ tion. The structure of the wake varies based on the plate’s position. The size of
21
+ vortices is greatly reduced when the control plate is set at d/R0 = 3 and the rota-
22
+ tional rate is very high. At greater rotational rates, the impact of varied positions
23
+ of the arc-shaped control plate is very significant. The rotation of the cylinder and
24
+ the location of the plate can be used to lower or enhance the values of drag and
25
+ lift coefficients as well as the heat transfer from the surface of the cylinder. The
26
+ maximum value of the drag coefficient, which is about 3, is achieved for d/R0 = 2
27
+ and α = 3.25.
28
+ Keywords: Navier-Stokes equations, Circular cylinder, Arc-shaped control plate,
29
+ Heat transfer, HOC
30
+ *Corresponding author: [email protected]
31
+
32
+ 1. Introduction
33
+ Active control of flow past a rotating circular cylinder has always been an in-
34
+ teresting topic in fluid dynamics. The wake behaviour for flow past a rotating
35
+ cylinder is more complicated than for flow past a stationary cylinder because the
36
+ rotation of the cylinder separates the shear layer and modifies the boundary layer.
37
+ In 1928, Bickley [1] was among the first to attempt analytical study of the viscous
38
+ flow over a rotating cylinder. He considered the potential flow created by a vortex
39
+ in the vicinity of a cylinder. The wake structure in flow past a cylinder is compli-
40
+ cated due to interactions between a boundary layer, a separating free shear layer,
41
+ and a wake. It has huge significance in engineering as the alternating shedding
42
+ pattern of the vortices in the wake causes considerable fluctuating pressure forces
43
+ in a direction transverse to the fluid flow, which can produce structural vibrations,
44
+ acoustic noise, or resonance, and in certain situations, structural collapse. In 1966,
45
+ Gerrard [2] experimentally studied the flow past bluff bodies along with flow past
46
+ circular cylinder with splitter plates for high Reynolds numbers. He found that
47
+ the shear layer was drawn by the vortex formation from the opposite side of the
48
+ wake across the center line of the wake, cutting off the vorticity supply to the ex-
49
+ panding vortex. He found that the width of the gap between the cylinder and a
50
+ splitter plate parallel to the flow, is the only relevant parameter than the position
51
+ of the trailing edge of the plate. He studied the effect of a plate normal to the
52
+ flow and found that the length of the effective vortex formation area equalled the
53
+ distance of the plate from the domain boundary. He observed a substantial cross-
54
+ flow velocity created near the plate when a vortex grew close behind it, facilitating
55
+ the shedding process and increasing the frequency. Pralits et al. [3] numerically
56
+ studied the flow past a rotary cylinder and found that the increased rotational speed
+ caused two distinct instabilities in the flow. Kang et al. [4] found that the vortex
58
+ shedding was stopped completely when the cylinder rotation rate was set at twice
59
+ the velocity of free stream fluid. Diaz et al. [5] have experimentally studied the
60
+ flow past rotary cylinder for Reynolds number 9000. They saw a decrease in pe-
61
+ riodic vortex activity and a rise in random modulation of the shedding process,
62
+ which they attributed to the relocation of the stagnation point and the thickening of
63
+ the spinning fluid layer near the cylinder surface. They discovered that when the
64
+ rotating speed equals the free-stream speed, a regular periodic vortex shedding
65
+ occurs, and that the periodic vortex shedding is suppressed at large velocity ra-
66
+ tios. For velocity ratios equal to or greater than 1.5, they concluded that rotation
67
+ considerably alters the traditional Karman vortex shedding. Similar findings were
68
+ produced by Massons et al. [6] for flow past rotating cylinder. Stojkovic et al. [7]
69
+ 2
70
+
71
+ studied the flow at greater rotation rates and discovered a second shedding mode
72
+ in a limited interval [4.85, 5.15] of rotation rate where the shedding frequency was
73
+ substantially lower than that of the traditional Von-Karman vortex shedding. At a
74
+ high Reynolds number (Re = 105), Roshko [8] investigated the impact of a splitter
75
+ plate positioned downstream of a bluff body and parallel to the free stream. By
76
+ bringing the plate closer to the cylinder, he observed that the shedding frequency
77
+ and base suction were reduced. Bearman [9] found that the separating shear flow
78
+ on the top of the surface is pushed to rejoin if the circular cylinder with an end
79
+ plate downstream is spun at a constant pace. As a result, the effects and vibrations
80
+ caused by boundary-layer development are diminished, and the vortex formation
81
+ is suppressed. Apelt et al. [10] used a horizontal splitter plate with varied lengths
82
+ to diameter ratios less than 2 to investigate the flow past a circular cylinder for
83
+ 104 < Re < 5 × 104. The splitter plate considerably reduces drag by stabilising
84
+ separation points, lowers the Strouhal number, and increases base pressure by
85
+ roughly 50%, according to their research. They also discovered that when using
86
+ a splitter plate instead of a cylinder without one, the wake pattern narrows. Kwon
87
+ and Choi [11] indicated that there is a critical length of splitter plate that causes
88
+ vortex shedding to totally disappear, and that this critical length is proportional
89
+ to the Reynolds number. They also discovered that the Strouhal number rises as
90
+ the plate’s length increases until it equals the cylinder’s diameter. Bao and Tao
91
+ [12] analyzed the flow past a circular cylinder with twin parallel plates attached
92
+ and discovered that optimal positioning can outperform the standard splitter plate.
93
+ More studies with control plate can be found in [13–16].
94
+ Along with studying the wake structure and pressure forces, forced convective
95
+ heat transfer from rotating cylinders has been widely investigated by many re-
96
+ searchers for its many real-life applications and scientific interests. Drying cylin-
97
+ drical items [17]; cylindrical cooling devices in the plastics and glass industries;
98
+ drying and coating of papers using a hot spinning cylinder; chemical and food pro-
99
+ cessing industries; textile and paper manufacturing, and so on are some examples
100
+ of real-world uses. In an experiment, Anderson and Saunders [18] explored heat
101
+ convection in a confined room filled with air using an isothermally heated rotating
102
+ circular cylinder. Temperatures were elevated to 140 degrees Fahrenheit above the
103
+ ambient temperature while air pressure was maintained at 4 atm. The experiment
104
+ used three distinct cylinders, each with varying diameters (1, 1.8, and 3.9 inches)
105
+ but the same length (2 feet). They determined that heat exchange is nearly steady
106
+ when the rotational speed is between 0 and a critical value of 0.9; past that point,
107
+ heat exchange increases in proportion to the rotational speed’s 2/3 power. Badr
108
+ 3
109
+
110
+ and Dennis [19] conducted a numerical study on forced convective heat transfer
111
+ from an unconfined rotary cylinder, concluding that increasing rotational speed re-
112
+ duces overall rate of heat transfer because the cylinder is isolated from the stream
113
+ by the spinning fluid layer. Mohanty et al. [20] performed experimental study on
114
+ heat transfer from rotating cylinder for high Reynolds numbers. They discovered
115
+ that rotational motion increases average heat transmission by roughly 30% when
116
+ compared to a fixed cylinder with a fixed Reynolds number. They also discovered
117
+ that as compared to stationary cylinders, rotational motion caused a lower heat
118
+ transfer rate at the front stagnation point. An analytical study was attempted by
119
+ Kendoush [21] and a formula, Nu = 0.6366(RePr)^(1/2), was proposed to compute
120
+ the local Nusselt number (Nu) for low Prandtl numbers (Pr), where Re denotes
121
+ the Reynolds number. With the help of the finite volume technique, Paramane and
122
+ Sharma [22] studied the heat transfer and fluid flow across a rotating cylinder for
123
+ Prandtl number of 0.7, low Reynolds numbers ranging from 20 to 160, and rotary
124
+ speeds of 0 ≤ α ≤ 6. They discovered that when rotary speeds rise, the average
125
+ Nusselt number falls while the Reynolds number rises. It was concluded that the
126
+ rotation could be employed to reduce drag and suppress heat transmission from
127
+ the cylinder. Sufyan et al. [23] discovered that low and medium rotary speeds
128
+ immediately reduce heat transmission, but that at higher rotational rates, the in-
129
+ creased size of the enclosing vortex causes even more heat transfer reduction. A
130
+ few more studies on this subject can be found on [24–27].
131
+ After an extensive literature survey, it is found that many researchers worked
132
+ on heat transfer and flow across a rotating circular cylinder. There are numer-
133
+ ous works on the flow across a circular cylinder with splitter plates and attached
134
+ fins. Effect of curved fins and plates are studied by few researchers for missiles
135
+ [28] and formula−1 cars [29] and these are being used in real life. There are
136
+ some researchers who tried to study the wake structure and base pressure after
137
+ applying the rotation to the cylinder with attached splitter plates or fins, but the
138
+ effect of both rotation and the presence of control plates on the process of heat
139
+ transfer is not tested. Considering the importance, the current investigation is cen-
140
+ tred on the impact of a control plate on forced convective heat transfer and flow
141
+ across a rotating circular cylinder. It can be useful in electronic equipment cool-
142
+ ing and processing industries. We have taken into account an arc-shaped plate
143
+ with a vertical orientation since we are considering a polar coordinate system
144
+ with non-uniform grids. For this investigation, the Reynolds number is fixed at
145
+ 150 and the Prandtl number is fixed at 0.7. The plate distance to cylinder radius
146
+ ratio varies between 0.5 and 3, while the rotational rates range from 0.5 to 3.25.
147
+ 4
148
+
149
+ The two-dimensional unsteady Navier-Stokes equations and energy equation are
150
+ first non-dimensionalized and then discretized by using a Higher Order Compact
151
+ (HOC) scheme [30, 31] based on non-uniform polar grids. Temporal accuracy of
152
+ 2nd order and spatial accuracy of at least 3rd order are obtained through the
+ application of the finite difference scheme. To obtain a solution from the discretized
+ system, the Bi-conjugate Gradient Stabilized method is employed.
155
+ The paper is arranged as follows: in Section 2, we discuss the governing equa-
156
+ tions and initial and boundary conditions related to the current problem; in Section
157
+ 3, the numerical scheme is described as well as the independence tests and valid-
158
+ ity of the numerical scheme are produced; results are discussed in Section 4; and
159
+ finally, we conclude our remarks in Section 5.
160
+ 2. The governing equations and the problem
161
+ The considered system is represented in Fig. 1 as a two-dimensional unsteady,
162
+ incompressible, laminar, and viscous flow of a Newtonian fluid over an isother-
163
+ mally heated circular cylinder of radius R0. At ˆt = 0, the cylinder acquires the
164
+ surface temperature Ts impulsively. The following formulas are used to transform
165
+ dimensional parameters to dimensionless form: t = ˆtU∞/R0, r = ˆr/R0, u = ˆu/U∞,
+ v = ˆv/U∞, ψ = ˆψ/(U∞R0), ω = ˆωR0/U∞, φ = (T − T∞)/(Ts − T∞). The control
+ plate has unit arc length and a constant thickness roughly equal to 0.18 times
+ the cylinder radius and is situated at
178
+ a distance d from the cylinder surface. On the surface of the control plate, im-
179
+ permeability and no-slip boundary conditions are considered. The control plate is
180
+ kept constant at the same temperature as the free stream fluid.
181
+ The nondimensional stream-function-vorticity formulation of the 2-D Navier-
182
+ Stokes equations and energy equation in polar coordinates (r,θ) are given as,
183
+ ∂²ω/∂r² + (1/r) ∂ω/∂r + (1/r²) ∂²ω/∂θ² = (Re/2) [u ∂ω/∂r + (v/r) ∂ω/∂θ + ∂ω/∂t]   (1)
+ ∂²ψ/∂r² + (1/r) ∂ψ/∂r + (1/r²) ∂²ψ/∂θ² = −ω   (2)
+ ∂²φ/∂r² + (1/r) ∂φ/∂r + (1/r²) ∂²φ/∂θ² = (Re Pr/2) [u ∂φ/∂r + (v/r) ∂φ/∂θ + ∂φ/∂t]   (3)
+ 5
228
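The left-hand side shared by Eqs. (1)–(3) is the polar-coordinate Laplacian. As a quick illustrative check of Eq. (2) (a sketch, not the paper's code), a field with a known analytic Laplacian can be differenced numerically; the test values below are made up for the example.

```python
import math

def polar_laplacian(f, r, th, h=1e-4):
    """d2f/dr2 + (1/r) df/dr + (1/r2) d2f/dth2 by central differences."""
    d2r = (f(r + h, th) - 2 * f(r, th) + f(r - h, th)) / h**2
    d1r = (f(r + h, th) - f(r - h, th)) / (2 * h)
    d2t = (f(r, th + h) - 2 * f(r, th) + f(r, th - h)) / h**2
    return d2r + d1r / r + d2t / r**2

# for psi = r**2 sin(theta) the Laplacian is 3 sin(theta),
# so Eq. (2) gives omega = -3 sin(theta)
psi = lambda r, th: r**2 * math.sin(th)
r0, th0 = 1.5, 0.8
omega = -polar_laplacian(psi, r0, th0)
print(omega, -3 * math.sin(th0))
```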
+ 5
229
+
230
+ Nomenclature
231
+ Re            Reynolds number (= 2R0U∞/ν)
+ Pr            Prandtl number (= ν/β)
+ R0            Radius of the circular cylinder
+ R∞            Radius of the far-field boundary
+ d             Distance of the control plate from the cylinder surface (dimensional)
+ U∞            The free-stream fluid’s velocity
+ T∞            The free-stream fluid’s temperature
+ ˆt, t          Time in dimensional and nondimensional form
+ Ts            Surface temperature of the cylinder in dimensional form
+ ˆα, α          Rotational velocity in dimensional and nondimensional form (α = ˆαR0/U∞)
+ Nu, Nu, Nut   Nusselt number (local, average, and time-averaged total)
+ h, havg       Coefficients of heat transfer (local and average)
+ ν             The fluid’s kinematic viscosity
+ k             The fluid’s thermal conductivity
+ β             The fluid’s thermal diffusivity
+ Q′′            Local radial heat flux on the surface
+ ˆψ, ψ          Stream function in dimensional and nondimensional form
+ ˆω, ω          Vorticity in dimensional and nondimensional form
+ T, φ          Temperature in dimensional and nondimensional form
+ ˆu, u          Radial velocity in dimensional and nondimensional form
+ ˆv, v          Tangential velocity in dimensional and nondimensional form
+ ˆr, r          Radius in dimensional and nondimensional form
279
+ 6
280
+
281
+ Figure 1: The schematic illustration of the current problem.
282
+ The velocities, v and u can be expressed as
283
+ v = −∂ψ/∂r   and   u = (1/r) ∂ψ/∂θ   (4)
+ ω can be written as
+ ω = (1/r) [∂(vr)/∂r − ∂u/∂θ]   (5)
299
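As a quick numerical sanity check on Eqs. (4)–(5) (a sketch, not part of the paper), the free-stream stream function ψ = (r − 1/r) sin θ used later as the far-field condition should yield an irrotational field, ω = 0; the step sizes below are arbitrary.

```python
import math

def psi(r, th):
    # free-stream stream function from the far-field condition
    return (r - 1.0 / r) * math.sin(th)

def velocities(r, th, h=1e-6):
    # Eq. (4): u = (1/r) dpsi/dtheta, v = -dpsi/dr (central differences)
    u = (psi(r, th + h) - psi(r, th - h)) / (2 * h) / r
    v = -(psi(r + h, th) - psi(r - h, th)) / (2 * h)
    return u, v

def vorticity(r, th, h=1e-4):
    # Eq. (5): omega = (1/r) [d(v r)/dr - du/dtheta]
    vr = lambda rr: velocities(rr, th)[1] * rr
    dvr_dr = (vr(r + h) - vr(r - h)) / (2 * h)
    du_dth = (velocities(r, th + h)[0] - velocities(r, th - h)[0]) / (2 * h)
    return (dvr_dr - du_dth) / r

print(abs(vorticity(1.7, 0.9)))  # close to 0: the potential field is irrotational
```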
+ The boundary conditions on the cylinder’s surface include impermeability, no-
300
+ slip, and constant temperature, i.e.
301
+ ψ = 0,   ∂ψ/∂r = −α   and   φ = 1.0   when r = 1   (6)
+ The condition of surface vorticity is provided by
+ ω = −∂²ψ/∂r²   when r = 1   (7)
315
+ 7
316
+
317
+ In the distant field, R∞, the vorticity’s resulting decay and the free-stream
+ condition are taken to constitute the boundary conditions, i.e.
340
+ ψ → (r − 1/r) sin θ,   ∂ψ/∂r → (1 + 1/r²) sin θ   and   φ → 0   as r → R∞/R0   (8)
+ ω → 0   as r → R∞/R0   (9)
359
+ The conditions in Eqs. (6) to (9) must be satisfied by all the parameters for
+ 0 ≤ θ ≤ 2π. In addition, all of the parameters are periodic functions of θ with
+ period 2π. The initial conditions for the stream function are given by Eqs. (8) and (9).
362
+ The vorticity in the distant field is initially assumed to be zero. Eqs. (4) and (8)
363
+ provide the initial requirements for the velocities as follows:
364
+ u = (1 − 1/r²) cos θ   and   v = −(1 + 1/r²) sin θ   (10)
377
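The initial field in Eq. (10) is the potential flow past the cylinder; a direct transcription (an illustrative sketch, not the authors' code) makes the impermeability of the surface easy to verify.

```python
import math

def initial_velocity(r, th):
    """Initial (potential-flow) velocity field of Eq. (10), valid for r >= 1."""
    u = (1.0 - 1.0 / r**2) * math.cos(th)   # radial component
    v = -(1.0 + 1.0 / r**2) * math.sin(th)  # tangential component
    return u, v

# on the surface r = 1 the radial velocity vanishes for every theta,
# while far away the field tends to the free stream (u, v) -> (cos th, -sin th)
print(initial_velocity(1.0, 0.7)[0])  # 0.0
```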
+ 3. Numerical Scheme
378
+ Using a temporally second order accurate and spatially at least third order
+ accurate higher order compact (HOC) finite difference technique [32–35], the
+ governing equations of motion and the energy equation are discretized on non-uniform
381
+ polar grids in the circular region ([R0,R∞] × [0,2π]) with grid points (ri,θj).
382
+ The non-uniform grid concentrated around the cylinder is generated using the
383
+ stretching function ri = exp(λπi/imax), 0 ≤ i ≤ imax. The function θj is given by
+ θj = 2πj/jmax, 0 ≤ j ≤ jmax. The discretized equations can be written as [30, 36, 37]:
391
+ [X1ij δ²r + X2ij δ²θ + X3ij δr + X4ij δr δθ + X5ij δr δ²θ + X6ij δ²r δθ + X7ij δ²r δ²θ] ψij = Gij   (11)
400
+ [Y11ij δ²r + Y12ij δ²θ + Y13ij δr + Y14ij δθ + Y15ij δr δθ + Y16ij δr δ²θ + Y17ij δ²r δθ + Y18ij δ²r δ²θ] ω^{n+1}_ij
+ = [Y21ij δ²r + Y22ij δ²θ + Y23ij δr + Y24ij δθ + Y25ij δr δθ + Y26ij δr δ²θ + Y27ij δ²r δθ + Y28ij δ²r δ²θ] ω^n_ij   (12)
419
+ 8
420
+
421
+ and
422
+ [Z11ij δ²r + Z12ij δ²θ + Z13ij δr + Z14ij δθ + Z15ij δr δθ + Z16ij δr δ²θ + Z17ij δ²r δθ + Z18ij δ²r δ²θ] φ^{n+1}_ij
+ = [Z21ij δ²r + Z22ij δ²θ + Z23ij δr + Z24ij δθ + Z25ij δr δθ + Z26ij δr δ²θ + Z27ij δ²r δθ + Z28ij δ²r δ²θ] φ^n_ij   (13)
441
+ The coefficients X1i j, X2i j,..., X7i j; Gi j; Y11i j, Y12i j,..., Y18i j; Y21i j, Y22i j,...,
442
+ Y28i j; Z11i j, Z12i j,..., Z18i j and Z21i j, Z22i j,..., Z28i j are the functions of the
443
+ parameters r and θ. [30, 36, 37] provide the expressions for the non-uniform
444
+ central difference operators δθ, δ²θ, δr and δ²r, as well as the notations θf, θb, rf, rb
447
+ and the coefficients. The Bi-conjugate Gradient Stabilized approach is employed
448
+ in order to solve the discretized problem.
449
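The nonsymmetric systems produced by the HOC discretization are solved with the Bi-conjugate Gradient Stabilized method. A minimal dense-matrix sketch of the standard BiCGSTAB iteration is shown below; it is illustrative only (pure Python, list-of-lists matrices), not the authors' implementation.

```python
def bicgstab(A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for a (possibly nonsymmetric) dense matrix A."""
    n = len(b)
    matvec = lambda M, x: [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]
    dot = lambda x, y: sum(xi * yi for xi, yi in zip(x, y))
    x = [0.0] * n
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    r_hat = r[:]                       # fixed shadow residual
    rho = alpha = omega = 1.0
    v = [0.0] * n
    p = [0.0] * n
    for _ in range(max_iter):
        rho_new = dot(r_hat, r)
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        v = matvec(A, p)
        alpha = rho / dot(r_hat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        if dot(s, s) ** 0.5 < tol:     # early exit: h is already accurate
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            break
        t = matvec(A, s)
        omega = dot(t, s) / dot(t, t)
        x = [xi + alpha * pi + omega * si for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        if dot(r, r) ** 0.5 < tol:
            break
    return x

# quick check on a small nonsymmetric, diagonally dominant system
x = bicgstab([[4.0, 1.0, 0.0], [2.0, 5.0, 1.0], [0.0, 1.0, 3.0]], [6.0, 15.0, 11.0])
print([round(xi, 6) for xi in x])
```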
+ 3.1. Drag and lift coefficients
450
+ The forces acting on a circular cylinder submerged in fluids for uniform flow
451
+ are generally caused by surface friction and surface pressure distribution. The
452
+ expressions for drag (CD) and lift (CL) coefficients are adopted from [30, 36]. The
453
+ expressions are as follows,
454
+ CD = (1/Re) ∫_0^{2π} [(∂ω/∂r)|R0 − ω|R0] cos θ dθ   (14)
+ CL = (1/Re) ∫_0^{2π} [(∂ω/∂r)|R0 − ω|R0] sin θ dθ   (15)
478
+ The integral values are calculated using Simpson’s 1/3 method. The time-
479
+ averaged drag, CD is expressed as,
480
+ CD = (1/(t2 − t1)) ∫_{t1}^{t2} CD dt   (16)
487
+ When the flow achieves a periodic mode and executes numerous cycles, the time
488
+ span between t1 and t2 is selected.
489
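The surface integrals in Eqs. (14)–(16) are evaluated with Simpson's 1/3 rule; a generic composite implementation is sketched below with a made-up check integrand (not the authors' code).

```python
import math

def simpson_13(f, a, b, n):
    """Composite Simpson's 1/3 rule on [a, b] with n subintervals (n even)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

# a drag-type integrand g(theta) * cos(theta) over one revolution:
# with g = cos(theta) the exact value is pi
approx = simpson_13(lambda th: math.cos(th) ** 2, 0.0, 2.0 * math.pi, 180)
print(approx)  # close to pi
```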
+ 9
490
+
491
+ 3.2. The heat transfer parameters
492
+ Initially, heat conduction happens from the cylinder surface to the adjacent
493
+ fluid, and subsequently it convects away with the flow. The heat conduction path
494
+ follows the radius of the cylinder surface. The dimensionless local heat flux in the
495
+ radial direction is the local Nusselt number, Nu, defined by,
496
+ Nu = 2hR0/k = Q′′(2R0)/(k(Ts − T∞))   (17)
501
+ where h represents the local heat transfer coefficient, k represents the thermal
502
+ conductivity of the fluid, and Q′′ represents the surface local radial heat flux. Q′′
503
+ is expressed as Q′′ = −k (∂T/∂r)|_{r=R0}. The average Nusselt number, denoted by Nu,
505
+ used to represent the dimensionless heat transfer from the cylinder’s surface, is
506
+ expressed as
507
+ Nu = 2havgR0/k = (1/(2π)) ∫_0^{2π} Nu dθ   (18)
515
+ The average heat transfer coefficient (havg) is expressed as havg = (1/(2π)) ∫_0^{2π} h dθ.
520
+ Nut, the time-averaged total Nusselt number is given as,
521
+ Nut = (1/(t2 − t1)) ∫_{t1}^{t2} Nu dt   (19)
528
+ When the flow achieves a periodic mode and executes numerous cycles, the time
529
+ span between t1 and t2 is selected.
530
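Combining Eq. (17) with Q′′ = −k (∂T/∂r)|r=R0 and the nondimensional variables of Section 2 gives Nu = −2 ∂φ/∂r at r = 1. A one-sided-difference sketch on an assumed exponential near-wall profile is shown below; the profile and its decay rate are illustrative values, not the paper's data.

```python
import math

def local_nusselt(phi_wall, phi_next, dr):
    """Nu = -2 dphi/dr at r = 1, via a first-order one-sided difference."""
    return -2.0 * (phi_next - phi_wall) / dr

# assumed model profile phi = exp(-a (r - 1)) with a = 4: exact Nu = 2a = 8
a, dr = 4.0, 1e-5
phi = lambda r: math.exp(-a * (r - 1.0))
print(local_nusselt(phi(1.0), phi(1.0 + dr), dr))  # close to 8
```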
+ 3.3. Validation
531
+ The computational domain is discretized using non-uniform grids. The grid
532
+ independence test is performed in Fig. 2(a) with three different grid sizes (181 ×
533
+ 181), (191 ×202) and (351 ×341), with a set time step ∆t = 0.01, a fixed 25 : 1
534
+ domain-to-cylinder-radius ratio, Pr = 0.7, Re = 150, α = 1 and d/R0 = 1. All the
535
+ grid sizes seem to produce almost the same results. The grid size (191 × 202) is
+ chosen for future computations. For grid size (181 × 181) and time step ∆t = 0.01,
537
+ the domain independence test is performed in Fig. 2(b) for three distinct radii
+ of the outer boundary, 15, 25 and 35; the other parameter values are kept the
+ same as in the grid independence test. This test demonstrates that
540
+ a domain radius of 25 is adequate to provide the best possible results. Finally,
541
+ with a set grid size (181 ×181) and the far field border defined at 25 : 1 domain-
542
+ to-cylinder-radius ratio, the time independence test is conducted in Fig. 2(c) for
543
+ (a)
546
+ (b)
547
+ (c)
548
+ Figure 2: Variation of local Nusselt number distribution, Nu (a) grid independence test with grid
549
+ sizes 181 × 181, 191 × 202, 351 × 341, (b) space independence test with outer boundary radius
550
+ 15, 25, 35 and (c) time independence test with time steps 0.001, 0.005, 0.01 at instant t = 10 for
551
+ Re = 150, Pr = 0.7, α = 1 and d = 1.
552
+ time increments ∆t = 0.001, 0.005, 0.01, 0.02. For later computations, we used
+ R∞/R0 = 25 and ∆t = 0.01, as suggested by these test findings.
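The independence tests above amount to comparing Nu(θ) profiles across grids, domain radii and time steps. A minimal sketch of such a comparison, using synthetic profiles in place of actual solver output (the tolerance and profile shapes are assumptions), is:

```python
import numpy as np

# Hypothetical Nu(theta) profiles from a coarse and a fine grid (synthetic data)
theta = np.linspace(0.0, 2.0 * np.pi, 181)
nu_coarse = 6.0 + 5.0 * np.cos(theta) + 0.02 * np.sin(3.0 * theta)
nu_fine = 6.0 + 5.0 * np.cos(theta)

# Relative L2 difference between the two profiles
rel_diff = np.linalg.norm(nu_coarse - nu_fine) / np.linalg.norm(nu_fine)

# Accept the coarser grid if the profiles differ by less than, say, 1%
grid_independent = rel_diff < 0.01
```

The same check, applied to pairs of domain radii or time steps, gives a quantitative counterpart to the visual overlap seen in Fig. 2.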
555
+ To the best of our knowledge, no previous study has addressed the control of heat
+ and flow transfer from a rotating cylinder using an arc-shaped vertical control
+ plate placed in a uniform free stream. To verify the correctness of our code and
+ model, we begin by comparing our findings with those of previous studies of heat transfer from
559
+ Table 1: Comparison of the current computation with the equivalent time-averaged total Nusselt
+ number computed by Paramane & Sharma [22] for Re = 40, 100, Pr = 0.7, α = 1, and an
+ isothermally heated cylinder.
+ Re                              40          100
+ Nut (Current)                   3.276112    4.936597
+ Nut (Paramane & Sharma)         3.213       4.991
+ Difference (%)                  1.964       1.09
+ Table 2: Comparison between time-averaged drag results from the current study and Kwon and
+ Choi’s work [11] for Re = 160.
+ Length of splitter plate                1           2
+ Time-averaged drag (Present Study)      1.133021    1.056131
+ Time-averaged drag (Kwon and Choi)      1.10162     1.08812
+ Difference (%)                          2.85        2.94
634
+ rotating cylinders [22], and then with results for flow past a circular cylinder
+ with an attached splitter plate [11]. When the flow becomes periodic, the
636
+ mean drag coefficients are used to determine the time-averaged drag coefficient
637
+ on the cylinder surface in Table 2. According to Table 1, the maximum difference
638
+ of time-averaged total Nusselt number is 1.964%, which is within a reasonable
639
+ range. Also, Table 2 shows the maximum difference of time-averaged Drag coef-
640
+ ficients from the results of the current study and the previously published works is
641
+ 2.94% which is also within a considerable range. As a result, the current findings
642
+ are consistent with earlier studies.
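The percentage differences quoted in Tables 1 and 2 follow from a simple relative-difference calculation against the reference values; the snippet below reproduces them from the tabulated numbers:

```python
def pct_diff(current, reference):
    """Absolute difference from the reference value, as a percentage."""
    return abs(current - reference) / reference * 100.0

# Table 1: time-averaged total Nusselt number vs Paramane & Sharma [22]
d_re40 = pct_diff(3.276112, 3.213)     # Re = 40
d_re100 = pct_diff(4.936597, 4.991)    # Re = 100

# Table 2: time-averaged drag vs Kwon and Choi [11]
d_len1 = pct_diff(1.133021, 1.10162)   # splitter plate length 1
d_len2 = pct_diff(1.056131, 1.08812)   # splitter plate length 2

print(round(d_re40, 3), round(d_re100, 2), round(d_len1, 2), round(d_len2, 2))
# → 1.964 1.09 2.85 2.94
```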
643
+ 4. Results and Discussions
644
+ Reynolds number (Re), Prandtl number (Pr), angular velocity of the cylinder
645
+ (α), and control plate distance (d/R0) are all well-known factors that influence
646
+ flow and heat fields. Fig. 3 exhibits the drag (CD) and lift (CL) coefficients, as
+ well as the variation of local Nusselt number (Nu), for d/R0 = 0 and 0.5 with
+ fixed α = 0.5, Re = 150, Pr = 0.7. The parameter value d/R0 = 0 corresponds
+ to the case without the arc-shaped control plate. Fig. 3(a) clearly demonstrates
+ that the peak value of CD, as well as the amplitude of CL, is significantly reduced
+ with the introduction of the control plate at a distance d/R0 = 0.5 downstream.
653
+ By comparing Fig. 3(b) and Fig. 3(c), it is found that the introduction of the con-
654
+ (a)
657
+ (b)
658
+ (c)
659
+ Figure 3: (a) Drag (CD) and lift (CL) coefficients, (b) variation of local Nusselt number (Nu) for
660
+ d/R0 = 0 and (c) variation of local Nusselt number (Nu) for d/R0 = 0.5 with fixed α = 0.5.
661
+ trol plate slightly reduced the peak value of Nu approximately from 11.35 to 11.27
662
+ at θ ≈ 192◦, but the local maximum peak is significantly increased at θ ≈ 30◦.
663
+ This means that, although the heat transfer near the front stagnation point is slightly decreased
664
+ by the control plate, the heat transfer is significantly increased near the rear stag-
665
+ nation point, which eventually increases the overall heat transfer from the upper
666
+ half of the cylinder surface. The control plate alters the vortex shedding process,
667
+ which in turn affects the thermal boundary layer and causes this effect. Realizing
668
+ the importance of the arc-shaped control plate, the current studies are performed
669
+ for Re = 150, 0.5 ≤ α ≤ 3.25 and 0.5 ≤ d/R0 ≤ 3, while Pr is maintained at 0.7.
670
+ The values of α are typically chosen in accordance with [31].
671
+ For α = 0.5 and d/R0 = 1, Fig. 4 exhibits the isotherm, streamline and vor-
672
+ ticity at periodic phases. Two vortices are shed periodically from the upper and
673
+ lower sides of the cylinder, according to the vorticity and streamline. The upper
674
+ vortex is slightly larger than the lower vortex. The continuous and dashed lines
675
+ indicate the positive and negative contours, respectively. The vortex shedding
676
+ plane is shifted by approximately θ = 20◦ from the centerline or the x-axis due
677
+ to the rotation of the cylinder. The shear layer around the plate changes the nega-
678
+ (a)
747
+ (b)
748
+ (c)
749
+ Figure 4: (a) Isotherm, (b) streakline and (c) vorticity contour for Pr = 0.7, Re = 150, α = 0.5
750
+ and d/R0 = 1 at different phases.
751
+ tive equi-vorticity lines that come from the surface of the cylinder, but the positive
752
+ equi-vorticity lines from the cylinder merge with the shear layer due to the rotation
753
+ of the cylinder. No recirculation zone or vortex is observed between the cylinder
754
+ and the plate. Two large vortices as lumps of hot fluid shed periodically from the
755
+ upper and bottom sides of the cylinder according to the isotherm contours. The
756
+ (a)
769
+ (b)
770
+ (c)
771
+ Figure 5: (a) Isotherm, (b) streakline and (c) vorticity contour for Pr = 0.7, Re = 150, α = 1 and
772
+ d/R0 = 1 at different phases.
773
+ isotherm density is high near the front stagnation point, which indicates the higher
774
+ heat transfer rate in this region. Isotherm, streakline and vorticity are displayed
775
+ in Fig. 5 for α = 1 and d/R0 = 1. Streakline and vorticity indicate that two vor-
776
+ tices are periodically shed from the upper side and lower side of the cylinder. The
777
+ increase in rotational rate increases the movement of the fluid around the control
778
+ (a)
787
+ (b)
788
+ (c)
789
+ Figure 6: (a) Isotherm, (b) streakline and (c) vorticity contour for Pr = 0.7, Re = 150, α = 2.07
790
+ and d/R0 = 1 at different phases.
791
+ plate which leads to thickening of shear layer around the control plate. It affects
792
+ the vorticity contour coming from the cylinder. Positive equi-vorticity lines from
793
+ the cylinder and the plate get merged together to shed a sleek, elongated vortex.
794
+ The positive equi-vorticity lines from the cylinder completely cover the control
795
+ plate, also dragging the negative equi-vorticity lines towards the bottom of the
796
+ (a)
804
+ (b)
805
+ (c)
806
+ Figure 7: (a) Isotherm, (b) streakline and (c) vorticity contour for Pr = 0.7, Re = 150, α = 3.25
807
+ and d/R0 = 1 at different phases.
808
+ cylinder. This increases the density of the thermal boundary layer in the upper half of
809
+ the cylinder, increasing the heat transfer. The upper vortex is much wider as com-
810
+ pared to the sleek bottom vortex. Because of the increased α, the vortex shedding
811
+ plane is shifted by approximately θ = 23◦ from the centerline. The isotherm con-
812
+ tours suggest that two warm blobs convect away periodically from the upper and
813
+ lower sides of the cylinder. There is no vortex or recirculation zone found be-
816
+ tween the cylinder and the plate. Isotherm, streakline and vorticity for α = 2.07
817
+ and d/R0 = 1 are shown in Fig. 6. The streakline and vorticity suggest that two
818
+ vortices are periodically shed in the flow domain. One vortex is shed from the top
819
+ of the cylinder and the other one is shed from the back of the plate. The lower
820
+ vortex pushes the upper vortex due to the high rotation rate of the cylinder. As a
821
+ result, the upper vortex is shed much earlier than at lower rotational rates. Also,
822
+ the upper vortex becomes sleek and the bottom vortex becomes wide. Negative
823
+ equi-vorticity lines coming from the cylinder completely cover the control plate
824
+ as well as the positive vortex. Due to the high movement of fluid around the con-
825
+ trol plate, the shear layers get thickened and drag the negative equi-vorticity lines
826
+ from the cylinder to the bottom of the plate. This affects the thermal boundary
827
+ layer of the cylinder by thinning around the rear stagnation point. As a result, the
828
+ heat transfer is increased in this region. However, the high rotation of the cylin-
829
+ der thickens the thermal boundary layer around the front stagnation point, leading
830
+ to a decrease in heat transfer rate. Here, the vortex shedding plane is shifted
831
+ by approximately θ = 37◦ from the centerline. The isotherm contours suggest
832
+ that two warm blobs convect away periodically by the vortices generated in the
833
+ flow domain. Fig. 7 shows the isotherm, streakline and vorticity for α = 3.25
834
+ and d/R0 = 1. Due to very high rotational rate, the negative equi-vorticity lines
835
+ completely cover the cylinder as well as the positive equi-vorticity lines originated
836
+ from the control plate. Two vortices are shed periodically, one from the top of the
837
+ cylinder and another from the back of the control plate. Due to the high rotational
838
+ speed, the bottom vortex pushes the upper vortex. As a result, the upper vortex is
839
+ shed much earlier. Also, the bottom vortex is much larger than the upper vortex.
840
+ One small negative vortex is formed behind the control plate, but it gets dissolved
841
+ into the positive vortex. After the negative vortex is shed, the shear layer from
842
+ the cylinder splits on top and bottom of the shear layer from the control plate. It
843
+ gradually merges and creates an elongated negative vortex. Between the cylinder
844
+ and the plate, no vortex or recirculation zone forms. The moving fluid around the
845
+ cylinder drags the shear layer from the control plate towards the top of the cylin-
846
+ der, which leads to the increased density of the isotherm contour. As a result, heat
847
+ transfer is boosted in this region. The vortex shedding plane is displaced from the
848
+ centerline by approximately θ = 50◦ at this rotational rate. The isotherm contours
849
+ suggest that the density of the isotherm around the cylinder becomes less than at
850
+ the lower rotational rates, which means that the high rotation rate is suppressing
851
+ the heat transfer rate from the cylinder surface. Additionally, two warm blobs pe-
852
+ riodically convect away from the cylinder’s upper side and the plate’s rear. The
853
+ (a)
861
+ (b)
862
+ (c)
863
+ Figure 8: (a) Isotherm, (b) streakline and (c) vorticity contour for Pr = 0.7, Re = 150, α = 0.5
864
+ and d/R0 = 2 at different phases.
865
+ top blob is sleek and the bottom one is wide, similar to the vortices. Figs. 4 to 7
866
+ show that increasing rotational rates increased the size of vortices as well as the
867
+ angle of vortex shedding plane from the centerline for a fixed d/R0 = 1.
868
+ Fig. 8 shows the isotherm, streakline and vorticity for α = 0.5 and d/R0 = 2.
869
+ (a)
878
+ (b)
879
+ (c)
880
+ Figure 9: (a) Isotherm, (b) streakline and (c) vorticity contour for Pr = 0.7, Re = 150, α = 3.25
881
+ and d/R0 = 2 at different phases.
882
+ Two vortices are shed periodically from the upper and lower sides of the cylinder,
883
+ according to the streakline and vorticity. One recirculation zone is formed by the
884
+ interaction of the shear layers between the cylinder and the plate, near the top of
885
+ the plate. Positive equi-vorticity lines originating from the cylinder partially cover
886
+ the control plate. The isotherm contours show that two warm blobs convect away
887
+ with the shedding vortices. The vortex shedding plane is slightly higher than the
890
+ centerline by approximately θ = 15◦. This angle of the vortex shedding plane
891
+ is slightly lower than that of Fig. 4 due to the increase in d/R0. This happens
892
+ due to the interaction of shear layers around the control plate. Fig. 9 exhibits the
893
+ isotherm, streakline and vorticity for α = 3.25 and d/R0 = 2. Here, two vortices
894
+ are periodically shed. One is shed from the top of the cylinder, and the other one
895
+ is shed from behind the plate. One temporary recirculation zone is formed be-
896
+ tween the cylinder and the plate, which gradually merges with the upper vortex.
897
+ Due to the high rotational rate, the bottom vortex is pulled upwards and pushes
898
+ the upper vortex. As a result, the upper vortex is shed much earlier. The negative
899
+ equi-vorticity lines cover the positive equi-vorticity lines that originated from the
900
+ control plate. After the negative vortex is shed, the shear layer is split into two
901
+ by the positive vorticity contour. The shear layers from the top and bottom of the
902
+ control plate are squeezed together by the negative vorticity contour to form the
903
+ positive vortex. The vortex shedding plane is shifted by approximately θ = 40◦
904
+ from the centerline, and this angle is also slightly lower than that of Fig. 7. The
905
+ widths of vortices are much larger than those at Fig. 7. The interaction between
906
+ the shear layer and the boundary layer of the cylinder thickens the thermal bound-
907
+ ary layer near the front stagnation point and increases the density of the isotherm
908
+ contour near the rear stagnation point and at the bottom of the cylinder. It leads
909
+ to the reduction of heat transfer near the front stagnation point and an increase in
910
+ heat transfer rate near the rear stagnation point and bottom of the cylinder. Figs. 8
911
+ and 9 show that as α increases from 0.5 to 3.25 for d/R0 = 2, the vortices increase
912
+ in size.
913
+ Isotherm, streakline and vorticity are displayed in Fig. 10 for α = 0.5 and
914
+ d/R0 = 3. Two vortices shed periodically from the upper and lower sides of the
915
+ cylinder. The bottom vortex is slightly sleeker than the upper one. The positive
916
+ equi-vorticity lines coming from the cylinder, partially cover the control plate,
917
+ and the interaction between the shear layers sheds the positive vortex. One re-
918
+ circulation zone is formed between the cylinder and the plate, which gradually
919
+ merges with the upper vortex. The density of the isotherm contour is higher near
920
+ the front stagnation point, which means the rate of heat transfer is much higher in
921
+ this region. Also, two warm blobs convect away periodically from the upper and
922
+ lower sides of the cylinder. Also, the vortex shedding plane is at an angle of ap-
923
+ proximately θ = 8.5◦ with the centerline, which is much lower than the previous
924
+ placements of the control plate. It happens as the bottom shear layers are resisted
925
+ by the control plate to freely move upwards. Fig. 11 exhibits the isotherm, streak-
926
+ (a)
934
+ (b)
935
+ (c)
936
+ Figure 10: (a) Isotherm, (b) streakline and (c) vorticity contour for Pr = 0.7, Re = 150, α = 0.5
937
+ and d/R0 = 3 at different phases.
938
+ line and vorticity are displayed for α = 3.25 and d/R0 = 3. Two vortices shed
939
+ periodically behind the control plate. The rotational motion of the fluid surround-
940
+ ing the cylinder causes the negative equi-vorticity lines to surround the cylinder
941
+ as well as the positive equi-vorticity lines that originate from the control plate.
942
+ The positive equi-vorticity lines also cover the control plate. The shear layers that
943
+ (a)
951
+ (b)
952
+ (c)
953
+ Figure 11: (a) Isotherm, (b) streakline and (c) vorticity contour for Pr = 0.7, Re = 150, α = 3.25
954
+ and d/R0 = 3 at different phases.
955
+ originate from the cylinder get split after interaction with the shear layer around
956
+ the control plate, and they merge together during the shedding of the negative vor-
957
+ tex. Most of the fluid particles that flow across the cylinder are sucked down and
958
+ flow below the control plate. This complex flow dynamics is the combined effect
959
+ of the high rotational rate and the placement of the control plate. It reduces the
960
+ angle of the vortex shedding plane with the centerline to approximately θ = 12◦,
963
+ which is much less than the previous placements of the control plate with this
964
+ high rotational rate. Also, the size of the negative vortex is drastically reduced
965
+ and becomes extremely sleek due to the interaction of shear layers. The lower
966
+ vortex grows from the bottom of the plate and moves upwards. The density of the
967
+ isotherm contour around the cylinder is very low due to the high rotational rate. As
968
+ a result, the boundary layer thickens around the cylinder and suppresses the rate
969
+ of force convective heat transfer. The isotherm contours indicate that two warm
970
+ blobs periodically convect away from the upper side of the cylinder and the lower
971
+ end of the control plate. Therefore, the placement of the control plate, together
972
+ with the rotational rate, considerably suppressed the vortex shedding process as
973
+ well as the heat convection. Figs. 10 and 11 illustrate that the size of the vortices
974
+ grows as α increases from 0.5 to 3.25 for d/R0 = 3. Also, Figs. 4, 8 and 10 show
975
+ that the wake length of vortices increases with increasing distance of the control
976
+ plate from the cylinder surface at α = 0.5. It is also observed that the increasing
977
+ distance of the control plate significantly decreases the angle of the vortex shed-
978
+ ding plane with the centerline for the respective rotational rates.
979
+ The drag (CD) and lift (CL) coefficients at different α with varying d/R0 are
980
+ shown in Fig. 12. The figures show that the drag and lift coefficients are periodic
981
+ in nature. For α = 0.5, the drag coefficient gradually decreases with increasing
982
+ d/R0 and the lift coefficient is minimum at d/R0 = 0.5. The differences in the
983
+ drag coefficients for this α = 0.5 are very small. There is not much difference
984
+ in lift coefficient for d/R0 = 1, 2, and 3. When α = 1, gradual decrease in the
985
+ drag coefficient is found with increasing d/R0. The lift coefficient is found to be
986
+ minimum for d/R0 = 0.5. The maximum value of the lift coefficient is observed
987
+ for d/R0 = 1 and 2. When α = 2.07, maximum value of the drag coefficient is
988
+ found for d/R0 = 3 and minimum value is found for d/R0 = 0.5. Here, the maxi-
989
+ mum value of the lift coefficient is found for d/R0 = 1 and the minimum value is
990
+ found for d/R0 = 0.5. When the rotation rate is at its maximum, i.e., α = 3.25,
991
+ the amplitudes of the drag and lift coefficients increase drastically for all d/R0.
992
+ Here, the minimum values of lift and drag coefficients are found for d/R0 = 0.5
993
+ and the maximum values are found for d/R0 = 2. For d/R0 = 3, the amplitudes of the
994
+ drag and lift coefficients are the smallest. So, the impact of various positionings
995
+ of the arc-shaped control plate is significant at higher rotational rates. In Fig. 13,
996
+ the drag (CD) and lift (CL) coefficients at different d/R0 with varying α are shown.
997
+ At d/R0 = 0.5, the maximum value of CD is found for α = 1 and the mini-
998
+ mum value is found for α = 3.25. With increasing α, the maximum value of CL
999
+ (a)
1006
+ (b)
1007
+ Figure 12: (a) Drag coefficient CD and (b) lift coefficient CL with varying d/R0.
1008
+ gradually decreases while the amplitude of CL gradually increases. The highest
1009
+ amplitude of the drag and lift coefficients is observed for α = 3.25. When the
1010
+ (a)
1156
+ (b)
1157
+ Figure 13: (a) Drag coefficient CD and (b) lift coefficient CL with varying α.
1158
+ Figure 14: Local Nusselt number variation at periodic phases for (a) d/R0 = 1, α = 0.5; (b)
1290
+ d/R0 = 1, α = 1; (c) d/R0 = 1, α = 2.07; (d) d/R0 = 1, α = 3.25; (e) d/R0 = 2, α = 0.5; (f)
1291
+ d/R0 = 2, α = 3.25; (g) d/R0 = 3, α = 0.5; and (h) d/R0 = 3, α = 3.25.
1292
+ plate distance is increased to 1 and 2, the maximum value of CD and the minimum
1293
+ value of CL are found for α = 3.25. Also, the amplitudes are maximum for the
1294
+ highest rotational rate. When d/R0 = 3, CD gradually increases while CL gradu-
1295
+ ally decreases as α increases. The lift coefficients suggest that the lock-on vortices
1296
+ are shed under all the considered rotational rates and distances of the control plate.
1297
+ Fig. 14 shows the variation of local Nusselt numbers at periodic phases for var-
1298
+ ious rotational rates of the cylinder and different positioning of the plate. Fig. 14(a)
1301
+ shows the variation of Nu for d/R0 = 1 and α = 0.5. It can be seen that the
1302
+ maximum value of Nu is slightly shifted downwards from the front stagnation
1303
+ point (θ = 180◦) approximately to θ = 192◦. It indicates the difference in heat
1304
+ transfer processes between the upper and lower half of the cylinder surface. The
1305
+ differences in values of Nu between the periodic phases are very small. A local
1306
+ maximum peak of Nu is found at θ ≈ 30◦ which indicates the higher rate of heat
1307
+ convection in this area. This is supported by the concentrated isotherm contours
1308
+ in this area close to the cylinder surface shown in Fig. 4. As α increases to
+ 1 for d/R0 = 1 in Fig. 14(b), the differences in values are increased at different
1310
+ phases. The highest point of Nu is found around θ = 204◦. It shows the difference
1311
+ in heat transfer mechanisms from the upper and lower surfaces. A local maximum
1312
+ peak of Nu is found at θ ≈ 42◦ indicating higher rate of heat convection in this
1313
+ area. It is also supported by the highly concentrated isotherm contours in Fig. 5.
1314
+ When α = 2.07 for d/R0 = 1, the maximum value of Nu in Fig. 14(c) is slightly
+ lower than in the previous cases, and the maximum point of heat transfer is
1316
+ around θ = 240◦. Local maximum peak is found to be changing position between
1317
+ θ ≈ 42◦ and θ ≈ 78◦ at different periodic phases due to the complex vortex shed-
1318
+ ding phenomenon. These areas convect a large amount of heat into the fluid. The
1319
+ asymmetric Nu-distribution around the front stagnation point shows that the heat
1320
+ transfer process from the upper part of the cylinder surface is far different from
1321
+ the heat transfer process from the lower part of the cylinder surface. Fig. 14(d)
1322
+ shows the variation of Nu with maximum rotation rate, α = 3.25 for d/R0 = 1 and
1323
+ the maximum value of Nu drastically decreases and occurs at θ ≈ 72◦ i.e. near
1324
+ the rear stagnation point. As many researchers previously mentioned, here too,
1325
+ large rotational rates significantly reduce the maximum heat transfer rate from the
1326
+ cylinder [19, 22, 23]. A local maximum peak is found at θ ≈ 252◦. The reduc-
1327
+ tion of the maximum peak value of Nu at front stagnation point with increasing α
1328
+ hints to the fact that more heat is transferred under conduction in this area. This
1329
+ happens due to the thickening of the boundary layer around the cylinder surface
1330
+ at the high rotational rate. Fig. 14(e) shows the variation of Nu with α = 0.5 and
1331
+ d/R0 = 2. It shows that the maximum point of heat transfer is around θ = 191◦.
1332
+ Also, the peak value is slightly lower than that of d/R0 = 1 due to the vortex shed-
1333
+ ding process. A local maximum peak is found at θ ≈ 24◦ i.e. the heat transfer is
1334
+ higher in this area. It is also supported by the respective dense isotherm contours.
1335
+ In Fig. 14(f), α is increased to 3.25 for d/R0 = 2, and it is found that the highest
+ value of Nu is significantly reduced compared with the previous case. The highest value of Nu
1337
+ is observed around θ = 264◦ i.e. maximum heat transfer under convection occurs
1338
+ (a)
1348
+ (b)
1349
+ Figure 15: (a) Nut for varying α, and (b) Nut for varying d/R0.
1350
+ in this area. This is supported by the respective dense isotherm contour in this
1351
+ region close to the cylinder surface. A local maximum value of Nu is found at
1352
+ θ ≈ 60◦ at periodic phases t = t0 + (0)T, t0 + (1)T, i.e. the heat transfer is en-
1353
+ hanced in this area under convection by the complex vortex shedding process. The
1354
+ Nu-distribution over 0◦ ≤ θ ≤ 180◦ is significantly different from the Nu-distribution
+ over 180◦ ≤ θ ≤ 360◦. This demonstrates that the lower half of the cylinder surface
1356
+ convects more heat than the upper half. Fig. 14(g) shows the variation of Nu for
1357
+ α = 0.5 and d/R0 = 3. The highest value of Nu is observed around θ = 192◦.
1358
+ The maximum value is slightly lower than that of d/R0 = 1, 2. A local maximum
1359
+ value of Nu distribution is found at θ ≈ 24◦. The maximum heat transfer under
1360
+ convection occurs in these areas. The highest value of Nu-distribution curve is
1361
+ significantly reduced in Fig. 14(h) where α = 3.25 and d/R0 = 3 as compared to
1362
+ Fig. 14(d) for d/R0 = 1 and Fig. 14(f) for d/R0 = 2. This indicates that the in-
1363
+ creasing distance of the control plate significantly reduced the heat transfer under
1364
+ convection for the fixed α. The highest value of Nu is shifted to θ ≈ 276◦ due
1365
+ to the complex vortex shedding. The lowest value of Nu-distribution curve at the
1366
+ front stagnation point indicates that a large amount of heat is transferred by con-
1367
+ duction at this place. Also, the distribution curve over 0◦ ≤ θ ≤ 180◦ is significantly
+ different from the curve over 180◦ ≤ θ ≤ 360◦, which shows that the lower half of
1369
+ the cylinder surface convects more heat than the upper half.
1370
+ Fig. 15 exhibits the variation of time-averaged total Nusselt number (Nut) with
1371
+ Fig. 15(a) varying α and Fig. 15(b) varying d/R0. The values of Nut for α = 0.5
1372
+ 29
1373
+
1374
+ are 6.672265, 6.251865, 6.154835 and 6.074185 with d/R0 = 0.5, 1, 2 and 3
1375
+ respectively. It means that the increasing distance of the control plate reduces the
1376
+ heat transfer rate at α = 0.5. The values of Nut for α = 1 are 6.790804, 6.28877,
1377
+ 6.07076 and 5.89388 with d/R0 = 0.5, 1, 2 and 3 respectively. It means that the
1378
+ increasing distance of the control plate also reduces the heat transfer rate at α = 1.
1379
+ The values of Nut for α = 2.07 are 6.686615, 5.774501, 5.6436 and 5.643545
1380
+ with d/R0 = 0.5, 1, 2 and 3 respectively. Here also, the increasing distance of
1381
+ the control plate reduces the heat transfer rate. The values of Nut for α = 3.25 are
1382
+ 6.68507, 5.899766, 4.931795 and 4.9757 with d/R0 = 0.5, 1, 2 and 3 respectively.
1383
+ Again the increasing distance of the control plate reduces the heat transfer rate
1384
+ except for d/R0 = 3. This occurs due to the interaction of high rotation and the
1385
+ large distance of the control plate. Fig. 15(a) shows that Nut gradually decreases
1386
+ with increasing α at d/R0 = 2, 3 and the maximum value of Nut is found for
1387
+ d/R0 = 0.5, α = 0.5. Fig. 15(b) shows that increasing d/R0 significantly reduces
1388
+ Nut within the range of 0.5 ≤ d/R0 ≤ 2 for all rotational rates. However, if we
1389
+ place the plate further at a distance d/R0 = 3, not much change occurs. It is
1390
+ found from the comparison of maximum and minimum values of Nut that certain
1391
+ positioning of the control plate and rotational rate can enhance the heat transfer
1392
+ rate by 37.69%.
1393
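The quoted 37.69% figure can be reproduced directly from the tabulated Nut values; a minimal Python sketch (the dictionary layout is only an illustrative encoding of the numbers listed above):

```python
# Consistency check (values transcribed from the paragraph above): time-averaged
# total Nusselt numbers Nut for each rotational rate alpha, listed for
# d/R0 = 0.5, 1, 2, 3.
nut = {
    0.5:  [6.672265, 6.251865, 6.154835, 6.074185],
    1.0:  [6.790804, 6.28877, 6.07076, 5.89388],
    2.07: [6.686615, 5.774501, 5.6436, 5.643545],
    3.25: [6.68507, 5.899766, 4.931795, 4.9757],
}
all_values = [v for row in nut.values() for v in row]
nut_max, nut_min = max(all_values), min(all_values)
enhancement = (nut_max - nut_min) / nut_min * 100  # relative enhancement in %
print(f"max Nut = {nut_max}, min Nut = {nut_min}, enhancement = {enhancement:.2f}%")
```

The maximum occurs at (α = 1, d/R0 = 0.5) and the minimum at (α = 3.25, d/R0 = 2), consistent with the 37.69% enhancement stated in the text.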
+ 5. Conclusion
1394
+ We numerically examined the control of a uniform, viscous fluid flow past
1395
+ a circular cylinder by an arc-shaped plate positioned in the normal direction behind
1396
+ an isothermally heated circular cylinder rotating in the cross stream. The gov-
1397
+ erning equations are discretized using a HOC finite difference technique, and the
1398
+ system of algebraic equations obtained by the HOC discretization is solved using
1399
+ the bi-conjugate gradient stabilized (BiCGSTAB) iterative method. Our results show that
1400
+ the distance between the control plate and the cylinder surface has a considerable
1401
+ impact on fluid flow along with the rotation of the cylinder. The structure of the
1402
+ wake changes depending on the position of the plate. When α is less than 1 with
1403
+ d/R0 = 1, two vortices as lumps of hot fluid are shed periodically from either
1404
+ side of the cylinder; when α is greater than 2.07 with d/R0 = 1, a large nega-
1405
+ tive vortex of heated fluid is shed from the upper side of the cylinder and another
1406
+ positive vortex of hot fluid is shed behind the control plate on a periodic basis.
1407
+ The increasing rotational rates increase the size of vortices and decrease the wake
1408
+ length for all positions of the control plate. The vortex shedding plane is shifted
1409
+ from the centerline by the cylinder’s rotational motion. For all rotational rates, the
1410
+ 30
1411
+
1412
+ increased distance of the control plate decreases the angle of the vortex shedding
1413
+ plane with the centerline, but the angle is increased with increasing rotational rates
1414
+ for all positions of the control plate. At higher rotational rates, the positive vortex
1415
+ is pulled upwards due to the interaction of fluid, and it pushes the negative vortex,
1416
+ causing an early shedding of it. Placing the control plate at d/R0 = 3 along with
1417
+ a high rotational rate is found to significantly reduce the size of vortices. It is also
1418
+ found that the impact of various positionings of the arc-shaped control plate is
1419
+ significant at higher rotational rates. An additional recirculation zone is found for
1420
+ (d/R0 = 2, α = 0.5, 3.25) and (d/R0 = 3, α = 3.25). Drag and lift coefficients
1421
+ for all 0.5 ≤ d/R0 ≤ 3 and 0.5 ≤ α ≤ 3.25 have a periodic nature. The values of
1422
+ drag and lift coefficients can be reduced or increased by utilising the rotation of the
1423
+ cylinder and the placement of the plate. The maximum value of drag coefficient
1424
+ is achieved for d/R0 = 2 and α = 3.25 which is about 3. All vortices shed are
1425
+ locked-on under the scope of considered parameters. It is found that the rotational
1426
+ rates relocate the highest point of heat transfer further from the front stagnation
1427
+ point, i.e., increasing the heat transfer by conduction in this region. The increasing
1428
+ distance of the control plate significantly reduced the heat transfer under convec-
1429
+ tion for the fixed α. The combined effect of rotation and the positioning of the
1430
+ control plate causes a different heat transfer mechanism at the upper half of the
1431
+ cylinder surface than at the lower half. For fixed d/R0 = 2 and α = 3.25, the
1432
+ maximum point of heat transfer is shifted towards the rear stagnation point from
1433
+ the front stagnation point due to the complex vortex shedding.
1434
+ Author Declarations
1435
+ The authors have no conflicts to disclose.
1436
+ Data Availability Statement
1437
+ The data that support the findings of this study are available from the corre-
1438
+ sponding author upon reasonable request.
1439
+ References
1440
+ [1] W. Bickley, “The influence of vortices upon the resistance experienced by
1441
+ solids moving through a liquid,” Proceedings of the Royal Society of London.
1442
+ Series A, Containing Papers of a Mathematical and Physical Character, vol.
1443
+ 119, no. 781, pp. 146–156, 1928.
1444
+ 31
1445
+
1446
+ [2] J. Gerrard, “The mechanics of the formation region of vortices behind bluff
1447
+ bodies,” Journal of Fluid Mechanics, vol. 25, no. 2, pp. 401–413, 1966.
1448
+ [3] J. O. Pralits, L. Brandt, and F. Giannetti, “Instability and sensitivity of the
1449
+ flow around a rotating circular cylinder,” Journal of Fluid Mechanics, vol.
1450
+ 650, pp. 513–536, 2010.
1451
+ [4] S. Kang, H. Choi, and S. Lee, “Laminar flow past a rotating circular cylin-
1452
+ der,” Physics of Fluids, vol. 11, no. 11, pp. 3312–3321, 1999.
1453
+ [5] F. Diaz, J. Gavaldà, J. Kawall, J. Keffer, and F. Giralt, “Vortex shedding from
1454
+ a spinning cylinder,” The Physics of Fluids, vol. 26, no. 12, pp. 3454–3460,
1455
+ 1983.
1456
+ [6] J. Massons, X. Ruiz, and F. Diaz, “Image processing of the near wakes of
1457
+ stationary and rotating cylinders,” Journal of Fluid Mechanics, vol. 204, pp.
1458
+ 167–184, 1989.
1459
+ [7] D. Stojkovi´c, M. Breuer, and F. Durst, “Effect of high rotation rates on the
1460
+ laminar flow around a circular cylinder,” Physics of Fluids, vol. 14, no. 9,
1461
+ pp. 3160–3178, 2002.
1462
+ [8] A. Roshko, “On the wake and drag of bluff bodies,” Journal of the Aeronau-
1463
+ tical Sciences, vol. 22, no. 2, pp. 124–132, 1955.
1464
+ [9] P. Bearman, “Investigation of the flow behind a two-dimensional model with
1465
+ a blunt trailing edge and fitted with splitter plates,” Journal of Fluid Mechan-
1466
+ ics, vol. 21, no. 2, pp. 241–255, 1965.
1467
+ [10] C. J. Apelt, G. S. West, and A. A. Szewczyk, “The effects of wake split-
1468
+ ter plates on the flow past a circular cylinder in the range 10^4 < R < 5 × 10^4,”
1469
+ Journal of Fluid Mechanics, vol. 61, no. 1, pp. 187–198, 1973.
1470
+ [11] K. Kwon and H. Choi, “Control of laminar vortex shedding behind a circular
1471
+ cylinder using splitter plates,” Physics of Fluids, vol. 8, no. 2, pp. 479–486,
1472
+ 1996.
1473
+ [12] Y. Bao and J. Tao, “The passive control of wake flow behind a circular cylin-
1474
+ der by parallel dual plates,” Journal of Fluids and Structures, vol. 37, pp.
1475
+ 201–219, 2013.
1476
+ 32
1477
+
1478
+ [13] H. Akilli, B. Sahin, and N. F. Tumen, “Suppression of vortex shedding of
1479
+ circular cylinder in shallow water by a splitter plate,” Flow Measurement
1480
+ and Instrumentation, vol. 16, no. 4, pp. 211–219, 2005.
1481
+ [14] L. Lu, M.-M. Liu, B. Teng, Z.-D. Cui, G.-Q. Tang, M. Zhao, and L. Cheng,
1482
+ “Numerical investigation of fluid flow past circular cylinder with multiple
1483
+ control rods at low reynolds number,” Journal of Fluids and Structures,
1484
+ vol. 48, pp. 235–259, 2014.
1485
+ [15] K. Liu, J. Deng, and M. Mei, “Experimental study on the confined flow over
1486
+ a circular cylinder with a splitter plate,” Flow Measurement and Instrumen-
1487
+ tation, vol. 51, pp. 95–104, 2016.
1488
+ [16] S. Bouzari and J. Ghazanfarian, “Unsteady forced convection over cylinder
1489
+ with radial fins in cross flow,” Applied Thermal Engineering, vol. 112, pp.
1490
+ 214–225, 2017.
1491
+ [17] A. Kaya, O. Aydin, and I. Dincer, “Numerical modeling of forced-
1492
+ convection drying of cylindrical moist objects,” Numerical Heat Transfer,
1493
+ Part A: Applications, vol. 51, no. 9, pp. 843–854, 2007.
1494
+ [18] J. Anderson and O. Saunders, “Convection from an isolated heated hori-
1495
+ zontal cylinder rotating about its axis,” Proceedings of the Royal Society of
1496
+ London. Series A. Mathematical and Physical Sciences, vol. 217, no. 1131,
1497
+ pp. 555–562, 1953.
1498
+ [19] H. M. Badr and S. C. R. Dennis, “Laminar forced convection from a rotating
1499
+ cylinder,” International Journal of Heat and Mass Transfer, vol. 28, no. 1,
1500
+ pp. 253–264, 1985.
1501
+ [20] A. K. Mohanty, A. A. Tawfek, and B. Prasad, “Heat transfer from a rotating
1502
+ cylinder in crossflow,” Experimental Thermal and Fluid Science, vol. 10,
1503
+ no. 1, pp. 54–61, 1995.
1504
+ [21] A. A. Kendoush, “An approximate solution of the convective heat transfer
1505
+ from an isothermal rotating cylinder,” International Journal of Heat and
1506
+ Fluid Flow, vol. 17, no. 4, pp. 439–441, 1996.
1507
+ [22] S. B. Paramane and A. Sharma, “Numerical investigation of heat and fluid
1508
+ flow across a rotating circular cylinder maintained at constant temperature in
1509
+ 33
1510
+
1511
+ 2-d laminar flow regime,” International Journal of Heat and Mass Transfer,
1512
+ vol. 52, no. 13-14, pp. 3205–3216, 2009.
1513
+ [23] M. Sufyan, S. Manzoor, and N. A. Sheikh, “Free stream flow and forced
1514
+ convection heat transfer across rotating circular cylinder in steady regime:
1515
+ effects of rotation, prandtl number and thermal boundary condition,” Journal
1516
+ of Mechanical Science and Technology, vol. 29, no. 4, pp. 1781–1797, 2015.
1517
+ [24] J. Jalil, H. Abdulla, and A. Yousif, “Heat transfer and flow structure around
1518
+ circular cylinder with using rectangular winglet,” Emirates J. Eng. Res,
1519
+ vol. 12, no. 2, pp. 41–46, 2007.
1520
+ [25] S. B. Paramane and A. Sharma, “Heat and fluid flow across a rotating cylin-
1521
+ der dissipating uniform heat flux in 2d laminar flow regime,” International
1522
+ Journal of Heat and Mass Transfer, vol. 53, no. 21-22, pp. 4672–4683, 2010.
1523
+ [26] V. Sharma and A. K. Dhiman, “Heat transfer from a rotating circular cylinder
1524
+ in the steady regime: Effects of prandtl number,” Therm. Sci, vol. 16, no. 1,
1525
+ pp. 79–91, 2012.
1526
+ [27] M. Sufyan, S. Manzoor, and N. A. Sheikh, “Heat transfer suppression in flow
1527
+ around a rotating circular cylinder at high prandtl number,” Arabian Journal
1528
+ for Science and Engineering, vol. 39, no. 11, pp. 8051–8063, 2014.
1529
+ [28] D. Eastman and D. Wenndt, “Aerodynamics of maneuvering missiles with
1530
+ wrap-around fins,” in 3rd Applied Aerodynamics Conference, 1985, p. 4083.
1531
+ [29] D. Martins, J. Correia, and A. Silva, “The influence of front wing pressure
1532
+ distribution on wheel wake aerodynamics of a f1 car,” Energies, vol. 14,
1533
+ no. 15, p. 4421, 2021.
1534
+ [30] J. C. Kalita and R. K. Ray, “A transformation-free hoc scheme for incom-
1535
+ pressible viscous flows past an impulsively started circular cylinder,” Journal
1536
+ of Computational Physics, vol. 228, no. 14, pp. 5207–5236, 2009.
1537
+ [31] R. K. Ray, “A transformation-free hoc scheme for incompressible viscous
1538
+ flow past a rotating and translating circular cylinder,” Journal of Scientific
1539
+ Computing, vol. 46, no. 2, pp. 265–293, 2011.
1540
+ [32] A. Kumar and R. K. Ray, “Numerical study of shear flow past a square cylin-
1541
+ der at reynolds numbers 100, 200,” Procedia Engineering, vol. 127, pp. 102–
1542
+ 109, 2015.
1543
+ 34
1544
+
1545
+ [33] R. K. Ray and A. Kumar, “Numerical study of shear rate effect on unsteady
1546
+ flow separation from the surface of the square cylinder using structural bi-
1547
+ furcation analysis,” Physics of Fluids, vol. 29, no. 8, p. 083604, 2017.
1548
+ [34] A. Kumar and R. K. Ray, “Structural bifurcation analysis of vortex shedding
1549
+ from shear flow past circular cylinder,” Computational and Applied Mathe-
1550
+ matics, vol. 38, no. 3, p. 121, 2019.
1551
+ [35] ——, “A structural bifurcation analysis of flow phenomenon for shear flow
1552
+ past an inclined square cylinder: application to 2d unsteady separation,”
1553
+ Fluid Dynamics, vol. 55, pp. 391–406, 2020.
1554
+ [36] H. V. R. Mittal, R. K. Ray, and Q. M. Al-Mdallal, “A numerical study of ini-
1555
+ tial flow past an impulsively started rotationally oscillating circular cylinder
1556
+ using a transformation-free hoc scheme,” Physics of Fluids, vol. 29, no. 9, p.
1557
+ 093603, 2017.
1558
+ [37] H. V. R. Mittal and Q. M. Al-Mdallal, “A numerical study of forced con-
1559
+ vection from an isothermal cylinder performing rotational oscillations in a
1560
+ uniform stream,” International Journal of Heat and Mass Transfer, vol. 127,
1561
+ pp. 357–374, 2018.
1562
+ 35
1563
+
19FRT4oBgHgl3EQfmjee/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
1NE2T4oBgHgl3EQfigeU/content/2301.03959v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8eedaaa191b187347bded10422aae0435f0ac1d22be83765983d6e5c464f5f09
3
+ size 148175
1NE2T4oBgHgl3EQfigeU/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:09c14c118d636f0c0b624aba28bd8269b58e4b8cb5de368eec11aa1985b0498d
3
+ size 3080237
1NE2T4oBgHgl3EQfigeU/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8af52d943b51a655765220fce29b45e1d7839a583e5533e27a81ae1f2807274f
3
+ size 98689
29E3T4oBgHgl3EQfoAqh/content/2301.04630v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:40d44db8378e2564ce8d8e385a21ab173f37354fd885252ec3804e47329e3c68
3
+ size 30590352
29FKT4oBgHgl3EQf8C4w/content/tmp_files/2301.11947v1.pdf.txt ADDED
@@ -0,0 +1,946 @@
 
 
 
 
1
+ arXiv:2301.11947v1 [math.DS] 27 Jan 2023
2
+ A survey on Lyapunov functions for epidemic
3
+ compartmental models
4
+ N. Cangiotti∗, M. Capolli†, M. Sensi‡, S. Sottile§
5
+ Abstract
6
+ In this survey, we propose an overview on Lyapunov functions for a variety of com-
7
+ partmental models in epidemiology. We exhibit the most widely employed functions,
8
+ together with a commentary on their use.
9
+ Our aim is to provide a comprehensive
10
+ starting point to readers who are attempting to prove global stability of systems of
11
+ ODEs. The focus is on mathematical epidemiology, however some of the functions and
12
+ strategies presented in this paper can be adapted to a wider variety of models, such as
13
+ prey-predator or rumor spreading.
14
+ Mathematics Subject Classification:
15
+ 34D20, 34D23, 37N25, 92D30.
16
+ Keywords:
17
+ Epidemic models, Lyapunov functions, Compartmental models, Global
18
+ stability, Ordinary Differential Equations, Disease Free and Endemic Equilibria.
19
+ 1
20
+ Introduction
21
+ Stemming from the pioneering work of Kermack and McKendrick [30], the mathematical
22
+ modelling of infectious diseases has developed, over the last century, in various directions.
23
+ An abundance of approaches and mathematical techniques have been employed to capture
24
+ the many facets and details which describe the spread of an infectious disease in a population.
25
+ In particular, compartmental models remain one of the most widely employed approaches.
26
+ In these models, a population is partitioned into compartments, characterizing each individ-
27
+ ual with respect to its current state in the epidemic.
28
+ One can then write a system of
29
+ Ordinary Differential Equations (from here onwards, ODEs) to study the evolution in time
30
+ of the disease.
31
+ ∗Politecnico di Milano, Department of Mathematics, via Bonardi 9, Campus Leonardo, 20133, Milan
32
+ (Italy). E-mail: [email protected]
33
+ †Institute of Mathematics, Polish Academy of Sciences, Jana i Jedrzeja Sniadeckich 8, 00-656, Warsaw,
34
+ (Poland). E-mail: [email protected]
35
+ ‡MathNeuro Team, Inria at Université Côte d’Azur, 2004 Rte des Lucioles, 06410, Biot, (France). E-mail:
36
37
+ §Department of Mathematics, University of Trento, Via Sommarive 14, 38123, Povo, (Italy). E-mail:
38
39
+ 1
40
+
41
+ These models usually take their names from the compartments they consider, the most
42
+ renowned one being the Susceptible-Infected-Recovered (SIR) model. The SIR models can
43
+ be extended to SIRS models by considering the acquired immunity to be temporary rather
44
+ than permanent, allowing Recovered individuals to become Susceptible again. Various com-
45
+ partments can be added, depending on the characteristic of the specific disease under study:
46
+ Asymptomatic, Exposed, Waning immunity and many others.
47
+ A remarkably useful tool for the study of this kind of model is the Lyapunov function,
48
+ which ensures global (or, in some cases, local) asymptotic convergence towards one of the
49
+ equilibria of the system.
50
+ Given a system of n ODEs X′ = f(X) and an equilibrium point X∗, we call a scalar
51
+ function V ∈ C1(Rn, R) a Lyapunov function if the following hold:
52
+ 1. V attains its minimum at X = X∗;
53
+ 2. V ′ = ∇V · f < 0 for X ̸= X∗.
54
+ The classical definition of Lyapunov function requires also the conditions
55
+ 3. X∗ = 0 and V (X∗) = 0;
56
+ however, these amount to a change of coordinates in Rn and a vertical translation of V , so
57
+ we will accept the more general definition. The existence of such a function guarantees the
58
+ global stability of the equilibrium X∗, as orbits of the systems naturally evolve towards the
59
+ minimum power level of V .
60
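As a concrete illustration of conditions 1 and 2, they can be checked numerically for a scalar example; this sketch uses the reduced SIS equation from Section 2.1 with the candidate V(I) = I²/2 of [66], and the parameter values are assumed:

```python
# Numerical illustration of the two Lyapunov conditions on a one-dimensional
# example: dI/dt = f(I) = (beta*(1 - I) - gamma)*I (the reduced SIS equation of
# Section 2.1) with candidate V(I) = I**2 / 2 and assumed rates beta = 0.5,
# gamma = 1, so that R0 = beta/gamma < 1 and the equilibrium is I* = 0.
beta, gamma = 0.5, 1.0
f = lambda I: (beta * (1 - I) - gamma) * I   # right-hand side of the ODE
V = lambda I: 0.5 * I ** 2                   # candidate Lyapunov function
Vdot = lambda I: I * f(I)                    # V' = dV/dI * f(I)
samples = [k / 100 for k in range(1, 101)]   # grid on (0, 1]
assert V(0.0) == 0.0 and all(V(I) > 0 for I in samples)   # minimum at I* = 0
assert all(Vdot(I) < 0 for I in samples)                  # V' < 0 away from I*
print("both Lyapunov conditions hold numerically on (0, 1]")
```

Here V' < 0 holds on the whole biologically relevant interval, so orbits slide down the power levels of V towards I* = 0.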
+ The Basic Reproduction Number R0 is a well-known threshold in epidemic models. Usu-
61
+ ally, R0 < 1 suggests Global Asymptotic Stability (from here onwards, GAS) of the Disease
62
+ Free Equilibrium (from here onwards, DFE), whereas R0 > 1 suggests GAS of the Endemic
63
+ Equilibrium (from here onwards, EE). In more complex models, the aforementioned condi-
64
+ tions on R0 might not be sufficient to prove the GAS of either equilibria, especially in cases
65
+ in which the EE is not unique. Lyapunov functions often explicitly involve R0 to guarantee
66
+ the extinction of the disease or its endemicity over time.
67
+ Unfortunately, given a generic system of ODEs, there is no universal way of deriving a
68
+ Lyapunov function, nor to rule out the existence of one. However, there exist a few Lyapunov
69
+ functions which have proven quite effective in a variety of different models.
70
+ In this survey, we collect some of the most relevant functions available in the literature, to
71
+ provide the reader with a series of options to apply to the model of their interest, depending
72
+ on its formulation. We include an extensive bibliography to complement the essential infor-
73
+ mation of each model we present. This will provide the reader with a convenient starting
74
+ point to investigate the availability of a known Lyapunov function to analytically prove the
75
+ asymptotic behaviour of their system of ODEs. For the sake of brevity, we do not repeat
76
+ the proofs showing that the functions we present are, indeed, Lyapunov functions
77
+ for the respective system of ODEs. These proofs can be found in the papers we cite when
78
+ introducing each model.
79
+ Consider a model with compartments X1, X2, . . . , Xn. Then, the DFE has coordinates
80
+ Xi = 0 for all i ∈ I, where I is the set of the indexes of infectious compartments, and the
81
+ EE, which we indicate with (X∗
82
+ 1, X∗
83
+ 2, . . . , X∗
84
+ n), has all positive entries. A vast majority of
85
+ Lyapunov functions in epidemic modelling fall into one of the categories listed below.
86
+ 2
87
+
88
+ 1. Linear combination of infectious compartments. The Lyapunov function for the
89
+ DFE when R0 < 1 is of the form
90
+ L = Σi≥2 ci Xi,
94
+ for some constants ci ≥ 0 to be determined [6, 16, 18, 21, 32, 36, 39, 45, 49, 50, 59, 64,
95
+ 70]. To prove convergence of the system to the DFE in this case it is often required the
96
+ use of additional tools, such as LaSalle’s invariance principle, which we briefly recall
97
+ at the end of Section 2.1.
98
+ 2. Goh-Lotka-Volterra. The Lyapunov function for the EE when R0 > 1 is of the form
99
+ L = Σi ci (Xi − X∗i ln Xi),
104
+ for some constants ci ≥ 0 to be determined [2, 5, 6, 20, 27, 29, 32, 33, 45, 49, 52, 53,
105
+ 59, 63, 65]. These functions are adapted from a first integral of the notorious Lotka-
106
+ Volterra prey-predator system, and were popularized by Bean-San Goh in a series of
107
+ paper [12, 13, 14].
108
+ 3. Quadratic. The Lyapunov function for the EE when R0 > 1 is of the common form
109
+ L = Σi ci (Xi − X∗i)2,
114
+ for some constants ci ≥ 0 to be determined, or the composite form
115
+ L = ( Σi (Xi − X∗i) )2.
122
+ Some examples can be found in [40, 41, 60, 65, 66].
123
+ 4. Integral Lyapunov. Lyapunov functions given as integrals over the dynamics of the
124
+ model.
125
+ The integration interval often starts at some EE value X∗i and ends at the
127
+ same Xi; this construction is very convenient if uniqueness of the EE is guaranteed,
128
+ but the exact values of the EE are hard (or impossible) to determine analytically.
129
+ Integral Lyapunov functions are particularly useful when the model includes multiple
130
+ stages of infection, and consequently the infectious period changes from an exponential
131
+ distribution to a gamma distribution [8, 11, 18, 38, 58, 61]. Integral Lyapunov functions,
132
+ albeit in different forms, are widely used in models which incorporate explicit delay,
133
+ such as systems of Delay Differential Equations (from here onwards, DDEs), and age-
134
+ structured models. However, these fall beyond the scope of this paper, and we will
135
+ briefly comment on them in Section 3.
136
+ 5. Hybrid.
137
+ A linear combination of the above, which often includes the Goh-Lotka-
138
+ Volterra in at least a few of the compartments of the system [15, 27, 37, 47, 50, 53, 51,
139
+ 63].
140
+ 3
141
+
142
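The effectiveness of the Goh-Lotka-Volterra terms above rests on the elementary fact that g(X) = X − X∗ ln X attains its minimum exactly at X = X∗; a short numerical sketch (with a hypothetical X∗ = 0.4):

```python
# Sketch: the Goh-Lotka-Volterra building block g(X) = X - X* ln X attains its
# minimum exactly at X = X* (hypothetical value X* = 0.4), which is what makes
# sums of such terms natural Lyapunov candidates for endemic equilibria.
import math

x_star = 0.4
g = lambda x: x - x_star * math.log(x)
xs = [k / 1000 for k in range(1, 2001)]   # grid on (0, 2]
x_min = min(xs, key=g)                    # numerical argmin of g
print(f"argmin of g on (0, 2] is {x_min} (expected X* = {x_star})")
```

Since g'(X) = 1 − X∗/X vanishes only at X = X∗ and g is convex, each such term penalizes any deviation of Xi from its endemic value.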
+ For some high-dimensional models, proving convergence to the EE might require addi-
143
+ tional tools, such as the geometric approach used in [53, 64].
144
+ Lastly, we must notice that not all compartmental models only exhibit convergence to
145
+ equilibrium. Some systems of autonomous ODEs may present stable or unstable limit cycles
146
+ [9, 54, 68], homoclinic orbits [54] or even chaos [57]. In such cases, clearly, no global Lyapunov
147
+ function may exist.
148
+ In the remainder of this survey, we will present various models and the corresponding
149
+ Lyapunov functions, covering all the cases listed above.
150
+ 2
151
+ Epidemic models
152
+ In this section, we present various compartmental epidemic models with the corresponding
153
+ Lyapunov function(s). We present the models from the smallest to the largest, in terms
154
+ of number of compartments. We refer to [1, 28] for a basic introduction on compartmental
155
+ epidemic models, and to [55] for a detailed exemplification of Lyapunov theory in this setting.
156
+ We provide a schematic representation of the flows in most of the systems we present.
157
+ Flow diagrams can be useful to provide a visual, intuitive interpretation of the parameters
158
+ involved in each system. Arrows between compartments indicate a change in the current
159
+ state of individuals with respect to the ongoing epidemics, whereas arrows inward/outward
160
+ the union of the compartments represent birth rate and death rate in the population. Often,
161
+ these last two rates are considered to be equal, as this assumption allows the population to
162
+ either remain constant or converge to a constant value, reducing the dimensionality of the
163
+ system and (hopefully) its analytical complexity. However, some models include additional
164
+ disease-induced mortality, to increase realism when modelling severe infectious diseases. We
165
+ uniform the notation throughout the various models we present in this survey as much as
166
+ possible, and provide a brief description of each parameter the first time it is encountered.
167
+ We remark that each variable is assumed to be non-negative, since it represents a fraction of
168
+ the population, but the biologically relevant region varies depending on the specific model
169
+ we are describing.
170
+ Moreover, we illustrate the corresponding Lyapunov functions for 2D models, showcasing
171
+ a selection of their power levels. The same procedure can be easily adapted to 3D models,
172
+ but the corresponding visualizations can be hard to interpret in a static image.
173
+ 2.1
174
+ SIS
175
+ The SIS model is characterized by the total absence of immunity after infection, i.e. the
176
+ recovery from infection is followed by an instantaneous return to the susceptible class. The
177
+ ODEs system which describes this situation is
178
+ dS/dt = γI − βSI/N,
+ dI/dt = βSI/N − γI,
184
+ (1)
185
+ [Flow diagram: S → I at rate βSI/N, I → S at rate γI]
190
+ where β is the transmission rate and γ is the recovery rate.
191
+ 4
192
+
193
+ Notice that the population N = S + I is constant, thus we can normalize it to N = 1.
194
+ Moreover, since S + I = 1, we can reduce the system to one ODE which involves only
195
+ infectious individuals
196
+ dI
197
+ dt = (β(1 − I) − γ)I.
198
+ System (1) always admits the DFE, i.e. E0 = (1, 0), and the EE, i.e. E∗ = (γ/β, (β − γ)/β),
204
+ which exists if and only if β > γ (or equivalently if R0 = β/γ > 1). Notice that, if R0 < 1,
205
+ then I is always decreasing in the biologically relevant interval [0, 1].
206
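A quick numerical sketch (with assumed rates β = 2, γ = 1, so R0 = 2 > 1) confirms that the reduced equation drives I towards the endemic level I∗ = 1 − γ/β:

```python
# Numerical sketch with assumed rates beta = 2, gamma = 1 (so R0 = 2 > 1):
# forward-Euler integration of dI/dt = (beta*(1 - I) - gamma)*I shows
# convergence of I towards the endemic level I* = 1 - gamma/beta.
beta, gamma = 2.0, 1.0
I, dt = 0.01, 0.001
for _ in range(30000):                     # integrate up to t = 30
    I += dt * (beta * (1 - I) - gamma) * I
I_star = 1 - gamma / beta                  # I-coordinate of E* = (gamma/beta, (beta - gamma)/beta)
print(f"I(30) = {I:.6f}, endemic level I* = {I_star}")
```

With R0 < 1 the same scheme drives I monotonically to 0, matching the remark above.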
+ A variation of model (1) can be obtained by adding demography to the system. This is
207
+ the example of [65], in which the authors consider a birth/immigration rate different from
208
+ the natural death rate; moreover, they include an additional disease-induced death rate from
209
+ infectious class. Thus, the population is not constant and the system of ODEs which describe
210
+ the model is
211
+ dS/dt = Λ + γI − βSI/N − µS,
+ dI/dt = βSI/N − (δ + γ + µ)I,
217
+ (2)
218
+ [Flow diagram: inflow Λ into S; S → I at rate βSI/N; I → S at rate γI; outflows µS and (δ + µ)I]
226
+ where Λ represents the birth/immigration rate, µ the natural death rate and δ the disease-
227
+ induced mortality rate. System (2) always admits the DFE, namely E0 = (S0, 0) := (Λ/µ, 0),
+ and the EE, namely E∗ = (S∗, I∗), where I∗ > 0 if and only if R0 = Λβ/(µ(µ + δ + γ)) > 1. In
235
+ [65], a Lyapunov function for the DFE is defined as
236
+ V (S, I) := (1/2)(S − S0 + I)2 + ((2µ + δ)/β) I,
240
+ (3)
241
+ whereas the Lyapunov function for the EE is built using a combination of the quadratic and
242
+ logarithmic functions
243
+ V (S, I) := (1/2)(S − S∗ + I − I∗)2 + ((2µ + δ)/β) (I − I∗ − I∗ ln(I/I∗)).
252
+ (4)
253
+ The authors also construct two more examples of Lyapunov functions for the EE, namely
254
+ V (S, I) := (1/2)(S − S∗)2 + ((µ + δ)/β) (I − I∗ − I∗ ln(I/I∗)),
263
+ (5)
264
+ and
265
+ V (S, I) := (1/2)(S − S∗ + I − I∗)2 + S∗(δ + 2µ) (S − S∗ − S∗ ln(S/S∗)) + (S∗(δ + 2µ)/γ) (I − I∗ − I∗ ln(I/I∗)).
281
+ (6)
282
+ 5
283
+
284
+ Power levels of the functions (3), (4), (5) and (6) are visualized in Figure 1. By definition
285
+ of a Lyapunov function, orbits of the corresponding system (2) evolve on decreasing power
286
+ levels, and they tend to the corresponding equilibrium as t → +∞.
287
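The behaviour described here can be checked numerically: integrating system (2) with the Figure 1(a) parameter values (Λ = 0.8, µ = δ = γ = β = 1, so R0 = 4/15 < 1) from an assumed initial condition, the function (3) is nonincreasing along the orbit, which approaches the DFE (Λ/µ, 0) = (0.8, 0):

```python
# Numerical check with the Figure 1(a) parameter values (Lambda = 0.8,
# mu = delta = gamma = beta = 1, so R0 = 4/15 < 1) and an assumed initial
# condition: along a forward-Euler trajectory of system (2), the Lyapunov
# function (3), V = (1/2)*(S - S0 + I)**2 + ((2*mu + delta)/beta)*I, is
# nonincreasing and the orbit approaches the DFE (S0, 0) = (0.8, 0).
Lam, mu, delta, gam, beta = 0.8, 1.0, 1.0, 1.0, 1.0
S0 = Lam / mu
V = lambda S, I: 0.5 * (S - S0 + I) ** 2 + (2 * mu + delta) / beta * I
S, I, dt = 1.0, 0.3, 0.001                 # assumed initial condition
values = [V(S, I)]
for _ in range(30000):                     # integrate up to t = 30
    N = S + I
    dS = Lam + gam * I - beta * S * I / N - mu * S
    dI = beta * S * I / N - (delta + gam + mu) * I
    S, I = S + dt * dS, I + dt * dI
    values.append(V(S, I))
assert all(b <= a + 1e-9 for a, b in zip(values, values[1:]))  # V nonincreasing
print(f"final (S, I) = ({S:.4f}, {I:.2e}), final V = {values[-1]:.2e}")
```

This is only a sanity check along one orbit, not a substitute for the analytic proof in [65].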
+ Figure 1: Power levels of Lyapunov functions (3) (a), (4) (b), (5) (c), and (6) (d). Values of
292
+ the parameters are Λ = 0.8, µ = 1, δ = 1, γ = 1 in all the figures, β = 1 in (a), so that R0 =
293
+ 4/15 < 1, and β = 4 in (b), (c) and (d), so that R0 = 16/15 > 1. We represent V (S, I) = k,
294
+ with k ∈ {0.1, 0.25, 0.5, 1, 1.5, 2, 2.5} in (a), k ∈ {0.001, 0.01, 0.025, 0.05, 0.1, 0.2} in (b) and
295
+ (c), and k ∈ {0.01, 0.025, 0.05, 0.1, 0.2, 0.5} in (d). Black dots represent the globally stable
296
+ equilibrium the system converges to, and correspond to V (S, I) = 0.
297
+ In [66] the author found a simpler Lyapunov function for the DFE when R0 < 1, i.e.
298
+ V (I) = (1/2) I2.
300
+ (7)
301
+ However, this last Lyapunov function (7) only ensures that I → 0 as t → +∞. To complete
302
+ 6
303
+
304
+ the proof of the convergence of the system to the DFE, one needs in addition to invoke LaSalle’s
305
+ theorem [35] (see also [31, Thm. 3.4]), as is indeed done in [66].
306
+ Considering the importance of this theorem, especially when combined with the use of
307
+ Lyapunov functions, we include its statement here.
308
+ Theorem 2.1. (LaSalle’s invariance principle) Let X′ = f(X) be a system of n ODEs
309
+ defined on a positively invariant set Ω ⊂ Rn.
310
+ Assume the existence of a function V ∈
311
+ C1(Ω, R) such that V ′(X) ≤ 0 for all X ∈ Ω. Let MV be the set of stationary points for V ,
312
+ i.e. V ′(X) = 0 for all X ∈ MV , and let N be the largest invariant set of MV . Then, every
313
+ solution which starts in Ω approaches N as t → +∞.
314
+ In particular, this theorem implies that, if we can prove the approach of the disease to
315
+ the manifold describing absence of infection and the uniqueness of the DFE, then the DFE
316
+ is GAS.
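A minimal numerical illustration of this two-step argument (again assuming the mass-action SIS form of system (2), which is not shown in this excerpt): below the threshold, V(I) = I²/2 is non-increasing along orbits, I tends to 0, and LaSalle's principle then yields convergence of S to the disease-free value Λ/µ on the largest invariant subset of {I = 0}.

```python
import numpy as np

# Hypothetical sub-threshold parameter values of panel (a): R0 = 4/15 < 1.
Lam, mu, delta, gamma, beta = 0.8, 1.0, 1.0, 1.0, 1.0

def rhs(x):
    # Assumed mass-action SIS with demography.
    S, I = x
    return np.array([Lam - beta*S*I - mu*S + gamma*I,
                     beta*S*I - (mu + gamma + delta)*I])

# Along solutions, dV/dt = I^2 (beta*S - (mu + gamma + delta)), which is <= 0
# on the invariant region where beta*S stays below mu + gamma + delta.
x = np.array([0.7, 0.1])
dt = 0.01
Vs = [0.5 * x[1]**2]
for _ in range(20000):  # t = 200, classical RK4
    k1 = rhs(x); k2 = rhs(x + dt/2*k1); k3 = rhs(x + dt/2*k2); k4 = rhs(x + dt*k3)
    x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    Vs.append(0.5 * x[1]**2)

# V is non-increasing and I -> 0; LaSalle then gives S -> Lam/mu = 0.8.
print(x)
```

The recorded values of V decrease monotonically, while the state converges to the DFE (Λ/µ, 0), exactly the conclusion that (7) alone does not provide and LaSalle supplies.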
2.2 SIR/SIRS

The SIR model is characterized by total immunity after the infection, i.e. recovered individuals cannot become susceptible again. A classical example for this scenario is measles. The ODEs system which describes this situation is

dS/dt = −βSI/N,
dI/dt = βSI/N − γI,
dR/dt = γI,   (8)

[Compartment diagram: S → I → R, with infection flow βSI/N and recovery flow γI.]

where β is the transmission rate and γ is the recovery rate.

If we assume that recovered individuals eventually lose their immunity, we obtain the SIRS model. Denoting by α the immunity loss rate, we obtain the following ODEs system

dS/dt = −βSI/N + αR,
dI/dt = βSI/N − γI,
dR/dt = γI − αR.   (9)

[Compartment diagram: S → I → R as above, with the additional immunity loss flow αR from R back to S.]

It is clear that, if α = 0, system (9) coincides with system (8). These models admit only the DFE; in order to have an EE, we need to add demography to model (8) or (9).
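The behaviour of system (8) without demography can be seen in a quick simulation. The sketch below uses hypothetical parameter values (β = 0.3, γ = 0.1, so R0 = β/γ = 3, with N = 1): the epidemic wave dies out (I → 0), but a positive fraction of susceptibles is never infected, so the limit is a disease-free state.

```python
import numpy as np

# Hypothetical parameters for system (8): R0 = beta/gamma = 3, N = 1.
beta, gamma, N = 0.3, 0.1, 1.0

def rhs(x):
    S, I, R = x
    inf = beta * S * I / N
    return np.array([-inf, inf - gamma*I, gamma*I])

x = np.array([0.99, 0.01, 0.0])
dt = 0.05
for _ in range(12000):  # t = 600, classical RK4
    k1 = rhs(x); k2 = rhs(x + dt/2*k1); k3 = rhs(x + dt/2*k2); k4 = rhs(x + dt*k3)
    x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)

S_inf, I_inf, R_inf = x
print(x)  # the epidemic dies out (I -> 0), yet S_inf > 0
```

Note that S + I + R = N is preserved by (8), and the simulation respects this conservation law up to roundoff.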
In [65], the authors consider the following ODEs system

dS/dt = Λ − βSI/N − µS + αR,
dI/dt = βSI/N − (γ + δ + µ)I,
dR/dt = γI − (α + µ)R.   (10)

[Compartment diagram: S → I → R, with birth flow Λ into S, infection βSI/N, recovery γI, immunity loss αR, natural deaths µS and µR, and removal (δ + µ)I.]
System (10) admits the DFE, E0 = (S0, 0, 0), and the EE, E∗ = (S∗, I∗, R∗), which exists if and only if R0 = βΛ/(µ(µ + γ + δ)) > 1. In [65], the Lyapunov function for the DFE is defined as follows
V (S, I, R) := (1/2)(S − S0 + I + R)² + ((2µ + δ)/β) I + ((2µ + δ)/(2γ)) R²,

whereas the Lyapunov function for the EE is the combination of the composite quadratic, common quadratic and logarithmic functions as follows

V (S, I, R) := (1/2)(S − S∗ + I − I∗ + R − R∗)² + ((2µ + δ)/β) (I − I∗ − I∗ ln(I/I∗)) + ((2µ + δ)/(2γ)) (R − R∗)².
The authors also present other Lyapunov functions for SIR/SIRS models; in particular, they also cite [3, 46], in which some variations of system (10) are shown. Other Lyapunov functions for SIR/SIRS epidemic models are in [55], in which the authors use a graph-theoretic approach.

In [66], the author proved that the quadratic Lyapunov function (7) of the SIS model applies to the SIR and the SIRS as well.
2.3 SEIR/SEIS/SEIRS

In [32], the authors study both SEIR and SEIS models. Many real-world examples present a phase of exposure to the disease, between susceptibility and infectiousness. The models presented thus far, albeit simpler to study, are unable to replicate this mechanism.

The authors first analyze a SEIR model with demography and constant population, in which the disease is transmitted both horizontally and vertically. Individuals infected vertically pass first into the exposed compartment. The ODEs system which describes the model is

dS/dt = µ − βSI − pµI − qµE − µS,
dE/dt = βSI + pµI − θE − µE + qµE,
dI/dt = θE − (δ + µ)I,   (11)

[Compartment diagram: S → E → I, with infection βSI, progression θE, births µ(1 − pI − qE) into S and µ(pI + qE) into E, and removals µS, µE, (δ + µ)I.]

and R = 1 − S − E − I. The vertical transmission of the disease is represented by the probabilities p and q of being born directly in the Exposed compartment, rather than in the Susceptible one, and is represented by the inward arrow in compartment E.
The authors first provide an equivalent system, performing the substitution (S, E, I) −→ (P, E, I), where P := S + pµ/β. They then proceed to prove the GAS of the EE, using the following Lyapunov function

V (P, E, I) := (P − P∗ ln P) + ((θ + µ)/(θ + µ − qµ)) (E − E∗ ln E) + ((θ + µ)/(θ + µ − qµ)) (I − I∗ ln I).
Later, the authors analyze a situation in which the recovery does not provide immunity, namely the SEIS model. They also assume that a fraction r of offspring of the infective hosts is born directly into the infective compartment. In this case, the ODEs system which describes the model changes accordingly:

dS/dt = µ − βSI + (δ − pµ − rµ)I − qµE − µS,
dE/dt = βSI + pµI − (θ + µ − qµ)E,
dI/dt = θE − (δ + µ − µr)I,   (12)

and S + E + I = 1. Notice that, due to the population remaining constant in system (12), one could in principle reduce its dimensionality and consider it as a planar system.
The authors prove the GAS of the EE using the following Lyapunov function

V (S, E, I) := (S − S∗ ln S) + (µ(1 − S∗)/(βI∗S∗)) (E − E∗ ln E) + (µ(1 − S∗)/(θE∗)) (1 + pρ0 µ/β) (I − I∗ ln I).
A natural extension to these models is the SEIRS [22, 64], in which one can combine the existence of an immune compartment with the loss of immunity. It is described by the following system of ODEs

dS/dt = −βg(I)S + µ − µS + αR,
dE/dt = βg(I)S − (θ + µ)E,
dI/dt = θE − (γ + µ)I,
dR/dt = γI − (α + µ)R,   (13)

[Compartment diagram: S → E → I → R, with infection βg(I)S, progression θE, recovery γI, immunity loss αR, birth flow µ into S, and removals µS, µE, µI, µR.]

where g ∈ C3(0, 1], g(0) = 0 (meaning that, in the absence of infectious individuals, the disease does not spread) and g(I) > 0 for I > 0. The classical choice is g(I) = I, as in systems (11) and (12). Assuming moreover

lim_{I→0+} g(I)/I = c ∈ [0, +∞),

the authors of [22] derive R0 = cβθ/((θ + µ)(γ + µ)). They then prove the GAS of the DFE of system (13) through the use of the following linear Lyapunov function

V (E, I) = E + ((θ + µ)/θ) I,

whereas the GAS of the EE is proved with a more complex geometrical method in [64].
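The sign condition behind this linear Lyapunov function can be reproduced directly. With g(I) = I (so c = 1), the derivative of V along (13) is V̇ = βIS − ((θ + µ)(γ + µ)/θ) I, which is non-positive whenever R0 ≤ 1 and S ≤ 1. The sketch below, with hypothetical parameter values, verifies this at randomly sampled states of the simplex S + E + I + R = 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters with R0 < 1 for system (13) with g(I) = I (c = 1).
beta, mu, theta, gamma = 0.5, 0.1, 0.3, 0.4
R0 = beta * theta / ((theta + mu) * (gamma + mu))   # = 0.75 here

def Vdot(S, E, I):
    # Derivative of V(E, I) = E + ((theta + mu)/theta) I along system (13);
    # it simplifies to I*(beta*S - (theta + mu)*(gamma + mu)/theta).
    dE = beta * I * S - (theta + mu) * E
    dI = theta * E - (gamma + mu) * I
    return dE + (theta + mu) / theta * dI

# Sample random states on the simplex S + E + I + R = 1 and check dV/dt <= 0.
ok = all(Vdot(*p[:3]) <= 1e-12 for p in rng.dirichlet(np.ones(4), size=1000))
print(R0, ok)
```

Since the E terms cancel by construction, the sign of V̇ depends only on I and βS, which is exactly why a linear combination of the infected compartments suffices for the DFE.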
2.4 SAIR/SAIRS

One of the main challenges of the Covid-19 pandemic was the presence of asymptomatic individuals spreading the disease. Such individuals must clearly be somehow distinguished from symptomatic infectious individuals, as they are likely to behave like a susceptible individual. Even though their viral load, and hence infectiousness, might be smaller, they are more likely to get in close contact with susceptible individuals.

In [53], the authors consider a SAIRS model. The main difference between this kind of models and the SEIR is that both asymptomatic and symptomatic hosts may infect susceptible individuals. The immunity is not permanent, i.e. recovered individuals will become susceptible again after a certain period of time. Moreover, vaccinations are included. The ODEs system which describes this model is

dS/dt = µ − (βA A + βI I)S − (µ + ν)S + γR,
dA/dt = (βA A + βI I)S − (α + δA + µ)A,
dI/dt = αA − (δI + µ)I,
dR/dt = δA A + δI I + νS − (γ + µ)R.

[Compartment diagram: S → A → I, with both A and I recovering into R; infection (βA A + βI I)S, progression αA, recoveries δA A and δI I, vaccination νS, immunity loss γR, birth flow µ into S, and removals µS, µA, µI, µR.]
The global stability analysis of the EE has been performed for two variations of the original model, described in the following.

The first model analyzed is the SAIR model, i.e. the case in which recovery from the disease grants permanent immunity. In this case, the corresponding Lyapunov function is a combination of the Lotka-Volterra Lyapunov functions for S, A and I:

V (S, A, I) := c1 S∗ (S/S∗ − 1 − ln(S/S∗)) + c2 A∗ (A/A∗ − 1 − ln(A/A∗)) + I∗ (I/I∗ − 1 − ln(I/I∗)),

where c1, c2 > 0.

The second model is the SAIRS model, with homogeneous disease transmission and recovery among A and I, i.e. βA = βI and δA = δI. In this case, it is possible to sum the equations for A and I, defining M := A + I, reducing the dimensionality of the system. Thus, the Lyapunov function can be written as a combination of the square function and the Lotka-Volterra one as follows

V (S, M) := (1/2)(S − S∗)² + w (M − M∗ − M∗ ln(M/M∗)),

where w > 0. The global stability in the most general case is proved similarly to [64].
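The reduction via M := A + I can be verified numerically: when βA = βI = β and δA = δI = δ, adding the A and I equations gives dM/dt = βMS − (δ + µ)M, so (S, M, R) satisfies a closed three-dimensional system. The sketch below, with hypothetical parameter values, integrates both formulations and checks that A + I in the full model tracks M in the reduced one.

```python
import numpy as np

# Hypothetical parameters with homogeneous transmission and recovery
# (beta_A = beta_I = beta, delta_A = delta_I = delta).
mu, nu, gamma, alpha, beta, delta = 0.1, 0.05, 0.2, 0.5, 1.2, 0.3

def full_rhs(x):
    S, A, I, R = x
    force = beta * (A + I) * S
    return np.array([mu - force - (mu + nu)*S + gamma*R,
                     force - (alpha + delta + mu)*A,
                     alpha*A - (delta + mu)*I,
                     delta*(A + I) + nu*S - (gamma + mu)*R])

def reduced_rhs(y):
    # Closed (S, M, R) system obtained by summing the A and I equations.
    S, M, R = y
    return np.array([mu - beta*M*S - (mu + nu)*S + gamma*R,
                     beta*M*S - (delta + mu)*M,
                     delta*M + nu*S - (gamma + mu)*R])

def rk4(f, x, dt, steps):
    for _ in range(steps):
        k1 = f(x); k2 = f(x + dt/2*k1); k3 = f(x + dt/2*k2); k4 = f(x + dt*k3)
        x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return x

x0 = np.array([0.9, 0.05, 0.03, 0.02])
y0 = np.array([x0[0], x0[1] + x0[2], x0[3]])   # M0 = A0 + I0
xT = rk4(full_rhs, x0, 0.01, 5000)
yT = rk4(reduced_rhs, y0, 0.01, 5000)
print(xT[1] + xT[2], yT[1])  # A + I in the full model tracks M exactly
```

The agreement is exact up to roundoff because the projection (S, A, I, R) ↦ (S, A + I, R) maps the full vector field onto the reduced one, and this commutes with each RK4 stage.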
2.5 More exotic compartmental models

The aforementioned models are some of the most commonly used in the literature. In order to capture additional disease-specific nuances, these models can be modified or extended by adding new compartments.

Some diseases, for example, present different stages of infection. In this case, an infected individual can progress between two or more stages before recovering. In [18], the authors perform the global stability analysis, via an integral Lyapunov function, of a general class of multistage models. In their model, infectious individuals can move both forward and backward on the chain of stages, in order to incorporate both a natural disease progression and the amelioration due to the effects of treatments.

The system of ODEs which describes the model is

dS/dt = θ(S) − f(N) Σ_{j=1}^{n} gj(S, Ij),
dI1/dt = f(N) Σ_{j=1}^{n} gj(S, Ij) + Σ_{j=1}^{n} φ1,j(Ij) − Σ_{j=1}^{n+1} φj,1(I1) − ζ1(I1),
dIi/dt = Σ_{j=1}^{n} φi,j(Ij) − Σ_{j=1}^{n+1} φj,i(Ii) − ζi(Ii),   i = 2, 3, . . . , n,

where θ(S) is the growth function, f(N) Σ_{j=1}^{n} gj(S, Ij) is the incidence term, and ζi(Ii), 1 ≤ i ≤ n, denote the removal rates of the Ii compartments. Moreover, for any i, j = 1, . . . , n, the functions φi,j(Ij) represent the rate of the disease progression if i > j and of the amelioration if i < j.

The corresponding Lyapunov function for the DFE is linear in the disease compartments, i.e.

V (I1, . . . , In) = Σ_{i=1}^{n} ci Ii,

where c1 = R0 and ci ≥ 0 for all i = 2, . . . , n. For the global stability of the EE, the authors make some assumptions on the aforementioned functions. In particular, they consider the following integral Lyapunov function

V (S, I1, . . . , In) = τ ∫_{S∗}^{S} (Φ(ξ) − Φ(S∗))/Φ(ξ) dξ + Σ_{i=1}^{n} τi ∫_{Ii∗}^{Ii} (ψi(ξ) − ψi(Ii∗))/ψi(ξ) dξ,

where τ, τi > 0 for all i = 1, . . . , n. For a more in-depth explanation of the functions Φ(·) and ψi(·) we refer to [18, Sect. 5].
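For intuition about the integral Lyapunov function, note that the linear choice Φ(ξ) = ξ (an illustrative assumption; [18] characterizes the admissible classes) reduces the first term to Goh's logarithmic function, since ∫_{S∗}^{S} (ξ − S∗)/ξ dξ = S − S∗ − S∗ ln(S/S∗). A quick numerical check:

```python
import numpy as np

S_star = 0.4  # hypothetical equilibrium value

def integral_term(S, n=200001):
    # Trapezoidal quadrature of the first integral with Phi(xi) = xi;
    # works for S on either side of S_star (the step is signed).
    xi = np.linspace(S_star, S, n)
    f = (xi - S_star) / xi
    return float(np.sum((f[:-1] + f[1:]) / 2) * (xi[1] - xi[0]))

def goh(S):
    # Closed form: Goh's logarithmic Lyapunov term.
    return S - S_star - S_star * np.log(S / S_star)

for S in (0.1, 0.7, 1.3):
    print(S, integral_term(S), goh(S))  # the two expressions agree
```

This is the sense in which the integral construction generalizes the logarithmic functions used throughout the previous sections: different choices of Φ and ψi interpolate between quadratic- and logarithmic-type terms.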
Diseases which present multiple virus strains, due to the existence of different serotypes of the virus or due to a mutation of the original disease, may need to be modelled differently. Dengue, tuberculosis and various sexually transmitted diseases are caused by more than one strain of a pathogen. Influenza type A viruses mutate constantly: an infection with one of its strains gives permanent immunity against that specific strain. However, the so-called "antigenic drift" produces new virus strains, thus the hosts only acquire partial immunity, or no immunity at all. Modelling these types of diseases requires the inclusion of cross-protective effects, in which the immunity acquired towards one strain offers partial protection towards another strain based on their antigenic similarity. In [6], the authors consider an n-strain model, both without immunity and with immunity for all the strains. Moreover, they analyze an MSIR model, in which the M compartment represents the proportion of newborns who possess temporary passive immunity due to protection from maternal antibodies. For all three models, the authors use a linear Lyapunov function to prove the global stability of the DFE and a logarithmic Lyapunov function to prove the global stability of the EE.

Other compartmental models include e.g. control strategies. For new ongoing epidemics, the most immediate strategy is including quarantine and isolation of infectious individuals. For well-known epidemics for which a vaccination is available, it is useful to incorporate a vaccinated individuals compartment V to keep track of the two possible immunities, disease- and vaccine-induced, respectively. Usually, vaccination does not confer permanent immunity, and after a certain disease-dependent period individuals become susceptible again. An example is [50], in which the authors analyze a SIRV epidemic model with non-linear incidence rate. The global stability of the DFE is proved using the infectious compartment I as a linear Lyapunov function, and the global stability of the EE using a combination of a quadratic function in S and a logarithmic function in the compartments I and V.
714
+ 3
715
+ Conclusion
716
+ In this survey, we presented the most widely used Lyapunov functions in the field of epi-
717
+ demic compartmental models. We focused on systems expressed as autonomous systems of
718
+ ODEs. These models allow for various interesting generalizations, of which we provide a
719
+ non-comprehensive list below.
720
+ One extension of the classic compartmental epidemic models is the so-called multi-group
721
+ approach, see e.g. [34, 58]. These models describe n communities, interacting with each other,
722
+ and whose internal evolution follows a standard compartmental model. A first example of
723
+ such a model is presented in [10], in which the authors consider a n groups SIS model. In
724
+ order to prove the GAS of the EE, they use a results on Metzler matrices. In [55], the authors
725
+ consider a heterogeneous SIS disease model, for which they provide Lyapunov functions both
726
+ for the DFE and for the EE. For the latter, they use a complex graph-teoretic method, for
727
+ the details of which we refer to the original paper. Global stability of EE via Lyapunov
728
+ function for multi-group generalization can be found also for the SIR [19], SIRS [48], SEIR
729
+ [17] and SAIR/SAIRS model [52]. Notice that, due to the complexity of the models, some
730
+ of them require additional technical assumptions to prove the global stability of the endemic
731
+ equilibrium.
732
+ 12
733
+
734
+ Other classes of models include interactions between human and vector population, i.e.
735
+ animals which transmit the disease to humans, or with the pathogens, such as viruses or
736
+ bacteria. In both cases, authors often include a compartmental structure for the non-human
737
+ population. Some examples of vector-host models are shown in [59, 62, 70]. Another example
738
+ can be found in [40], in which a SIR-B compartmental model is considered. Here the “B”
739
+ denotes the concentration of the pathogen in the environment.
740
+ All the models discussed thus far are described by only autonomous systems of ODEs.
741
+ However, in order to increase realism, it is possible to use non-autonomous systems to de-
742
+ scribe the spread of an infectious disease. This is the case of systems in which some param-
743
+ eters change in time [42, 56], to describe seasonal changes, or in which the state variables
744
+ depend on the previous state, i.e. the model includes a time delay [4, 67]. In these cases,
745
+ it is still possible to find Lyapunov functions to prove the global stability of the equilibria
746
+ using other techniques, described for example in [35].
747
+ Another popular option is to explicitly include delay in the system, such as in [4, 23,
748
+ 25, 26, 43, 63, 69].
749
+ In the latter the authors perform the global stability analysis of a
750
+ SEIQR model, in which Q denotes the quarantined individuals. They explicitly include a
751
+ latent period for the infection, transforming two of the ODEs in DDEs. The corresponding
752
+ Lyapunov function includes the integration over an interval whose size is precisely the latent
753
+ period.
754
+ Lastly, a widely adopted strategy is to explicitly include the “time since infection” [7,
755
+ 24, 44, 71, 72] in age-structured models. This allows to explicitly take into account time
756
+ heterogeneity in the spread of an infectious disease in a population.
757
+ These last cases we mentioned are outside of the scope of this project, and we leave them
758
+ as inspiration for future works.
Acknowledgments. The authors are grateful to the organizers of the conference 100 Years Unione Matematica Italiana - 800 Years Università di Padova, which made their scientific cooperation possible. Moreover, they acknowledge Politecnico di Milano, Polish Academy of Sciences, Inria and University of Trento for supporting their research.
References

[1] R. M. Anderson. The population dynamics of infectious diseases: Theory and applications, 2013.
[2] S. Ansumali, S. Kaushal, A. Kumar, M. K. Prakash, and M. Vidyasagar. Modelling a pandemic with asymptomatic patients, impact of lockdown and herd immunity, with applications to SARS-CoV-2. Annu. Rev. Control., 50:432–447, 2020.
[3] E. Beretta, T. Hara, W. Ma, Y. Takeuchi, et al. Global asymptotic stability of an SIR epidemic model with distributed time delay. Nonlinear Anal. Theory Methods Appl., 47(6):4107–4115, 2001.
[4] E. Beretta and Y. Takeuchi. Global stability of an SIR epidemic model with time delays. J. Math. Biol., 33(3):250–260, 1995.
[5] D. Bichara and P. Adda. Global stability for SIR and SIRS models with differential mortality. Technical report, INRIA, 2012.
[6] D. Bichara, A. Iggidr, and G. Sallet. Global analysis of multi-strains SIS, SIR and MSIR epidemic models. J. Appl. Math. Comput., 44(1):273–292, 2014.
[7] A. Chekroun, M. N. Frioui, T. Kuniya, and T. M. Touaoula. Global stability of an age-structured epidemic model with general Lyapunov functional. Math. Biosci. Eng., 16(3):1525–1553, 2019.
[8] K.-S. Cheng, S.-B. Hsu, and S.-S. Lin. Some results on global stability of a predator-prey system. J. Math. Biol., 12(1):115–126, 1982.
[9] M. P. Dafilis, F. Frascoli, J. G. Wood, and J. M. McCaw. The influence of increasing life expectancy on the dynamics of SIRS systems with immune boosting. The ANZIAM Journal, 54(1-2):50–63, 2012.
[10] A. Fall, A. Iggidr, G. Sallet, and J. J. Tewa. Epidemiological models and Lyapunov functions. Math. Model. Nat. Phenom., 2(1):62–83, 2007.
[11] P. Georgescu and H. Zhang. A Lyapunov functional for a SIRI model with nonlinear incidence of infection and relapse. Appl. Math. Comput., 219(16):8496–8507, 2013.
[12] B. S. Goh. Global stability in two species interactions. J. Math. Biol., 3(3):313–318, 1976.
[13] B. S. Goh. Global stability in many-species systems. Am. Nat., 111(977):135–143, 1977.
[14] B. S. Goh. Stability in models of mutualism. Am. Nat., 113(2):261–275, 1979.
[15] G. González-Parra and A. J. Arenas. Qualitative analysis of a mathematical model with presymptomatic individuals and two SARS-CoV-2 variants. Comput. Appl. Math., 40(6):1–25, 2021.
[16] L. Guihua and J. Zhen. Global stability of an SEI epidemic model. Chaos Solit. Fractals, 21(4):925–931, 2004.
[17] H. Guo, M. Li, and Z. Shuai. A graph-theoretic approach to the method of global Lyapunov functions. Proc. Am. Math. Soc., 136(8):2793–2802, 2008.
[18] H. Guo, M. Y. Li, and Z. Shuai. Global dynamics of a general class of multistage models for infectious diseases. SIAM J. Appl. Math., 72(1):261–279, 2012.
[19] H. Guo, M. Y. Li, and Z. Shuai. Global stability of the endemic equilibrium of multigroup SIR epidemic models. Can. Appl. Math. Q., 14(3):259–284, 2006.
[20] T. Harko, F. S. N. Lobo, and M. K. Mak. Exact analytical solutions of the Susceptible-Infected-Recovered (SIR) epidemic model and of the SIR model with equal death and birth rates. Appl. Math. Comput., 236:184–194, 2014.
[21] H. W. Hethcote. The mathematics of infectious diseases. SIAM Rev. Soc. Ind. Appl. Math., 42(4):599–653, 2000.
[22] H. W. Hethcote and P. Van den Driessche. Some epidemiological models with nonlinear incidence. J. Math. Biol., 29(3):271–287, 1991.
[23] G. Huang and A. Liu. A note on global stability for a heroin epidemic model with distributed delay. Appl. Math. Lett., 26(7):687–691, 2013.
[24] G. Huang, X. Liu, and Y. Takeuchi. Lyapunov functions and global stability for age-structured HIV infection model. SIAM J. Appl. Math., 72(1):25–38, 2012.
[25] G. Huang, Y. Takeuchi, and W. Ma. Lyapunov functionals for delay differential equations model of viral infections. SIAM J. Appl. Math., 70(7/8):2693–2708, 2010.
[26] G. Huang, Y. Takeuchi, W. Ma, and D. Wei. Global stability for delay SIR and SEIR epidemic models with nonlinear incidence rate. Bull. Math. Biol., 72(5):1192–1207, 2010.
[27] H. Jardón-Kojakhmetov, C. Kuehn, A. Pugliese, and M. Sensi. A geometric analysis of the SIR, SIRS and SIRWS epidemiological models. Nonlinear Anal. Real World Appl., 58:103220, 2021.
[28] M. J. Keeling and P. Rohani. Modeling Infectious Diseases in Humans and Animals, 2008.
[29] D. P. Kelkile. Stability Analysis and Stochastic SI Modelling of Endemic Diseases. APM, 8(5):516–534, 2018.
[30] W. O. Kermack and A. G. McKendrick. A contribution to the mathematical theory of epidemics. Proc. R. Soc. Lond. A, 115(772):700–721, 1927.
[31] H. K. Khalil. Nonlinear control, 2015.
[32] A. Korobeinikov. Lyapunov functions and global properties for SEIR and SEIS epidemic models. Math. Med. Biol., 21(2):75–83, 2004.
[33] A. Korobeinikov and G. C. Wake. Lyapunov functions and global stability for SIR, SIRS, and SIS epidemiological models. Appl. Math. Lett., 15(8):955–960, 2002.
[34] T. Kuniya. Global stability of a multi-group SVIR epidemic model. Nonlinear Anal. Real World Appl., 14(2):1135–1143, 2013.
[35] J. P. LaSalle. Stability theory and invariance principles. In Dynamical systems, pages 211–222. Elsevier, New York, 1976.
[36] G. Li and J. Zhen. Global stability of an SEI epidemic model with general contact rate. Chaos Solit. Fractals, 23(3):997–1004, 2005.
[37] J. Li, X. Xie, and Y. Chen. A new way of constructing Lyapunov functions with application to an SI epidemic model. Appl. Math. Lett., 113:106777, 2021.
[38] J. Li, Y. Yang, Y. Xiao, and S. Liu. A class of Lyapunov functions and the global stability of some epidemic models with nonlinear incidence. J. Appl. Anal. Comput., 6(1):38–46, 2016.
[39] M. Y. Li and L. Wang. Global stability in some SEIR epidemic models. In Mathematical approaches for emerging and reemerging infectious diseases: models, methods, and theory, pages 295–311. Springer, New York, 2002.
[40] S. Liao and J. Wang. Global stability analysis of epidemiological models based on Volterra–Lyapunov stable matrices. Chaos Solit. Fractals, 45(7):966–977, 2012.
[41] M. Mabotsa, J. M. W. Munganga, and A. S. Hassan. Mathematical modelling and optimal control of the transmission dynamics of enterovirus. Phys. Scr., 97(3):034002, 2022.
[42] M. Martcheva. A non-autonomous multi-strain SIS epidemic model. J. Biol. Dyn., 3(2-3):235–251, 2009.
[43] C. C. McCluskey. Complete global stability for an SIR epidemic model with delay—distributed or discrete. Nonlinear Anal. Real World Appl., 11(1):55–59, 2010.
[44] C. C. McCluskey. Global stability for an SEI epidemiological model with continuous age-structure in the exposed and infectious classes. Math. Biosci. Eng., 9(4):819, 2012.
[45] D. Y. Melesse and A. B. Gumel. Global asymptotic properties of an SEIRS model with multiple infectious stages. J. Math. Anal. Appl., 366(1):202–217, 2010.
[46] J. Mena-Lorca and H. W. Hethcote. Dynamic models of infectious diseases as regulators of population sizes. J. Math. Biol., 30(7):693–716, 1992.
[47] A. Meskaf, O. Khyar, J. Danane, and K. Allali. Global stability analysis of a two-strain epidemic model with non-monotone incidence rates. Chaos Solit. Fractals, 133:109647, 2020.
[48] Y. Muroya, Y. Enatsu, and T. Kuniya. Global stability for a multi-group SIRS epidemic model with varying population sizes. Nonlinear Anal. Real World Appl., 14(3):1693–1704, 2013.
[49] M. Ojo and F. Akinpelu. Lyapunov functions and global properties of SEIR epidemic model. Int. J. Chem. Math. Phys., 1(1):11–16, 2017.
[50] M. O. Oke, O. M. Ogunmiloro, C. T. Akinwumi, and R. A. Raji. Mathematical modeling and stability analysis of a SIRV epidemic model with non-linear force of infection and treatment. Commun. Math. Appl., 10(4):717–731, 2019.
[51] S. M. O'Regan, T. C. Kelly, A. Korobeinikov, M. J. A. O'Callaghan, and A. V. Pokrovskii. Lyapunov functions for SIR and SIRS epidemic models. Appl. Math. Lett., 23(4):446–448, 2010.
[52] S. Ottaviano, M. Sensi, and S. Sottile. Global stability of multi-group SAIRS epidemic models. Preprint available on arXiv: https://arxiv.org/abs/2202.02993, 2022.
[53] S. Ottaviano, M. Sensi, and S. Sottile. Global stability of SAIRS epidemic models. Nonlinear Anal. Real World Appl., 65:103501, 2022.
[54] S. Ruan and W. Wang. Dynamical behavior of an epidemic model with a nonlinear incidence rate. J. Differ. Equ., 188(1):135–163, 2003.
[55] Z. Shuai and P. van den Driessche. Global stability of infectious disease models using Lyapunov functions. SIAM J. Appl. Math., 73(4):1513–1532, 2013.
[56] S. Sottile and X. Liu. Time-varying epidemic transmission in heterogeneous networks and applications to measles. J. Biol. Syst., 28(04):901–926, 2020.
[57] D. Stiefs, E. Venturino, and U. Feudel. Evidence of chaos in eco-epidemic models. Mathematical Biosciences & Engineering, 6(4):855, 2009.
[58] R. Sun and J. Shi. Global stability of multigroup epidemic model with group mixing and nonlinear incidence rates. Appl. Math. Comput., 218(2):280–286, 2011.
[59] S. Syafruddin and M. S. Md. Noorani. Lyapunov function of SIR and SEIR model for transmission of dengue fever disease. Int. J. Simul. Process. Model., 8(2/3):177–184, 2013.
[60] M. A. Taneco-Hernández and C. Vargas-De-León. Stability and Lyapunov functions for systems with Atangana–Baleanu Caputo derivative: an HIV/AIDS epidemic model. Chaos Solit. Fractals, 132:109586, 2020.
[61] Q. Tang, Z. Teng, and X. Abdurahman. A new Lyapunov function for SIRS epidemic models. Bull. Malays. Math. Sci., 40(1):237–258, 2017.
[62] J. J. Tewa, J. L. Dimi, and S. Bowong. Lyapunov functions for a dengue disease transmission model. Chaos Solit. Fractals, 39(2):936–941, 2009.
[63] L. Tiantian and X. Yakui. Global stability analysis of a delayed SEIQR epidemic model with quarantine and latent. Appl. Math., 2013, 2013.
[64] P. Van den Driessche, M. Li, and J. Muldowney. Global stability of SEIRS models in epidemiology. Can. Appl. Math. Q., 7:409–425, 1999.
[65] C. Vargas-De-León. Constructions of Lyapunov functions for classic SIS, SIR and SIRS epidemic models with variable population size. Foro-Red-Mat, 26:1–12, 2009.
[66] C. Vargas-De-León. On the global stability of SIS, SIR and SIRS epidemic models with standard incidence. Chaos Solit. Fractals, 44(12):1106–1110, 2011.
[67] W. Wang. Global behavior of an SEIRS epidemic model with time delays. Appl. Math. Lett., 15(4):423–428, 2002.
[68] W. Wang. Epidemic models with nonlinear infection forces. Mathematical Biosciences & Engineering, 3(1):267, 2006.
[69] R. Xu and Z. Ma. Global stability of a SIR epidemic model with nonlinear incidence rate and time delay. Nonlinear Anal. Real World Appl., 10(5):3175–3189, 2009.
[70] H. Yang, H. Wei, and X. Li. Global stability of an epidemic model for vector-borne disease. J. Syst. Sci. Complex., 23(2):279–292, 2010.
[71] J. Yang, Z. Qiu, and X.-Z. Li. Global stability of an age-structured cholera model. Math. Biosci. Eng., 11(3):641, 2014.
[72] Y. Yang, S. Ruan, and D. Xiao. Global stability of an age-structured virus dynamics model with Beddington-DeAngelis infection function. Math. Biosci. Eng., 12(4):859, 2015.
29FKT4oBgHgl3EQf8C4w/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
2tFAT4oBgHgl3EQfDhzt/content/tmp_files/2301.08417v1.pdf.txt ADDED
1
+ Suppression of laser beam’s polarization and intensity fluctuation via a Mach-Zehnder interferometer with proper feedback
2
+ Suppression of laser beam’s polarization and intensity fluctuation via a
3
+ Mach-Zehnder interferometer with proper feedback
4
+ Xiaokai Hou,1 Shuo Liu,1, a) Xin Wang,1 Feifei Lu,1 Jun He,1, 2 and Junmin Wang1, 2, b)
5
+ 1)State Key Laboratory of Quantum Optics and Quantum Optics Devices, and Institute of Opto-Electronics, Shanxi University,
6
+ Tai Yuan 030006, Shanxi Province, China
7
+ 2)Collaborative Innovation Center of Extreme Optics, Shanxi University, Tai Yuan 030006, Shanxi Province,
8
+ China
9
+ (Dated: 23 January 2023)
10
+ Long ground-Rydberg coherence lifetime is interesting for implementing high-fidelity quantum logic gates, many-body
11
+ physics, and other quantum information protocols. But, the potential well formed by a conventional far-off-resonance
12
+ red-detuned optical-dipole trap that is attractive for ground-state cold atoms is usually repulsive for Rydberg atoms,
13
+ which will result in the rapid loss of atoms and low repetition rate of the experimental sequence. Moreover, the coher-
14
+ ence time will be sharply shortened due to the residual thermal motion of cold atoms. These issues can be addressed
15
+ by an one-dimensional magic lattice trap and it can form a deeper potential trap than the traveling wave optical dipole
16
+ trap when the output power is limited. And these common techniques for atomic confinement generally have certain
17
+ requirement on the polarization and intensity stability of the laser. Here, we demonstrated a method to suppress both
18
+ the polarization drift and power fluctuation only based on the phase management of the Mach-Zehnder interferometer
19
+ for one-dimensional magic lattice trap. With the combination of three wave plates and the interferometer, we used the
20
+ instrument to collect data in the time domain, analyzed the fluctuation of laser intensity, and calculated the noise power
21
+ spectral density. We found that the total intensity fluctuation composed of laser power fluctuation and polarization drift
22
+ was significantly suppressed, and the noise power spectral density after closed-loop locking with typical bandwidth
23
+ 1-3000 Hz was significantly lower than that under the free running of the laser system. Typically, at 1000 Hz, the
24
+ noise power spectral density after locking was about 10 dB lower than that when A Master Oscillator Power Amplifier
25
+ (MOPA) system free running. The intensity-polarization control technique provides potential applications for atomic
26
+ confinement protocols that demand for fixed polarization and intensity
I. INTRODUCTION

For various atomic manipulation experiments, such as single-photon sources1−5, quantum dynamics based on Rydberg states6−10, and electric field detection based on atoms11−13, an optical dipole trap (ODT) providing strong confinement of atoms is usually employed. In these applications, a high-power laser with fixed polarization and relatively stable intensity is normally used to confine atoms. Common experimental setups for laser power stabilization are based on an active feedback loop that uses an acousto-optic modulator (AOM)14−17 or an electro-optic modulator (EOM)18 as the actuator. In 2020, an AOM and an EOM were combined by Ni et al.19 to broaden the bandwidth of laser intensity noise stabilization to 1 MHz. At present, the feedback loop based on an AOM has some disadvantages. For example, Bragg diffraction in the AOM seriously degrades the spot quality of the first-order diffracted light, and the power utilization of the system is limited by the diffraction efficiency of the AOM. The common electro-optic intensity modulator (EOIM) with input and output fiber pigtails is efficient, but not suitable for high-power applications. Moreover, while the schemes mentioned above can observably suppress the power fluctuation of a laser beam, an effective reduction of the drift of the laser's polarization has still not been achieved. Here we demonstrate an experimental scheme based on the Mach-Zehnder interferometer (MZI) for actively suppressing both the power and polarization fluctuations of a laser beam. With a properly manipulated phase difference between the two paths, the output fraction of the MZI accounts for the majority of the laser power, while its intensity fluctuation in the time domain is reduced by dozens of times compared with the free-running case, and the noise power spectral density (NPSD) is decreased over the range of 1-3000 Hz in the frequency domain. Such a stable system can certainly meet the needs of various applications, such as experiments where a long lifetime of cold atoms is highly desirable.

a) Present address: Key Laboratory of Laser & Infrared System of the Ministry of Education, Shandong University, Qingdao 266000, Shandong Province, China.
b) Corresponding author. E-mail: [email protected] ORCID: 0000-0001-8055-000X
II. THEORETICAL BACKGROUND

1. Magic optical dipole trap for the cesium 6S1/2 ground state and the 84P3/2 Rydberg state

Recently, a new experimental scheme using an interferometer as the actuator of the feedback loop has been proposed20. Considering the light intensity requirement of the ODT, the MZI can satisfy the power requirement of the ODT without affecting the spot quality of the output light; the experimental setups for constructing blue-detuned optical traps reported by Yelin et al.21 and Isenhower et al.22 are therefore both built around an MZI. The intensity of the output laser depends mainly on the phase difference between the two arms of the MZI, so the MZI can be used as a power stabilizer in some experiments23. Owing to the particular form of the MZI output, the combination of an interferometer and a polarizer can realize fixed polarization, a high output fraction, and high intensity stability, which is obviously useful for optical-trap experiments. The trap potential U of an ODT can be expressed as:

U = −(α / (2 ε0 c)) · (2P / (π ω0²))    (1)

where α is the induced polarizability of the target state, ε0 is the permittivity of vacuum, c is the speed of light, P is the laser power, and ω0 is the radius of the spot at the focal point after the laser is focused by a lens, so that 2P/(πω0²) is the peak intensity of the Gaussian beam. As shown in Eq. (1), if the power of the 1879 nm laser fluctuates, the resulting trap depth changes. Thus, the lifetime of the trapped atom will be severely affected by the presence of the heating mechanism5,17,24.
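Equation (1) can be evaluated numerically. In the sketch below, the polarizability value is an illustrative placeholder (not a value from this paper), while the 1.5 W power and 20 µm waist follow the beam parameters quoted later in Table I:

```python
import math

# Physical constants (SI units)
EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m
C = 299792458.0               # speed of light, m/s
K_B = 1.380649e-23            # Boltzmann constant, J/K

def trap_depth_uK(alpha_si, power_w, waist_m):
    """Trap depth U = -(alpha/(2*eps0*c)) * (2P/(pi*w0^2)) from Eq. (1),
    returned in microkelvin (U / k_B, scaled by 1e6)."""
    peak_intensity = 2.0 * power_w / (math.pi * waist_m ** 2)  # Gaussian peak intensity
    u_joule = -alpha_si / (2.0 * EPSILON_0 * C) * peak_intensity
    return u_joule / K_B * 1e6

# Illustrative polarizability of 1e-39 C*m^2/V; 1.5 W focused to a 20 um waist.
print(trap_depth_uK(1.0e-39, 1.5, 20e-6))
```

The trap depth scales linearly with power, which is why the power fluctuations quantified later translate directly into trap-depth fluctuations.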
FIG. 1. Diagram of the light shift induced by the ODT and MODT. The intensity profile of a tightly focused laser is still Gaussian: the closer to the center of the beam, the stronger the intensity. The resulting trap depth or light shift is therefore spatially dependent. (a) The ODT is attractive for ground states, but usually repulsive for highly-excited Rydberg states, because almost all strong dipole transitions connecting the Rydberg state to lower states have longer wavelengths than the ODT laser. (b) The direct single-photon excitation scheme from cesium |g⟩=|6S1/2⟩ to |r⟩=|84P3/2⟩ coupled by a 319 nm ultraviolet laser. A 1879.43 nm laser is also tuned to the blue side of the |r⟩ ⟷ |a⟩=|7D5/2⟩ auxiliary transition to equalize the trapping potential depths of the |g⟩ and |r⟩ states, the so-called magic ODT (MODT).
In most cold-atom experiments involving confinement of ground-state atoms in an ODT and Rydberg excitation, the cold atomic sample is prepared in an ODT to hold the atoms in a fixed position for a significantly long time. The potential formed by a conventional far-off-resonance red-detuned ODT is attractive for ground-state atoms, but usually repulsive for highly-excited Rydberg atoms, so Rydberg atoms normally cannot be confined in the conventional ODT (Fig. 1(a)). Therefore, in the follow-up experiments we face the following two problems: (1) if the ODT is switched off during Rydberg excitation and coherent manipulation, atomic dephasing results from the thermal diffusion of the atoms, and the repetition rate of the experimental sequence becomes extremely low; (2) if the ODT remains on, the Rydberg excitation efficiency of the atoms may be low, as the transition frequency seen by the excitation laser is spatially position-dependent. The solution is to find an ODT such that the ground-state atoms and the desired highly-excited Rydberg atoms experience the same potential, that is, the potential generated by the ODT is a potential well for atoms in both states and is attractive to both. The above-mentioned problems (1) and (2) are then both solved. Fig. 1(b) shows the direct single-photon excitation scheme from cesium |g⟩=|6S1/2⟩ to |r⟩=|84P3/2⟩ coupled by a 319 nm ultraviolet laser. A 1879.43 nm laser is also tuned to the blue side of the |r⟩ ⟷ |a⟩=|7D5/2⟩ auxiliary transition to equalize the trapping potential depths of the |g⟩ and |r⟩ states. The specific calculation process is not described here; for details, please refer to references 25,26.
2. Theoretical analysis of the MZI

It is obvious that the MODT alone is not enough to meet the need for extremely long coherence times in subsequent experiments. The cold atoms trapped in the MODT still have residual thermal motion, which causes violent collisions that heat the atoms and make them escape from the trap. We will therefore further construct a one-dimensional magic lattice trap (1D-MLT), combining the advantages of a lattice and the magic condition, so as to prolong the ground-Rydberg coherence time of the cold atoms. Of course, the power fluctuation of the 1D-MLT also needs to be suppressed, because fluctuations of the 1D-MLT laser power in the time domain directly shorten the coherence lifetime of the cold atoms. Therefore, we use the MZI to suppress the power fluctuation.

As shown in Fig. 2(a), Iout1 and Iout2 are the intensities of the two output paths of the interferometer, respectively; R1, T1, R2, T2 are the reflectivities and transmittances of the input and output beam splitter plates, respectively. The two output channels of the interferometer can be expressed as Eqs. (2) and (3):

Iout1 = R1²R2² + T1²T2² + 2 R1 R2 T1 T2 cos(2π∆L/λ + π)    (2)

Iout2 = R1²R2² + T1²T2² + 2 R1 R2 T1 T2 cos(2π∆L/λ)    (3)

Therefore, the laser intensity output of the interferometer can be controlled by adjusting the driving voltage of the piezoelectric transducer (PZT), owing to the dependence of the output transmittance I on the optical path difference ∆L. In Fig. 2(b), the interference fringes generated by splitters with different splitting ratios are simulated and analyzed with Mathematica. The splitting ratio shown in the first row of Fig. 2(b) is 90/10, the second 70/30, the third 60/40, and the last 50/50.
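The fringe simulation behind Fig. 2(b) (done in Mathematica in the paper) can be sketched in Python. The phase 2π∆L/λ is the standard path-difference phase, and the splitting ratios below are those listed above:

```python
import math

def mzi_outputs(delta_l, wavelength, R1, T1, R2, T2):
    """Output intensities of the two MZI ports per Eqs. (2)-(3):
    I = (R1*R2)^2 + (T1*T2)^2 + 2*R1*R2*T1*T2*cos(2*pi*dL/lambda [+ pi])."""
    phase = 2.0 * math.pi * delta_l / wavelength
    dc = (R1 * R2) ** 2 + (T1 * T2) ** 2
    cross = 2.0 * R1 * R2 * T1 * T2
    i_out1 = dc + cross * math.cos(phase + math.pi)  # dark port at dL = 0
    i_out2 = dc + cross * math.cos(phase)            # bright port at dL = 0
    return i_out1, i_out2

def fringe_contrast(R, T, wavelength=1.879e-6, samples=1000):
    """(Imax - Imin)/(Imax + Imin) of one port while scanning dL over one wavelength,
    assuming both splitters are identical (R1 = R2 = R, T1 = T2 = T)."""
    vals = [mzi_outputs(i * wavelength / samples, wavelength, R, T, R, T)[1]
            for i in range(samples + 1)]
    return (max(vals) - min(vals)) / (max(vals) + min(vals))

# 50/50 splitters give unity contrast; a 90/10 ratio gives very poor contrast.
print(round(fringe_contrast(0.5, 0.5), 3), round(fringe_contrast(0.9, 0.1), 3))
```

This reproduces the qualitative trend of Fig. 2(b): the closer the splitting ratio is to 50/50, the deeper the fringes.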
III. EXPERIMENTAL SETUP

The laser intensity stabilization setup is shown in Fig. 3. A MOPA system consists of a 1879-nm butterfly-packaged laser diode and a thulium-doped fiber amplifier (TmDFA) with a maximum output of ∼3 W. With a free-space polarization
FIG. 2. Diagram of the MZI, and theoretically simulated interference fringes of its two channels. (a) The MZI consists of two beam splitter plates (BS1 and BS2) and two high-reflectivity mirrors (M1 and M2). Iin is the intensity of the incident light field; Iout1 and Iout2 are the intensities of the outgoing light fields at BS2. (b) Normalized signal as a function of the optical path difference ∆L for different splitting ratios. This ratio is both R1/T1 and R2/T2, because the BS1 and BS2 used in the MZI are identical. The solid red and black lines represent the interference fringes of the two output channels of the MZI, respectively.
controller based on three wave plates (λ/4, λ/2, and λ/4), the polarization fluctuation of the 1879 nm beam is suppressed initially. The laser is injected into a MZI constructed from a 50/50 beam splitter plate (BS1) that divides the incident light into two beams with equal intensity and different phase, a high-reflectivity mirror (M1) that reflects one beam, a mirror (M2) attached to a PZT that reflects the other, and a beam splitter plate (BS2) at which the two beams are finally combined. The interferometer has two output channels; one channel can be used for dynamic feedback to make the system more stable, and the output of the other channel can then be used for subsequent experiments. The photodetector (PD1) is mounted behind a glass slice (GS1) that samples a small fraction of the 1879 nm light for in-loop feedback. The DC voltage signal output by PD1 is injected into a proportional-integral-differential (PID) amplifier after passing through a low-pass filter (LPF). The input signal of the PID controller is subtracted from the PID set point, an artificially set reference DC voltage. The output signal of the PID, that is, the real-time difference between the detector signal and the reference DC voltage, is added to the scanning signal (a triangular wave) and amplified by the high-voltage (HV) amplifier to form the driving voltage of the PZT. The output power of the interferometer can therefore be controlled by manipulating the driving voltage of the PZT, and we expect both the power and polarization fluctuations of the 1879 nm laser to be suppressed. Another photodetector (PD2) is mounted in order to independently monitor the intensity stability of the output linearly polarized laser. The output signal of PD2 is then injected into a data acquisition system (Keithley DAQ-6510) in order to analyze and monitor the intensity fluctuation of the laser in the time domain and to calculate the NPSD from the measured optical power fluctuation data. The small fraction of the far-infrared laser reflected by glass slice (GS2) is received by PD2, while the majority of the laser is transmitted and focused into a cesium magneto-optical trap (Cs-MOT) for the construction of the ODT.
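The servo loop described above (PD1, PID, HV amplifier, PZT) can be illustrated with a toy discrete-time simulation. The gains, drift level, and setpoint are illustrative assumptions; a proportional-only servo is used here for simplicity, whereas the experiment uses a full PID amplifier:

```python
import math
import random

def mzi_transmission(phase):
    """Bright-port transmission of an ideal 50/50 MZI vs. arm phase difference."""
    return 0.5 * (1.0 + math.cos(phase))

def run_loop(setpoint=0.9, kp=0.5, steps=2000, seed=1):
    """Proportional servo driving the PZT phase so the bright port holds
    `setpoint`, while a slow random phase drift perturbs the interferometer."""
    rng = random.Random(seed)
    drift, correction = 0.0, 0.0
    out = mzi_transmission(0.0)
    for _ in range(steps):
        drift += rng.gauss(0.0, 0.002)            # slow environmental phase drift
        out = mzi_transmission(drift + correction)
        err = setpoint - out
        # Near the operating point, increasing phase lowers transmission,
        # so the correction moves opposite to the error on the cosine slope.
        correction -= kp * err
    return out

print(run_loop())  # settles close to the 0.9 setpoint despite the drift
```

The same sign logic applies in the experiment: the PID output steers the PZT so the in-loop detector voltage tracks the set point on one slope of the fringe.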
FIG. 3. Experimental setup of the intensity stabilization system. The dynamic stabilization of the laser intensity of the 1879 nm MOPA system is realized by the MZI, and the fluctuation of the laser intensity is monitored and analyzed in the time and frequency domains. λ/2: half-wave plate; λ/4: quarter-wave plate; PBS: polarization beam splitter cube; BS: beam splitter plate; GS: glass slice; M1/M2: high-reflectivity mirror; PD: photodetector; LPF: low-pass filter; PID: proportional-integral-differential amplifier; HVA: high-voltage amplifier.
IV. EXPERIMENTAL RESULTS AND DISCUSSION

Fig. 4 shows the interference fringes obtained by scanning triangular waves with a 50/50 beam-splitting ratio in the experiment, in which the interference contrast is 95%. In theoretical simulation, an interference fringe with a contrast of 99.9% can be obtained by using a 50/50 beam splitter plate; the best interference contrast is not achieved in the experiment, probably for the following two reasons: first, the spatial modes of the two beams are not exactly the same; second, the polarizations of the two beams may be slightly different.

Considering the requirement of constructing a dipole trap with this laser source, the polarization of the 1879 nm laser should be fixed, so a PBS is usually inserted in the light path to fix the polarization. Even though this scheme is effective, it has an inevitable defect: the polarization fluctuation of the light couples into an intensity fluctuation through this polarizing element. Measuring the intensity of the 1879 nm laser after a PBS shows that, although the power fluctuation of the 1879 nm TmDFA itself is not obvious, the intensity fluctuation behind the PBS becomes obvious; the result is shown in Fig. 5(a). We monitored the laser intensity for about 30 minutes in the time domain, observing a large fluctuation of about ±14.2%. The huge intensity fluctuation
290
+
291
+ 90/10
292
+ 70/30
293
+ 60/40
294
+ 50/50LSuppression of laser beam’s polarization and intensity fluctuation via a Mach-Zehnder interferometer with proper feedback
295
+ 4
FIG. 4. Interference fringe of the MZI. In the experiment, a 50/50 beam splitter plate is used, and the PZT is driven by a scanning triangular wave, so that the phase difference between the two arms is swept and the interference fringes are generated.
will significantly affect the power utilization of the stabilized system. To maximize the power utilization, three wave plates are first used to suppress this fluctuation. After proper adjustment, the measured laser intensity fluctuation after the PBS is shown in Fig. 5(b): the fluctuation due to laser polarization has been reduced significantly. The initially stabilized laser is then injected into the combined system of the MZI and another PBS; here the transmittance of the interferometer is locked at up to 90% in order to improve the power utilization. The intensity fluctuation probed by the out-of-loop detector PD2 is shown in Fig. 6: the intensity fluctuation of the output linearly polarized laser is reduced to ±0.3%, much better than the fluctuation of the direct TmDFA-PBS output. At this stability, fluctuations of laser power and polarization no longer have a significant influence on the parameters of the dipole trap.

As shown in Table I, for the 1879 nm 1D-MLT, if the laser is focused through a lens to a waist of ∼20 µm and the incident laser power at the cold atoms is about 1.5 W, the maximum depth of the 1D-MLT is −1000 µK and the typical trap-depth fluctuation is ±140 µK. When the laser power decreases after the initial suppression by the wave plate group or after the closed-loop locking of the MZI, the corresponding typical trap depths are about −800 µK and −700 µK, respectively. The effective temperature of the cold atoms transferred from the MOT to the 1D-MLT will be slightly higher, about 100 µK, but the decrease of trap depth caused by the suppression of power fluctuation will not affect the capture of the cold atoms. However, the residual fluctuation of laser power still exists, which leads to typical trap-depth fluctuations of ±45 µK and ±2 µK, respectively.
The collected time-domain voltage signals are used to calculate the NPSD. As shown in Fig. 7, the horizontal range is determined by the sampling rate. In the experiment, we selected a sampling rate of 10000 Hz according to the actual situation, so the horizontal axis in Fig. 7 ranges from 1 to 5000 Hz. In addition, we believe that the feedback bandwidth of the system should be at the kilohertz level due to the limitation of the PZT in the MZI. Therefore, the sampling rate can fully meet the requirement of representing the feedback bandwidth of the system.
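One common way to compute an NPSD from sampled detector voltages is Welch's method. In this sketch the 10 kHz sampling rate matches the text, while the synthetic 30 s voltage record (DC level, a 1 kHz ripple, white noise) is purely illustrative:

```python
import numpy as np
from scipy.signal import welch

fs = 10_000                         # sampling rate in Hz, as in the experiment
t = np.arange(0, 30.0, 1.0 / fs)    # 30 s record (illustrative length)

# Synthetic detector voltage: 2 V DC plus a 1 kHz ripple and white noise.
rng = np.random.default_rng(0)
v = 2.0 + 1e-3 * np.sin(2 * np.pi * 1000 * t) + 1e-4 * rng.standard_normal(t.size)

# One-sided power spectral density; its square root is the amplitude spectral
# density often quoted as NPSD (in W/sqrt(Hz) after converting volts to watts).
f, psd = welch(v, fs=fs, nperseg=4096)
npsd = np.sqrt(psd)

# With fs = 10 kHz the spectrum extends to the 5 kHz Nyquist frequency.
print(round(f.max()))  # → 5000
```

Comparing such spectra for the free-running and locked records gives curves of the kind shown in Fig. 7.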
The NPSD after closed-loop locking is, over 1-3000 Hz, significantly lower than that of the free-running MOPA system, which proves that the MZI plays an obvious role in stabilizing the power of the system. In order to further broaden the feedback bandwidth and improve the suppression, we assume that the arm length of the MZI is L and the angular frequency of the laser is ω0; the distance the laser travels through the MZI is then L, and the phase shift generated is27

Φ0(t) = ω0 t = ω0 L/c    (4)

Φ0 is a constant whose magnitude is proportional to L. When the PZT is scanned, we introduce h(t) to characterize small changes in phase. For simplicity, we assume that a sine wave of amplitude h0 and angular frequency ωs is used to scan the PZT, so the sine wave can be expressed as

h(t) = h0 cos(ωs t)    (5)

So the phase shift of the entire system can be written as

Φ = Φ0(t) + δφ
  = ω0 L/c + (ω0/2) ∫ from t−L/c to t of h0 cos(ωs t′) dt′
  = ω0 L/c + (h0/2)(ω0/ωs) {sin(ωs t) − sin[ωs (t − L/c)]}
  = ω0 L/c + h0 (ω0/ωs) sin(ωs L/(2c)) cos[ωs (t − L/(2c))]    (6)

Because ωs L/(2c) ≪ 1,

h0 (ω0/ωs) sin(ωs L/(2c)) ≈ (h0 ω0/2)(L/c)    (7)

δφ ≈ (h0 ω0/2)(L/c)    (8)

As shown in Eq. (8), if the arm length L of the MZI is increased, δφ of the system can be increased. Thus, the detection sensitivity of the system can be improved, and the MZI detects phase more effectively. Increasing the arm length of the MZI will, on the other hand, cause extra noise due to the insufficient stability of the system.
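A quick numerical check of Eqs. (6)-(8), with illustrative modulation parameters, confirms that the exact expression reduces to δφ ≈ h0ω0L/(2c) and that the sensitivity grows linearly with arm length L:

```python
import math

C = 299792458.0  # speed of light, m/s

def dphi_exact(h0, omega0, omega_s, L):
    """Exact small-phase amplitude from Eq. (6): h0*(w0/ws)*sin(ws*L/(2c))."""
    return h0 * (omega0 / omega_s) * math.sin(omega_s * L / (2.0 * C))

def dphi_approx(h0, omega0, L):
    """Small-argument limit, Eq. (8): h0*w0*L/(2c)."""
    return h0 * omega0 * L / (2.0 * C)

# Illustrative numbers: 1879 nm laser, ~1 kHz PZT modulation, 0.5 m arm.
omega0 = 2 * math.pi * C / 1879e-9   # laser angular frequency
omega_s = 2 * math.pi * 1e3
exact = dphi_exact(1e-9, omega0, omega_s, 0.5)
approx = dphi_approx(1e-9, omega0, 0.5)
print(abs(exact - approx) / approx)  # tiny relative error, since ws*L/(2c) << 1
```

Doubling L doubles δφ in the approximation, which is the motivation for folding the arms with cavities below.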
However, such noise can be mitigated through an isolation platform and temperature control of the system. We can also add a Fabry-Perot (F-P) cavity to each of the two arms of the MZI; the F-P cavity folds the optical path, greatly increasing the distance light travels in the MZI without occupying a large area.
FIG. 5. (a) The power fluctuation of the free-running 1879 nm MOPA system. Over 30 min of measurement, the power fluctuation is roughly ±14.2%. The inset zooms the vertical axis to 2.00-3.00 W and shows the intensity fluctuation over the 30 min. (b) The power fluctuation after the three wave plates, by which the polarization fluctuation is initially suppressed. Similarly, over 30 min of measurement, the intensity fluctuation is approximately ±5.7%. The vertical axis of the inset spans 1.75-2.25 W; the horizontal axis still spans 0-30 min.

Category                P_ODT (mW)   ∆P (mW)            Gaussian radius after focusing (µm)   U_dip (µK)   ∆U_dip (µK)
MOPA free running       1500         ±213.0 (±14.2%)    20                                    −1000        ±140
With wave plate group   1200         ±68.4 (±5.7%)      20                                    −800         ±45
After MZI is locked     1100         ±3.3 (±0.3%)       20                                    −700         ±2

TABLE I. The typical maximum trap depth and its fluctuation for the 1879.43 nm 1D-MLT for cesium atoms, calculated under different power fluctuations.
FIG. 6. The intensity fluctuation of the 1879 nm laser on the bright fringe of the MZI. With in-loop locking, the phase difference between the two arms is dynamically compensated and the power fluctuation is significantly suppressed; over 30 min of measurement, the power fluctuation is roughly ±0.3%. The vertical axis of the inset is enlarged to a range of 1.85-1.88 W and shows the 30-min measurement.

FIG. 7. Intensity noise of the 1879 nm laser as a function of analysis frequency. (a) The solid black line represents the NPSD when the 1879 nm laser system runs freely without passing through the wave plate group. (b) The solid blue line represents the NPSD of the 1879 nm laser system after closed-loop locking by the MZI.
V. CONCLUSIONS

In summary, we have demonstrated the reduction of the power and polarization fluctuations of a 1879 nm laser based on the cooperation of three wave plates and a MZI. The intensity fluctuation of ∼±14.2% after the combination of the MOPA system and a PBS is reduced to ∼±0.3% with the MZI locked. After the MZI is locked, the NPSD is lower than under free running over the range of 1-3000 Hz. Typically, at 1000 Hz, the NPSD after the MZI is locked is about 10 dB lower than that of the free-running MOPA system. The system can not only withstand a high-power injected laser, but also stabilize both the power and the polarization fluctuation without affecting the beam quality of the low-loss output light. The laser power utilization efficiency can be further improved by raising the transmittance of the locked interferometer or the interference visibility.

It is expected that Rydberg atoms can have a long coherence lifetime in subsequent experiments involving Rydberg-dressed ground states. On the one hand, we can use the 1879-nm MOPA system to implement a 1D-MLT, which can both eliminate the position-dependent light shift, allowing Rydberg-state atoms to be captured in optical tweezers like ground-state atoms, and attenuate the collisions between cold atoms caused by residual thermal motion, prolonging the coherence time of the Rydberg atoms. On the other hand, we propose an upgraded interferometer: by adding an F-P cavity to each arm of the interferometer and using the reflections of the beam in the cavity, the arm length can be extended at least dozens of times, improving the phase measurement sensitivity of the interferometer and the power stability.
FUNDING

This research was financially funded by the National Key R&D Program of China (2021YFA1402002) and the National Natural Science Foundation of China (11974226 and 61875111).
REFERENCES

1. M. Endres, H. Bernien, A. Keesling, H. Levine, E. R. Anschuetz, A. Krajenbrink, C. Senko, V. Vuletic, M. Greiner, and M. D. Lukin. Atom-by-atom assembly of defect-free one-dimensional cold atom arrays. Science, 354, 1024 (2016).
2. H. Kim, W. Lee, H.-G. Lee, H. Jo, Y. H. Song, and J. Ahn. In situ single-atom array synthesis using dynamic holographic optical tweezers. Nature Commun., 7, 1-8 (2016).
3. B. Darquié, M. P. A. Jones, J. Dingjan, J. Beugnon, S. Bergamini, Y. Sortais, G. Messin, A. Browaeys, and P. Grangier. Controlled single-photon emission from a single trapped two-level atom. Science, 309, 454-456 (2005).
4. V. Leong, S. Kosen, B. Srivathsan, G. K. Gulati, A. Cere, and C. Kurtsiefer. Hong-Ou-Mandel interference between triggered and heralded single photons from separate atomic systems. Phys. Rev. A, 91, 063829 (2015).
5. B. Liu, G. Jin, J. He, and J. M. Wang. Suppression of single-cesium-atom heating in a microscopic optical dipole trap for demonstration of an 852-nm triggered single-photon source. Phys. Rev. A, 94, 013409 (2016).
6. Y. O. Dudin and A. Kuzmich. Strongly interacting Rydberg excitations of a cold atomic gas. Science, 336, 887-889 (2012).
7. Y.-Y. Jau, A. M. Hankin, T. Keating, I. H. Deutsch, and G. W. Biedermann. Entangling atomic spins with a Rydberg-dressed spin-flip blockade. Nature Phys., 12, 71-74 (2016).
8. E. Urban, T. A. Johnson, T. Henage, L. Isenhower, D. D. Yavuz, T. G. Walker, and M. Saffman. Observation of Rydberg blockade between two atoms. Nature Phys., 5, 110-114 (2009).
9. B. Zhao, M. Müller, K. Hammerer, and P. Zoller. Efficient quantum repeater based on deterministic Rydberg gates. Phys. Rev. A, 81, 052329 (2010).
10. A. D. Bounds, N. C. Jackson, R. K. Hanley, R. Faoro, E. M. Bridge, P. Huillery, and M. P. A. Jones. Rydberg-dressed magneto-optical trap. Phys. Rev. Lett., 120, 183401 (2018).
11. J. D. Carter, O. Cherry, and J. D. D. Martin. Electric-field sensing near the surface microstructure of an atom chip using cold Rydberg atoms. Phys. Rev. A, 86, 053401 (2012).
12. L. A. Jones, J. D. Carter, and J. D. D. Martin. Rydberg atoms with a reduced sensitivity to dc and low-frequency electric fields. Phys. Rev. A, 87, 71-74 (2013).
13. J. D. Bai, S. Liu, J. Y. Wang, J. He, and J. M. Wang. Single-photon Rydberg excitation and trap-loss spectroscopy of cold cesium atoms in a magneto-optical trap by using a 319-nm ultraviolet laser system. IEEE J. Sel. Top. Quant. Electron., 26, 1600106 (2020).
14. J. Junker, P. Oppermann, and B. Willke. Shot-noise-limited laser power stabilization for the AEI 10 m Prototype interferometer. Opt. Lett., 42, 755 (2017).
15. F. Seifert, P. Kwee, M. Heurs, B. Willke, and K. Danzmann. Laser power stabilization for second-generation gravitational wave detectors. Opt. Lett., 31, 2000-2002 (2006).
16. J. J. Du, W. F. Li, G. Li, J. M. Wang, and T. C. Zhang. Intensity noise suppression of light field by optoelectronic feedback. Optik, 124, 3443-3445 (2013).
17. R. Sun, X. Wang, K. Zhang, J. He, and J. M. Wang. Influence of laser intensity fluctuation on single-cesium atom trapping lifetime in a 1064-nm microscopic optical tweezer. Appl. Sci., 10, 659 (2020).
18. P. Kwee, B. Willke, and K. Danzmann. Shot-noise-limited laser power stabilization with a high-power photodiode array. Opt. Lett., 34, 2912-2914 (2009).
19. Y. Wang, K. Wang, E. F. Fenton, Y. W. Lin, K.-K. Ni, and J. D. Hood. Reduction of laser intensity noise over 1 MHz band for single atom trapping. Opt. Express, 28, 31209 (2020).
20. S. Inoue and Y. Yamamoto. Longitudinal-mode-partition noise in a semiconductor-laser-based interferometer. Opt. Lett., 22, 328-330 (1997).
21. D. Yelin, B. E. Bouma, and G. J. Tearney. Generating an adjustable three-dimensional dark focus. Opt. Lett., 29, 661-663 (2004).
22. L. Isenhower, W. Williams, A. Dally, and M. Saffman. Atom trapping in an interferometrically generated bottle beam trap. Opt. Lett., 34, 1159-1161 (2009).
23. Y. H. Gao, Y. J. Li, J. X. Feng, and K. S. Zhang. Stable continuous-wave single-frequency intracavity frequency-doubled laser with intensity noise suppressed in audio frequency region. Chinese Phys. B, 28, 094204 (2019).
24. T. A. Savard, K. M. O'Hara, and J. E. Thomas. Laser-noise-induced heating in far-off resonance optical traps. Phys. Rev. A, 56, R1095 (1997).
25. J. D. Bai, S. Liu, J. He, and J. M. Wang. Towards implementation of a magic optical-dipole trap for confining ground-state and Rydberg-state cesium cold atoms. J. Phys. B: At. Mol. Opt. Phys., 53, 155302 (2020).
26. J. D. Bai, X. Wang, X. K. Hou, W. Y. Liu, and J. M. Wang. Angle-dependent magic optical trap for the 6S1/2 − nP3/2 Rydberg transition of cesium atoms. Photonics, 9, 303 (2022).
27. Y. Y. Wang, X. J. Zhu, J. Liu, Y. B. Ma, Z. H. Zhu, J. W. Cao, Z. H. Du, X. G. Wang, J. Qian, C. Yin, Z. Y. Liu, D. Blair, L. Ju, and C. N. Zhao. The laser interferometer gravitational wave detector. Progress in Astronomy, 32, 348 (2014). (In Chinese)
2tFAT4oBgHgl3EQfDhzt/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
39AzT4oBgHgl3EQfffxn/content/2301.01453v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:429daa0157d3082e9b3594d4e9e5545ad07893af6b84408a31f79a8a407ef30a
+ size 1723612
39AzT4oBgHgl3EQfffxn/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a8a302ccb97a9e30881cd166f6ae4d01f865599a598974a5f2cbf303d45056fe
+ size 2359341
39AzT4oBgHgl3EQfffxn/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:66954b17ea26e5cd1634a61baec090f5f8b79a55bfe9ba6c6f3ef98b564b9fe8
+ size 93335
3dAyT4oBgHgl3EQfb_e6/content/2301.00275v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0885716d17df075519d66a9fad31227248b2678f8daf1253d83256120b2818ea
+ size 1000074
3dAyT4oBgHgl3EQfb_e6/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:39487dabe37ddda17e21d971e3ff724f4ee151cd0e46122240926f5c1a9c82c9
+ size 3735597
3dAyT4oBgHgl3EQfb_e6/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:12f7234811409e8cf648357fb6452fb2a8a16aeae3aa3de9bd7afc21ab74c47d
+ size 120374
69FAT4oBgHgl3EQfnx3V/content/2301.08631v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:728a9344ccaeaefae7c54fa317400c77efa3b9d081a9e3bca611152241c0212e
+ size 1559392
69FAT4oBgHgl3EQfnx3V/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a12967540be82edc5fac215bb05ee5b88111c92a2c74cca59c0ec0ab091d5ff
+ size 1966125
69FAT4oBgHgl3EQfnx3V/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:77daa7e7f6b64d1048cc6d1c4f524a0d0851836cb7defea53a825e98c8c48e99
+ size 73657
79E0T4oBgHgl3EQffQDe/content/2301.02403v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2246463789d3972b23098a965b59b8791f884a66634394eccbefbaa885406fae
+ size 887404
79E0T4oBgHgl3EQffQDe/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8d444220aa9c9c7cfefb6f403165c0be26e2c9eae6745ce934e2bea4a1fbb64
+ size 113423
8tAyT4oBgHgl3EQf3PkC/content/tmp_files/2301.00763v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
8tAyT4oBgHgl3EQf3PkC/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
8tAzT4oBgHgl3EQf-v68/content/2301.01939v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:64b7ddcacfae0a36923ddb2d48f9d4ede8ad678ee6a2504f5f4c39201b513b81
+ size 3589094
8tAzT4oBgHgl3EQf-v68/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8bc6f39f660e4fa91f815ee290704381068241e506253bb0f231e8b1a64edeab
+ size 2883629
8tAzT4oBgHgl3EQf-v68/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:86d8e08d14077d57aa45c74e9224c6ccea7180076238577460e26444fafae90e
+ size 105860
9dE2T4oBgHgl3EQfQAa_/content/tmp_files/2301.03766v1.pdf.txt ADDED
@@ -0,0 +1,1402 @@
+ Optimal Power Flow Based on Physical-Model-Integrated Neural Network with Worth-Learning Data Generation
+ Zuntao Hu, Graduate Student Member, IEEE, and Hongcai Zhang, Member, IEEE
+ Abstract—Fast and reliable solvers for optimal power flow (OPF) problems are attracting surging research interest. As surrogates of physical-model-based OPF solvers, neural network (NN) solvers can accelerate the solving process. However, they may be unreliable for “unseen” inputs when the training dataset is unrepresentative. Enhancing the representativeness of the training dataset for NN solvers is indispensable but is not well studied in the literature. To tackle this challenge, we propose an OPF solver based on a physical-model-integrated NN with worth-learning data generation. The designed NN is a combination of a conventional multi-layer perceptron (MLP) and an OPF-model module, which outputs not only the optimal decision variables of the OPF problem but also the constraints violation degree. Based on this NN, the worth-learning data generation method can identify feasible samples that are not well generalized by the NN. By iteratively applying this method and including the newly identified worth-learning samples in the training set, the representativeness of the training set can be significantly enhanced. Therefore, the solution reliability of the NN solver can be remarkably improved. Experimental results show that the proposed method leads to an over 50% reduction of constraint violations and optimality loss compared to conventional NN solvers.
+ Index Terms—Optimal power flow, physical-model-integrated neural network, worth-learning data generation
+ I. INTRODUCTION
+ OPTIMAL power flow (OPF) is a fundamental but challenging problem for power systems [1]. A typical OPF problem usually involves determining the optimal power dispatch with an objective, e.g., minimizing total generation costs or power loss, while satisfying nonlinear power flow equations and other physical or engineering constraints [2]. Due to the nonlinear interrelation of nodal power injections and voltages, OPF is non-convex, NP-hard, and cannot be solved efficiently [3]. With the increasing integration of renewable generation and flexible demands, uncertainty and volatility have been rising on both the demand and supply sides of modern power systems [4], which requires OPF to be solved more frequently. Thus, fast and reliable OPF solvers have become indispensable to ensure effective operations of modern power systems and have attracted surging interest in academia.
+ There is a dilemma between the solving efficiency and solution reliability of OPF. Conventionally, OPF is solved by iterative algorithms, such as interior point algorithms, based on explicit physical models [5]. However, these methods may converge to locally optimal solutions. Recently, some researchers have made great progress in designing conic relaxation models for OPF, which are convex and can be efficiently solved [6]–[8]. Nevertheless, the exactness of these relaxations may not hold in practical scenarios, and they may obtain infeasible solutions [9]. In addition, the scalability of the conic relaxation of alternating current optimal power flow (AC-OPF) may still be a challenge, particularly in online, combinatorial, and stochastic settings [10].
+ To overcome the limitation of the aforementioned physical-model-based solvers, some researchers propose surrogate OPF solvers based on neural networks (NNs) [11]–[13]. These solvers use NNs to approximate the functional mapping from the operational parameters (e.g., profiles of renewable generation and power demands) to the decision variables (e.g., power dispatch) of OPF. Compared to iterative algorithms, they can introduce significant speedup because an NN is only composed of simple fundamental functions in sequence [12], [13]. However, one of the critical problems of NN solvers is that they may be unreliable if not properly trained, especially for “unseen” inputs in feasible regions due to NNs’ mysterious generalization mechanism [14].
+ The generalization of NNs is mainly influenced by their structures, loss functions, and training data. Most published papers propose to enhance the generalization of NN OPF solvers by adjusting the structures and loss functions. Various advanced NN structures rather than conventional fully connected networks are employed to imitate AC-OPF. For example, Owerko et al. [15] use graph NNs to approximate a given optimal solution. Su et al. [16] employ a deep belief network to fit the generator’s power in OPF. Zhang et al. [17] construct a convex NN solving DC-OPF to guarantee the generalization of NNs. Jeyaraj et al. [18] employ a Bayesian regularized deep NN to solve the OPF in DC microgrids. Some researchers design elaborate loss functions that penalize the constraints violation, combine Karush-Kuhn-Tucker conditions, or include derivatives of decision variables with respect to operational parameters. For example, Pan et al. [11] introduce a penalty term related to the inequality constraints into the loss function. This approach can speed up the computation by up to two orders of magnitude compared to the Gurobi solver, but 18.3% of its solutions are infeasible. Ferdinando et al. [12] include a Lagrangian term in the loss function of NNs. Their method’s prediction errors are as low as 0.2%, and its solving speed is faster than DC-OPF by at least two orders of magnitude. Manish et al. [10] include sensitivity information in the training of the NN so that using only about 10% to 25% of the training data can attain the same approximation accuracy as methods without sensitivity information. Nellikkath et al. [19] apply physics-informed NNs to OPF problems, and their results have higher accuracy than conventional NNs.
+ arXiv:2301.03766v1 [cs.LG] 10 Jan 2023
+ The above-mentioned studies have made significant progress in designing elaborate network structures and loss functions. However, little attention has been paid to the training set generation problem. Specifically, they all adopt conventional probability sampling methods to produce datasets for training and testing, such as simple random sampling [10]–[13], [15], [17], [20], Monte Carlo simulation [18], or Latin hypercube sampling [16], [19]. These probability sampling methods cannot provide a theoretical guarantee that a generated training set can represent the input space of the OPF problem properly. As a result, probability sampling methods may generate insufficient and unrepresentative training sets, so the trained NN solvers may provide unreliable solutions.
+ It is important to create a sufficiently representative dataset for training an NN OPF solver. A training set’s representativeness depends on its size and distribution in its feasible region [21]. Taking a medium-scale OPF problem as an example, millions of data samples may still be sparse given the high dimension of the NN’s inputs (e.g., operational parameters of the OPF problem: renewable generation and power demands at all buses); in addition, because the OPF problem is non-convex, the feasible region of the NN’s inputs is a complicated irregular space. Thus, generating a representative training set to cover all the feasible regions of the inputs with an acceptable size is quite challenging. Without a representative training set, it is difficult to guarantee that the NN OPF solver’s outputs are reliable, especially given “unseen” inputs in the inference process, as discussed in [22], [23].
+ To address the above challenge, this study proposes a physical-model-integrated deep NN method with worth-learning data generation to solve AC-OPF problems. To the best of our knowledge, this is the first study that has addressed the representativeness problem of the training dataset for NN OPF solvers. The major contributions of this study are twofold:
+ 1) A novel physical-model-integrated NN is designed for solving the AC-OPF problem. This NN is constructed by a conventional MLP integrating an OPF-model module, which outputs not only the optimal decision variables of the OPF problem but also the violation degree of constraints. By penalizing the latter in the loss function during training, the NN can generate more reliable decision variables.
+ 2) Based on the designed NN, a novel generation method for worth-learning training data is proposed, which can identify samples in the input feasible region that are not well generalized by the previous NN. By iteratively applying this method during the training process, the trained NN gradually generalizes to the whole feasible region. As a result, the generalization and reliability of the proposed NN solver can be significantly enhanced.
+ Furthermore, comprehensive numerical experiments are conducted, which prove that the proposed method is effective in terms of both reliability and optimality for solving AC-OPF problems with high computational efficiency.
+ The remainder of this article is organized as follows. Section II provides preliminary models and the motivations behind this study. Section III introduces the proposed method. Section IV details the experiments. Section V concludes this paper.
+ Fig. 1. The 3-bus system.
+ II. ANALYSIS OF APPROXIMATING OPF PROBLEMS BY NN
+ A. AC-OPF problem
+ The AC-OPF problem aims to determine the optimal power dispatch (usually for generators) given specific operating conditions of a power system, e.g., power loads and renewable generation. A typical AC-OPF model can be formulated as
+ \min_{V, S^G} C(S^G),  (1a)
+ s.t.: [V] Y_{\mathrm{bus}}^* V^* = S^G - S^L,  (1b)
+ \underline{S}^G \le S^G \le \overline{S}^G,  (1c)
+ \underline{V} \le V \le \overline{V},  (1d)
+ |Y_b V| \le \overline{I},  (1e)
+ where Eq. (1a) is the objective, e.g., minimizing total generation costs, and Eqs. (1b) to (1e) denote constraints. Symbols S^G and S^L are n × 1 vectors representing complex bus injections from generators and loads, respectively, where n is the number of buses. Symbol V is an n × 1 vector denoting node voltages. Symbol [.] denotes an operator that transforms a vector into a diagonal matrix with the vector elements on the diagonal. Symbol Y_{\mathrm{bus}} is a complex n × n bus admittance matrix, written as Y in other sections for convenience. Symbol Y_b is a complex n_b × n branch admittance matrix, and n_b is the number of branches. The upper and lower bounds of any variable x are represented by \overline{x} and \underline{x}, respectively. Vector \overline{I} denotes the current flow limit of branches.
+ B. AC-OPF mapping from loads to optimal dispatch
+ An NN model describes an input-output mapping. Specifically, for an NN model solving the AC-OPF problem shown in Eq. (1), the input is the power demand S^L, and the output is the optimal generation S^{G*}. Hence, an NN OPF solver describes the mapping S^{G*} = f^{\mathrm{OPF}}(S^L). A well-trained NN should be able to accurately approximate this mapping.
+ We provide a basic example of a 3-bus system, as shown in Fig. 1, to illustrate how an NN works for OPF problems and explain the corresponding challenge for generalization. For simplicity, we assume there is no reactive power in the system and set r_{31} = r_{12} = 0.01; \underline{P}_i = 0, \overline{P}_i = 4, for i ∈ {1, 3}; P_2 ∈ [−7, 0]; \underline{V}_i = 0.95 and \overline{V}_i = 1.05, for i ∈ {1, 2, 3}.
+ Fig. 2. Examples of an NN fitting the OPF of the 3-bus system based on (a) simple random sampling, and (b) worth-learning data generation.
+ Then, the OPF model Eq. (1) is reduced to the following quadratic programming:
+ \min_{V, P^G} P_1 + 1.5 × P_3,  (2a)
+ s.t.: P_1 = V_1 (V_1 − V_2)/0.01 + V_1 (V_1 − V_3)/0.01,  (2b)
+ P_2 = V_2 (V_2 − V_1)/0.01,  (2c)
+ P_3 = V_3 (V_3 − V_1)/0.01,  (2d)
+ 0.95 ≤ V_3 ≤ 1.05, 0.95 ≤ V_2 ≤ 1.05,  (2e)
+ 0 ≤ P_1 ≤ 4, 0 ≤ P_3 ≤ 4, V_1 = 1,  (2f)
+ where V is [V_1 V_2 V_3]^⊤, and P^G is [P_1 P_3]^⊤.
+ Given that P_2 ranges from −7 to 0, the 3-bus OPF model can be solved analytically. The closed-form solution of [P_1^* P_3^*] = f^{\mathrm{OPF}}_{\text{3-bus}}(P_2) is formulated as follows:
+ P_1^* = \begin{cases} 50 - 50\sqrt{0.04 P_2 + 1}, & c_1, \\ 4, & c_2, \end{cases}  (3)
+ P_3^* = \begin{cases} 0, & c_1, \\ 213\left(1 - 0.34\sqrt{0.04 P_2 + 1}\right)^2 + 50\sqrt{0.04 P_2 + 1} - 146, & c_2, \end{cases}  (4)
+ where c_1 denotes condition 1: −3.84 ≤ P_2 ≤ 0, and c_2 denotes condition 2: −7 ≤ P_2 < −3.84.
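The closed-form 3-bus mapping can be evaluated directly. A minimal sketch follows; the c2 branch of P3* is transcribed from the garbled extraction of Eq. (4) and may be imprecise, whereas the P1* branches and the breakpoint P2 = −3.84 follow directly from P1* = 50 − 50·sqrt(0.04·P2 + 1) hitting its upper bound of 4:

```python
# Sketch of the closed-form 3-bus mapping [P1*, P3*] = f(P2), P2 in [-7, 0].
import math

def opf_3bus(p2: float) -> tuple:
    """Optimal generation (P1*, P3*) for a given load P2."""
    root = math.sqrt(0.04 * p2 + 1.0)
    if p2 >= -3.84:                      # condition c1: P1 within its bound
        return 50.0 - 50.0 * root, 0.0
    # condition c2: P1 pinned at its upper bound, P3 picks up the remainder
    # (this expression is reconstructed from the extracted text of Eq. (4))
    p3 = 213.0 * (1.0 - 0.34 * root) ** 2 + 50.0 * root - 146.0
    return 4.0, p3
```

At P2 = 0 the cheap generator alone supplies the (zero) net load, and at the breakpoint P2 = −3.84 it reaches its 4 p.u. limit, which is exactly where the piecewise mapping changes.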
+ To further analyze the mapping f^{\mathrm{OPF}}_{\text{3-bus}}, we draw the [P_1^* P_3^*]–P_2 curve according to Eqs. (3) and (4), shown in Fig. 2. Both the P_1^*–P_2 and P_3^*–P_2 curves are piecewise nonlinear functions, in which two oblique lines are nonlinear because of the quadratic equality constraints. The reason why the two curves above are piecewise is that the active inequalities change the [P_1^* P_3^*]–P_2 relationship. From an optimization perspective, each active inequality will add a unique equality constraint to the relationship, so the pieces in f^{\mathrm{OPF}}_{\text{3-bus}} are determined by the sets of active inequalities. In this example, the two pieces in each curve correspond to two sets of active inequalities: P_1 ≤ 4 and 0 ≤ P_3. Moreover, the two intersection points are the critical points where these inequalities are just satisfied as equalities.
+ For a general AC-OPF problem, its input is usually high-dimensional (commonly determined by the number of buses), and its feasible space is partitioned into distinct regions by different sets of active inequality constraints. From an optimization perspective, a set of active constraints uniquely characterizes the relationship S^{G*} = f^{\mathrm{OPF}}(S^L), and the number of pieces theoretically increases with the number of inequality constraints in exponential order [24]–[26]. Therefore, there are massive regions, and each region corresponds to a unique mapping relation, i.e., a piece of the mapping function f^{\mathrm{OPF}}.
+ C. Challenges of fitting OPF mapping by NN
+ As shown in Fig. 2(a), to fit the two-dimensional piecewise nonlinear curve of f^{\mathrm{OPF}}_{\text{3-bus}}, we first adopt four data samples by simple random sampling and then use an NN to learn the curve. Obviously, there are significant fitting errors between the fitting and the original lines. Because the training set lacks the samples near the intersections in the curve (where P_2 = −3.84 in this case), the NN cannot accurately approximate the mapping in the neighboring region of the intersections.
+ A training set representing the whole input space is a prerequisite for an NN approximating the curve properly. However, it is nontrivial to generate a representative training set by probability sampling. As shown in Fig. 2(a), the intersections of f^{\mathrm{OPF}} are key points for the representativeness, and the number of intersections increases exponentially with that of the inequality constraints, as analyzed in II-B. When each sample is selected with a small probability ρ, the generation of a dataset containing all the intersection points is a low-probability event whose probability is equal to ρ^m, where m is the number of intersections. In practice, the only way to collect sufficient data representing the input space by probability sampling is to expand the dataset as much as possible [27]. This is impractical for large power networks. Therefore, the conventional probability sampling in the literature can hardly produce a representative dataset with a moderate size.
+ As shown in Fig. 2(b), if we are able to identify the two intersections, i.e., (P_2 = −3.84, P_1 = 4) and (P_2 = −3.84, P_3 = 0), and include them as new samples in the training dataset, the corresponding large fitting errors of the NN can be eliminated. These samples are termed worth-learning data samples. The focus of this study is to propose a worth-learning data generation method that can help identify worth-learning data samples and overcome the aforementioned disadvantage of conventional probability sampling (detailed in the following section).
+ III. A PHYSICAL-MODEL-INTEGRATED NN WITH WORTH-LEARNING DATA GENERATION
+ This section proposes a physical-model-integrated NN with a worth-learning data generation method to solve AC-OPF problems. The proposed NN is a combination of a fully-connected network and a transformed OPF model. It outputs not only the optimal decision variables of the OPF problem but also the violation degree of constraints, which provides guidance for identifying worth-learning data. The worth-learning data generation method creates representative training sets to enhance the generalization of the NN solver.
+ Fig. 3. Framework of the proposed training process.
+ A. Framework of the proposed method
+ The proposed data generation method has an iterative process, as shown in Fig. 3. First, a training set is initialized by random sampling; second, the physical-model-integrated NN is trained on the training set, where an elaborate loss function is utilized; third, worth-learning data for the current NN are identified; fourth, if worth-learning data are identified, these data are added to the training set and the process returns to the second step; otherwise, the current NN is output.
+ The above training process converges when no worth-learning data are identified. This means that the training set is sufficiently representative of the input space of the OPF problem. As a result, the NN trained based on this dataset can generalize well to the input feasible set. The following subsections introduce the proposed method in detail.
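The iterative framework of Fig. 3 can be sketched as a simple loop. The function names below (`random_samples`, `train`, `find_worth_learning_data`) are hypothetical placeholders for the paper's components, not code from the paper:

```python
# Sketch of the iterative training framework (Section III-A / Fig. 3).
def iterative_training(random_samples, train, find_worth_learning_data,
                       init_size=100, max_rounds=10):
    """Grow the training set until no worth-learning samples remain."""
    training_set = random_samples(init_size)        # step 1: random init
    model = None
    for _ in range(max_rounds):
        model = train(training_set)                 # step 2: train the NN
        new_data = find_worth_learning_data(model)  # step 3: identify samples
        if not new_data:                            # step 4: converged
            break
        training_set = training_set + new_data      # otherwise augment, repeat
    return model, training_set
```

The loop terminates exactly when the identification step returns nothing, i.e., when the training set is representative enough for the current NN.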
+ B. Physical-model-integrated NN
+ In the second step of the proposed method (Fig. 3), the NN is trained to fit the mapping S^{G*} = f^{\mathrm{OPF}}(S^L). To obtain better results, we design a physical-model-integrated NN structure consisting of a conventional NN module and a physical-model module, as shown in Fig. 4. The former is a conventional MLP, while the latter is a computational graph transformed from the OPF model.
+ 1) Conventional NN module: This module first adopts a conventional MLP with learnable parameters to fit the mapping from S^L to the optimal decision variable V_{\mathrm{NN}} [28]. The V_{\mathrm{NN}} has its box constraint defined in Eq. (1). To ensure that the output V_{\mathrm{NN}} satisfies this constraint, we design a function dRe() to adjust any infeasible output V_{\mathrm{NN}} into its feasible region, which is formulated as follows:
+ x ← dRe(x, \underline{x}, \overline{x}) = ReLU(x − \underline{x}) − ReLU(x − \overline{x}) + \underline{x},  (5)
+ where ReLU(x) = max(x, 0); x is the input of the function, and its lower and upper bounds are \underline{x} and \overline{x}, respectively. The diagram of this function is illustrated in Fig. 5.
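The dRe() construction in Eq. (5) is a clamp built from two ReLUs; a minimal sketch on plain floats (in the paper's setting it would act elementwise on an NN output tensor):

```python
# Sketch of dRe() from Eq. (5): dRe(x, lo, hi) = ReLU(x - lo) - ReLU(x - hi) + lo.
def relu(x: float) -> float:
    return max(x, 0.0)

def d_re(x: float, lo: float, hi: float) -> float:
    """Clamp x into [lo, hi] using only ReLU pieces (so it stays piecewise linear)."""
    return relu(x - lo) - relu(x - hi) + lo
```

For x below the lower bound it returns lo, inside the box it returns x, and above the upper bound it returns hi, i.e., it behaves like clip(x, lo, hi) while being expressed with standard NN activations.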
+ Fig. 4. The physical-model-integrated NN.
+ Applying dRe() as the activation function of the last layer of the conventional MLP, the mathematical model of this conventional NN module is formulated as follows:
+ V_{\mathrm{NN}} = MLP(S^L),  (6)
+ V_{\mathrm{NN}} ← dRe(V_{\mathrm{NN}}, \underline{V}, \overline{V}),  (7)
+ where Eq. (6) describes the conventional model of the MLP and Eq. (7) adjusts the output of the MLP.
+ 2) Physical model module: This module receives V_{\mathrm{NN}} from the previous module, and then it outputs the optimal power generation S^G_{\mathrm{phm}} and the corresponding constraints violation Vio_{\mathrm{phm}}, where the subscript “phm” denotes the physical model module. The first output S^G_{\mathrm{phm}} is the optimal decision variable of the AC-OPF problem. It can be calculated from V_{\mathrm{NN}} and S^L as follows:
+ S^G_{\mathrm{phm}} = [V_{\mathrm{NN}}] Y^* V_{\mathrm{NN}}^* + S^L.  (8)
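Eq. (8) recovers the complex generator injections implied by a voltage guess via the power-flow equations. A NumPy sketch; the 2-bus admittance matrix and load below are made-up toy values, not from the paper:

```python
# Sketch of Eq. (8): S_phm = diag(V) @ conj(Y) @ conj(V) + S_L.
import numpy as np

def injections_from_voltage(v: np.ndarray, y_bus: np.ndarray,
                            s_load: np.ndarray) -> np.ndarray:
    """Generator injections implied by nodal voltages: [V] Y* V* + S_L."""
    return np.diag(v) @ np.conj(y_bus) @ np.conj(v) + s_load

# Toy 2-bus network with one line of series admittance y = 1/0.01 = 100 p.u.
y = 100.0
y_bus = np.array([[y, -y], [-y, y]], dtype=complex)
v = np.array([1.0, 0.99], dtype=complex)        # a voltage guess (e.g., V_NN)
s_load = np.array([0.0, 1.0], dtype=complex)
s_gen = injections_from_voltage(v, y_bus, s_load)
```

Because this step is just matrix algebra over the NN output, it is differentiable and can sit inside the network's computational graph.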
+ The second output Vio_{\mathrm{phm}} (termed the violation degree) measures the quality of S^G_{\mathrm{phm}} and is the key metric to guide the proposed worth-learning data generation (see details in the following subsection III-C). Given V_{\mathrm{NN}} and S^G_{\mathrm{phm}}, the violations of inequality constraints of the AC-OPF problem Vio_{\mathrm{phm}} are calculated as follows:
+ Vio^S_{\mathrm{phm}} = ReLU(S^G_{\mathrm{phm}} − \overline{S}^G) + ReLU(\underline{S}^G − S^G_{\mathrm{phm}}),  (9a)
+ Vio^I_{\mathrm{phm}} = ReLU(|Y_f V_{\mathrm{NN}}| − \overline{I}),  (9b)
+ Vio_{\mathrm{phm}} = (Vio^S_{\mathrm{phm}}  Vio^I_{\mathrm{phm}})^⊤,  (9c)
+ where Vio^S_{\mathrm{phm}} denotes the violation of the upper or lower limit of S^G_{\mathrm{phm}}, and Vio^I_{\mathrm{phm}} represents the violation of branch currents.
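The violation degree of Eqs. (9a)-(9c) stacks the generation-limit and branch-current violations into one nonnegative vector. A NumPy sketch; the bounds, voltages, and branch matrix below are illustrative numbers, not from the paper:

```python
# Sketch of the violation degree, Eqs. (9a)-(9c).
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

def violation_degree(s_gen, s_max, s_min, v, y_branch, i_max):
    """Stack generation-limit and branch-current violations into one vector."""
    vio_s = relu(s_gen - s_max) + relu(s_min - s_gen)   # Eq. (9a)
    vio_i = relu(np.abs(y_branch @ v) - i_max)          # Eq. (9b)
    return np.concatenate([vio_s, vio_i])               # Eq. (9c)

vio = violation_degree(
    s_gen=np.array([4.2, 1.0]), s_max=np.array([4.0, 4.0]),
    s_min=np.array([0.0, 0.0]), v=np.array([1.0, 0.98]),
    y_branch=np.array([[100.0, -100.0]]), i_max=np.array([1.5]),
)
```

A feasible operating point maps to the zero vector, so any positive entry directly flags which constraint the NN's output violates and by how much.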
+ Remark 1. The physical-model-integrated NN is formed
460
+ by combining the conventional NN module and the physical
461
+ model module. It inputs SL and outputs SG
462
+ phm and V iophm,
463
+ as shown in Fig. 4. Its function is the same as conventional
464
+ OPF numerical solvers. In addition, it is convenient for users
465
+ Fig. 5. The dRe() function.
466
+
467
+ 5
468
+ Feasible region
469
+ Label value
470
+ Region with tiny
471
+ predicted error
472
+ Predicted value
473
+ Effect of the proposed
474
+ loss function
475
+ Point 1
476
+ Point 2
477
+ Fig. 6. Illustration of the effectiveness of the three terms in the loss function.
478
+ to directly determine whether the result of the NN OPF solver
479
+ is acceptable or not based on the violation degree V iophm. In
480
+ contrast, most NN OPF solvers in the literature are incapable
481
+ of outputting the violation degree directly [10]–[12].
482
3) Loss function: To enhance the training accuracy of the physical-model-integrated NN, we design an elaborate loss function, which consists of V^NN from the conventional NN module, and S^G_phm and Vio_phm from the physical model module. The formula is as follows:

loss = ||V̂ − V^NN||_1 + ||Ŝ^G − S^G_phm||_1 + Vio_phm,   (10)

where V̂ and Ŝ^G are label values from the training set, which is a ground-truth dataset from numerical solvers.
495
Combining the three terms in the loss function helps enhance fitting precision. As shown in Fig. 6, if the loss function had only the first two terms ||V̂ − V^NN||_1 + ||Ŝ^G − S^G_phm||_1 to penalize conventional fitting errors, the predicted value would lie in a tiny square space (the red square in Fig. 6) around the label value. From the optimization perspective, the optimal label value is usually on the edge of its feasible region (the blue polyhedron in Fig. 6). This edge through the label value splits the square into two parts: the feasible (blue) part and the infeasible (white) part. Intuitively, we would prefer the predicted values to be in the feasible part. Thus, we also penalize the violation degree Vio_phm in the loss function to push predicted values with large Vio_phm toward the square's feasible half-space for smaller constraint violations.
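The three-term loss of Eq. (10) can be sketched in PyTorch as follows; the function and variable names are illustrative, and reducing the violation vector to a scalar by summation is one plausible reading of the last term:

```python
import torch

def opf_loss(V_hat, V_NN, S_G_hat, S_G_phm, vio_phm):
    """Eq. (10): L1 fitting errors on voltages and generations,
    plus the constraint-violation penalty (summed to a scalar)."""
    fit_V = torch.norm(V_hat - V_NN, p=1)       # ||V_hat - V_NN||_1
    fit_S = torch.norm(S_G_hat - S_G_phm, p=1)  # ||S_hat - S_phm||_1
    return fit_V + fit_S + vio_phm.sum()        # violation penalty
```

All three terms are differentiable, so a standard optimizer can descend on the combined objective.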
511
Although the proposed NN with the elaborate loss function achieves high training accuracy, it is still difficult to guarantee the generalization of the NN OPF solver over the whole input space with conventional random sampling. Therefore, it is indispensable yet challenging to obtain a representative training dataset of moderate size to train the proposed NN, which is the focus of the following subsection.
518
C. Worth-learning data generation

As shown in Fig. 3, we adopt an iterative process to identify the worth-learning data. For an NN trained in the previous iteration, we utilize its output Vio_phm to help identify new data samples that are not yet properly generalized. Specifically, if an input S^L* is feasible for the original OPF problem while the current NN outputs a large violation degree Vio*_phm, the contradiction means the NN has a large fitting error at S^L*. This is probably because the sample S^L* was not included in the previous training set and was not generalized by the NN. Hence, this sample S^L* can be regarded as a worth-learning sample. Including it in the training dataset in the next iteration helps enhance the generalization of the NN.

Fig. 7. The input feasible set module.
534
The key to the proposed worth-learning data generation method is to identify worth-learning samples efficiently. Instead of traversing all possible inputs, we maximize Vio_phm for a given NN to identify inputs with large violation degrees. However, the inputs identified in the maximizing process should be feasible for the original OPF problem; otherwise, the inputs found might be infeasible and useless for the representation of the training data.
542
1) Input feasible set module: To keep the inputs identified in the maximizing process feasible for the original OPF problem, we formulate the input feasible set module to restrict power loads S^L to their feasible set. The feasible set is composed of box constraints, current limits, and KCL&KVL constraints, which are transformed from the feasible set of the OPF problem defined in Eq. (1). The partial formulations of the input feasible set are as follows, where the subscript "ifs" denotes the input feasible set module:

S^G_ifs = dRe(S′^G_ifs, \underline{S}^G, \overline{S}^G),  S′^G_ifs ∈ R^n,   (11a)
V_ifs = dRe(V′_ifs, \underline{V}, \overline{V}),  V′_ifs ∈ R^n,   (11b)
S^L_ifs = S^G_ifs − [V_ifs] Y* V*_ifs,   (11c)
I_ifs = Y_b V_ifs,   (11d)

where S′^G_ifs and V′_ifs are auxiliary n × 1 vectors in R^n and have no physical meaning. Symbols S^G_ifs and V_ifs are restricted to their box constraints in Eqs. (11a) and (11b). Then the KCL&KVL correlations of S^L_ifs, S^G_ifs, and V_ifs are described by Eq. (11c). Symbol I_ifs in Eq. (11d) denotes the currents at all branches.

The other formulations of the input feasible set aim to calculate Vio_ifs, the AC-OPF's constraint violations corresponding to S^L_ifs and I_ifs, as follows:

Vio^S_ifs = ReLU(S^L_ifs − \overline{S}^L) + ReLU(\underline{S}^L − S^L_ifs),   (12a)
Vio^I_ifs = ReLU(|I_ifs| − \bar{I}),   (12b)
Vio_ifs = (Vio^S_ifs  Vio^I_ifs)^⊤,   (12c)

where Vio^S_ifs denotes the violation of the upper or lower limit of S^L_ifs, and Vio^I_ifs denotes the violation of branch currents.
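The dRe() function that enforces the box constraints in Eqs. (11a)–(11b) is defined with Fig. 5, which is not reproduced in the text here. One common differentiable choice for mapping an unbounded auxiliary variable into [lower, upper] is a sigmoid rescaling; the sketch below uses that form purely as an assumption:

```python
import torch

def dRe(x, lower, upper):
    # Assumed form only: squash the unbounded auxiliary variable x into
    # the box [lower, upper] with a sigmoid. The paper's actual dRe()
    # is the function plotted in Fig. 5 and may differ in shape.
    return lower + (upper - lower) * torch.sigmoid(x)
```

Any smooth, monotone map with range (lower, upper) serves the same purpose: the auxiliary variables S′^G_ifs and V′_ifs stay unconstrained while the physical variables remain inside their boxes.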
608
Remark 2. This module takes S′^G_ifs and V′_ifs as inputs and outputs S^L_ifs and Vio_ifs, as shown in Fig. 7. When Vio_ifs = 0, the corresponding S^L_ifs lies in the feasible set of the AC-OPF problem. To identify feasible S^L_ifs in the process of maximizing Vio_phm, this module backpropagates ∂Vio_phm/∂S^L_ifs with Vio_ifs ≤ ζ (ζ is a small positive tolerance), and then it updates S′^G_ifs and V′_ifs. As a result, the corresponding S^L_ifs is always feasible. Furthermore, because S′^G_ifs and V′_ifs are not bounded, changing them can theoretically reach any feasible S^L_ifs.

Fig. 8. The novel NN for max violation backpropagation, formed by integrating the physical-model-integrated NN with the input feasible set module.
642
2) Max violation backpropagation: To identify worth-learning data, a novel NN is created by inputting S^L_ifs into the physical-model-integrated NN (see Fig. 8). This NN has two outputs, i.e., Vio_phm and Vio_ifs. The former measures the constraint violation degree of the OPF solution S^G*; the latter indicates the feasibility of the OPF input S^L_ifs. If S^L_ifs is a feasible input, i.e., Vio_ifs ≤ ζ, but the optimal solution S^G* is infeasible, i.e., Vio_phm ≥ ξ (ξ is a threshold), then the corresponding input is worth learning (i.e., it has not been learned or generalized by the current NN). Based on this analysis, we design the loss function loss_max for max violation backpropagation as follows:

loss_max = Vio_phm − λ × Vio_ifs,   (13)

where λ is a large, constant weight parameter. When maximizing this loss function, the algorithm tends to find a worth-learning S^L_ifs that has a small Vio_ifs but a large Vio_phm.

During the max violation backpropagation, the proposed algorithm maximizes loss_max by updating the variables S′^G_ifs and V′_ifs through gradient backpropagation until loss_max converges to a local maximum. After this process, the corresponding S^L_ifs is also found. Because the maximizing process can be executed in parallel by the deep learning framework PyTorch, the worth-learning samples are found in batches, where the max violation backpropagation uses the previous training set as initial points to identify the new data. Further, the auto-differentiation technique in PyTorch accelerates the parallel computation. Based on these techniques, massive worth-learning data samples are identified efficiently.
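The gradient-ascent loop described above can be sketched as follows. Here `loss_max_fn`, standing in for the stacked NN of Fig. 8 evaluating Eq. (13), is an assumption, and the stopping rule mirrors the 100-step loss difference of Algorithm 1:

```python
import torch

def find_worth_learning(loss_max_fn, S_aux, V_aux, lr=1e-3, tol=1e-2, window=100):
    """Gradient *ascent* on loss_max (Eq. (13)) over the auxiliary inputs
    S'_ifs and V'_ifs, until the improvement over `window` steps < tol."""
    S_aux = S_aux.clone().requires_grad_(True)
    V_aux = V_aux.clone().requires_grad_(True)
    history = []
    while True:
        loss = loss_max_fn(S_aux, V_aux)
        grads = torch.autograd.grad(loss, (S_aux, V_aux))
        with torch.no_grad():
            S_aux += lr * grads[0]   # ascent step on S'_ifs
            V_aux += lr * grads[1]   # ascent step on V'_ifs
        history.append(loss.item())
        if len(history) > window and abs(history[-1] - history[-1 - window]) < tol:
            break
    return S_aux.detach(), V_aux.detach()
```

In practice a whole batch of initial points is updated at once, so many candidate samples converge to their local maxima in parallel.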
678
D. Overall training process

The overall training process is presented in Algorithm 1, which first takes an initial training dataset D_t (obtained by any conventional sampling method) as input. At initialization, the learning rate η is set to 10^−3, the loss difference tolerance ϵ to 10^−2, the added dataset A to the empty set, and the loss difference ΔL to infinity. The training is performed for a fixed number of epochs (lines 2–5). Then the max violation backpropagation starts to identify worth-learning data (lines 6 and 7) by using the training data as initial points (line 8) and updating S′^G_ifs and V′_ifs until ΔL is less than ϵ (lines 9–12), which indicates that loss_max has converged.
691
Algorithm 1: Training process of the physical-model-integrated NN OPF solver with worth-learning data generation.

Input: D_t = {Ŝ^L, V̂, Ŝ^G}
Initialization: η ← 10^−3, ϵ ← 10^−2, A ← ∅, ΔL ← ∞
1:  repeat
2:      for epoch k = 0, 1, ... do
3:          Train the NN with loss Eq. (10):
4:              w ← w − η∇loss
5:      end for
6:      while ΔL ≥ ϵ do
7:          Identify data with loss_max Eq. (13):
8:          S′^G_ifs, V′_ifs ← S^G_ifs, V_ifs ← Ŝ^G, V̂
9:          S′^G_ifs ← S′^G_ifs + η∇loss_max
10:         V′_ifs ← V′_ifs + η∇loss_max
11:         ΔL ← | loss_max,i − loss_max,i−100 |
12:     end while
13:     {Vio_phm,N} ← f_filter(Vio_phm,N ≥ ξ)
14:     Collect {S^L_ifs} corresponding to {Vio_phm,N} based on the novel NN in Fig. 8
15:     Calculate {V̂, Ŝ^G} corresponding to {S^L_ifs} using numerical solvers
16:     A ← {S^L_ifs, V̂, Ŝ^G}
17:     D_t ← D_t ∪ A
18: until A is ∅
753
After the max violation backpropagation, a series of steps add proper data to the training set. First, a filter function f_filter is employed to eliminate data whose terminal violation Vio_phm,N is less than a given threshold ξ (the value depends on the acceptable violation settings). Second, {V̂, Ŝ^G} is calculated by numerical solvers for the S^L_ifs with large violation degrees (lines 14 and 15); together they constitute the added set A (line 16). Third, the training set D_t is expanded with A (line 17). The loop repeats until the added set A is empty (line 18), meaning no worth-learning data are identified.
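Lines 13–17 of Algorithm 1 (filtering, labeling, merging) can be sketched schematically; `numerical_solver` and the threshold `xi` are placeholders for the ground-truth OPF solver and the acceptable-violation setting:

```python
def augment_dataset(D_t, candidates, vio_phm, xi, numerical_solver):
    """Keep only candidate inputs whose terminal violation exceeds xi,
    label them with a numerical OPF solver, and merge them into D_t."""
    added = []
    for s_L, vio in zip(candidates, vio_phm):
        if vio >= xi:                                   # f_filter, line 13
            V_hat, S_G_hat = numerical_solver(s_L)      # labeling, line 15
            added.append((s_L, V_hat, S_G_hat))         # added set A, line 16
    return D_t + added, added                           # D_t <- D_t U A, line 17
```

An empty `added` list corresponds to the termination condition in line 18: no worth-learning data remain, so the outer loop stops.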
765
E. Efficiency and convergence of the proposed method

Unlike general training processes for conventional NNs, the proposed physical-model-integrated NN with worth-learning data generation adopts an iterative training process. It iteratively checks the NN's generalization over the input's feasible space by identifying worth-learning data, as shown in Fig. 3 and Algorithm 1. This difference introduces two critical questions. 1) Efficiency: is the process of identifying worth-learning data computationally efficient? 2) Convergence: is the training set representative of the whole input space after iterations? In terms of computational efficiency, the theoretical analysis (detailed in Appendix A) shows that it takes no more than 0.08 s to find one sample, which brings little computational burden to the training process; according to the experimental results, the average time for finding one sample is 0.056 s. In terms of convergence, we prove in Appendix B that the training set gradually comes to represent the whole input space, because the number of worth-learning samples identified converges to zero after a finite number of iterations.
785
Fig. 9. Time consumption of the worth-learning data generation in three different iterations. The number of times the sequence codes are repeated in the data generation loop (x-axis, /100) represents the time consumed in one data generation loop; the violation degrees (y-axis, MW) quickly converge to the terminal stage.
794
IV. NUMERICAL EXPERIMENTS

The proposed method is evaluated on the IEEE 12-bus, 14-bus, 30-bus, 57-bus, and 118-bus systems. The ground truth datasets are constructed using PANDAPOWER based on a primal-dual interior point algorithm.
799
A. The efficiency of worth-learning data generation

As shown in Algorithm 1, the proposed worth-learning data generation (lines 6–12) is the second loop in one iteration (lines 1–18), and the number of initial data points for the generation varies with iterations (lines 8, 15–17). To evaluate the efficiency of the worth-learning data generation, we conduct an experiment on the IEEE 57-bus system in three different iterations to quantitatively measure how much time it takes to finish one worth-learning data generation loop. The time consumption of the data-generation loops in the three iterations is illustrated in Fig. 9. The x-axis is the number of times the codes are repeated (lines 6–12) divided by 100, which represents the time consumed in one data generation loop; the y-axis is the violation degree. The three lines converge to the terminal stage within 4000 repetitions. The trends are similar: they increase very quickly at first (within 100 epochs) and then approach the local maximum slowly (within 2900–3900 epochs). The inflection points on the three lines are (1, 7228), (1, 9065), and (1, 5841).

In the three iterations, 300, 500, and 800 new data samples are identified. Each data-generation loop takes 30 s on average to run 3000–4000 repetitions. Hence, one worth-learning data sample costs (30 × 3)/(300 + 500 + 800) ≈ 0.056 s, which introduces little computational burden into the training process compared to the other steps in Algorithm 1. For example, each label value calculated by numerical solvers costs around 1 s (line 15), and the NN training on a dataset with 1100 samples costs around 600 s (lines 2–5). In conclusion, the numerical experiment verifies that the worth-learning data generation brings little computational burden to the training process.
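The per-sample cost quoted above is simple arithmetic over the reported measurements; as a quick check:

```python
# Average cost per worth-learning sample across the three iterations:
# three data-generation loops of ~30 s each, yielding 300 + 500 + 800 samples.
total_time = 3 * 30
total_samples = 300 + 500 + 800
per_sample = total_time / total_samples  # ~0.056 s, matching the paper's figure
```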
830
Furthermore, Table I compares the time consumption of the conventional and proposed training processes, where the conventional process uses simple random sampling in place of the data generation loop (lines 6–12) in Algorithm 1. Comparing the two methods shows that the training time of the proposed method increases by only 4%–8%. Hence, these experiments validate that the proposed worth-learning data generation is computationally efficient.

TABLE I
TRAINING TIME BASED ON THE CONVENTIONAL SIMPLE RANDOM SAMPLING AND PROPOSED WORTH-LEARNING DATA GENERATION

Cases    | Conventional (min.) | Proposed (min.)
30-bus   | 27.9                | 30.1
57-bus   | 79.8                | 85.5
118-bus  | 174.1               | 181.2
854
B. Reliability and optimality of the proposed solver

To validate the superiority of the proposed NN OPF solver (denoted Proposed NN), we compare it with two benchmarks: 1) B1 NN, which adopts the conventional loss function and NN model (MLP) with a training dataset generated by simple random sampling; and 2) B2 NN, which adopts the proposed loss function and physical-model-integrated model with a training dataset generated by simple random sampling.

A particular test set, different from the training datasets above, is created to examine the models fairly. The test set has 600 samples, produced by uniformly sampling 200 points in [80%, 120%] of the nominal value of one load three times. In the three rounds, the other loads are fixed at light (80% × nominal value), nominal (100% × nominal value), and heavy (120% × nominal value) load conditions, respectively. The sampled load has the largest nominal value so as to cover a large region of the input space. Based on these settings, the test set includes much "unseen" data for these models.

The reliability of the NN OPF solvers is evaluated by the constraint violation degrees on all test data. The optimality loss is evaluated by the relative error between predicted results and label values. For a fair comparison, the three methods all stop their training processes when the value of ||V̂ − V^NN||_1 is less than 2 × 10^−4. In view of the iterative training process, the performance of the three solvers is studied with increasing training data, and the initial NNs are identical because they are trained on the same initial dataset with N samples.
881
The results are statistically analyzed with the box plots displayed in Fig. 10. The violation degrees and optimality losses of the NNs from the three methods gradually converge to their terminal stages. The rate of convergence of Proposed NN is the largest, that of B2 NN is in the middle, and that of B1 NN is the smallest.

Fig. 10. The violation degree and optimality loss of the results of the NNs trained by the three methods as the number of training data grows in different cases: (a), (d) IEEE 30-bus; (b), (e) IEEE 57-bus; (c), (f) IEEE 118-bus.

In Figs. 10(a) to 10(c), comparing the final violation degrees gives notable results in the three cases. Specifically, the median values in the three cases are 7, 15, and 75 for B1 NN; 6, 12.5, and 60 for B2 NN; and 3.2, 6.1, and 25 for Proposed NN, respectively. Comparing B1 NN and B2 NN, the novel loss function brings a 19% reduction in violation degree on average; comparing B2 NN and Proposed NN, the proposed training data generation method brings a 50% reduction on average. Moreover, the height of the last box in each subfigure indicates the robustness of each solver, and Proposed NN has the smallest height in all three cases, which indicates that the worth-learning data generation improves reliability in encountering "unseen" data from the feasible region.

The comparison of optimality losses is similar to that of violation degrees, as illustrated in Figs. 10(d) to 10(f). The proposed NN method achieves the best results in the three cases, with final median optimality losses of 0.6%, 0.5%, and 0.3%, respectively. The optimality losses of B2 NN and B1 NN increase by 150%, 66%, and 360% and by 142%, 167%, and 460%, respectively, compared to those of the proposed NN method in the three cases.

In conclusion, the proposed physical-model-integrated NN OPF solver with worth-learning data generation improves the generalization of NN models compared to conventional NN solvers. Specifically, the proposed method brings an over 50% reduction of constraint violations and optimality losses in the results on average.
942
C. Comparison with numerical solvers

To further evaluate the capability of the proposed method, the next experiment compares it with a classical AC-OPF solver based on the primal-dual interior point algorithm and a classical DC-OPF solver using a linear approximation of the power flow equations. The classical AC-OPF solver produces the optimal solutions as the ground truth values, and the DC-OPF solver is a widely used approximation in the power industry. The test set is the same as that in Section IV-B. The performance of the three methods is evaluated by the following metrics: 1) the average time to solve an OPF problem; 2) the average constraint violation degree Vio_phm, which is calculated by Eqs. (8) and (9) for the two numerical solvers; and 3) the average relative error of dispatch costs. These three metrics are denoted Time (ms), Vio. (MW), and Opt. (%), respectively.
958
The results are tabulated in Table II; the bottom row shows the averages over the three cases. The proposed method achieves high computational efficiency: it is at least three orders of magnitude faster than the DC-OPF solver and four orders of magnitude faster than the AC-OPF solver. Furthermore, it also has much lower constraint violations and optimality losses than the DC-OPF solver. The average Vio. (MW) and Opt. (%) of the proposed solver are only 10.882 and 0.462, which are 44% and 18% of those of the DC-OPF solver, respectively.
968
TABLE II
PERFORMANCE COMPARISON OF NUMERICAL SOLVERS AND THE PROPOSED SOLVER

Test cases | AC-OPF solver                   | DC-OPF solver                   | Proposed NN solver
           | Time (ms)  Vio. (MW)  Opt. (%)  | Time (ms)  Vio. (MW)  Opt. (%)  | Time (ms)  Vio. (MW)  Opt. (%)
30-bus     | 530.3      0          0         | 14.8       5.340      0.908     | 0.110      4.415      0.603
57-bus     | 991.6      0          0         | 36.2       15.611     1.758     | 0.113      7.226      0.499
118-bus    | 1606.7     0          0         | 78.5       52.199     4.762     | 0.116      21.004     0.285
Avg.       | 1024.9     0          0         | 129.5      24.383     2.476     | 0.113      10.882     0.462

D. Interpretation of worth-learning data generation

This subsection interprets why the worth-learning data generated by the proposed method improve the representativeness of the training dataset. The proposed worth-learning data generation method is compared with the conventional simple random sampling method. Without loss of generality, the experiment is conducted on the 14-bus system. Beginning with an identical initial dataset, the conventional and proposed methods each generate 100 samples in every step, for 8 steps. To visualize the representativeness, we draw the distribution of these high-dimensional training samples based on the t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm [29], [30], a statistical method for visualizing high-dimensional data by giving each data point a location in a two- or three-dimensional map.

The reduced-dimensional data distributions of the conventional and proposed methods are shown in Fig. 11. In Fig. 11(a), the data are produced by the simple random sampling method, and their distribution is almost confined to one region, which means the possibility of sampling in this region is high. Furthermore, the new data added in each step overlap with existing data or fill in the intervals. The new data overlapping with existing data are redundant for NN training, and the data filling in the intervals may also be redundant when those blanks are already well generalized by the trained NN model. In contrast, as shown in Fig. 11(b), the new data generated by the proposed method in each step hardly overlap with existing data and usually lie outside the region covered by the initial data. These new data increase the area covered by the training set, so the training set better represents the input feasible region. This explains the effectiveness of the proposed worth-learning data generation method.

Fig. 11. Reduced-dimensional distributions of the training datasets generated by two different methods: (a) simple random sampling; (b) the worth-learning data generation method.
1106
V. CONCLUSION

This study proposes an AC-OPF solver based on a physical-model-integrated NN with worth-learning data generation to produce reliable solutions efficiently. To the best of our knowledge, this is the first study to address the generalization problem of NN OPF solvers with respect to the representativeness of training datasets. The physical-model-integrated NN is designed by integrating an MLP and an OPF-model module. This specific structure outputs not only the optimal decision variables of the OPF problem but also the constraint violation degree. Based on this NN, the worth-learning data generation method can identify feasible training samples that are not well generalized by the NN. Accordingly, by iteratively applying this method and including the newly identified worth-learning data samples in the training set, the representativeness of the training set can be significantly enhanced.

The theoretical analysis shows that the method brings little computational burden into the training process and can make the models generalize over the feasible region. Experimental results show that the proposed method leads to over a 50% reduction of both constraint violations and optimality loss compared to conventional NN solvers. Furthermore, the computation speed of the proposed method is three orders of magnitude faster than that of the DC-OPF solver.
1130
APPENDIX A
COMPUTATIONAL EFFICIENCY OF WORTH-LEARNING DATA GENERATION

To analyze the computational complexity of the proposed NN model with worth-learning data generation, we adopt a widely used measure: the number of floating-point operations (FLOPs) during the NN model's forward-backward propagation. The total FLOPs of a single layer of a fully-connected NN model can be calculated as follows:

Forward:  FLOPs = (2I − 1) × O,   (14a)
Backward: FLOPs = (2I − 1) × O,   (14b)

where I is the dimension of the layer's input, and O is the dimension of its output.

To approximate an OPF mapping based on a 57-bus system, the proposed NN model uses the following structure: 84 × 1000 × 2560 × 2560 × 5120 × 2000 × 114. According to Eq. (14), the total FLOPs of the NN per forward-backward process is around 1 × 10^8. The GPU used in the experiment is the Quadro P6000, whose performance is 12.2 TFLOP/s (1 TFLOP/s = 10^12 FLOP/s). Using this GPU, we can perform the forward-backward process 1.22 × 10^5 times per second.

For the worth-learning data generation in Algorithm 1, the forward process calculates Vio_ifs and Vio_phm, and the backward process updates S′^G_ifs and V′_ifs by the gradients. We concatenate S′^G_ifs and V′_ifs into a vector x, and we suppose the range of each item in x is [0, 10] and that x changes by 10^−3 in each update step. Varying from 0 to 10 thus costs 10^4 forward-backward processes. In other words, the algorithm can update at least 1.22 × 10^5 / 10^4 ≈ 12 samples per second, so finding one sample costs no longer than 0.08 s.

In practice, there is a slight gap between the actual speed in experiments and the theoretical analysis. According to the numerical experiments in Section IV-A, an average of 533 samples are found in 30 s; the average time for identifying one sample is 0.056 s.
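Applying Eq. (14) to the stated layer widths reproduces the order-of-magnitude claim; a quick check:

```python
# FLOPs of the 57-bus NN per forward-backward pass, per Eq. (14):
# each layer costs (2I - 1) * O forward, and the same again backward.
layers = [84, 1000, 2560, 2560, 5120, 2000, 114]
forward = sum((2 * i - 1) * o for i, o in zip(layers, layers[1:]))
total = 2 * forward  # on the order of 1e8, as stated in the text
```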
1171
1266
From the analysis presented above, we can conclude that the proposed worth-learning data generation method brings little computational burden into the training process.

Fig. 12. Illustration of the covered region S_cover expanding its area by the generalized region S_add.
1272
APPENDIX B
CONVERGENCE OF WORTH-LEARNING DATA GENERATION

This section verifies that the proposed NN with worth-learning data generation can generalize to the whole feasible set. NN models are continuous functions because both linear layers and activation functions are continuous. We define a critical violation value ϵ that divides the input space into two regions: the covered region (where the Vio_phm values of all points are less than or equal to ϵ) and the uncovered region (where the Vio_phm values of all points are greater than ϵ). The boundaries of the two regions consist of the points whose Vio_phm values are approximately equal to ϵ. Using these points as initial points, we can identify points with local maxima in the uncovered region by max violation backpropagation.

Next, these new points {x1} (the red points in Fig. 12) are added to the training set. After training, the neighborhood of these new points {x1} is covered. Due to the generalization of NNs, most points in the area S_add = {x | x = a × x^ini_0 + (1 − a) × x1, 0 ≤ a ≤ 1} are also covered, where {x^ini_0} are the initial points on the boundaries (the black points), as shown in Fig. 12.

Therefore, the area S_add is subtracted from the uncovered region. Through iterations, the uncovered region is emptied, and the number of added samples converges to zero.

In practice, we choose the training set instead of the boundary points as initial points for convenience. Although some samples in the training set are not at boundaries, they are eliminated by the filter function, as shown in Algorithm 1. Therefore, the replacement of the boundary points has no impact on the results.
1305
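The convergence argument above can be illustrated with a minimal 1-D toy sketch (the function names and the violation surrogate below are hypothetical, not the paper's implementation): uncovered local maxima are repeatedly added to the training set, "training" covers their neighborhoods, and the number of newly added samples drops to zero.

```python
import math

def violation(x):
    # Hypothetical constraint-violation surrogate on [0, 1].
    return abs(math.sin(8 * x))

def worth_learning_toy(eps=0.05, radius=0.06, grid=400):
    xs = [i / grid for i in range(grid + 1)]
    # Initially covered region: points whose violation is at most eps.
    covered = [violation(x) <= eps for x in xs]
    added_per_iter = []
    while True:
        # Uncovered points that are discrete local maxima of the violation
        # (standing in for max violation backpropagation).
        new_pts = [i for i in range(1, grid)
                   if not covered[i]
                   and violation(xs[i]) >= violation(xs[i - 1])
                   and violation(xs[i]) >= violation(xs[i + 1])]
        added_per_iter.append(len(new_pts))
        if not new_pts:
            break
        # Training on the new points covers their neighborhoods.
        for i in new_pts:
            for j in range(grid + 1):
                if abs(xs[j] - xs[i]) <= radius:
                    covered[j] = True
    return added_per_iter
```

Running the sketch shows the count of newly added samples reaching zero after a few iterations, mirroring the convergence claim.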
9dE2T4oBgHgl3EQfQAa_/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
9tE2T4oBgHgl3EQfQQYW/content/tmp_files/2301.03767v1.pdf.txt ADDED
@@ -0,0 +1,1506 @@
+ Online Backfilling with No Regret for Large-Scale Image Retrieval
+ Seonguk Seo1,† Mustafa Gokhan Uzunbas3 Bohyung Han1,2 Sara Cao3 Joena Zhang3 Taipeng Tian3 Ser-Nam Lim3
+ 1ECE & 2IPAI, Seoul National University 3Meta AI
+ {seonguk, bhhan}@snu.ac.kr {gokhanuzunbas, joenazhang, xuefeicao01, ttp, sernamlim}@meta.com
+ Abstract
+ Backfilling is the process of re-extracting all gallery embeddings from upgraded models in image retrieval systems. It inevitably requires a prohibitively large amount of computational cost and even entails the downtime of the service. Although backward-compatible learning sidesteps this challenge by tackling query-side representations, this leads to suboptimal solutions in principle because gallery embeddings cannot benefit from model upgrades. We address this dilemma by introducing an online backfilling algorithm, which enables us to achieve a progressive performance improvement during the backfilling process while not sacrificing the final performance of the new model after the completion of backfilling. To this end, we first propose a simple distance rank merge technique for online backfilling. Then, we incorporate a reverse transformation module for more effective and efficient merging, which is further enhanced by adopting a metric-compatible contrastive learning approach. These two components help to make the distances of the old and new models compatible, resulting in desirable merge results during backfilling with no extra computational overhead. Extensive experiments show the effectiveness of our framework on four standard benchmarks in various settings.
+ 1. Introduction
+ Image retrieval models [5, 10, 21, 23] have achieved remarkable performance by adopting deep neural networks for representing images. Yet, all models need to be upgraded at times to take advantage of improvements in training datasets, network architectures, and training techniques. This unavoidably leads to the need for re-extracting the features from millions or even billions of gallery images using the upgraded new model. This process, called backfilling or re-indexing, needs to be completed before the retrieval system can benefit from the new model, which may take months in practice.
+ † This work was mostly done during an internship at Meta AI.
+ To sidestep this bottleneck, several backfilling-free approaches based on backward-compatible learning [4, 13, 19, 20, 22] have been proposed. They learn a new model while ensuring that its feature space is still compatible with the old one, thus avoiding the need for updating old gallery embeddings. Although these approaches have achieved substantial performance gains without backfilling, they achieve feature compatibility at the expense of feature discriminability, and their performance is suboptimal. We argue that backward-compatible learning is not a fundamental solution and that backfilling is still essential to accomplish state-of-the-art performance without sacrifices.
+ To resolve this compatibility-discriminability dilemma, we relax the backfill-free constraint and propose a novel online backfilling algorithm equipped with three technical components. We posit that an online backfilling technique needs to satisfy three essential conditions: 1) immediate deployment after the completion of a model upgrade, 2) progressive and non-trivial performance gains in the middle of backfilling, and 3) no degradation of final performance compared to offline backfilling. To this end, we first propose a distance rank merge framework to make online backfilling feasible, which retrieves images from both the old and new galleries separately and merges their results to obtain the final retrieval outputs even when backfilling is still ongoing. While this approach provides a monotonic performance increase with the progress of backfilling regardless of the gallery of interest and network architectures, it requires feature computations twice, once from the old model and another from the new one, at the inference stage of a query. To overcome this limitation, we introduce a reverse transformation module, which is a lightweight mapping network between the old and new embeddings. The reverse transformation module allows us to obtain query representations compatible with both the old and new galleries using only a single feature extraction. On the other hand, however, we notice that the scales of distances in the embedding spaces of the two models could be significantly different. We resolve this limitation with a metric-compatible learning technique, which calibrates the distances of the two models via contrastive learning, further enhancing the performance of rank merge.
+ arXiv:2301.03767v1 [cs.CV] 10 Jan 2023
+ The main contributions of our work are summarized as follows.
+ • We propose an online backfilling approach, a fundamental solution for model upgrades in image retrieval systems, based on distance rank merge to overcome the compatibility-discriminability dilemma in existing compatible learning methods.
+ • We incorporate a reverse query transform module to make a query compatible with both the old and new galleries while computing the feature extraction of the query only once in the middle of the backfilling process.
+ • We adopt a metric-compatible learning technique to make the merge process robust by calibrating distances in the feature embedding spaces given by the old and new models.
+ • The proposed approach outperforms all existing methods by significant margins on four standard benchmark datasets under various scenarios.
+ The rest of this paper is organized as follows. Section 2 reviews the related works. We present the main framework of online backfilling in Section 3 and discuss the technical components for improvement in Sections 4 and 5. We demonstrate the effectiveness of the proposed framework in Section 6 and conclude this paper in Section 7.
+ 2. Related Work
+ Backward compatible learning. Backward compatibility refers to the ability to support older versions in hardware or software systems. It has recently been adopted in model upgrade scenarios for image retrieval systems. Since the feature spaces given by models trained on datasets in different regimes are not compatible [11, 24], model upgrades require re-extraction of all gallery images with the new models, which takes a huge amount of computational cost. To avoid this time-consuming backfilling cost, backward compatible training (BCT) [1, 13, 15, 19, 22, 26] has been proposed to learn better feature representations while remaining compatible with old embeddings, which makes the new model backfill-free. Shen et al. [19] employ an influence loss that utilizes the old classifier as a regularizer when training the new model. LCE [13] introduces an alignment loss to align the class centers between the old and new models and a boundary loss that enforces more compact intra-class distributions for the new model. Bai et al. [1] propose a joint prototype transfer with structural regularization to align the two embedding spaces. UniBCT [26] presents a structural prototype refinement algorithm that first refines noisy old features with graph transition and then conducts backward compatible training. Although these approaches improve compatible performance without backfilling, they clearly sacrifice feature discriminability to achieve feature compatibility with non-ideal old gallery embeddings.
+ Compatible learning with backfilling. To overcome the inherent limitation of backward compatible learning, several approaches [17, 20, 25] have been proposed to utilize backfilling efficiently. Forward compatible training (FCT) [17] learns a lightweight transformation module that updates old gallery embeddings to be compatible with new embeddings. Although it achieves better compatible performance than BCT, it requires additional side-information [2] to map from old to new embeddings, which limits its practicality. Moreover, FCT still suffers from a computational bottleneck until all old gallery embeddings are transformed, especially when the side-information needs to be extracted. On the other hand, RACT [25] and BiCT [20] alleviate this bottleneck by backfilling the gallery embeddings in an online manner. RACT first trains a backward-compatible new model with a regression-alleviating loss and then backfills the old gallery embeddings with the new model. Because the new feature space is compatible with the old one, the new model can be deployed right away while backfilling is carried out in the background. BiCT further reduces the backfilling cost by transforming the old gallery embeddings with forward-compatible training [17]. Although both approaches can utilize online backfilling, they still sacrifice the final performance because the final new embeddings are constrained by the old ones. Unlike these methods, our framework enables online backfilling while fully exploiting the final new model performance without any degradation.
+ 3. Image Retrieval by Rank Merge
+ This section discusses our baseline image retrieval algorithm that makes online backfilling feasible. We first present our motivation and then describe technical details with empirical observations.
+ 3.1. Overview
+ Figure 1. Image retrieval with the proposed distance rank merge technique. In the middle of backfilling, we retrieve images independently using two separate models and their galleries, and merge the retrieval results based on their distances. Note that the total number of gallery embeddings is fixed throughout the backfilling process, i.e., |G| = |Gnew| + |Gold|.
+ Our goal is to develop a fundamental solution via online backfilling to overcome the compatibility-discriminability trade-off in compatible model upgrades. This strategy removes the inherent limitation of backfill-free backward-compatible learning—the inability to use state-of-the-art representations of gallery images after model upgrades—while avoiding the prohibitive cost of offline backfilling, under which we cannot benefit from the model upgrade until backfilling is completed. To be specific, the proposed image retrieval system with online backfilling should satisfy the following three conditions:
+ 1. The system can be deployed immediately as soon as the model upgrade is complete.
+ 2. The performance should monotonically increase without negative flips1 as backfill progresses.
+ 3. The final performance should not be sacrificed compared to the algorithm relying on offline backfilling.
+ We present a distance rank merge approach for image retrieval, which enables online backfilling in arbitrary model upgrade scenarios. Our method maintains two separate retrieval pipelines corresponding to the old and new models and merges the retrieval results from the two models based on their distances from a query embedding. This allows us to run the retrieval system without a warm-up period and achieve surprisingly good results during the backfill process. Note that the old and new models are not required to be compatible at this moment, but we will make them so to further improve performance in the subsequent sections.
+ 3.2. Formulation
+ Let q ∈ Q be a query image and G = {g1, ..., gN} be a gallery composed of N images. An embedding network φ(·) projects an image onto a learned feature embedding space. To retrieve the closest gallery image given a query, we find arg min_{g∈G} dist(φ(q), φ(g)), where dist(·, ·) is a distance metric. Following [19], we define the retrieval performance as
+ M(φ(Q), φ(G)), (1)
+ where M(·, ·) is an evaluation metric such as mean average precision (mAP) or cumulative matching characteristics (CMC), and φ(·) indicates the embedding models applied to the queries and the gallery, respectively.
+ 1The “negative flip” refers to performance degradation caused by incorrect retrievals of samples by the new model, which were correctly recognized by the old model.
+ Backward compatibility. Denote the old and new embedding networks by φold(·) and φnew(·), respectively. If φnew(·) is backward compatible with φold(·), then we can perform search on a set of old gallery embeddings using a new query embedding, i.e., arg min_{g∈G} dist(φnew(q), φold(g)). As stated in [19], backward compatibility is achieved when the following criterion is satisfied:
+ M(φnew(Q), φold(G)) > M(φold(Q), φold(G)). (2)
+ From now on, we refer to a pair of embedding networks for query and gallery as a retrieval system, e.g., {φ(·), φ(·)}.
+ Rank merge. Assume that the first M out of a total of N images are backfilled, i.e., Gnew = {g1, ..., gM} and Gold = {gM+1, ..., gN}. Note that the total number of stored gallery embeddings is fixed to N during the backfilling process, i.e., Gold = G − Gnew. Then, we first conduct image retrieval using the individual retrieval systems, {φold, φold} and {φnew, φnew}, independently as
+ gm = arg min_{gi ∈ Gold} dist(φold(q), φold(gi)), (3)
+ gn = arg min_{gj ∈ Gnew} dist(φnew(q), φnew(gj)). (4)
+ Figure 1 illustrates the retrieval process. For each query image q, we finally select gm if dist(φold(q), φold(gm)) < dist(φnew(q), φnew(gn)), and gn otherwise. The retrieval performance after rank merge during backfilling is given by
+ Mt := M({φold(Q), φnew(Q)}, {φold(Gold_t), φnew(Gnew_t)}), (5)
+ where t ∈ [0, 1] indicates the rate of backfilling completion, i.e., |Gnew_t| = t|G| and |Gold_t| = (1 − t)|G|. The criteria
+ [Figure 1 diagram: the old retrieval system over the old gallery Gold and the new retrieval system over the backfilled gallery Gnew, with |G| = |Gnew| + |Gold|.]
+ [Figure 2 plots: mAP and CMC (Top-1 Acc.) versus backfill progress (%) for Old, Merge, and New on panels (a) ImageNet-1K, (b) CIFAR-100, (c) Places-365, and (d) Market-1501.]
+ Figure 2. mAP and CMC results on the standard benchmarks using ResNet-18. Old and New denote the performance without backfilling and with offline backfilling, respectively. The proposed distance rank merging of the old and new models, denoted by Merge, exhibits desirable results; the performance monotonically increases as backfill progresses without negative flips for all datasets, and our algorithm based on online backfilling achieves final performance competitive with offline backfilling. The numbers in the legend indicate either AUCmAP or AUCCMC scores.
+ discussed in Section 3.1 are formally defined as
+ M0 ≥ M(φold(Q), φold(G)), (6)
+ M1 ≥ M(φnew(Q), φnew(G)), (7)
+ Mt1 ≥ Mt2 if t1 ≥ t2. (8)
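The merge rule of Eqs. (3)–(5) can be sketched as follows (a minimal illustration with hypothetical toy embeddings and plain Euclidean distance; gallery items move from the old to the new gallery as they are backfilled):

```python
import math

def dist(u, v):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def rank_merge(q_old, q_new, gallery_old, gallery_new):
    """gallery_old: {id: old-model embedding}, gallery_new: {id: new-model embedding}.
    Returns the merged nearest-neighbor id, following Eqs. (3)-(5)."""
    best_old = min(gallery_old.items(), key=lambda kv: dist(q_old, kv[1]), default=None)
    best_new = min(gallery_new.items(), key=lambda kv: dist(q_new, kv[1]), default=None)
    if best_old is None:
        return best_new[0]
    if best_new is None:
        return best_old[0]
    # Select the candidate whose distance to the query is smaller.
    d_old = dist(q_old, best_old[1])
    d_new = dist(q_new, best_new[1])
    return best_old[0] if d_old < d_new else best_new[0]

# Toy example: item "g2" has already been backfilled into the new gallery.
g_old = {"g1": [0.9, 0.1]}
g_new = {"g2": [0.0, 1.0]}
print(rank_merge([0.0, 0.9], [0.1, 1.0], g_old, g_new))
```

Note that the comparison assumes the two distance scales are comparable; Section 5 addresses exactly this calibration issue.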
+ Comprehensive evaluation. To measure both the backfilling cost and the model performance comprehensively during online backfilling, we utilize the following metrics that calculate the area under the mAP or CMC curves:
+ AUCmAP = ∫_0^1 mAPt dt and AUCCMC = ∫_0^1 CMCt dt.
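Since mAPt and CMCt are only measured at finitely many backfill checkpoints in practice, the integrals can be approximated with the trapezoidal rule; a small sketch (the checkpoint values below are made up for illustration):

```python
def auc(progress, values):
    # Trapezoidal approximation of the area under a metric curve
    # sampled at backfill-progress points t in [0, 1].
    area = 0.0
    for (t0, v0), (t1, v1) in zip(zip(progress, values), zip(progress[1:], values[1:])):
        area += 0.5 * (v0 + v1) * (t1 - t0)
    return area

# e.g., mAP measured at 0%, 50%, and 100% backfill:
print(auc([0.0, 0.5, 1.0], [0.60, 0.70, 0.78]))  # ≈ 0.695
```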
+ 3.3. Merge Results
+ We present the results from the rank merge strategy on standard benchmarks, including ImageNet-1K [18] and Places-365 [28], in Figure 2. Our rank merging approach yields strong and robust results for all datasets; both mAP and CMC monotonically increase without negative flips as backfill progresses, even though the old and new models are not compatible with each other. Also, it takes full advantage of the new model until the end of backfilling without suffering from performance degradation. This validates that our rank merge technique satisfies the criteria for online backfilling discussed in Sections 3.1 and 3.2. Please refer to Section 6.1 for the experimental details.
+ Figure 3. The reverse query transform module, ψ(·), learns a mapping from the new to the old feature space. We only update the parameters of the module ψ(·) (in the red rectangle) during training.
+ 4. Reverse Query Transform
+ Our baseline image retrieval method is model-agnostic, free from extra training, and effective for performance improvement. However, one may argue that the proposed approach is computationally expensive at inference time because we need to conduct feature extraction twice per query, for both the old and new models. This section discusses how to alleviate this limitation by introducing a small network, called the reverse query transform module.
+ 4.1. Basic Formulation
+ Figure 4. Image retrieval merging with the reverse query transform module. The backward retrieval system consists of the reversely transformed new query and the old gallery, {φrev, φold}. The final image retrieval results are given by merging the outputs from {φrev, φold} and {φnew, φnew}.
+ To reduce the computational cost incurred by computing query embeddings twice at the inference stage, we compute the embedding using the new model and transform it to a version compatible with the old model through the reverse query transform module, as illustrated in Figure 3. To establish such a mechanism, we fix the parameters of the old and new models, {φold, φnew}, after training them independently, and train a lightweight network, ψ(·), which transforms an embedding of the new model into one of the old model. For each training example x, our objective is to minimize the following loss:
+ LRQT(x) := dist(ψ(φnew(x)), φold(x)), (9)
+ where dist(·, ·) is a distance metric such as the ℓ2 or cosine distance. Because we only update the parameters in ψ(·), not the ones in φnew(·) or φold(·), we can still access the representations given by the new model at no cost even after the optimization of ψ(·). Note that this reverse query transform module differs from FCT [17] mainly in terms of the transformation direction and the requirement of side information. FCT performs a transformation from the old representation to the new one, while the opposite is true for our approach. Since the embedding quality of the new model is highly likely to be better than that of the old one, our reverse transformation module performs well even without additional side information and, consequently, is more practical and efficient.
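A minimal sketch of the objective in Eq. (9) under the simplifying assumption that ψ is linear and dist is the ℓ2 distance (numpy with synthetic toy embeddings; the actual ψ is a small learned network, so this is only illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen embeddings of the same training images under the two models.
W_true = rng.normal(size=(4, 6))    # hidden relation, used only to synthesize data
F_new = rng.normal(size=(128, 6))   # phi_new(x) for 128 training images
F_old = F_new @ W_true.T            # phi_old(x); here an exact linear image

# psi minimizes ||psi(phi_new(x)) - phi_old(x)||^2 over the training set (Eq. 9);
# for a linear psi the optimum is an ordinary least-squares solution.
Psi, *_ = np.linalg.lstsq(F_new, F_old, rcond=None)

q_new = rng.normal(size=6)          # a single new-model query embedding
q_rev = q_new @ Psi                 # backward-compatible query, phi_rev(q)
print(np.allclose(q_rev, W_true @ q_new))
```

The key property carried over from the paper's setting is that only ψ is fit, while both embedding models stay frozen.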
+ 4.2. Integration into Baseline Retrieval System
+ Figure 4 illustrates the distance rank merge process together with the proposed reverse transformation module. The whole procedure consists of two retrieval systems defined by pairs of query and gallery representations: the backward retrieval system {φrev, φold} and the new retrieval system {φnew, φnew}, where φrev := ψ(φnew). Note that we obtain both the new and compatible query embeddings, φnew(q) and φrev(q) = ψ(φnew(q)), using a shared feature extraction network, φnew(·).
+ The entire image retrieval pipeline consists of two parts: 1) feature extraction from a query image and 2) search for the nearest image in the gallery given the query. Compared to image retrieval based on a single model, the proposed model with rank merge requires negligible additional cost, which corresponds to the feature transformation ψ(·) in the first part. Note that the total number of gallery embeddings is fixed, i.e., |Gnew| + |Gold| = |G|, so the cost of the second part is always the same in both cases.
534
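The merge step can be sketched as follows (the gallery split and the plain distance-sort merge rule are simplified assumptions): old, not-yet-backfilled items are scored with the backward pair {φrev, φold}, backfilled items with {φnew, φnew}, and one ranked list is produced over the whole gallery.

```python
import numpy as np

def rank_merge(q_rev, q_new, gallery_old, gallery_new, k=5):
    """Single ranked list over both galleries, merged by raw distance."""
    d_old = np.linalg.norm(gallery_old - q_rev, axis=1)  # backward system distances
    d_new = np.linalg.norm(gallery_new - q_new, axis=1)  # new system distances
    dists = np.concatenate([d_old, d_new])               # |G_old| + |G_new| = |G|
    return np.argsort(dists)[:k]                         # indices into merged gallery

rng = np.random.default_rng(1)
q_rev = rng.standard_normal(256)              # psi(phi_new(q))
q_new = rng.standard_normal(512)              # phi_new(q)
gallery_old = rng.standard_normal((10, 256))  # phi_old embeddings (not backfilled)
gallery_new = rng.standard_normal((10, 512))  # phi_new embeddings (backfilled)
top = rank_merge(q_rev, q_new, gallery_old, gallery_new)
```

Because the total number of gallery embeddings is fixed, the search cost of this merged ranking matches that of a single-model system.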
+ 5. Distance Calibration
535
+ While the proposed rank merge technique with the ba-
536
+ sic reverse transformation module works well, there ex-
537
+ ists room for improvement in calibrating feature embedding
538
+ spaces of both systems. This section discusses these issues in
+ detail and presents how we address them.
540
+ 5.1. Cross-Model Contrastive Learning
541
+ The objective in (9) cares about the positive pairs φold
542
+ and φrev with no consideration of negative pairs, which can
543
+ sometimes lead to misranked positions. To handle this issue,
544
+ we employ a supervised contrastive learning loss [7, 14] to
545
+ consider both positive and negative pairs as follows:
546
+ LCL(xi, yi) = − log [ Σ_{yk=yi} s^old_ik / ( Σ_{yk=yi} s^old_ik + Σ_{yk≠yi} s^old_ik ) ],    (10)
557
+ where s^old_ij = exp(−dist(φrev(xi), φold(xj)))
565
+ and yi de-
566
+ notes the class membership of the ith sample. For more ro-
567
+ bust contrastive training, we perform hard example mining
568
+ for both the positive and negative pairs2. Such a contrastive
569
+ learning approach facilitates distance calibration and im-
570
+ proves feature discrimination because it promotes separa-
571
+ tion of the positive and negative examples.
572
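The supervised contrastive loss in (10) can be sketched directly from the definition of s^old_ij (the toy features and labels below are assumptions; hard example mining is omitted for brevity):

```python
import numpy as np

def cl_loss(rev_feats, old_feats, labels):
    """Contrastive loss of (10): s^old_ij = exp(-||phi_rev(x_i) - phi_old(x_j)||);
    positives are same-label pairs, negatives are the rest."""
    labels = np.asarray(labels)
    losses = []
    for i in range(len(rev_feats)):
        s = np.exp(-np.linalg.norm(old_feats - rev_feats[i], axis=1))
        pos = s[labels == labels[i]].sum()
        losses.append(-np.log(pos / s.sum()))
    return float(np.mean(losses))

rng = np.random.default_rng(2)
rev = rng.standard_normal((6, 32))
old = rev + 0.01 * rng.standard_normal((6, 32))  # old features near their pairs
loss = cl_loss(rev, old, labels=[0, 0, 1, 1, 2, 2])
```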
+ Now, although the distances within the backward re-
573
+ trieval system {φrev, φold} become more comparable, they
574
+ are still not properly calibrated in terms of the distances
575
+ in the new retrieval system {φnew, φnew}. Considering dis-
576
+ tances in both retrieval systems jointly when we train the
577
+ reverse transformation module, we can obtain more com-
578
+ parable distances and consequently achieve more reliable
579
+ rank merge results. From this perspective, we propose a
580
+ 2For each anchor, we select half of the examples from each of the positive
+ and negative sets based on their distances from the anchor.
582
+ [Figure 4 diagram: the query q is embedded by the shared network; the backward retrieval system {φrev, φold} searches the old gallery Gold while the new retrieval system {φnew, φnew} searches the backfilled gallery Gnew, over the full gallery G.]
+ Figure 5. Illustration of cross-model contrastive learning loss with
598
+ backward retrieval system {φold, φrev} and new retrieval system
599
+ {φnew, φnew}.
600
+ Two boxes with dotted lines correspond to two
601
+ terms in (11). For each retrieval system, the distances between
602
+ positive pairs are learned to be both smaller than those of negative
603
+ pairs in the two retrieval systems.
604
+ cross-model contrastive learning loss as
+ LCMCL(xi, yi) =    (11)
+ − log [ Σ_{yk=yi} s^old_ik / ( Σ_{yk=yi} s^old_ik + Σ_{yk≠yi} s^old_ik + Σ_{yk≠yi} s^new_ik ) ]
+ − log [ Σ_{yk=yi} s^new_ik / ( Σ_{yk=yi} s^new_ik + Σ_{yk≠yi} s^new_ik + Σ_{yk≠yi} s^old_ik ) ],
+ where s^new_ij = exp(−dist(φnew(xi), φnew(xj))) and s^old_ij = exp(−dist(φrev(xi), φold(xj))).
643
+ Figure 5 illustrates the
644
+ concept of the loss function. The positive pairs from the
645
+ backward retrieval system {φrev, φold} are trained to locate
646
+ closer to the anchor than not only the negative pairs from
647
+ the same system but also the ones from the new system
648
+ {φnew, φnew}, and vice versa. We finally replace (9) with
649
+ (11) for training the reverse transformation module. Com-
650
+ pared to (10), additional heterogeneous negative terms in
651
+ the denominator of (11) play a role as a regularizer to make
652
+ the distances from one model directly comparable to those
653
+ from other one, which is desirable for our rank merge strat-
654
+ egy.
655
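In code, the cross-model term of (11) only changes the denominators: heterogeneous negatives from the other system are added, which is what makes the two distance scales directly comparable. A minimal sketch (toy inputs are assumptions):

```python
import numpy as np

def cmcl_loss(rev, old, new, labels):
    """Cross-model contrastive loss of (11): each system keeps its own positives,
    but negatives from BOTH systems appear in the denominator."""
    labels = np.asarray(labels)
    terms = []
    for i in range(len(labels)):
        s_old = np.exp(-np.linalg.norm(old - rev[i], axis=1))
        s_new = np.exp(-np.linalg.norm(new - new[i], axis=1))
        pos = labels == labels[i]
        neg = ~pos
        t1 = -np.log(s_old[pos].sum() /
                     (s_old[pos].sum() + s_old[neg].sum() + s_new[neg].sum()))
        t2 = -np.log(s_new[pos].sum() /
                     (s_new[pos].sum() + s_new[neg].sum() + s_old[neg].sum()))
        terms.append(t1 + t2)
    return float(np.mean(terms))

rng = np.random.default_rng(3)
rev = rng.standard_normal((6, 32))
old = rev + 0.01 * rng.standard_normal((6, 32))
new = rng.standard_normal((6, 32))
loss = cmcl_loss(rev, old, new, [0, 0, 1, 1, 2, 2])
```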
+ 5.2. Training New Feature Embedding
656
+ So far, we have not jointly trained the reverse transfor-
+ mation module ψ(·) and the new feature extraction module
658
+ φnew(·) as illustrated in Figure 3. This hampers the compat-
659
+ ibility between the backward and new retrieval systems be-
660
+ cause the backward retrieval system {φrev, φold} is the only
661
+ part to be optimized while the new system {φnew, φnew} is
662
+ fixed. To provide more flexibility, we add another transfor-
663
+ mation module ρ(·) on top of the new model as shown in
664
+ Figure 6, where ρnew = ρ(φnew) and ρrev = ψ(ρ(φnew)). In
665
+ this setting, we use ρnew as the final new model instead of
666
+ φnew, and our rank merge process employs {ρrev, φold} and
667
+ Figure 6.
668
+ Compatible training with learnable new embedding.
669
+ Compared to Figure 3, another transformation module ρ(·) is in-
670
+ corporated on top of the new model to learn new embedding fa-
671
+ vorable to our rank merging. The retrieval results are now merged
672
+ from {ρrev, φold} and {ρnew, ρnew}.
673
+ {ρnew, ρnew} eventually. This strategy helps to achieve a bet-
674
+ ter compatibility by allowing both systems to be trainable.
675
+ The final loss function to train the reverse transformation
676
+ module has the identical form to LCMCL in (11) except for
677
+ the definitions of s^new_ij and s^old_ij, which are given by
+ s^new_ij = exp(−dist(ρnew(xi), ρnew(xj)))    (12)
+ s^old_ij = exp(−dist(ρrev(xi), φold(xj))).    (13)
694
+ Note that this extension does not result in computational
695
+ overhead at inference stage but yet improves the perfor-
696
+ mance even further.
697
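Structurally, the Section 5.2 extension just composes an extra head ρ on the new backbone before the reverse transform ψ; a shape-level sketch (linear maps and dimensions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
R = rng.standard_normal((512, 512)) * 0.01  # rho: learnable head on the new model
P = rng.standard_normal((512, 256)) * 0.01  # psi: reverse transform to old space

def rho_new(phi_new_x):
    """Final new embedding rho(phi_new(x)), used for both query and gallery."""
    return phi_new_x @ R

def rho_rev(phi_new_x):
    """Compatible embedding psi(rho(phi_new(x))), matched against phi_old."""
    return rho_new(phi_new_x) @ P

x = rng.standard_normal((4, 512))  # stand-in for backbone features phi_new(x)
```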
+ 6. Experiments
698
+ We present our experiment setting, the performance of
699
+ the proposed approach, and results from the analysis of al-
700
+ gorithm characteristics.
701
+ 6.1. Dataset and Evaluation Protocol
702
+ We employ four standard benchmarks: ImageNet-1K [18],
+ CIFAR-100 [9], Places-365 [28], and
+ Market-1501 [27]. As in previous works [17, 19], we adopt
707
+ the extended-class setting in model upgrade; the old model
708
+ is trained with examples from a half of all classes while the
709
+ new model is trained with all samples. For example, on the
710
+ ImageNet-1K dataset, the old model is trained with the first
711
+ 500 classes and the new model is trained with the whole
712
+ 1,000 classes.
713
+ Following the previous works [17, 20, 25], we measure
714
+ mean average precision (mAP) and cumulative matching
715
+ characteristics (CMC)3. We also report our comprehensive
716
+ results in terms of AUCmAP and AUCCMC at 10 backfill time
717
+ slices, i.e., t ∈ {0.0, 0.1, ..., 1.0} in (5).
718
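These AUC metrics summarize a metric curve sampled at the backfill slices t = 0.0, 0.1, ..., 1.0; a trapezoidal version is sketched below (the mAP curve is a made-up example, not a reported result):

```python
def backfill_auc(vals):
    """Trapezoidal AUC of a metric curve sampled uniformly on t in [0, 1]."""
    t = [i / (len(vals) - 1) for i in range(len(vals))]
    return sum((vals[i] + vals[i + 1]) / 2 * (t[i + 1] - t[i])
               for i in range(len(vals) - 1))

# Hypothetical mAP values at the 11 slices t = 0.0, 0.1, ..., 1.0.
curve = [0.31, 0.34, 0.38, 0.41, 0.44, 0.46, 0.48, 0.50, 0.51, 0.52, 0.53]
auc = backfill_auc(curve)
```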
+ 6.2. Implementation Details
719
+ We employ ResNet-18 [6], ResNet-50 [6], and ViT-
720
+ B/32 [3] as our backbone architectures for either old or new
721
+ 3CMC corresponds to top-k accuracy, and we report top-1 accuracy in
722
+ all tables and graphs.
723
+
724
+ old
725
+ rev
726
+ new
727
+ D
728
+ anchor
729
+ positive
730
+ negativeold
731
+ (.)
732
+ p()
733
+ ()
734
+ new
735
+ rev
736
+ 0Table 1. Comparison with existing compatible learning methods on four standard benchmarks in homogeneous model upgrades. Gain
737
+ denotes the relative gain that each method achieves over the old model in terms of AUCmAP, compared to the gain of the new model. The proposed
+ framework, dubbed RM, consistently outperforms all other models by large margins on all datasets. Note that RMnaïve
739
+ indicates the basic version of distance rank merge described in Sec. 3.2 and that Old and New denote embedding models of gallery images.
740
+ Method                  | ImageNet-1K          | CIFAR-100            | Places-365           | Market-1501
+                         | AUCmAP AUCCMC Gain   | AUCmAP AUCCMC Gain   | AUCmAP AUCCMC Gain   | AUCmAP AUCCMC Gain
+ Old                     | 31.2   49.7   0%     | 21.6   34.3   0%     | 16.5   30.7   0%     | 62.7   82.7   0%
+ New                     | 51.3   70.3   100%   | 47.4   62.6   100%   | 23.4   39.1   100%   | 77.3   90.9   100%
+ RMnaïve (Ours)          | 40.0   63.9   44%    | 30.8   49.1   36%    | 19.5   35.8   43%    | 69.2   87.0   45%
+ BCT [19]                | 32.0   46.3   4%     | 26.4   43.5   19%    | 17.5   37.0   14%    | 66.6   84.3   27%
+ FCT [17]                | 36.9   58.7   28%    | 27.1   49.4   21%    | 22.5   37.3   87%    | 66.4   84.2   25%
+ FCT (w/ side-info) [17] | 43.6   65.0   62%    | 37.0   53.9   60%    | 23.7   38.3   104%   | 66.4   84.4   25%
+ BiCT [20]               | 35.1   59.7   19%    | 29.0   48.3   29%    | 19.0   34.9   36%    | 65.0   82.4   16%
+ RM (Ours)               | 53.4   68.1   110%   | 41.4   60.7   78%    | 28.2   41.7   170%   | 70.7   87.6   55%
860
+ [Figure 7 plots: mAP and CMC (Top-1 Acc.) vs. backfill progress (%) on (a) ImageNet-1K, (b) CIFAR-100, (c) Places-365, and (d) Market-1501, with curves for Old, New, BCT, FCT*, FCT (w/ side-info)*, BiCT, RM_naïve (Ours), and RM (Ours).]
1014
+ Figure 7. mAP and CMC (Top-1 Acc.) results of our full framework in comparison to existing approaches. The numbers in the legend
1015
+ indicate either AUCmAP or AUCCMC scores.
1016
+ models. All transformation modules, ψ(·) and ρ(·), con-
1017
+ sist of 1 to 5 linear layer blocks, where each block is com-
1018
+ posed of a sequence of operations, (Linear → BatchNorm
1019
+ → ReLU), except for the last block that only has a Lin-
1020
+ ear layer. Our algorithm does not use any side-information.
1021
+ Our modules are trained with the Adam optimizer [8] for 50
1022
+ epochs, where the learning rate is 1 × 10−4 at the beginning
1023
+ and decayed using cosine annealing [12]. Our frameworks
1024
+ are implemented with the PyTorch [16] library, and we plan
+ to release the source code of our work.
1026
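The transformation modules described above (1 to 5 blocks of Linear → BatchNorm → ReLU, with a Linear-only last block) can be captured by a small spec builder; this is a plain-Python stand-in, not the authors' released code:

```python
def build_transform_spec(num_blocks, dim_in, dim_hidden, dim_out):
    """Layer spec for psi(.)/rho(.): (Linear -> BatchNorm -> ReLU) repeated
    num_blocks - 1 times, followed by a final Linear-only block."""
    layers, d = [], dim_in
    for _ in range(num_blocks - 1):
        layers += [("Linear", d, dim_hidden), ("BatchNorm", dim_hidden), ("ReLU",)]
        d = dim_hidden
    layers.append(("Linear", d, dim_out))
    return layers

# A 3-block module mapping a 512-d new embedding to a 256-d old embedding
# (dimensions are illustrative assumptions).
spec = build_transform_spec(3, 512, 512, 256)
```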
+ 6.3. Results
1027
+ Homogeneous model upgrade
1028
+ We present the quantita-
1029
+ tive results in the homogeneous model upgrade scenario,
1030
+ where old and new models have the same architecture. We
1031
+ employ ResNet-50 for ImageNet and ResNet-18 for other
1032
+ datasets. Table 1 and Figure 7 compare the proposed frame-
1033
+ work, referred to as RM (Rank Merge), with existing com-
1034
+ patible learning approaches, including BCT [19], FCT [17],
1035
+ and BiCT [20]. As shown in the table, RM consistently out-
1036
+ performs all the existing compatible learning methods by
1037
+ significant margins on all datasets. BCT [19]
1038
+ learns backward compatible feature representations, which
1039
+ is backfill-free, but its performance gain is not impressive.
1040
+
1041
+ FCT [17] achieves meaningful performance improvement
1042
+ by transforming old gallery features, but most of the gains
1043
+ come from side-information [2].
1044
+ For example, if side-
1045
+ information is not available, the performance gain of FCT
1046
+ drops from 62% to 28% on the ImageNet dataset. Also,
1047
+ such side-information is not useful for the re-identification
1048
+ dataset, Market-1501, mainly because the model for the
1049
+ side-information is trained for image classification using the
1050
+ ImageNet dataset, which shows its limited generalizability.
1051
+ On the other hand, although BiCT [20] takes advantage of
1052
+ online backfilling with less backfilling cost, it suffers from
1053
+ degraded final performance and negative flips in the mid-
1054
+ dle of backfilling. Note that RMna¨ıve, our na¨ıve rank merg-
1055
+ ing between old and new models, is already competitive to
1056
+ other approaches.
1057
+ Heterogeneous model upgrade
1058
+ We evaluate our frame-
1059
+ work in more challenging scenarios and present the results
1060
+ in Figure 8, where the old and new models have different
1061
+ architectures, e.g., ResNet-18 → ResNet-50 or ResNet-18
1062
+ → ViT-B/32. In this figure, RMRQT (green line) denotes
1063
+ our ablative model trained with (9). Even in this setting,
1064
+ where both embedding spaces are more incompatible, our
1065
+ rank merge results from the old and new models still man-
1066
+ age to achieve a monotonous performance growth curve and
1067
+ RM improves the overall performance significantly further,
1068
+ which validates the robustness of our frameworks.
1069
+ Ablation study
1070
+ We analyze ablative variants of
+ our cross-model contrastive learning.
1072
+ For compatible training, CL-S employs contrastive learn-
1073
+ ing within the backward system only as in (10) while our
1074
+ CMCL considers distance metrics from both backward and
1075
+ new retrieval systems simultaneously as in (11). For a more
1076
+ thorough ablation study, we also design and test another
1077
+ metric learning objective, called CL-M, which is given by
1078
+ LCL-M(xi, yi) = − log [ Σ_{yk=yi} s^old_ik / ( Σ_{yk=yi} s^old_ik + Σ_{yk≠yi} s^old_ik ) ]
+ − log [ Σ_{yk=yi} s^new_ik / ( Σ_{yk=yi} s^new_ik + Σ_{yk≠yi} s^new_ik ) ],    (14)
1097
+ which conducts contrastive learning for both backward and
1098
+ new retrieval systems separately. Figure 9 visualizes the re-
1099
+ sults from the ablation studies, where CMCL consistently
1100
+ outperforms both CL-S and CL-M in various datasets and
1101
+ architectures. CL-M generally gives better merge results
1102
+ than CL-S because it calibrates the distances of new re-
1103
+ trieval system additionally.
1104
+ However, CL-M still suffers
1105
+ from negative flips because the distance metrics of both re-
1106
+ trieval systems are calibrated independently and not learned
1107
+ to be directly comparable to each other.
1108
+ On the other
1109
+ hand, CMCL improves overall performance curves con-
1110
+ sistently without negative flips.
1111
+ This validates that con-
1112
+ [Figure 8 plots: mAP and Top-1 Acc. vs. backfill progress (%), with AUC values in parentheses.
+ (a) ImageNet (ResNet-18 → ResNet-50): Old (0.223 / 0.436), New (0.513 / 0.703), RM (0.509 / 0.673), RM_RQT (0.372 / 0.631), RM_naïve (0.365 / 0.615).
+ (b) CIFAR-100 (ResNet-18 → ViT-B/32): Old (0.216 / 0.343), New (0.448 / 0.626), RM (0.420 / 0.611), RM_RQT (0.364 / 0.568), RM_naïve (0.309 / 0.514).
+ (c) Market-1501 (ResNet-18 → ResNet-50): MCT (Ours) (0.747 / 0.900), Direct Alignment (0.709 / 0.871), Merge [Old+New] (0.722 / 0.883).
+ (d) Places-365 (ResNet-18 → ResNet-50): Old (0.164 / 0.309), New (0.249 / 0.398), RM (0.292 / 0.427), RM_RQT (0.217 / 0.375), RM_naïve (0.208 / 0.366).]
1262
+ Figure 8.
1263
+ Experimental results with heterogeneous model up-
1264
+ grades. Our naïve rank merge between different architectures still
1265
+ achieves promising performance curves in various settings, and
1266
+ our full algorithm exhibits significantly better results.
1267
+ sidering the distance metrics of both systems simultane-
1268
+ ously helps to achieve better metric compatibility and con-
1269
+ sequently stronger merge results.
1270
+ 7. Conclusion
1271
+ We presented a novel compatible training framework for
1272
+ effective and efficient online backfilling. We first addressed
1273
+
1274
+ [Figure 9 plots: mAP and Top-1 Acc. vs. backfill progress (%), with AUC values in parentheses.
+ (a) ImageNet (ResNet-50 → ResNet-50): CMCL (Ours) (0.534 / 0.681), CL-M (0.487 / 0.648), CL-S (0.461 / 0.637).
+ (b) CIFAR-100 (ViT-B/32 → ViT-B/32): CMCL (Ours) (0.433 / 0.617), CL-M (0.411 / 0.572), CL-S (0.400 / 0.594).
+ (c) Places-365 (ResNet-18 → ResNet-18): CMCL (Ours) (0.282 / 0.417), CL-M (0.228 / 0.383), CL-S (0.243 / 0.385).]
1379
+ Figure 9. Ablation study of the cross-model contrastive learning
1380
+ loss on several datasets. CMCL outperforms other ablative mod-
1381
+ els, CL-M and CL-S, which validates that the distance calibration
1382
+ plays a crucial role for effective rank merging.
1383
+ the inherent trade-off between compatibility and discrim-
1384
+ inability, and proposed a practical alternative, online back-
1385
+ filling, to handle this dilemma. Our distance rank merge
1386
+ framework elegantly sidesteps this issue by bridging the gap
1387
+ between old and new models, and our metric-compatible
1388
+ learning further enhances the merge results with distance
1389
+ calibration. Our framework was validated via extensive ex-
1390
+ periments with significant improvement. We believe our
1391
+ work will provide a fundamental and practical foundation
1392
+ for promoting new directions in this line of research.
1393
+ References
1394
+ [1] Yan Bai, Jile Jiao, Shengsen Wu, Yihang Lou, Jun Liu, Xue-
1395
+ tao Feng, and Ling-Yu Duan. Dual-tuning: Joint prototype
1396
+ transfer and structure regularization for compatible feature
1397
+ learning. arXiv preprint arXiv:2108.02959, 2021. 2
1398
+ [2] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Ge-
1399
+ offrey Hinton. A simple framework for contrastive learning
1400
+ of visual representations. In ICLR, 2020. 2, 8
1401
+ [3] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov,
1402
+ Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner,
1403
+ Mostafa Dehghani, Matthias Minderer, Georg Heigold, Syl-
1404
+ vain Gelly, et al. An image is worth 16x16 words: Trans-
1405
+ formers for image recognition at scale. In ICLR, 2021. 6
1406
+ [4] Rahul Duggal, Hao Zhou, Shuo Yang, Yuanjun Xiong, Wei
1407
+ Xia, Zhuowen Tu, and Stefano Soatto. Compatibility-aware
1408
+ heterogeneous visual search. In CVPR, 2021. 1
1409
+ [5] Albert Gordo, Jon Almaz´an, Jerome Revaud, and Diane Lar-
1410
+ lus. Deep image retrieval: Learning global representations
1411
+ for image search. In ECCV, 2016. 1
1412
+ [6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
1413
+ Deep Residual Learning for Image Recognition. In CVPR,
1414
+ 2016. 6
1415
+ [7] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna,
1416
+ Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and
1417
+ Dilip Krishnan. Supervised contrastive learning. NeurIPS,
1418
+ 2020. 5
1419
+ [8] Diederik P Kingma and Jimmy Ba. Adam: A method for
1420
+ stochastic optimization.
1421
+ arXiv preprint arXiv:1412.6980,
1422
+ 2014. 7
1423
+ [9] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple
1424
+ layers of features from tiny images. 2009. 6
1425
+ [10] Wei Li, Rui Zhao, Tong Xiao, and Xiaogang Wang. Deep-
1426
+ reid:
1427
+ Deep filter pairing neural network for person re-
1428
+ identification. In CVPR, 2014. 1
1429
+ [11] Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and
1430
+ John Hopcroft.
1431
+ Convergent learning: Do different neural
1432
+ networks learn the same representations?
1433
+ arXiv preprint
1434
+ arXiv:1511.07543, 2015. 2
1435
+ [12] Ilya Loshchilov and Frank Hutter.
1436
+ Sgdr:
1437
+ Stochas-
1438
+ tic gradient descent with warm restarts.
1439
+ arXiv preprint
1440
+ arXiv:1608.03983, 2016. 7
1441
+ [13] Qiang Meng, Chixiang Zhang, Xiaoqiang Xu, and Feng
1442
+ Zhou. Learning compatible embeddings. In ICCV, 2021.
1443
+ 1, 2
1444
+ [14] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Repre-
1445
+ sentation learning with contrastive predictive coding. arXiv
1446
+ preprint arXiv:1807.03748, 2018. 5
1447
+ [15] Xiao Pan, Hao Luo, Weihua Chen, Fan Wang, Hao Li, Wei
1448
+ Jiang, Jianming Zhang, Jianyang Gu, and Peike Li.
1449
+ Dy-
1450
+ namic gradient reactivation for backward compatible person
1451
+ re-identification. arXiv preprint arXiv:2207.05658, 2022. 2
1452
+ [16] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer,
1453
+ James Bradbury, Gregory Chanan, Trevor Killeen, Zeming
1454
+ Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An
1455
+ imperative style, high-performance deep learning library. In
1456
+ NeurIPS, 2019. 7
1457
+ [17] Vivek Ramanujan, Pavan Kumar Anasosalu Vasu, Ali
1458
+ Farhadi, Oncel Tuzel, and Hadi Pouransari. Forward com-
1459
+ patible training for large-scale embedding retrieval systems.
1460
+ In CVPR, 2022. 2, 5, 6, 7, 8
1461
+ [18] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, San-
1462
+ jeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy,
1463
+ Aditya Khosla, Michael Bernstein, et al.
1464
+ Imagenet large
1465
+ scale visual recognition challenge. International journal of
1466
+ computer vision, 115(3):211–252, 2015. 4, 6
1467
+
1468
+ [19] Yantao Shen, Yuanjun Xiong, Wei Xia, and Stefano Soatto.
1469
+ Towards backward-compatible representation learning.
1470
+ In
1471
+ CVPR, 2020. 1, 2, 3, 6, 7
1472
+ [20] Shupeng Su, Binjie Zhang, Yixiao Ge, Xuyuan Xu, Yexin
1473
+ Wang, Chun Yuan, and Ying Shan. Privacy-preserving model
1474
+ upgrades with bidirectional compatible training in image re-
1475
+ trieval. arXiv preprint arXiv:2204.13919, 2022. 1, 2, 6, 7,
1476
+ 8
1477
+ [21] Yi Sun, Yuheng Chen, Xiaogang Wang, and Xiaoou Tang.
1478
+ Deep learning face representation by joint identification-
1479
+ verification. NIPS, 2014. 1
1480
+ [22] Timmy ST Wan, Jun-Cheng Chen, Tzer-Yi Wu, and Chu-
1481
+ Song Chen. Continual learning for visual search with back-
1482
+ ward consistent feature embedding. In CVPR, 2022. 1, 2
1483
+ [23] Fei Wang, Liren Chen, Cheng Li, Shiyao Huang, Yanjie
1484
+ Chen, Chen Qian, and Chen Change Loy. The devil of face
1485
+ recognition is in the noise. In ECCV, 2018. 1
1486
+ [24] Liwei Wang, Lunjia Hu, Jiayuan Gu, Zhiqiang Hu, Yue Wu,
1487
+ Kun He, and John Hopcroft. Towards understanding learning
1488
+ representations: To what extent do different neural networks
1489
+ learn the same representation. NeurIPS, 2018. 2
1490
+ [25] Binjie Zhang, Yixiao Ge, Yantao Shen, Yu Li, Chun Yuan,
1491
+ Xuyuan Xu, Yexin Wang, and Ying Shan. Hot-refresh model
1492
+ upgrades with regression-free compatible training in image
1493
+ retrieval. In ICLR, 2021. 2, 6
1494
+ [26] Binjie Zhang, Yixiao Ge, Yantao Shen, Shupeng Su, Chun
1495
+ Yuan, Xuyuan Xu, Yexin Wang, and Ying Shan.
1496
+ To-
1497
+ wards universal backward-compatible representation learn-
1498
+ ing. arXiv preprint arXiv:2203.01583, 2022. 2
1499
+ [27] Liang Zheng, Liyue Shen, Lu Tian, Shengjin Wang, Jing-
1500
+ dong Wang, and Qi Tian. Scalable person re-identification:
1501
+ A benchmark. In ICCV, 2015. 6
1502
+ [28] Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Tor-
1503
+ ralba, and Aude Oliva.
1504
+ Learning deep features for scene
1505
+ recognition using places database. NIPS, 2014. 4, 6
1506
+
9tE2T4oBgHgl3EQfQQYW/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
A9E2T4oBgHgl3EQf8Qn_/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9e5d45a06089696787439d3d13625ee9decdecc3a65f754e7a60d4b1b48f2433
3
+ size 4784173
A9E2T4oBgHgl3EQf8Qn_/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cb00eab814fc8c60987aa7c73728fcd87cbd2ef5cfde560cb6b58f42fbb2d546
3
+ size 174818
ANFJT4oBgHgl3EQfrC3Z/content/2301.11607v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9e63729dad4ca71fd8e8e0561e9ee84f59b1200795580dd22e99dadb38b7d159
3
+ size 2347465
ANFJT4oBgHgl3EQfrC3Z/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3fcce6fdef605baeeb9ca676eb7f52ab009413a371d17fff8d52d609bd2060aa
3
+ size 4194349
ANFJT4oBgHgl3EQfrC3Z/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e8180cd932a041f9db27c2056cff59db416f1bc18c1ffe596a68ef9013a50a83
3
+ size 138904
AtAzT4oBgHgl3EQfTPzA/content/2301.01247v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:48cfeebc793728818e5c086a3ab1382530b1078cbe94989d9b650e381c2df1ef
3
+ size 554821
AtAzT4oBgHgl3EQfTPzA/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dd6a6838d2cd72356666124decba9c97208e49821bb29817214b06256b5ab8b9
3
+ size 524333
AtAzT4oBgHgl3EQfTPzA/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a5c7161209cfcb8a16d9d64932e69e873c7d359a6f3062273b9dc6a7decc775b
3
+ size 22330
BNE1T4oBgHgl3EQfDgMw/content/tmp_files/2301.02877v1.pdf.txt ADDED
@@ -0,0 +1,1950 @@
Deep Learning for Mean Field Games with non-separable Hamiltonians

Mouhcine Assouli (a), Badr Missaoui (b)

(a), (b) Modeling, Simulation and Data Analysis Lab, Lot 660, Ben Guerir, 43150, Morocco
Abstract

This paper introduces a new method based on Deep Galerkin Methods (DGMs) for solving high-dimensional stochastic Mean Field Games (MFGs). We achieve this by using two neural networks to approximate the unknown solutions of the MFG system and its forward-backward conditions. Our method is efficient, even with a small number of iterations, and is capable of handling up to 300 dimensions with a single layer, which makes it faster than other approaches. In contrast, methods based on Generative Adversarial Networks (GANs) cannot solve MFGs with non-separable Hamiltonians. We demonstrate the effectiveness of our approach by applying it to a traffic flow problem, which was previously solved using the Newton iteration method only in the deterministic case. We compare the results of our method to analytical solutions and previous approaches, showing its efficiency. We also prove the convergence of our neural network approximation with a single hidden layer using the universal approximation theorem.

Keywords: Mean Field Games, Deep Learning, Deep Galerkin Method, Traffic Flow, Non-Separable Hamiltonian
January 10, 2023
arXiv:2301.02877v1 [cs.LG] 7 Jan 2023

1. Introduction

Mean Field Games (MFGs) are a widely studied topic that can model a variety of phenomena, including autonomous vehicles [1, 2], finance [3, 4], economics [5, 6, 7], industrial engineering [8, 9, 10], and data science [11, 12]. MFGs are dynamic, symmetric games where the agents are indistinguishable but rational, meaning that their actions can affect the mean of the population. In the optimal case, the MFG system reaches a Nash equilibrium
(NE), in which no agent can further improve their objective. MFGs are described by a system of coupled partial differential equations (PDEs):

    −∂tφ − ν∆φ + H(x, ρ, ∇φ) = 0,   in E1,
    ∂tρ − ν∆ρ − div(ρ∇pH(x, ρ, ∇φ)) = 0,   in E2,
    ρ(0, x) = ρ0(x),   φ(T, x) = g(x, ρ(T, x)),   in Ω,        (1)

consisting of a forward-time Fokker-Planck equation (FP) and a backward-time Hamilton-Jacobi-Bellman equation (HJB), which describe the evolution of the population density ρ and the cost value φ, respectively. Here E1 = (0, T] × Ω and E2 = [0, T) × Ω, with Ω ⊂ Rd, and g denotes the terminal cost; the boundary conditions prescribe the initial density ρ(0, x) = ρ0(x) and the terminal cost φ(T, x) = g(x, ρ(T, x)) on Ω. A Hamiltonian H with separable structure is defined as

    H(x, ρ, p) = inf_v {−p · v + L0(x, v)} − f0(x, ρ) = H0(x, p) − f0(x, ρ),        (2)

i.e., the infimum over controls v of the Lagrangian function L0 (the Legendre transform of the Hamiltonian), minus the interaction function f0 between an agent and the population.
One of the main challenges of MFGs is the viscosity problem, in addition to the complexity of the PDEs and the forward-backward conditions. Many methods for solving MFGs are limited to the deterministic setting (ν = 0). For example, the Newton iteration method has been applied to the problem of traffic flow in [1], where a flexible machine learning framework was provided for the numerical solution of potential MFGs. While numerical methods do exist for solving the system of PDEs (1) [13, 14, 15, 16], they are not always effective due to computational complexity, especially in high-dimensional problems. Deep learning methods, such as Generative Adversarial Networks (GANs) [17, 18], have been used to address this issue by reformulating MFGs as a primal-dual problem [19, 20, 14]. This approach uses the Hopf formula in density space [21] to establish a connection between MFGs and GANs. However, it requires the Hamiltonian H to be separable in ρ and p. In cases where the Hamiltonian is non-separable, such as in traffic flow [1], it is not possible to reformulate MFGs as a primal-dual problem. Recently, [22] proposed a policy iteration algorithm for MFGs with non-separable Hamiltonians using the contraction fixed point method.
Contributions. In this work, we present a new method based on DGM for solving stochastic MFGs with non-separable Hamiltonians. Inspired by the works [23, 24, 25], we approximate the unknown solutions of the system (1) by two neural networks trained simultaneously to satisfy each equation of the MFG system and the forward-backward conditions. While GAN-based techniques are limited to problems with separable Hamiltonians, our algorithm, called New-Method, can solve any MFG system. Moreover, we prove the convergence of the neural network approximation with a single layer using a fundamental result of the universal approximation theorem. We then test the effectiveness of New-Method through several numerical experiments, comparing our results with previous approaches to assess their reliability. Finally, our approach is applied to solve the MFG system of traffic flow in the stochastic case.

Contents. The structure of the rest of the paper is as follows: in Section 2, we introduce the main description of our approach. Section 3 examines the convergence of our neural network approximation with a single hidden layer. In Section 4, we present a review of prior methods. Section 5 investigates the numerical performance of our proposed algorithms: we evaluate our method on a simple analytical solution in Section 5.1, compare it to previous approaches in Section 5.2, and apply it to the traffic flow problem in Section 5.3. Finally, we conclude the paper and discuss potential future work in Section 6.
2. Methodology

Our method involves using two neural networks, Nθ and Nω, to approximate the unknown variables ρ and φ, respectively; θ and ω denote their weights. Each iteration of our method updates ρ and φ with the approximations from Nθ and Nω. To optimize the accuracy of these approximations, we use a loss function based on the residual of the first equation (HJB) to update the parameters of the neural networks. We then repeat this process with the second equation (FP) and the new parameters; see Figure 1. Both neural networks are simultaneously trained on the first equation, and the results are then checked in the second equation, where they are fine-tuned until an equilibrium is reached. This equilibrium represents the convergence of the two neural networks and, therefore, the solution to both the Hamilton-Jacobi-Bellman equation and the Fokker-Planck equation.

Figure 1: The learning mechanism of our method.
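To make the alternating scheme above concrete, the following toy sketch (our illustration, not the paper's implementation) replaces the two networks by two scalar parameters theta and omega, and the HJB/FP residuals by two quadratic surrogate losses; each iteration back-propagates the first loss, then the second with the updated parameters, until the shared fixed point is reached.

```python
# Schematic of the alternating HJB/FP update scheme at toy scale.
# theta and omega stand in for the weights of the two networks;
# the two quadratic losses are stand-ins for the HJB and FP residuals.

def grad_hjb(theta, omega):
    # surrogate "HJB" loss: (theta - omega)^2
    r = theta - omega
    return 2 * r, -2 * r          # d/dtheta, d/domega

def grad_fp(theta, omega):
    # surrogate "FP" loss: (theta + omega - 2)^2
    r = theta + omega - 2
    return 2 * r, 2 * r

theta, omega, lr = 0.5, 0.0, 0.1
for _ in range(200):
    # step 1: back-propagate the HJB loss, updating both parameter sets
    g_t, g_o = grad_hjb(theta, omega)
    theta, omega = theta - lr * g_t, omega - lr * g_o
    # step 2: back-propagate the FP loss with the updated weights
    g_t, g_o = grad_fp(theta, omega)
    theta, omega = theta - lr * g_t, omega - lr * g_o

# both surrogate losses vanish at the shared fixed point theta = omega = 1
```

The point of the sketch is only the schedule: neither loss alone pins down both parameters, but alternating the two updates drives the pair to the common equilibrium.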
We have developed a solution for the MFG system (1) that does not rely on the structure of the Hamiltonian. Our approach combines physics-informed deep learning [24] and deep hidden physics models [25] to train our model to solve high-dimensional PDEs that adhere to specified differential operators, initial conditions, and boundary conditions. Our model is also designed to adhere to general nonlinear partial differential equations that describe physical laws. To train the model, we define a loss function that minimizes the residual of each equation at randomly chosen points in time and space within the domain Ω.

We initialize the neural networks as a candidate solution to our system. We let:

    φω(t, x) = Nω(t, x),   ρθ(t, x) = Nθ(t, x).        (3)
Our training strategy starts by solving (HJB). We compute the loss (4) at randomly sampled points {(tb1, xb1) : b1 = 1, ..., B1} from E1 and {xs1 : s1 = 1, ..., S1} from Ω:

    Loss(HJB)_total = Loss(HJB) + Loss(HJB)_cond,        (4)

where

    Loss(HJB) = (1/B1) Σ_{b1=1}^{B1} | ∂tφω(tb1, xb1) + ν∆φω(tb1, xb1) − H(xb1, ρθ(tb1, xb1), ∇φω(tb1, xb1)) |²,

and

    Loss(HJB)_cond = (1/S1) Σ_{s1=1}^{S1} | φω(T, xs1) − g(xs1, ρθ(T, xs1)) |².

We then update the weights of φω and ρθ by back-propagating the loss (4). We do the same for (FP) with the updated weights: we compute (5) at randomly sampled points {(tb2, xb2) : b2 = 1, ..., B2} from E2 and {xs2 : s2 = 1, ..., S2} from Ω,

    Loss(FP)_total = Loss(FP) + Loss(FP)_cond,        (5)

where

    Loss(FP) = (1/B2) Σ_{b2=1}^{B2} | ∂tρθ(tb2, xb2) − ν∆ρθ(tb2, xb2) − div(ρθ(tb2, xb2) ∇pH(xb2, ρθ(tb2, xb2), ∇φω(tb2, xb2))) |²,

and

    Loss(FP)_cond = (1/S2) Σ_{s2=1}^{S2} | ρθ(0, xs2) − ρ0(xs2) |².

Finally, we update the weights of φω and ρθ by back-propagating the loss (5); see Algorithm 1.
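As a concrete illustration of how Loss(HJB) in (4) can be estimated, the sketch below samples interior points and averages the squared residual ∂tφ + ν∆φ − H, with central finite differences standing in for automatic differentiation. The candidate φ, ρ and the Hamiltonian are hard-coded from the analytic example of Section 5.1 with d = 1, ν = 1 (choices made for this sketch, not the trained networks of the paper).

```python
import math, random

nu = 1.0

def phi(t, x):          # candidate value function (exact solution of Section 5.1, d = 1)
    return x * x / 2 - t

def rho(t, x):          # candidate density (exact solution of Section 5.1, d = 1)
    return (1 / (2 * math.pi)) ** 0.5 * math.exp(-x * x / 2)

def hamiltonian(x, r, p):   # H(x, rho, p) = |p|^2/2 - |x|^2/2 (Section 5.1, gamma = 0)
    return p * p / 2 - x * x / 2

def hjb_loss(phi, rho, n_samples=200, h=1e-4, seed=0):
    """Monte Carlo estimate of Loss(HJB): mean squared residual of
    d(phi)/dt + nu * d2(phi)/dx2 - H(x, rho, d(phi)/dx) at random (t, x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        t, x = rng.uniform(0.0, 1.0), rng.uniform(-2.0, 2.0)
        dt = (phi(t + h, x) - phi(t - h, x)) / (2 * h)                 # time derivative
        dx = (phi(t, x + h) - phi(t, x - h)) / (2 * h)                 # spatial gradient
        dxx = (phi(t, x + h) - 2 * phi(t, x) + phi(t, x - h)) / h**2   # Laplacian
        res = dt + nu * dxx - hamiltonian(x, rho(t, x), dx)
        total += res * res
    return total / n_samples

# the exact solution should make the residual (numerically) vanish
loss = hjb_loss(phi, rho)
```

A wrong candidate, e.g. phi = 0, produces a large loss, which is what drives the back-propagation step in the actual method.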
3. Convergence

Following the steps of [23], this section presents theoretical results that guarantee the existence of single-hidden-layer feedforward neural networks ρθ and φω which can universally approximate the solutions of (1). Denote

    L1(ρθ, φω) = ∥H1(ρθ, φω)∥²_{L2(E1)} + ∥φω(T, x) − φ(T, x)∥²_{L2(Ω)},        (6)
Algorithm 1 New-Method
Require: Hamiltonian H, diffusion parameter ν, terminal cost g.
Require: Initialize the neural networks Nω0 and Nθ0.
Train:
for n = 0, 1, 2, ..., K−2 do
    Sample a batch {(tb1, xb1) : b1 = 1, ..., B1} from E1 and {xs1 : s1 = 1, ..., S1} from Ω.
    L(HJB) ← (1/B1) Σ_{b1=1}^{B1} | ∂tφωn(tb1, xb1) + ν∆φωn(tb1, xb1) − H(xb1, ρθn(tb1, xb1), ∇φωn(tb1, xb1)) |².
    L(HJB)_cond ← (1/S1) Σ_{s1=1}^{S1} | φωn(T, xs1) − g(xs1, ρθn(T, xs1)) |².
    Backpropagate Loss(HJB)_total to obtain the weights ωn+1, θn+1.
    Sample a batch {(tb2, xb2) : b2 = 1, ..., B2} from E2 and {xs2 : s2 = 1, ..., S2} from Ω.
    L(FP) ← (1/B2) Σ_{b2=1}^{B2} | ∂tρθn+1(tb2, xb2) − ν∆ρθn+1(tb2, xb2) − div(ρθn+1(tb2, xb2) ∇pH(xb2, ρθn+1(tb2, xb2), ∇φωn+1(tb2, xb2))) |².
    L(FP)_cond ← (1/S2) Σ_{s2=1}^{S2} | ρθn+1(0, xs2) − ρ0(xs2) |².
    Backpropagate Loss(FP)_total to obtain the weights ωn+2, θn+2.
end for
return θK, ωK
where

    H1(ρθ, φω) = ∂tφω(t, x) + ν∆φω(t, x) − H(x, ρθ(t, x), ∇φω(t, x)).

Similarly, denote

    L2(ρθ, φω) = ∥H2(ρθ, φω)∥²_{L2(E2)} + ∥ρθ(0, x) − ρ0(x)∥²_{L2(Ω)},        (7)

where

    H2(ρθ, φω) = ∂tρθ(t, x) − ν∆ρθ(t, x) − div(ρθ(t, x) ∇pH(x, ρθ(t, x), ∇φω(t, x))).

Here ∥f∥_{L2(E)} = (∫_E |f(x)|² dµ(x))^{1/2} denotes the L2 norm, where µ is a positive probability density on E. The aim of our approach is to identify parameters θ and ω such that the functions ρθ(t, x) and φω(t, x) minimize the errors L1(ρθ, φω) and L2(ρθ, φω). If L1(ρθ, φω) = 0 and L2(ρθ, φω) = 0, then ρθ(t, x) and φω(t, x) are solutions to (1). To prove the convergence of the neural networks, we use the results of [26] on the universal approximation of functions and their derivatives. Define the class of neural networks with a single hidden layer and n hidden units,
    N^n(σ) = { Φ(t, x) : R^{1+d} → R : Φ(t, x) = Σ_{i=1}^{n} βi σ( α1,i t + Σ_{j=1}^{d} αj,i xj + ci ) },

where

    θ = (β1, ..., βn, α1,1, ..., αd,n, c1, ..., cn) ∈ R^{2n+n(1+d)}

is the vector of parameters to be learned. The set of all functions implemented by such a network with a single hidden layer and n hidden units is

    N(σ) = ∪_{n≥1} N^n(σ).        (8)

We consider E a compact subset of R^{d+1}. From [26, Thm. 3], we know that if σ ∈ C²(R^{d+1}) is non-constant and bounded, then N(σ) is uniformly 2-dense on E. This means, by [26, Thm. 2], that for all u ∈ C^{1,2}([0, T] × R^d) and ϵ > 0, there is fθ ∈ N(σ) such that:

    sup_{(t,x)∈E} |∂t u(t, x) − ∂t fθ(t, x)| + max_{|a|≤2} sup_{(t,x)∈E} |∂x^{(a)} u(t, x) − ∂x^{(a)} fθ(t, x)| < ϵ.        (9)
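A member of N^n(σ) is easy to write down explicitly. The sketch below (identifier names and the choice σ = tanh are ours) builds Φ(t, x) = Σi βi σ(α1,i t + Σj αj,i xj + ci), packing the time weight as the first component of each row of alpha:

```python
import math

def make_single_layer_net(beta, alpha, c, sigma=math.tanh):
    """A member of N^n(sigma): Phi(t, x) = sum_i beta[i] * sigma(alpha[i][0]*t
    + sum_j alpha[i][j+1]*x[j] + c[i]).  Each alpha[i] has length 1 + d."""
    def phi(t, x):
        return sum(
            b * sigma(a[0] * t + sum(aj * xj for aj, xj in zip(a[1:], x)) + ci)
            for b, a, ci in zip(beta, alpha, c)
        )
    return phi

# n = 2 hidden units, d = 2 spatial dimensions
phi = make_single_layer_net(
    beta=[1.0, -0.5],
    alpha=[[0.3, 0.1, -0.2], [0.0, 0.5, 0.4]],
    c=[0.0, 0.1],
)
value = phi(0.5, [1.0, -1.0])   # a scalar output, as required: R^{1+d} -> R
```

The convergence results above say that, for a non-constant bounded σ of class C², such networks can approximate a smooth function together with its first time derivative and spatial derivatives up to order two, uniformly on compacts.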
To prove the convergence of our algorithm, we make the following assumptions:

• (H1): E1 and E2 are compact, and we consider measures µ1, µ2, µ3, and µ4 whose supports are contained in E1, Ω, E2, and Ω, respectively.

• (H2): System (1) has a unique solution (φ, ρ) ∈ X × X, where

    X = { u(t, x) ∈ C([0, T] × Ω̄) ∩ C^{1+η/2, 2+η}([0, T] × Ω) with η ∈ (0, 1) and sup_{(t,x)∈[0,T]×Ω} Σ_{k=1}^{2} |∇x^{(k)} u(t, x)| < ∞ }.
• (H3): H, ∇pH, ∇ppH, and ∇ρpH are locally Lipschitz continuous in (ρ, p), with Lipschitz constants that have at most polynomial growth in ρ and p, uniformly with respect to t and x.
Remark 3.1. It is important to note that the nonlinear term of L2 can be expanded as follows:

    div(ρ∇pH(x, ρ, ∇φ)) = ∇pH(x, ρ, ∇φ) · ∇ρ + ρ ∇ρpH(x, ρ, ∇φ) · ∇ρ + ρ Σ_{i,j} ∇pipjH(x, ρ, ∇φ) ∂xjxiφ.

Theorem 3.1. Consider N(σ) where σ ∈ C²(R^{d+1}) is non-constant and bounded, and suppose (H1), (H2), (H3) hold. Then for every ϵ1, ϵ2 > 0, there exist two positive constants C1, C2 > 0 and two functions (ρθ, φω) ∈ N(σ) × N(σ) such that

    Li(ρθ, φω) ≤ Ci(ϵ1 + ϵ2)   for i = 1, 2.
The proof of this theorem is in Appendix A.

We now have L1(ρθ^n, φω^n) → 0 and L2(ρθ^n, φω^n) → 0 as n → ∞, but this does not necessarily imply that (ρθ^n, φω^n) converges to the unique solution (ρ, φ). We therefore prove, under stronger conditions, the convergence of the neural networks (ρθ^n, φω^n) to the solution (ρ, φ) of the system (1) as n → ∞.
To avoid some difficulties, we add homogeneous boundary conditions, which assume the solution vanishes on the boundary. The MFG system (1) then writes

    −∂tφ − ν div(a1(∇φ)) + γ(ρ, ∇φ) = 0,   in ΩT,
    ∂tρ − ν div(a2(∇ρ)) − div(a3(ρ, ∇φ)) = 0,   in ΩT,
    ρ(0, x) = ρ0(x),   φ(T, x) = g(x, ρ(T, x)),   in Ω,
    ρ(t, x) = φ(t, x) = 0,   on Γ,        (10)

where ΩT = (0, T) × Ω, Γ = (0, T) × ∂Ω, and

    a1(t, x, ∇φ) = ∇φ,   a2(t, x, ∇ρ) = ∇ρ,   a3(t, x, ρ, ∇φ) = ρ∇pH(x, ρ, ∇φ),   γ(t, x, ρ, ∇φ) = H(x, ρ, ∇φ),
with a1 : ΩT × R^N → R^N, a2 : ΩT × R^N → R^N, a3 : ΩT × R × R^N → R^N, and γ : ΩT × R × R^N → R Caratheodory functions.

Then we introduce the approximate problem of the system (10) as

    −∂tφω^n − ν div(a1(∇φω^n)) + γ(ρθ^n, ∇φω^n) = 0,   in ΩT,
    ∂tρθ^n − ν div(a2(∇ρθ^n)) − div(a3(ρθ^n, ∇φω^n)) = 0,   in ΩT,
    ρθ^n(0, x) = ρ0(x),   φω^n(T, x) = g(x, ρθ^n(T, x)),   in Ω,
    ρθ^n(t, x) = φω^n(t, x) = 0,   on Γ.        (11)
Let us first introduce some definitions. Let r ≥ 1. In the sequel, we denote by Lr(0, T; W^{1,r}_0(Ω)) the set of functions u such that u ∈ Lr(ΩT) and u(t, ·) ∈ W^{1,r}_0(Ω). The space Lr(0, T; W^{1,r}_0(Ω)), equipped with the norm

    ∥u∥_{Lr(0,T;W^{1,r}_0(Ω))} := ( ∫_0^T ∫_Ω |∇u(x, t)|^r dx dt )^{1/r},

is a Banach space. For s, r ≥ 1, the space V^{s,r}_0(ΩT) := L∞(0, T; Ls(Ω)) ∩ Lr(0, T; W^{1,r}_0(Ω)), endowed with the norm

    ∥ϕ∥_{V^{s,r}_0(ΩT)} := ess sup_{0≤t≤T} ∥ϕ(·, t)∥_{Ls(Ω)} + ∥ϕ∥_{Lr(0,T;W^{1,r}_0(Ω))},

is also a Banach space.
For this convergence, we make the following set of assumptions:

• (H4): There are a constant µ > 0 and positive functions κ(t, x), λ(t, x) such that for all (t, x) ∈ ΩT we have

    ∥a3(t, x, ρ, p)∥ ≤ µ(κ(t, x) + ∥p∥)   and   |γ(t, x, ρ, p)| ≤ λ(t, x)∥p∥,

with κ ∈ L²(ΩT) and λ ∈ L^{d+2}(ΩT).

• (H5): a3(t, x, ρ, p) and γ(t, x, ρ, p) are Lipschitz continuous in (t, x, ρ, p) ∈ ΩT × R × R^d, uniformly on compacts of the form { (t, x) ∈ Ω̄T, |ρ| ≤ C, |p| ≤ C }.

• (H6): There is a positive constant α > 0 such that

    a3(t, x, ρ, p) · p ≥ α|p|².

• (H7): For every n ∈ N, ρθ^n, φω^n ∈ C^{1,2}(Ω̄T). In addition, (ρθ^n)_{n∈N}, (φω^n)_{n∈N} ⊂ L²(ΩT).
Theorem 3.2. Under the previous assumptions (H4)-(H7), if we assume that (10) has a unique bounded solution (φ, ρ) ∈ V^{2,2}_0 × V^{2,2}_0, then (φω^n, ρθ^n) converges to (φ, ρ) strongly in Lp(ΩT) × Lp(ΩT) for every p < 2.

The proof of this theorem is in Appendix B.

4. Related Works
GANs: Generative adversarial networks, or GANs, are a class of machine learning models introduced in 2014 [27] that have been successful in generating images and processing data [28, 29, 30]. In recent years, there has been increasing interest in using GANs for financial modeling as well [31]. GANs consist of two neural networks, a generator network and a discriminator network, that work against each other in order to generate samples from a specific distribution. As described in various sources [27, 32, 33], the goal is to reach an equilibrium of the following problem:

    min_G max_D { E_{x∼Pdata(x)}[log D(x)] + E_{z∼Pg(z)}[log(1 − D(G(z)))] },        (12)

where Pdata(x) is the original data distribution and Pg(z) is the noise distribution. In (12), the generator G tries to minimize the objective while the discriminator D tries to maximize it: the objective compares the probability D(x) of the original data being correctly identified as real with the probability 1 − D(G(z)) of the generated data, produced from the noise Pg(z), being correctly identified as fake. Essentially, the discriminator is trying to accurately distinguish between real and fake data, while the generator is attempting to create fake data that can deceive the discriminator.
APAC-Net: In [17], the authors present a method (APAC-Net) based on GANs for solving high-dimensional MFGs in the stochastic case. They make use of the Hopf formula in density space to reformulate the MFGs as a saddle-point problem given by

    inf_{ρ(x,t)} sup_{φ(x,t)} { E_{z∼P(z), t∼Unif[0,T]}[ ∂tφ(ρ(t, z), t) + ν∆φ(ρ(t, z), t) − H(ρ(t, z), ∇φ) ] + E_{z∼P(z)} φ(0, ρ(0, z)) − E_{x∼ρT} φ(T, x) },        (13)

where

    H(x, p) = inf_v {−p · v + L(x, v)}.

In this case, we have a connection between GANs and MFGs, since (13) allows them to reach the Kantorovich-Rubinstein dual formulation of Wasserstein GANs [33], given by

    min_G max_D { E_{x∼Pdata(x)}[D(x)] − E_{z∼Pg(z)}[D(G(z))] },   s.t. ∥∇D∥ ≤ 1.        (14)

Finally, an algorithm similar to GANs can be used to solve such MFG problems. Note, however, that the Hamiltonian in this formulation must have a separable structure. Because of this, APAC-Net cannot solve the MFG-LWR system (to be detailed in Section 5.3); in general, MFG problems whose Hamiltonian is non-separable cannot be solved this way, since they cannot be reformulated as (13).
MFGANs: In [18, 17], the connection between GANs and MFGs is demonstrated by the fact that equation (13) allows both to reach the Kantorovich-Rubinstein dual formulation of Wasserstein GANs, as described in reference [33]. This is shown in equation (12), which can be solved using an algorithm similar to those used for GANs. However, it is still not possible to solve MFG problems with non-separable Hamiltonians, as they cannot be reformulated as in equation (13); in particular, this prevents the solution of the MFG-LWR system (to be discussed in Section 5.3).
DGM-MFG: In [34], Section 4 discusses the adaptation of the DGM algorithm to solve mean field games, referred to as DGM-MFG. This method is highly versatile and can effectively solve a wide range of partial differential equations due to its lack of reliance on the specific structure of the problem. Our own work is similar to DGM-MFG in that we also utilize neural networks to approximate the unknown functions and adjust their parameters to minimize a loss function based on the PDE residual, as in [34] and [18]. However, our approach, referred to as New-Method, differs in the way it is trained. Instead of using the sum of the PDE residuals as the loss function and SGD for optimization, we define a separate loss function for each equation and use ADAM for training, following the approach in [18]. This modification allows
for faster and more accurate convergence.

Policy Iteration Method: To the best of our knowledge, [22] was the first to successfully solve systems of mean field game partial differential equations with non-separable Hamiltonians. They proposed two algorithms based on policy iteration, which involve iteratively updating the population distribution, value function, and control. Because the control is fixed at each step, these algorithms only require the solution of two decoupled, linear PDEs per iteration. This approach reduces the complexity of the equations, but it is limited to low-dimensional problems due to the computationally intensive nature of the method. In contrast, our method utilizes neural networks to solve the HJB and FP equations at each iteration, allowing for updates to the population distribution and value function in each equation without the limitations of [22].
5. Numerical Experiments

To evaluate the effectiveness of the proposed Algorithm 1, we use the example provided in [17], as it has an explicitly defined solution structure that allows for easy numerical comparison. We compare the performance of New-Method, APAC-Net, MFGAN, and DGM-MFG on the same data to assess their reliability. Additionally, we apply New-Method to the traffic flow problem [19], which is characterized by its non-separable Hamiltonian [20], to determine its ability to solve this type of problem in the stochastic case.
5.1. Analytic Comparison

We test our method by comparing it to a simple example of the analytic solution used to test the effectiveness of APAC-Net [17]. For the sake of simplicity, we take the spatial domain Ω = [−2, 2]^d, the final time T = 1, and no congestion (γ = 0). We set

    H0(x, p) = ∥p∥²/2 − β∥x∥²/2,
    f0(x, ρ) = γ ln(ρ),
    g(x) = α∥x∥²/2 − (νdα + γ(d/2) ln(α/(2πν))),        (15)

and ν = β = 1, where

    α = (−γ + √(γ² + 4ν²β)) / (2ν) = 1.
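A quick arithmetic check of this constant (reading the root formula with its 1/(2ν) normalization, under which α is the positive root of να² + γα − νβ = 0):

```python
import math

# alpha = (-gamma + sqrt(gamma^2 + 4 nu^2 beta)) / (2 nu); here gamma = 0, nu = beta = 1
gamma, nu, beta = 0.0, 1.0, 1.0
alpha = (-gamma + math.sqrt(gamma**2 + 4 * nu**2 * beta)) / (2 * nu)
```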
The corresponding MFG system is:

    −∂tφ − ∆φ + ∥∇φ∥²/2 − ∥x∥²/2 = 0,
    ∂tρ − ∆ρ − div(ρ∇φ) = 0,
    ρ(0, x) = (1/(2π))^{d/2} e^{−∥x∥²/2},
    φ(T, x) = ∥x∥²/2 − d,        (16)

and the explicit formula is given by

    φ(t, x) = ∥x∥²/2 − d·t,
    ρ(t, x) = (1/(2π))^{d/2} e^{−∥x∥²/2}.        (17)
Test 1: We consider the system of PDEs (16) in one dimension (d = 1). To obtain results, we run Algorithm 1 for 5·10³ iterations, using a minibatch of 50 samples at each iteration. The neural networks employed have three hidden layers with 100 neurons each, and utilize the Softplus activation function for Nω and the Tanh activation function for Nθ. Both networks use ADAM with a learning rate of 10⁻⁴ and a weight decay of 10⁻³. We employ ResNet as the architecture of the neural networks, with a skip connection weight of 0.5. The numerical results are shown in Figure 2, which compares the approximate solutions obtained by New-Method to the exact solutions at different time states.

To evaluate the performance of New-Method, we compute the relative error between the model predictions and the exact solutions on a 100 × 100 grid within the domain [0, 1] × [−2, 2]. Additionally, we plot the HJB and FP residual losses, as defined in Algorithm 1, to monitor the convergence of our method (see Figure 3).
Test 2: In this experiment, we use a single hidden layer with varying numbers of hidden units (nU) for both neural networks. As previously shown in Section 2, the number of hidden units can affect the convergence of the model. To verify this, we repeat the previous test using the same hyperparameters and a single hidden layer, but with different numbers of hidden units. The relative error between the model predictions and the exact solutions is then calculated on a 100 × 100 grid within the domain [0, 1] × [−2, 2], as shown in Figure 4.
Test 3: We solve the MFG system (16) for dimensions 2, 50, and 100. Figure 5 shows the residuals of the HJB and FP equations over 5·10⁴ iterations. A minibatch of 1024, 512, and 128 samples was used for d=100, d=50, and d=2, respectively. The neural networks had three hidden layers with 100 neurons each and utilized the Softplus activation function for Nω and the Tanh activation function for Nθ. Both networks used ADAM with a learning rate of 10⁻⁴, weight decay of 10⁻³, and employed ResNet as their architecture with a skip connection weight of 0.5. The results were obtained by recording the residuals every 100 iterations and using a rolling average over 5 points to smooth out the curves.

Figure 2: The exact solution and prediction calculated by New-Method in dimension one at t=(0.25, 0.5, 0.75).

Figure 3: The relative error for ρ, φ (left); the HJB and FP losses (right).

Figure 4: The relative error for ρ, φ in 1-dimension for nU=(2, 5, 10, 20, 50).

Figure 5: The loss of the HJB and FP equations for d=(2, 50, 100).
Test 4: In this test, we use the same setup as before, but with a single layer of 100 neurons instead of multiple layers. We keep all other neural network hyperparameters unchanged. This test is meant to demonstrate that a single layer can perform better than multiple layers, even when the dimension increases, as seen in Section 2. Figure 6 shows improved results compared to the previous test, even with few iterations, which allows for faster computation times.

Figure 6: The loss of the HJB and FP equations with a minibatch of 128, 512, and 1024 samples for d=2, d=50, and d=(100, 200, 300), respectively.
912
+ 5.2. Comparison
913
+ In previous sections, we introduced and discussed four methods for solv-
914
+ ing MFGs: APAC-Net, MFGAN, DGM-MFG, and New-Method. Here, we
915
+ compare these approaches to assess their performance. For APAC-Net, it is
916
+ only possible to compare the cost values φ due to the unavailability of the
917
+ density function. In APAC-Net, the generator neural network represents ρ,
918
+ which generates the distribution. In order to compare the results, we need
919
+ to use kernel density estimation to transform the distribution into a density,
920
+ which is only an estimate. We use the simple example from the analytic so-
921
+ lution with d = 1 and T = 1 for this comparison. The two neural networks in
922
+ this comparison have three hidden layers with 100 neurons each, and utilize
923
+ ResNet as their architecture with a skip connection weight of 0.5. They also
924
+ use the Softplus activation function for Nω and the Tanh activation function
925
+ for Nθ. For training APAC-Net, MFGAN, and New-Method, we use ADAM
926
+ with a learning rate of 10−4 and a weight decay of 10−3 for both networks.
927
+ For training DGM-MFG, we use SGD initialized with a value of 10−3 and a
928
+ weight decay of 10−3 for both networks.
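As a concrete illustration of the architecture described above, a single ResNet-style hidden layer with skip-connection weight 0.5 and a Softplus activation can be sketched in NumPy (a hedged sketch, not the authors' implementation; the exact way the skip weight enters the residual update is an assumption, and the weights are random):

```python
import numpy as np

def softplus(x):
    # numerically stable softplus: log(1 + exp(x))
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def resnet_block(x, W, b, skip_weight=0.5):
    """Hidden layer with a weighted skip connection (assumed form):
    y = skip_weight * x + (1 - skip_weight) * softplus(W @ x + b)."""
    return skip_weight * x + (1.0 - skip_weight) * softplus(W @ x + b)

rng = np.random.default_rng(0)
width = 100                                   # 100 neurons per layer, as in the text
x = rng.standard_normal(width)
W = rng.standard_normal((width, width)) / np.sqrt(width)
y = resnet_block(x, W, np.zeros(width))
```

With zero weights the layer reduces to `0.5 * x + 0.5 * log(2)`, which makes the skip weighting easy to check.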
929
+ We run the four algorithms for 5 × 10³ iterations, using a minibatch of 50
+ samples at each iteration. The relative error between the model predictions and
931
+ the exact solutions is then calculated on a 100 × 100 grid within the domain
932
+ [0, 1] × [−2, 2], as shown in Figure 7.
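The grid evaluation above can be sketched as follows (a minimal NumPy sketch; `phi_exact` here is a stand-in function, not the paper's analytic solution):

```python
import numpy as np

def relative_error(pred, exact):
    """Relative L2 error between a model prediction and the exact solution."""
    return np.linalg.norm(pred - exact) / np.linalg.norm(exact)

# 100 x 100 grid on [0, 1] x [-2, 2], as in the text
t = np.linspace(0.0, 1.0, 100)
x = np.linspace(-2.0, 2.0, 100)
T, X = np.meshgrid(t, x, indexing="ij")

phi_exact = np.exp(-T) * np.cos(X)   # stand-in for the analytic solution
phi_pred = 1.01 * phi_exact          # stand-in for a trained model's output
err = relative_error(phi_pred, phi_exact)
```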
933
+ 5.3. Application (Traffic Flow):
934
+ In a study published in [1], the authors focused on the longitudinal speed
935
+ control of autonomous vehicles. They developed a mathematical model called
936
+ [Plot residue removed: panels showing the HJB and FP equation residuals for
+ d = 2, 50, 100, 200, 300 against iterations.]
+ Figure 7: Comparison between APAC-Net, MFGAN, DGM-MFG, and New-Method.
970
+ a Mean Field Game (MFG) to solve a traffic flow problem for autonomous
971
+ vehicles and demonstrated that the traditional Lighthill-Whitham-Richards
972
+ (LWR) model can be used as a solution to the MFG-LWR model described
973
+ by the following system of equations:
974
+ MFG-LWR:
+ V_t + U(ρ)V_x − (1/2)V_x² = 0,
+ ρ_t + (ρu)_x = 0,
+ u = U(ρ) − V_x,
+ V_T = g(·, ρ_T),
+ ρ(·, 0) = ρ_0.    (18)
990
+ Here, ρ, V , and u represent the density, optimal cost, and speed function,
991
+ respectively, and the Greenshields density-speed relation is given by U(ρ) =
992
+ umax(1 − ρ/ρjam), where ρjam is the jam density and umax is the maximum
993
+ speed. By setting ρjam = 1 and umax = 1, the authors generalized the MFG-
994
+ LWR model to include a viscosity term ν > 0, resulting in the following
995
+ system:
996
+ MFG-LWR:
+ V_t + ν∆V − H(x, p, ρ) = 0,
+ ρ_t − ν∆ρ − div(∇_p H(x, p, ρ)ρ) = 0,
+ V_T = g(·, ρ_T),
+ ρ(·, 0) = ρ_0.    (19)
1005
+ In this model, ρ and V represent the density and optimal cost function,
1006
+ respectively, and H is the Hamiltonian with a non-separable structure given
1007
+ by
1008
+ H(x, p, ρ) = (1/2)‖p‖² − (1 − ρ)p,    with p = V_x.    (20)
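In the scalar case d = 1 this Hamiltonian and its momentum derivative (the drift appearing in the FP equation) can be evaluated directly (a minimal sketch):

```python
def hamiltonian(p, rho):
    """H(x, p, rho) = 0.5 * p**2 - (1 - rho) * p  (scalar case, x-independent)."""
    return 0.5 * p * p - (1.0 - rho) * p

def hamiltonian_dp(p, rho):
    """dH/dp = p - (1 - rho)."""
    return p - (1.0 - rho)

h = hamiltonian(0.5, 0.2)        # -> -0.275
```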
1013
+ The authors solved the system in (19) using the New-
1014
+ ton Iteration method for the deterministic case (ν = 0) with a numerical
1015
1040
+ method that considers only a finite number of discretization points to
+ reduce computational complexity.
1042
+ In this work, we propose a new method
1043
+ using neural networks to approximate the unknowns and solve the prob-
1044
+ lem in the stochastic case, while also avoiding the computational complexity
1045
+ of the previous method. To evaluate the performance of the new method,
1046
+ we consider the traffic flow problem defined by the MFG-LWR model in (19)
1047
+ with a non-separable Hamiltonian in (20) on the spatial domain Ω = [0, 1]
1048
+ with dimension d = 1 and final time T = 1.
1049
+ The terminal cost g is
1050
+ set to zero and the initial density ρ0 is given by a Gaussian distribution,
1051
+ ρ₀(x) = 0.2 − 0.6 exp(−(1/2)((x − 0.5)/0.1)²).
+ The aim is to investigate the performance of the new method, called the
+ "New-Method", in solving this traffic flow problem.
1061
+ The corresponding MFG system is,
1062
+ V_t + ν∆V − (1/2)‖V_x‖² + (1 − ρ)V_x = 0,
+ ρ_t − ν∆ρ − div((V_x − (1 − ρ))ρ) = 0,
+ ρ(x, 0) = 0.2 − 0.6 exp(−(1/2)((x − 0.5)/0.1)²),
+ V(x, T) = 0.    (21)
1077
+ We study the deterministic case (ν = 0) and stochastic case (ν = 0.5). We
1078
+ represent the unknown solutions by two neural networks Nω and Nθ, which
1079
+ have a single hidden layer of 50 neurons. We use the ResNet architecture
1080
+ with a skip connection weight of 0.5. We employ ADAM with a learning rate of
+ 4 × 10⁻⁴ for Nω and 5 × 10⁻⁴ for Nθ and a weight decay of 10⁻⁴ for both
+ networks, with a batch size of 100. In both cases, ν = 0 and ν = 0.5, we use the
+ Softmax activation function for Nω and ReLU for Nθ. In Figure 8 we plot, at
+ different times, the density function, the optimal cost, and the speed, which is
+ calculated from the density and the optimal cost [1]
1086
+ by the following formula,
1087
+ u = umax(1 − ρ/ρjam) − Vx
1088
+ where we take the jam density ρ_jam = 1 and the maximum speed u_max = 1,
+ with 10⁴ iterations.
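The speed field above can be assembled from the two computed fields as sketched below (a minimal NumPy sketch using finite differences for V_x; the density and cost arrays are illustrative placeholders, not the trained networks):

```python
import numpy as np

def speed(rho, V, dx, u_max=1.0, rho_jam=1.0):
    """u = u_max * (1 - rho / rho_jam) - V_x, with V_x by finite differences."""
    V_x = np.gradient(V, dx)
    return u_max * (1.0 - rho / rho_jam) - V_x

x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
rho = 0.2 * np.ones_like(x)    # illustrative constant density
V = np.zeros_like(x)           # illustrative flat cost, so V_x = 0
u = speed(rho, V, dx)
```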
1090
+ In Figure (9), we plot the HJB and FP residual losses for ν = 0 and ν = 0.5,
1091
+ which helps us monitor the convergence of our method. Unfortunately, we
1092
+ do not have the exact solution to compute the error. To validate the results
1093
+ of Figure (8), we use the fundamental traffic flow diagram, an essential tool
1094
+ to comprehend classic traffic flow models. Precisely, this is a graphic that
1095
+ Figure 8: The solution of the problem MFG-LWR by New-Method for (ν = 0) and
+ (ν = 0.5) at t = (0, 0.5, 1).
+ Figure 9: The HJB and FP equation losses for (ν = 0) and (ν = 0.5).
+ [Plot residue removed: loss curves of the HJB and FP equations for ν = 0 and
+ ν = 0.5, and panels of density, speed, and optimal cost at t = 0, 0.5, 1.]
1255
+ displays a link between road traffic flux (vehicles/hour) and the traffic density
+ (vehicles/km) [35, 36, 37]. This diagram can be computed numerically [1]; its
+ flux function q is given by,
1258
+ q(t, x) = ρ(t, x)u(t, x).
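Pointwise, the diagram is just the product of the computed fields; under the Greenshields relation with V_x = 0 the flux reduces to q = ρ(1 − ρ), which peaks at ρ = 1/2 (a minimal sketch):

```python
import numpy as np

def flux(rho, u):
    """Traffic flux q(t, x) = rho(t, x) * u(t, x)."""
    return rho * u

rho = np.linspace(0.0, 1.0, 201)
u = 1.0 - rho                  # Greenshields speed with V_x = 0
q = flux(rho, u)
rho_at_max = rho[np.argmax(q)]
```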
1259
+ Figure (10) shows the fundamental diagram of our results.
1260
1263
+ Figure 10: Fundamental diagram for ν = (0, 0.5) at t = (0, 0.5, 1).
1264
+ 6. Conclusion
1265
+ • We present a new method based on the Deep Galerkin Method (DGM)
1266
+ for solving high-dimensional stochastic mean field games (MFGs). The
1267
+ key idea of our algorithm is to approximate the unknown solutions by
1268
+ two neural networks that were simultaneously trained to satisfy each
1269
+ equation of the MFGs system and forward-backward conditions.
1270
+ • Consequently, our method shows better results even in a small number
1271
+ of iterations because of its learning mechanism. Moreover, it shows the
1272
1348
+ potential of up to 300 dimensions with a single layer, which gives more
1349
+ speed to our method.
1350
+ • We proved that, as the number of hidden units increases, the neural
1351
+ networks converge to the MFG solution.
1352
+ • Comparison with the previous methods shows the efficiency of our ap-
1353
+ proach even with multilayer neural networks.
1354
+ • A test on the traffic flow problem in the deterministic case gives results
+ similar to the Newton iteration method, showing that our method can also
+ solve this problem in the stochastic case.
1357
+ To address the issue of high dimensions in the problem, we used a neural
1358
+ network but found that it took a significant amount of time.
1359
+ While our
1360
+ approach has helped to reduce the time required, it is still not fast enough.
1361
+ Therefore, we are seeking an alternative to neural networks in future research
1362
+ to improve efficiency.
1363
+ Appendix A. Proof of Theorem 3.1.
1364
+ Denote by N(σ) the space of all functions implemented by such a network
+ with a single hidden layer and n hidden units, where σ ∈ C²(R^{d+1}) is
+ non-constant and bounded. By (H1) we have that for all ρ, φ ∈ C^{1,2}([0, T] × R^d)
+ and ε₁, ε₂ > 0, there are ρθ, φω ∈ N(σ) such that,
1371
+ sup_{(t,x)∈E₁} |∂_t φ(t, x) − ∂_t φω(t, x)|
+ + max_{|a|≤2} sup_{(t,x)∈E₁} |∂_x^{(a)}φ(t, x) − ∂_x^{(a)}φω(t, x)| < ε₁    (A.1)
+ sup_{(t,x)∈E₂} |∂_t ρ(t, x) − ∂_t ρθ(t, x)|
+ + max_{|a|≤2} sup_{(t,x)∈E₂} |∂_x^{(a)}ρ(t, x) − ∂_x^{(a)}ρθ(t, x)| < ε₂    (A.2)
1395
+ From (H3) we have that (ρ, p) ↦ H(x, ρ, p) is locally Lipschitz continuous
+ in (ρ, p), with a Lipschitz constant that can have at most polynomial growth
+ in ρ and p, uniformly with respect to t, x. This means that
+ |H(x, ρ, p) − H(x, γ, s)| ≤ (|ρ|^{q₁/2} + |p|^{q₂/2} + |γ|^{q₃/2} + |s|^{q₄/2}) × (|ρ − γ| + |p − s|),
1404
+ with some constants 0 ≤ q₁, q₂, q₃, q₄ < ∞. As a result, using Hölder's
+ inequality with exponents r₁, r₂, we get
+ ∫_{E₁} |H(x, ρθ, ∇ₓφω) − H(x, ρ, ∇φ)|² dµ₁(t, x)
+ ≤ ∫_{E₁} (|ρθ(t, x)|^{q₁} + |∇φω(t, x)|^{q₂} + |ρ(t, x)|^{q₃} + |∇φ(t, x)|^{q₄})
+ × (|ρθ(t, x) − ρ(t, x)|² + |∇φω(t, x) − ∇φ(t, x)|²) dµ₁(t, x)
+ ≤ (∫_{E₁} (|ρθ|^{q₁} + |∇φω|^{q₂} + |ρ|^{q₃} + |∇φ|^{q₄})^{r₁} dµ₁(t, x))^{1/r₁}
+ × (∫_{E₁} (|ρθ − ρ|² + |∇φω − ∇φ|²)^{r₂} dµ₁(t, x))^{1/r₂}
+ ≤ C₁ (∫_{E₁} (|ρθ − ρ|^{q₁} + |∇φω − ∇φ|^{q₂} + |ρ|^{q₁∨q₃} + |∇φ|^{q₂∨q₄})^{r₁} dµ₁(t, x))^{1/r₁}
+ × (∫_{E₁} (|ρθ − ρ|² + |∇φω − ∇φ|²)^{r₂} dµ₁(t, x))^{1/r₂}
+ ≤ C₁ (ε₁^{q₁} + ε₂^{q₂} + sup_{E₁} |ρ|^{q₁∨q₃} + sup_{E₁} |∇φ|^{q₂∨q₄})(ε₁² + ε₂²)
+ ≤ C₁(ε₁² + ε₂²),
1454
+ where the constant C1 < ∞ may change from line to line and qi ∨ qj =
1455
+ max{qᵢ, qⱼ}. In the last two steps we used A.1, A.2 and (H2). We recall
1456
+ that,
1457
+ H1(ρθ, φω) = ∂tφω(t, x) + ν∆φω(t, x) − H(x, ρθ(t, x), ∇φω(t, x)).
1458
1460
+ Note that H₁(ρ, φ) = 0 for ρ, φ that solve the system of PDEs. Then,
1461
+ L₁(ρθ, φω) = ‖H₁(ρθ, φω)‖²_{L²(E₁)} + ‖φω(T, x) − φ(T, x)‖²_{L²(Ω)}
+ = ‖H₁(ρθ, φω) − H₁(ρ, φ)‖²_{L²(E₁)} + ‖φω(x, T) − g(x, ρθ(x, T))‖²_{L²(Ω)}
+ ≲ ∫_{E₁} |∂_t φω(t, x) − ∂_t φ(t, x)|² dµ₁(t, x)
+ + |ν| ∫_{E₁} |∆φω(t, x) − ∆φ(t, x)|² dµ₁(t, x)
+ + ∫_{E₁} |H(x, ρθ, ∇φω) − H(x, ρ, ∇φ)|² dµ₁(t, x)
+ + ∫_{Ω} |φω(T, x) − φ(T, x)|² dµ₂(t, x)
+ ≤ C₁(ε₁² + ε₂²)
1498
+ for an appropriate constant C1 < ∞. In the last step, we use A.1, A.2 and
1499
+ the previous result.
1500
+ For L₂ we use Remark 3.1 to simplify the nonlinear term,
1501
+ div(ρ∇pH(x, ρ, ∇φ)) = α1(x, ρ, ∇φ) + α2(x, ρ, ∇φ) + α3(x, ρ, ∇φ),
1502
+ where,
1503
+ α₁(x, ρ, ∇φ) = ∇_p H(x, ρ, ∇φ) · ∇ρ,
+ α₂(x, ρ, ∇φ) = ∇_{pρ} H(x, ρ, ∇φ) · ∇ρ ρ,
+ α₃(x, ρ, ∇φ) = Σ_{i,j} ∇_{p_i p_j} H(x, ρ, ∇φ) (∂_{x_j x_i} φ) ρ.
1509
+ In addition, from (H3) we have also ∇pH(x, ρ, p), ∇pρH(x, ρ, p), and ∇ppH(x, ρ, p)
1510
+ are locally Lipschitz continuous in (ρ, p). Then, we have after an application
1511
+ of Hölder's inequality, for some constant C₂ < ∞ that may change from line
1514
+ to line,
1515
+ ∫_{E₂} |α₁(x, ρθ, ∇φω) − α₁(x, ρ, ∇φ)|² dµ₃(t, x)
+ = ∫_{E₂} |∇_p H(x, ρθ, ∇φω)∇ρθ − ∇_p H(x, ρ, ∇φ)∇ρ|² dµ₃(t, x)
+ ≲ ∫_{E₂} |(∇_p H(x, ρθ, ∇φω) − ∇_p H(x, ρ, ∇φ))∇ρ|² dµ₃(t, x)
+ + ∫_{E₂} |∇_p H(x, ρθ, ∇φω)(∇ρθ − ∇ρ)|² dµ₃(t, x)
+ ≤ C₂ (∫_{E₂} |∇_p H(x, ρθ, ∇φω) − ∇_p H(x, ρ, ∇φ)|^{2r₁} dµ₃(t, x))^{1/r₁}
+ × (∫_{E₂} |∇ρ|^{2r₂} dµ₃(t, x))^{1/r₂}
+ + C₂ (∫_{E₂} |∇_p H(x, ρθ, ∇φω)|^{2s₁} dµ₃(t, x))^{1/s₁}
+ × (∫_{E₂} |∇ρθ − ∇ρ|^{2s₂} dµ₃(t, x))^{1/s₂}
+ ≤ C₂ (∫_{E₂} |∇ρ|^{2r₂} dµ₃(t, x))^{1/r₂}
+ × (∫_{E₂} (|ρθ − ρ|^{q₁} + |∇φω − ∇φ|^{q₂} + |ρ|^{q₁∨q₃} + |∇φ|^{q₂∨q₄})^{v₁r₁} dµ₃(t, x))^{1/v₁r₁}
+ × (∫_{E₂} (|ρθ − ρ|² + |∇ₓφω − ∇ₓφ|²)^{v₂r₂} dµ₃(t, x))^{1/v₂r₂}
+ + C₂ (∫_{E₂} |∇_p H(x, ρθ, ∇φω)|^{2s₁} dµ₃(t, x))^{1/s₁} (∫_{E₂} |∇ρθ − ∇ρ|^{2s₂} dµ₃(t, x))^{1/s₂}
+ ≤ C₂(ε₁² + ε₂²),
1594
+ where in the last steps we followed the previous computations. Doing the same
+ for α₂(x, ρ, ∇φ) and α₃(x, ρ, ∇φ), we obtain, for a C₂ < ∞,
+ ∫_{E₂} |div(ρθ∇_p H(x, ρθ, ∇φω)) − div(ρ∇_p H(x, ρ, ∇φ))|² dµ₃(t, x) ≤ C₂(ε₁² + ε₂²).
1607
+ We recall that,
1608
+ H₂(ρθ, φω) = ∂_t ρθ(t, x) − ν∆ρθ(t, x) − div(ρθ(t, x)∇_p H(x, ρθ(t, x), ∇φω(t, x))).
+ Note that H₂(ρ, φ) = 0 for ρ, φ that solve the system of PDEs; then we have,
1610
+ L₂(ρθ, φω) = ‖H₂(ρθ, φω)‖²_{L²(E₂)} + ‖ρθ(0, x) − ρ₀(x)‖²_{L²(Ω)}
+ = ‖H₂(ρθ, φω) − H₂(ρ, φ)‖²_{L²(E₂)} + ‖ρθ(0, x) − ρ₀(x)‖²_{L²(Ω)}
+ ≲ ∫_{E₂} |∂_t ρθ(t, x) − ∂_t ρ(t, x)|² dµ₃(t, x)
+ + |ν| ∫_{E₂} |∆ρθ(t, x) − ∆ρ(t, x)|² dµ₃(t, x)
+ + ∫_{E₂} |div(ρθ∇_p H(x, ρθ, ∇φω)) − div(ρ∇_p H(x, ρ, ∇φ))|² dµ₃(t, x)
+ + ∫_{Ω} |ρθ(0, x) − ρ₀(x)|² dµ₄(t, x)
+ ≤ C₂(ε₁² + ε₂²)
1650
+ for an appropriate constant C₂ < ∞. The proof of Theorem 3.1 is complete
+ after rescaling ε₁ and ε₂.
1652
+ Appendix B. Proof of Theorem 3.2.
1653
+ We follow the method used in [23] for a single PDE. (See also Section 4
+ in [38] for a coupled system.) Let us denote the solution of problem 11 by
+ (ρ̂ⁿθ, φ̂ⁿω) ∈ V = V₀^{2,2} × V₀^{2,2}. Due to Conditions (H4)−(H6) and by using
+ Lemma 1.4 of [39] on each equation, there exist C₁, C₂ such that:
+ ‖ρ̂ⁿθ‖_{V₀^{2,2}} ≤ C₁,    ‖φ̂ⁿω‖_{V₀^{2,2}} ≤ C₂.
1674
+ This gives that both sequences {ρ̂ⁿθ}_{n∈N}, {φ̂ⁿω}_{n∈N} are uniformly bounded
+ with respect to n in at least V. These uniform energy bounds imply the existence
+ of two subsequences (still denoted in the same way) {ρ̂ⁿθ}_{n∈N}, {φ̂ⁿω}_{n∈N}
+ and two functions ρ, φ in L²(0, T; W₀^{1,2}(Ω)) such that
+ ρ̂ⁿθ → ρ weakly in L²(0, T; W₀^{1,2}(Ω)),
+ φ̂ⁿω → φ weakly in L²(0, T; W₀^{1,2}(Ω)).
1698
+ Next let us set q = 1 + d/(d+4) ∈ (1, 2) and note that, for conjugate
+ exponents r₁, r₂ > 1 with 1/r₁ + 1/r₂ = 1,
+ ∫_{Ω_T} |γ(t, x, ρ̂ⁿθ, ∇φ̂ⁿω)|^q ≤ ∫_{Ω_T} |λ|^q |∇φ̂ⁿω|^q
+ ≤ (∫_{Ω_T} |λ|^{r₁q})^{1/r₁} (∫_{Ω_T} |∇φ̂ⁿω|^{r₂q})^{1/r₂}.
+ Let us choose r₂ = 2/q > 1. Then we calculate r₁ = r₂/(r₂ − 1) = 2/(2 − q),
+ hence r₁q = d + 2. Recalling the assumption λ ∈ L^{d+2}(Ω_T) and the uniform
+ bound on ∇φ̂ⁿω, we subsequently obtain that for q = 1 + d/(d+4) there is a
+ constant C < ∞ such that
+ ∫_{Ω_T} |γ(t, x, ρ̂ⁿθ, ∇φ̂ⁿω)|^q ≤ C.
1749
+ On the other hand, it is obvious that a₁ is uniformly bounded; then, according
+ to the HJB equation of 11, {∂_t φ̂ⁿω}_{n∈N} is uniformly bounded with respect
+ to n in L²(0, T; W^{−1,2}(Ω)). Then we can extract a subsequence (still denoted
+ in the same way) {∂_t φ̂ⁿω}_{n∈N} such that
+ ∂_t φ̂ⁿω → ∂_t φ weakly in L²(0, T; W^{−1,2}(Ω)).
1767
+ Also, it will be shown that
1768
+ ∂_t ρ̂ⁿθ → ∂_t ρ weakly in L²(0, T; W^{−1,2}(Ω)).
1772
+ Since the problem is nonlinear, the weak convergence of φ̂ⁿω and ρ̂ⁿθ in the
+ space L²(0, T; W₀^{1,2}(Ω)) is not enough in order to prove that φ and ρ are a
+ solution of problem 10. To do this, we need the almost everywhere convergence
+ of the gradients for a subsequence of the approximating solutions φ̂ⁿω and ρ̂ⁿθ.
1787
+ However, the uniform boundedness of {φ̂ⁿω}_{n∈N} and {ρ̂ⁿθ}_{n∈N} in
+ L²(0, T; W₀^{1,2}(Ω)) and their weak convergence to φ and ρ respectively in
+ that space allow us to conclude, by using Theorem 3.3 of [40] on each equation,
+ that
+ ∇φ̂ⁿω → ∇φ almost everywhere in Ω_T,
+ ∇ρ̂ⁿθ → ∇ρ almost everywhere in Ω_T.
+ Hence, we obtain that {φ̂ⁿω}_{n∈N} and {ρ̂ⁿθ}_{n∈N} converge respectively to φ
+ and ρ strongly in L^p(0, T; W₀^{1,p}(Ω)) for every p < 2. It remains to discuss
+ the convergence of φⁿω − φ̂ⁿω and ρⁿθ − ρ̂ⁿθ to zero. By the last step of the
+ proof of Theorem 7.3 in [23] we get that {φⁿω − φ̂ⁿω}_{n∈N} and {ρⁿθ − ρ̂ⁿθ}_{n∈N}
+ go to zero strongly in L^p(Ω_T) for every p < 2. Finally, we conclude the proof
+ of the convergence in L^p(Ω_T) for every p < 2.
1823
+ References
1824
+ [1] K. Huang, X. Di, Q. Du, X. Chen, A game-theoretic framework
1825
+ for autonomous vehicles velocity control:
1826
+ Bridging microscopic dif-
1827
+ ferential games and macroscopic mean field games, arXiv preprint
1828
+ arXiv:1903.06053 (2019).
1829
+ [2] H. Shiri, J. Park, M. Bennis, Massive autonomous UAV path planning:
1830
+ A neural network based mean-field game theoretic approach, in: 2019
1831
+ IEEE Global Communications Conference (GLOBECOM), IEEE, 2019,
1832
+ pp. 1–6.
1833
+ [3] P. Cardaliaguet, C.-A. Lehalle, Mean field game of controls and an appli-
1834
+ cation to trade crowding, Mathematics and Financial Economics 12 (3)
1835
+ (2018) 335–363.
1836
+ [4] P. Casgrain, S. Jaimungal, Algorithmic trading in competitive markets
1837
+ with mean field games, SIAM News 52 (2) (2019) 1–2.
1838
+ [5] Y. Achdou, J. Han, J.-M. Lasry, P.-L. Lions, B. Moll, Income and
1839
+ wealth distribution in macroeconomics: A continuous-time approach,
1840
+ Tech. rep., National Bureau of Economic Research (2017).
1841
+ [6] Y. Achdou, F. J. Buera, J.-M. Lasry, P.-L. Lions, B. Moll, Partial differ-
1842
+ ential equation models in macroeconomics, Philosophical Transactions
1843
1845
+ of the Royal Society A: Mathematical, Physical and Engineering Sci-
1846
+ ences 372 (2028) (2014) 20130397.
1847
+ [7] D. A. Gomes, L. Nurbekyan, E. Pimentel, Economic models and mean-
1848
+ field games theory, Publicações Matemáticas, IMPA, Rio, Brazil (2015).
1849
+ [8] A. De Paola, V. Trovato, D. Angeli, G. Strbac, A mean field game
1850
+ approach for distributed control of thermostatic loads acting in simulta-
1851
+ neous energy-frequency response markets, IEEE Transactions on Smart
1852
+ Grid 10 (6) (2019) 5987–5999.
1853
+ [9] A. C. Kizilkale, R. Salhab, R. P. Malhamé, An integral control formula-
1854
+ tion of mean field game based large scale coordination of loads in smart
1855
+ grids, Automatica 100 (2019) 312–322.
1856
+ [10] D. Gomes, J. Saúde, A mean-field game approach to price formation in
1857
+ electricity markets, arXiv preprint arXiv:1807.07088 (2018).
1858
+ [11] J. Han, Q. Li, et al., A mean-field optimal control formulation of deep
1859
+ learning, arXiv preprint arXiv:1807.01083 (2018).
1860
+ [12] X. Guo, A. Hu, R. Xu, J. Zhang, Learning mean-field games, arXiv
1861
+ preprint arXiv:1901.09585 (2019).
1862
+ [13] Y. Achdou, I. Capuzzo-Dolcetta, Mean field games: numerical methods,
1863
+ SIAM Journal on Numerical Analysis 48 (3) (2010) 1136–1162.
1864
+ [14] J.-D. Benamou, G. Carlier, F. Santambrogio, Variational mean field
1865
+ games, in: Active Particles, Volume 1, Springer, 2017, pp. 141–171.
1866
+ [15] Y. T. Chow, J. Darbon, S. Osher, W. Yin, Algorithm for overcoming the
1867
+ curse of dimensionality for time-dependent non-convex Hamilton–Jacobi
1868
+ equations arising from optimal control and differential games problems,
1869
+ Journal of Scientific Computing 73 (2) (2017) 617–643.
1870
+ [16] Y. T. Chow, J. Darbon, S. Osher, W. Yin, Algorithm for overcom-
1871
+ ing the curse of dimensionality for certain non-convex Hamilton–Jacobi
1872
+ equations, projections and differential games, Annals of Mathematical
1873
+ Sciences and Applications 3 (2) (2018) 369–403.
1874
1876
+ [17] A. T. Lin, S. W. Fung, W. Li, L. Nurbekyan, S. J. Osher, Apac-net:
1877
+ Alternating the population and agent control via two neural networks
1878
+ to solve high-dimensional stochastic mean field games, arXiv preprint
1879
+ arXiv:2002.10113 (2020).
1880
+ [18] H. Cao, X. Guo, M. Laurière, Connecting GANs, MFGs, and OT, arXiv
1881
+ preprint arXiv:2002.04112 (2020).
1882
+ [19] J.-M. Lasry, P.-L. Lions, Mean field games, Japanese journal of mathe-
1883
+ matics 2 (1) (2007) 229–260.
1884
+ [20] M. Cirant, L. Nurbekyan, The variational structure and time-periodic
1885
+ solutions for mean-field games systems, arXiv preprint arXiv:1804.08943
1886
+ (2018).
1887
+ [21] Y. T. Chow, W. Li, S. Osher, W. Yin, Algorithm for Hamilton–Jacobi
1888
+ equations in density space via a generalized hopf formula, Journal of
1889
+ Scientific Computing 80 (2) (2019) 1195–1239.
1890
+ [22] M. Laurière, J. Song, Q. Tang, Policy iteration method for time-
1891
+ dependent mean field games systems with non-separable hamiltonians,
1892
+ arXiv preprint arXiv:2110.02552 (2021).
1893
+ [23] J. Sirignano, K. Spiliopoulos, DGM: A deep learning algorithm for solv-
+ ing partial differential equations, Journal of Computational Physics 375
1895
+ (2018) 1339–1364.
1896
+ [24] M. Raissi, P. Perdikaris, G. E. Karniadakis, Physics informed deep learn-
1897
+ ing (part i): Data-driven solutions of nonlinear partial differential equa-
1898
+ tions, arXiv preprint arXiv:1711.10561 (2017).
1899
+ [25] M. Raissi, Deep hidden physics models: Deep learning of nonlinear par-
1900
+ tial differential equations, The Journal of Machine Learning Research
1901
+ 19 (1) (2018) 932–955.
1902
+ [26] K. Hornik, Approximation capabilities of multilayer feedforward net-
1903
+ works, Neural networks 4 (2) (1991) 251–257.
1904
+ [27] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley,
1905
+ S. Ozair, A. Courville, Y. Bengio, Generative adversarial networks,
1906
+ Communications of the ACM 63 (11) (2020) 139–144.
1907
1909
+ [28] E. Denton, S. Chintala, A. Szlam, R. Fergus, Deep generative image
1910
+ models using a laplacian pyramid of adversarial networks, arXiv preprint
1911
+ arXiv:1506.05751 (2015).
1912
+ [29] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, H. Lee, Genera-
1913
+ tive adversarial text to image synthesis, in: International Conference on
1914
+ Machine Learning, PMLR, 2016, pp. 1060–1069.
1915
+ [30] A. Radford, L. Metz, S. Chintala, Unsupervised representation learning
1916
+ with deep convolutional generative adversarial networks, arXiv preprint
1917
+ arXiv:1511.06434 (2015).
1918
+ [31] M. Wiese, L. Bai, B. Wood, H. Buehler, Deep hedging: learning to
1919
+ simulate equity option markets, arXiv preprint arXiv:1911.01700 (2019).
1920
+ [32] Y. Dukler, W. Li, A. Lin, G. Montúfar, Wasserstein of Wasserstein loss
1921
+ for learning generative models, in: International Conference on Machine
1922
+ Learning, PMLR, 2019, pp. 1716–1725.
1923
+ [33] C. Villani, Topics in optimal transportation, Vol. 58, American Mathe-
1924
+ matical Soc., 2021.
1925
+ [34] R. Carmona, M. Laurière, Deep learning for mean field games
1928
+ and mean field control with applications to finance, arXiv preprint
1929
+ arXiv:2107.04568 (2021).
1930
+ [35] F. Siebel, W. Mauser, On the fundamental diagram of traffic flow, SIAM
1931
+ Journal on Applied Mathematics 66 (4) (2006) 1150–1162.
1932
+ [36] N. Geroliminis, C. F. Daganzo, Existence of urban-scale macroscopic
1933
+ fundamental diagrams: Some experimental findings, Transportation Re-
1934
+ search Part B: Methodological 42 (9) (2008) 759–770.
1935
+ [37] M. Keyvan-Ekbatani, A. Kouvelas, I. Papamichail, M. Papageorgiou,
1936
+ Exploiting the fundamental diagram of urban networks for feedback-
1937
+ based gating, Transportation Research Part B: Methodological 46 (10)
1938
+ (2012) 1393–1403.
1939
+ [38] F. O. Gallego, M. T. G. Montesinos, Existence of a capacity solution to
1940
+ a coupled nonlinear parabolic–elliptic system, Communications on Pure
1941
+ & Applied Analysis 6 (1) (2007) 23.
1942
1944
+ [39] M. M. Porzio, Existence of solutions for some "noncoercive" parabolic
1945
+ equations, Discrete & Continuous Dynamical Systems 5 (3) (1999) 553.
1946
+ [40] L. Boccardo, A. Dall’Aglio, T. Gallou¨et, L. Orsina, Nonlinear parabolic
1947
+ equations with measure data, Journal of Functional Analysis 147 (1)
1948
+ (1997) 237–258.
1949
1950
+
BNE1T4oBgHgl3EQfDgMw/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
BdE1T4oBgHgl3EQf9gYX/content/2301.03556v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bf56d618c9b13772865f2a4898f4b6c95dafbdc1a8fddbf8f107ae58be5f4be9
3
+ size 1179521
BdE1T4oBgHgl3EQf9gYX/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:40520ced3b8322145f9fcf3dd5193546b3195d65c4cb72e9bdbd3ec3cd8c71d1
3
+ size 3801133
BdE1T4oBgHgl3EQf9gYX/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e30a619412de425a58ef6edc2bb8031c1cea2377fb1d781384fdbea9fed73b5c
3
+ size 126866
CNAyT4oBgHgl3EQfePiT/content/2301.00318v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:968a18ab8cf0386d92a1da688d2e066a6b8be1be20deea06ed7700588fe6b44c
3
+ size 734606
CNAyT4oBgHgl3EQfePiT/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a85735f88ecb051b05590a68125cffd9dd437c174dadc72825876e377cd5b621
3
+ size 1507373
CNAyT4oBgHgl3EQfePiT/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4d42137a7ec9628353051002c68be757556d55a46c088b5fa07fdbc7728a05e8
3
+ size 52481
E9FRT4oBgHgl3EQfyzhp/content/tmp_files/2301.13647v1.pdf.txt ADDED
@@ -0,0 +1,1420 @@
arXiv:2301.13647v1 [physics.data-an] 31 Jan 2023

Bayesian estimation of information-theoretic metrics for sparsely sampled distributions

Angelo Piga,∗ Lluc Font-Pomarol,† Marta Sales-Pardo,‡ and Roger Guimerà§
(Dated: February 1, 2023)

Estimating the Shannon entropy of a discrete distribution from which we have only observed a small sample is challenging. Estimating other information-theoretic metrics, such as the Kullback-Leibler divergence between two sparsely sampled discrete distributions, is even harder. Existing approaches to address these problems have shortcomings: they are biased, heuristic, work only for some distributions, and/or cannot be applied to all information-theoretic metrics. Here, we propose a fast, semi-analytical estimator for sparsely sampled distributions that is efficient, precise, and general. Its derivation is grounded in probabilistic considerations and uses a hierarchical Bayesian approach to extract as much information as possible from the few observations available. Our approach provides estimates of the Shannon entropy with precision at least comparable to the state of the art, and most often better. It can also be used to obtain accurate estimates of any other information-theoretic metric, including the notoriously challenging Kullback-Leibler divergence. Here, again, our approach performs consistently better than existing estimators.
I. INTRODUCTION

Information theory is gaining momentum as a methodological framework to study complex systems. In network science, information theory provides rigorous tools to predict unobserved links [1] and to infer community structure [2]. In neuroscience, the Shannon entropy of spike train distributions characterizes brain activity from neural responses [3], while mutual information identifies correlations between brain stimuli and responses [4]. Recently, the Kullback-Leibler divergence [5] and its regularized version, the Jensen-Shannon distance, have also been successfully used in a wide variety of contexts: in cognitive science as a measure of "surprise," to quantify and predict how human attention is oriented between changing screen images [6]; in quantitative social science, in combination with topic models, to track the propagation of political and social discourses [7, 8] or to understand the emergence of social disruption from the analysis of judicial decisions [9]; and in machine learning, at the intersection between the statistical physics of diffusive processes, probabilistic models, and deep neural networks [10].

Information-theoretic metrics are measured on distributions. In practice, a distribution ρ over the possible states of a system, as well as functions F(ρ) of this distribution (such as the Shannon entropy or other metrics), have to be inferred from experimental observations. However, this inference process is difficult for many real complex systems since, due to experimental limitations, the observations are often sparse, and statistical estimates of the distribution ρ and its functions can be severely biased. Here, we focus on the particular yet important case of discrete (or categorical) distributions ρ_i, i = 1, ..., K, where K is the number of possible states (or categories), which is known and fixed. Inferences about ρ and any function must be based on n_i, the number of observations in the i-th state (with N = \sum_i n_i the sample size) and, in the undersampled regime we are interested in, N ≲ K. The challenge is thus, from the sparse observations {n_i}, to infer the probability ρ_i of each category i and estimate metrics F(ρ).

∗ [email protected]; Department of Chemical Engineering, Universitat Rovira i Virgili, Tarragona 43007, Catalonia.
† Department of Chemical Engineering, Universitat Rovira i Virgili, Tarragona 43007, Catalonia.
‡ [email protected]; Department of Chemical Engineering, Universitat Rovira i Virgili, Tarragona 43007, Catalonia.
§ [email protected]; Department of Chemical Engineering, Universitat Rovira i Virgili, Tarragona 43007, Catalonia; ICREA, Barcelona 08010, Catalonia.
A theoretically well-founded approach to tackle this problem is provided by the principles of conditional probability, encapsulated in Bayes' theorem [11]. This framework is in general preferable because of its transparency: it requires that all assumptions of the underlying generative model for the data are made explicit, expressed via the choice of a likelihood function and a prior distribution that reflects the knowledge about the system before observing any data. In probabilistic reasoning, the combination of observations and prior distribution provides an updated (posterior) probability distribution of the quantity under study. Other estimation strategies make implicit assumptions and often provide only point estimates, as opposed to full distributions.

A class of expressive generative models for categorical distributions amenable to a Bayesian framework is the well-studied family of Dirichlet distributions. However, as Nemenman, Shafee, and Bialek (henceforth NSB) pointed out in [12], when sample sizes are small (N ≲ K), the inferred Shannon entropy is tightly determined by the specific parameters one chooses for the Dirichlet model; therefore, inaccurate choices result in severe biases of the Shannon entropy estimates. To overcome this problem, they introduced a mixture of Dirichlet models, which results in a very precise estimator of the Shannon entropy that works for a wide variety of distributions, even in the sparse sampling regime N ≲ K [12, 13].

Although, in terms of precision, NSB can be considered the state of the art for estimating the Shannon entropy, it does not provide estimates for the distribution ρ. For this reason, its applicability is limited to estimating the Shannon entropy (and related information-theoretic quantities like mutual information and the Jensen-Shannon distance, which can be expressed in terms of entropies). By contrast, it cannot be used to estimate the Kullback-Leibler divergence. To cover this gap, Hausser and Strimmer derived a James-Stein-type shrinkage estimator for ρ [14] (henceforth HS), which has the advantage of being analytical and applicable to any information-theoretic metric, but at the price of making implicit ad hoc assumptions, of being less precise than NSB for the Shannon entropy, and of lacking error estimation.
Here, we propose an alternative fast, semi-analytical estimator for distributions that is efficient, precise, and general. Its derivation is grounded in probabilistic considerations, without any ad hoc assumptions. We consider Dirichlet generative models and use a hierarchical Bayesian approach to extract as much information as possible from the few observations at hand. In the case of the Shannon entropy, we can estimate the expected value and higher-order moments with precision at least comparable to the NSB estimator, and most often better. Additionally, because our method provides estimates of the probability distribution, it can be used to obtain accurate estimates of the Kullback-Leibler divergence. In this case, our approach also performs equally well as or better than existing estimators.
II. BACKGROUND

Let us consider a system with K possible output states whose observations follow an unknown discrete distribution ρ = {ρ_i; i = 1, ..., K} with \sum_i ρ_i = 1. The vector n = {n_i; i = 1, ..., K} represents the number of times each state was observed in a set of \sum_i n_i = N independent observations of the system. We also consider a function F(ρ) of ρ, such as, for example, the Shannon entropy

    S(\rho) = -\sum_{i=1}^{K} \rho_i \log \rho_i ,    (1)

which we want to estimate from the set of observations.

The posterior distribution over the values of the function F given the observed counts n is

    p(F|n) = \int d\rho \, \delta(F - F(\rho)) \, p(\rho|n) ,    (2)

where p(ρ|n) is the posterior of the distribution ρ given the counts n. We further assume that the prior over distributions depends on a parameter β, which becomes a hyperparameter of our generative model. Then, using the laws of conditional probability, we can write the posterior p(ρ|n, β) as

    p(\rho|n, \beta) = \frac{p(n|\rho, \beta) \, p(\rho|\beta)}{p(n|\beta)} ,    (3)

where p(n|ρ, β) is the likelihood, p(ρ|β) is the prior over distributions, and p(n|β) = \int d\rho \, p(n|\rho) \, p(\rho|\beta) is the evidence, which acts as a normalization factor. The likelihood is the probability of the empirical observations n given ρ; for independent multinomial samples, the probability of observing an event of type i is ρ_i, the full likelihood is the product p(n|\rho, \beta) = p(n|\rho) = N! \prod_{i=1}^{K} \rho_i^{n_i}/n_i!, and, given ρ, it is independent of the hyperparameter β. The prior p(ρ|β) expresses the probability of each distribution ρ prior to observing any data, and plays a crucial role in the discussion below. Symmetric Dirichlet distributions are convenient priors because they are a generative model for a broad class of discrete distributions. Additionally, they have been widely used in this setting [15], and are parametrized as follows:

    p(\rho|\beta) = \frac{1}{B_K(\beta)} \prod_{i=1}^{K} \rho_i^{\beta - 1} , \qquad B_K(\beta) = \frac{\Gamma(\beta)^K}{\Gamma(\beta K)} ,    (4)

where Γ is the gamma function, while the hyperparameter β is a real, positive number known as the concentration parameter. In the first row of Fig. 1, examples of categorical distributions sampled from symmetric Dirichlet priors are shown.
Besides being very expressive, Dirichlet priors are conjugate distributions of categorical likelihoods, meaning that the posterior is still a Dirichlet distribution, a property that often makes the inference via Eqs. (3) and (2) analytically tractable. For example, when F(ρ) = ρ, Dirichlet priors lead to expected posterior probabilities ⟨ρ_i⟩ given by the widely used generalized Laplace's formula

    \langle \rho_i \rangle = \frac{n_i + \beta}{N + K\beta} .    (5)

It is worth noting the improvement of Eq. (5) with respect to the maximum-likelihood (or frequency) estimator ρ_i = n_i/N, which is recovered by the former in the limit β → 0. In particular, Laplace's formula assigns nonzero probability to unobserved states, a desirable property whose advantage will become evident later, when estimating Kullback-Leibler divergences. This example also illustrates how non-Bayesian approaches to inference make implicit and non-trivial assumptions; in this case, assuming β → 0 amounts to assuming that infinitely concentrated distributions ρ are a priori much more plausible than more homogeneous ones.
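To make Eq. (5) concrete, the following sketch (the function names are ours, not from the paper's repository) contrasts the posterior-mean estimator with the maximum-likelihood limit on a toy sample:

```python
def laplace_estimate(counts, beta):
    """Posterior mean <rho_i> = (n_i + beta) / (N + K*beta), Eq. (5)."""
    N = sum(counts)
    K = len(counts)
    return [(n + beta) / (N + K * beta) for n in counts]

def ml_estimate(counts):
    """Maximum-likelihood (frequency) estimator, the beta -> 0 limit."""
    N = sum(counts)
    return [n / N for n in counts]

counts = [3, 1, 0, 0]                 # K = 4 states, N = 4 observations
print(ml_estimate(counts))            # unobserved states keep probability 0
print(laplace_estimate(counts, 1.0))  # every state gets nonzero probability
```

Note how, for β = 1, the two unobserved states each receive probability 1/8 rather than 0, which is what later keeps plug-in Kullback-Leibler estimates finite.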
Going back to the estimation of F from the observations n, and given Eq. (5), one may be tempted to directly plug the value of ⟨ρ_i⟩ into the explicit expression of F(ρ) to get a point estimate. However, this is just an approximation; the exact procedure consists in finding and using the whole posterior p(F|n). Specifically, the expected value of this posterior, ⟨F⟩ = \int dF \, F \, p(F|n), minimizes the mean-squared error [16], and its mode is a consistent estimator, meaning that it converges to the true value of F(ρ) when the number of observations increases, regardless of the prior and, in particular, regardless of the hyperparameter β. Wolpert and Wolf in Refs. [16, 17] provided analytical formulas for all the moments of p(F|n) when F is the Shannon entropy and for Dirichlet priors (we report the formula for the mean in Eq. (15) and for the second moment in Appendix B).
230
+ small samples. This is often the case for Dirichlet priors, es-
231
+ pecially when the parameter β is unknown. Several options
232
+ for the value of β have been proposed in literature, each one
233
+ suitable to some specific case but deficient in others (for a dis-
234
+ cussion, refer to Refs. [12, 14]). In [12], NSB suggested that,
235
+ when samples are scarce, any attempt to find a single universal
236
+ β is hopeless; the fundamental reason being that categorical
237
+ distributions generated by a Dirichlet have a Shannon entropy
238
+ that is narrowly determined by, and monotonically dependent
239
+
240
+ 3
241
+ on, β. In other words, for small samples, the posterior distri-
242
+ bution (2) is dominated by the prior. To overcome this prob-
243
+ lem, Refs. [12, 13] proposed, as the prior pNSB(ρ), an infinite
244
+ mixture of Dirichlet priors
245
+ pNSB(ρ) ∝
246
+
247
+ dβ pNSB(β) p(ρ|β) ,
248
+ (6)
249
+ where the weights pNSB(β) were set so as to obtain a flat prior
250
+ over entropies S, and have the functional form
251
+ pNSB(β) ∝ d E[S|ni = 0, β]
252
+
253
+ = Kψ1(Kβ +1)−ψ1(β +1) ,
254
+ (7)
255
+ where E[S|n, β] is the expected entropy given the observa-
256
+ tions n, and then E[S|ni = 0, β] is the expected entropy of the
257
+ distributions ρ generated from a symmetric Dirichlet priors
258
+ (that is if there are no observations), with fixed β and K, and
259
+ ψm(x) =
260
+ � d
261
+ dx
262
+ �m+1 log Γ(x) are the polygamma functions.
263
+ The NSB prior leads to very accurate estimates of the Shannon
264
+ entropy, and can be considered the state of the art. Even if best
265
+ suited for situations in which the number of states K is known
266
+ and fixed, it is quite versatile and has been later extended for
267
+ countable infinite number of states [18] and further optimized
268
+ for binary states [19] and long tail distributions [18]. Other es-
269
+ timators, for example, the Chao-Shen estimator [20], perform
270
+ at most as well as the NSB (or its derivatives), but never better
271
+ (see [14] for a comprehensive review). Additionally, given an
272
+ estimator of S, a number of other quantities can be indirectly
273
+ estimated. For example, the mutual information M between
274
+ two distributions ρ and σ is M(ρ ; σ) = S(ρ)+S(σ)−S(π),
275
+ where π is the joint distribution of ρ and σ [21]. Similar re-
276
+ lations can be derived for Jensen-Shannon distance and other
277
+ information-theoretic quantities [8] [22].
However, consider the estimation of the Kullback-Leibler divergence (D_KL) between two distributions ρ and σ with the same dimension K:

    D_{\rm KL}(\rho \| \sigma) = \sum_{i=1}^{K} \rho_i \log_2 \frac{\rho_i}{\sigma_i} .    (8)

To estimate D_KL from samples n = {n_i; i = 1, ..., K} from ρ, and m = {m_i; i = 1, ..., K} from σ, one cannot use the NSB approach. First, D_KL is not a combination of the Shannon entropies of the two underlying distributions ρ and σ. Second, D_KL is unbounded, and any attempt to find a hyperprior in the spirit of Eq. (7) results in improper hyperpriors. Finally, with the NSB prior one renounces any estimation of β and, in turn, a good point estimation of D_KL by means of Laplace's formula.
III. HIERARCHICAL BAYES POINT ESTIMATE FOR β

Here, we address these limitations of the NSB estimator while maintaining and even improving its performance. We posit that the success of the NSB approach stems not from mixing infinitely many values of the concentration parameter β, but rather from the flexibility to accommodate any particular value of β. Indeed, we surmise that, in general, only a narrow interval of β values is compatible with a given observation n and therefore contributes to the mixture, whereas most others do not contribute. Motivated by this, we propose an approach that aims to directly estimate the value of β that most contributes to the posterior given the data n.
First, we observe that the posterior p(ρ|n) can be written as

    p(\rho|n) = \int d\beta \, p(\rho|n, \beta) \, p(\beta|n) = \int d\beta \, \frac{p(n|\rho) \, p(\rho|\beta)}{p(n|\beta)} \, p(\beta|n) ,    (9)

where we have applied Bayes' rule, and the fact that n conditioned on ρ is independent of β, so that p(n|ρ, β) = p(n|ρ). Then, we assume that the conditional distribution p(β|n) is very peaked around a given value β⋆, so that the posterior p(ρ|n) can be approximated as

    p(\rho|n) \approx \frac{p(n|\rho) \, p(\rho|\beta^\star)}{p(n|\beta^\star)} .    (10)

This approximation, sometimes referred to as empirical Bayes, is a point estimate for the fully hierarchical probabilistic model given by p(n|ρ) and p(ρ|β). Eq. (10) is identical to Eq. (3), with the difference that the concentration parameter is now the most likely value of β given the observed counts n, that is,

    \beta^\star = \operatorname{argmax}_\beta \, p(\beta|n) = \operatorname{argmax}_\beta \, \frac{p(n|\beta) \, p(\beta)}{p(n)} ,    (11)

where p(n|β) = \int d\rho \, p(n|\rho, \beta) \, p(\rho|\beta). For Dirichlet priors (Eq. (4)), β⋆ satisfies (see Appendix A)

    \sum_{i=1}^{K} \sum_{m=0}^{n_i - 1} \frac{1}{m + \beta^\star} - \sum_{m=0}^{N-1} \frac{K}{m + K\beta^\star} + \frac{1}{p(\beta^\star)} \left. \frac{d p(\beta)}{d\beta} \right|_{\beta^\star} = 0 ,    (12)

which is the key analytical result of this paper.
The hyperprior p(β) reflects our prior knowledge about the shape of the distribution of the hyperparameter. To be completely agnostic in this regard, we can use a uniform hyperprior

    p_U(\beta) = \frac{1}{\Delta\beta} = \text{const.} , \qquad \Delta\beta = \beta_{\max} - \beta_{\min} ,    (13)

with cut-offs 0 < β_min < β_max < ∞. In this case, the derivative term in Eq. (12) disappears. The NSB hyperprior (7) is a valid alternative; in this case, the last term in Eq. (12) is (see Appendix A for details)

    \frac{1}{p_{\rm NSB}(\beta^\star)} \left. \frac{d p_{\rm NSB}(\beta)}{d\beta} \right|_{\beta^\star} = \frac{K^2 \psi_2(K\beta^\star + 1) - \psi_2(\beta^\star + 1)}{K \psi_1(K\beta^\star + 1) - \psi_1(\beta^\star + 1)} .    (14)

Despite the complex appearance of Eq. (12), β⋆ is not hard to obtain numerically, giving a computational improvement with respect to the NSB estimator, whose algorithm is involved and has higher computational costs [23]. The source code of the implementations in Python is available at https://github.com/angelopiga/info-metric-estimation/.
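To illustrate how simple this computation is, the sketch below (our own code, not the reference implementation linked above) solves Eq. (12) under the flat hyperprior of Eq. (13), for which the derivative term vanishes; the left-hand side is then a sum of elementary fractions and the root can be bracketed and bisected:

```python
def eq12_lhs(beta, counts):
    """Left-hand side of Eq. (12) for a flat hyperprior on beta,
    for which the derivative term vanishes."""
    N = sum(counts)
    K = len(counts)
    total = 0.0
    for n in counts:
        for m in range(n):
            total += 1.0 / (m + beta)
    for m in range(N):
        total -= K / (m + K * beta)
    return total

def beta_star(counts, lo=1e-6, hi=1.0, tol=1e-10):
    """Solve Eq. (12) by bisection.  The LHS is positive for small beta
    (whenever more than one state is occupied), so we expand the upper
    bracket until it turns negative; samples more uniform than the
    Dirichlet expectation push beta* toward infinity, which we cap."""
    while eq12_lhs(hi, counts) > 0:
        hi *= 2.0
        if hi > 1e8:
            return hi  # effectively beta* -> infinity
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if eq12_lhs(mid, counts) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The resulting β⋆ can then be plugged into Eq. (5) for the distribution itself or into Eq. (15) for the entropy.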
FIG. 1. Examples of target distributions. First row: three categorical distributions sampled from uniform Dirichlet with β = 0.01, 1, 10, respectively. Second row: a categorical distribution sampled from a uniform Dirichlet with β = 1, but where half of the bins are set to zero; Zipf's distribution with exponent a = 1.001; bimodal distribution: two Gaussians with {mean, standard deviation} of {10, 20} and {100, 5}, respectively, are concatenated and then discretized over a histogram of 1000 categories.
IV. RESULTS

We test our method in a variety of scenarios and compare the results with the main alternative estimators available, the NSB [12, 13] and the Hausser-Strimmer (HS) [14]. In our experiments, we generate synthetic target distributions and sample multinomial counts {n_i} from those distributions. We fix K = 1000 and generate samples of increasing size N = 20, ..., 10000. After calculating β⋆ from (12), we estimate the Shannon entropy S and the Kullback-Leibler divergence D_KL. For each case, we repeat this procedure 1000 times; we always report averages over these repetitions [24].
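This experimental pipeline can be sketched with the standard library alone; the snippet below (our code, with K and N reduced for illustration) draws a Dirichlet-distributed target via the usual normalized-Gamma construction and then samples multinomial counts from it:

```python
import random

def sample_dirichlet(K, beta, rng):
    """Draw rho ~ Dirichlet(beta, ..., beta) via normalized Gamma variates."""
    g = [rng.gammavariate(beta, 1.0) for _ in range(K)]
    total = sum(g)
    return [x / total for x in g]

def sample_counts(rho, N, rng):
    """Draw N i.i.d. observations from rho and return per-state counts."""
    counts = [0] * len(rho)
    cumulative, acc = [], 0.0
    for p in rho:
        acc += p
        cumulative.append(acc)
    for _ in range(N):
        u = rng.random()
        for i, c in enumerate(cumulative):
            if u <= c:
                counts[i] += 1
                break
        else:
            counts[-1] += 1  # guard against floating-point round-off
    return counts

rng = random.Random(42)
rho = sample_dirichlet(100, 1.0, rng)   # K = 100 instead of 1000
counts = sample_counts(rho, 50, rng)    # sparse regime, N < K
```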
As target distributions (see Fig. 1 as reference) we consider categorical distributions that are both typical in the Dirichlet prior (that is, they are generated by a symmetric Dirichlet prior; we use several values of the concentration parameter, β = 0.01, 1, 10) and atypical in the Dirichlet prior (that is, they cannot be attributed to, or have a negligible probability of being generated from, a symmetric Dirichlet prior). Among the latter, we consider: (i) distributions with added structural zeroes (that is, we sample from a symmetric Dirichlet prior with a given β, but half of the categories are then forced to have zero probability) [25]; (ii) bimodal distributions, which represent, for example, the degree distributions of core-periphery complex networks [26]; (iii) Zipf's distribution, ubiquitous in nature, in biological as well as social systems [27], characterized by probabilities ρ_i ∝ i^{-a}, with exponent a ≥ 1 [28].
A. Shannon entropy

To estimate the posterior p(S|n) of the Shannon entropy we use the exact formulas for its moments, derived in Refs. [16, 17] (later refined in Ref. [18]). The first moment is given by

    E[S|n, \beta] = \int d\rho \, S(\rho) \, p(\rho|n, \beta) = \psi_0(N + K\beta + 1) - \sum_{i=1}^{K} \frac{n_i + \beta}{N + K\beta} \, \psi_0(n_i + \beta + 1) .    (15)

In Appendix B we also show the expression of the standard deviation.
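A direct transcription of Eq. (15) needs only the digamma function ψ0, which is easy to evaluate with the standard recurrence-plus-asymptotic-series scheme; the sketch below (our code, entropy in nats) is one such transcription:

```python
import math

def digamma(x):
    """psi_0(x) via the recurrence psi(x) = psi(x+1) - 1/x
    plus the asymptotic series for x >= 6."""
    result = 0.0
    while x < 6.0:
        result -= 1.0 / x
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    result += math.log(x) - 0.5 * inv \
        - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))
    return result

def entropy_posterior_mean(counts, beta):
    """E[S | n, beta], Eq. (15); result in nats."""
    N = sum(counts)
    K = len(counts)
    s = digamma(N + K * beta + 1.0)
    for n in counts:
        s -= (n + beta) / (N + K * beta) * digamma(n + beta + 1.0)
    return s
```

As a sanity check, for a large balanced sample the estimate approaches log K, and with no data at all it returns the prior expectation of the entropy for the chosen β.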
In practice, given a dataset n we calculate the most probable β⋆ from Eq. (12) by assuming either a flat hyperprior, Eq. (13), or the NSB hyperprior, Eq. (7). Then, we compute the required moments of the Shannon entropy; we indicate the estimated values of the Shannon entropy as S(β⋆_flat) and S(β⋆_NSB), respectively. In Figs. 2 and 3, we show that our estimator with a flat hyperprior is the most accurate estimator overall. In particular, S(β⋆_flat) is consistently more accurate than the NSB estimator, except in the deep sparse regime N < 30 of two of the distributions atypical in the Dirichlet prior, where it is comparable but slightly less accurate. The Bayesian estimators also behave better than the HS estimator S_HS, except for very uniform distributions sampled from the Dirichlet prior with β = 10. Overall, S(β⋆_flat) has little bias, often even in the very sparse regime and for distributions atypical in the Dirichlet prior. It is also interesting to note that both S(β⋆_flat) and S(β⋆_NSB) have a more regular scaling behavior in the convergence toward the true values as N increases, in particular when compared with NSB and HS for Zipf's distribution.

We also analyze the variability of the Shannon entropy estimates, as measured by the root mean squared error (insets in Figs. 2 and 3). This analysis reveals that, besides having less bias, the S(β⋆_flat) estimator has a variability that is typically comparable to or smaller than that of the other estimators. It is also worth noting that, differently from the Bayesian estimators, for which all the moments can be estimated even from a single sample, the HS estimator is limited to a point estimate of the mean value of the Shannon entropy.

Note that, contrary to what one may expect, S_NSB differs from our estimate S(β⋆_NSB) in that the latter is always smaller for small samples. This happens because the NSB hyperprior (7) is a positive, monotonically decreasing function that assigns higher probabilities to smaller β's, while the Shannon entropy of distributions sampled from a symmetric Dirichlet is a monotonically increasing function of β. Estimating β⋆ with the NSB hyperprior and then plugging it into (15) is thus not the same as directly estimating the Shannon entropy with the NSB prior (6), and the latter in fact provides better results. On the other hand, S(β⋆_flat) and S_NSB should not differ substantially, being based on the same first principles of estimation. The differences are attributable to the numerical and computational difficulties of implementing the NSB approach, which requires both a fine discretization over β and solving as many copies of Eq. (15) as there are values of β, which finally have to be integrated with weights given by the hyperprior (7); in contrast, our approach needs solving just Eq. (12) and Eq. (15) once.

FIG. 2. Shannon entropy estimation for distributions typical in a Dirichlet prior, for β = 0.01, 1, 10 and sample size N = 25, ..., 10000. Each point corresponds to an average over 1000 samples. The S_true in the titles serves as a reference and indicates the average over the entropies of the runs. Main plots: relative errors of the entropies, ΔS_rel = (S_est − S_true)/S_true. Insets: root mean-squared errors (note the logarithmic scale on both axes). Black squares: our estimator with β⋆ from a flat hyperprior. Cyan pluses: our estimator with β⋆ from the NSB hyperprior. Pink upper triangles: NSB estimator. Red crosses: Hausser-Strimmer plug-in estimator. Here and in the rest of the figures, the standard-error bars of the main plots are smaller than the symbols and are not shown.

FIG. 3. Shannon entropy estimation for distributions atypical in a Dirichlet prior (same legend as in Fig. 2): Dirichlet with β = 1 but half of the bins set to zero; Zipf's distribution with exponent a = 1.001; bimodal distribution.
833
+
834
+ 6
835
+ B.
836
+ Kullback-Leibler divergence
837
+ Regarding the Kullback-Leibler divergence DKL, there are
838
+ no exact formulas for the moments of the posterior distribu-
839
+ tion p(DKL|n). Therefore, we have to rely on a point estimate
840
+ of the mean by first estimating the distributions via Laplace’s
841
+ formula (5) with the inferred β⋆ and then plugging these val-
842
+ ues into expression (8). The flat hyperprior in Eq. (13) is the
843
+ only reasonable one to estimate β⋆ in this case, since the NSB
844
+ prior (Eq. (7)) can only be justified for the Shannon entropy.
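Concretely, the plug-in pipeline amounts to smoothing both empirical distributions with Eq. (5) and then evaluating Eq. (8). A minimal sketch (our naming; Eq. (5) is repeated so the block is self-contained, and β is set by hand here, whereas in our method it would come from Eq. (12)):

```python
import math

def laplace_estimate(counts, beta):
    """Posterior mean of rho, Eq. (5)."""
    N = sum(counts)
    K = len(counts)
    return [(n + beta) / (N + K * beta) for n in counts]

def kl_divergence(p, q):
    """D_KL(p || q) in bits, Eq. (8); terms with p_i = 0 contribute 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# hypothetical counts from two sparsely sampled distributions
n = [5, 3, 0, 2, 0]
m = [1, 4, 1, 0, 4]
p_hat = laplace_estimate(n, 0.5)
q_hat = laplace_estimate(m, 0.5)
dkl = kl_divergence(p_hat, q_hat)
```

Because the Laplace smoothing keeps every q_hat[i] strictly positive, the divergence stays finite even for states never observed in m, which a frequency estimator would send to infinity.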
[Figure 4: three panels showing the estimated DKL versus sample size N for K = 1000 and β = 0.01, 1, 10.]

FIG. 4. Kullback-Leibler estimation for distributions typical in a Dirichlet prior, for β = 0.01, 1, 10 and sample size N = 25 . . . 10000. Each point corresponds to an average over 1000 samples. Black squares: our plug-in estimator, that is, Laplace's formula with β⋆ estimated from a flat hyperprior. Red crosses: Hausser-Strimmer plug-in estimator. Purple circles: Laplace's estimator for uniform prior β = 1.
We compare the results with Laplace's estimator (5) with β = 1 and with the HS estimator, since both have the same desirable property of assigning non-null probabilities to unobserved states (ni = 0) and are suitable estimators for computing DKL. Indeed, β = 1 in Laplace's formula is a common choice and amounts to assigning the same probability to all possible distributions. We test the estimators in a scenario typical of machine learning and variational inference, in which one wants to minimize the DKL between a complex target distribution and some model approximation. Here, after generating a synthetic discrete distribution ρ, we measure DKL(ρ; ρ̂), where ρ̂ is the distribution estimated from counts; hence a good estimator should make DKL as small as possible.
[Figure 5: three panels showing the estimated DKL versus sample size N for a half-empty Dirichlet (K = 1000, β = 1), a Zipf distribution (K = 1000, a = 1.001), and a bimodal distribution (K = 1000).]

FIG. 5. Kullback-Leibler divergence estimation for atypical distributions in a Dirichlet prior (same legend as in Fig. 4): Dirichlet with β = 1 but half of the bins set to zero; Zipf's distribution with exponent a = 1.001; bimodal distribution.
In Figs. 4 and 5, we show that our estimator and the HS estimator provide similar results, although DKL(β⋆) is more accurate in the very sparse regime N < 50 and when the target distributions are atypical in the Dirichlet priors, especially in the important case of Zipf's distributions. The estimator based on Laplace's formula with β = 1 generally performs worse, except in the trivial case in which the target distribution itself was generated from a Dirichlet with β = 1. Importantly, in this case in which β = 1 is optimal, our approach provides virtually identical results.
V. CONCLUSIONS

Inferring the shape of discrete distributions and their information content from experimental data is a fundamental task in fields ranging from machine learning to computational social science and neuroscience. However, experiments commonly yield a very low number of observations, which hinders a correct estimation. In this paper, we have proposed a new method for the solution of this problem that applies to discrete distributions with a known number of states. It rests on the laws of conditional probability, in the form of Bayes' rule, with the explicit assumption of a Dirichlet prior distribution as the mechanism behind the generation of the data. In particular, we are able to provide a semi-analytical formula (Eq. (12)), easily solvable with moderate computational effort, to find the concentration parameter characterizing the Dirichlet distribution. This result is a step forward with respect to many previous works that share the same background but ultimately focused on constructing an infinite mixture of Dirichlet priors, whose weights were chosen to optimize the estimation of the Shannon entropy only [12, 13, 18, 19]. Despite their precision and success, these other approaches are computationally involved and forgo any estimation of the probability distribution itself, which may be necessary, in particular, for the estimation of the Kullback-Leibler divergence. Our approach allows the reconstruction of the posterior distribution of the Shannon entropy for a broad variety of data types, by using the exact formulas in Ref. [16], with a precision comparable to or better than other estimators developed for the same purposes. In the case of the Kullback-Leibler divergence, on the contrary, we were not able to estimate its full posterior distribution, but we obtained a good point estimate of its mean value, by estimating the two probability distributions involved and then plugging them into the explicit expression of the Kullback-Leibler divergence. In regard to this point, and for future studies, it is in general desirable to have some analytical expression for the posterior distribution (conditioned on the observations) of the Kullback-Leibler divergence, in the same spirit as for the Shannon entropy. Further efforts should be devoted to extending the same approach to priors more specific than the Dirichlet, for example for data that follow power-law distributions, including Zipf's laws, for binary distributions [19], or when the number of states is unknown, as in Refs. [18, 20, 29, 30].
VI. ACKNOWLEDGEMENTS

This research was funded by the Social Observatory of the "la Caixa" Foundation as part of the project LCF/PR/SR19/52540009, by MCIN/AEI/10.13039/501100011033 (Project No. PID2019-106811GB-C31) and by the Government of Catalonia (Project No. 2017SGR-896).
[1] Roger Guimerà and Marta Sales-Pardo. Missing and spurious interactions and the reconstruction of complex networks. Proceedings of the National Academy of Sciences, 106(52):22073–22078, 2009.
[2] Tiago P Peixoto. Entropy of stochastic blockmodel ensembles. Physical Review E, 85(5):056122, 2012.
[3] Fred Rieke, David Warland, Rob de Ruyter Van Steveninck, and William Bialek. Spikes: exploring the neural code. MIT Press, 1999.
[4] Rodrigo Quian Quiroga and Stefano Panzeri. Extracting information from neuronal populations: information theory and decoding approaches. Nature Reviews Neuroscience, 10(3):173–185, 2009.
[5] Solomon Kullback and Richard A Leibler. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79–86, 1951.
[6] Laurent Itti and Pierre Baldi. Bayesian surprise attracts human attention. Vision Research, 49(10):1295–1306, 2009.
[7] Alexander TJ Barron, Jenny Huang, Rebecca L Spang, and Simon DeDeo. Individuals, institutions, and innovation in the debates of the French Revolution. Proceedings of the National Academy of Sciences, 115(18):4607–4612, 2018.
[8] Simon DeDeo, Robert XD Hawkins, Sara Klingenstein, and Tim Hitchcock. Bootstrap methods for the empirical study of decision-making and information flows in social systems. Entropy, 15(6):2246–2276, 2013.
[9] Lluc Font-Pomarol, Angelo Piga, Rosa Maria Teruel-Garcia, Sergio Nasarre-Aznar, Marta Sales-Pardo, and Roger Guimerà. Socially disruptive periods and topics from information-theoretical analysis of judicial decisions. EPJ Data Science, 2023. Accepted for publication 12/2022.
[10] Yasaman Bahri, Jonathan Kadmon, Jeffrey Pennington, Sam S Schoenholz, Jascha Sohl-Dickstein, and Surya Ganguli. Statistical mechanics of deep learning. Annual Review of Condensed Matter Physics, 11(1), 2020.
[11] Edwin T Jaynes. Probability Theory: The Logic of Science. Cambridge University Press, 2003.
[12] Ilya Nemenman, Fariel Shafee, and William Bialek. Entropy and inference, revisited. Advances in Neural Information Processing Systems, 14, 2001.
[13] Ilya Nemenman, William Bialek, and Rob De Ruyter Van Steveninck. Entropy and information in neural spike trains: Progress on the sampling problem. Physical Review E, 69(5):056111, 2004.
[14] Jean Hausser and Korbinian Strimmer. Entropy inference and the James-Stein estimator, with application to nonlinear gene association networks. Journal of Machine Learning Research, 10(7), 2009.
[15] Andrew Gelman, John B Carlin, Hal S Stern, and Donald B Rubin. Bayesian Data Analysis. Chapman and Hall/CRC, 1995.
[16] David H Wolpert and David R Wolf. Estimating functions of probability distributions from a finite set of samples. Physical Review E, 52(6):6841, 1995.
[17] David R Wolf and David H Wolpert. Estimating functions of distributions from a finite set of samples, part 2: Bayes estimators for mutual information, chi-squared, covariance and other statistics. arXiv preprint comp-gas/9403002, 1994.
[18] Evan W Archer, Il Memming Park, and Jonathan W Pillow. Bayesian entropy estimation for countable discrete distributions. The Journal of Machine Learning Research, 15(1):2833–2868, 2014.
[19] Evan W Archer, Il Memming Park, and Jonathan W Pillow. Bayesian entropy estimation for binary spike train data using parametric prior knowledge. Advances in Neural Information Processing Systems, 26, 2013.
[20] Anne Chao and Tsung-Jen Shen. Nonparametric estimation of Shannon's index of diversity when there are unseen species in sample. Environmental and Ecological Statistics, 10(4):429–443, 2003.
[21] Evan W Archer, Il Memming Park, and Jonathan W Pillow. Bayesian and quasi-Bayesian estimators for mutual information from discrete data. Entropy, 15(5):1738–1755, 2013.
[22] As observed in [21] and [30], mutual information can be expressed in terms of different combinations of the Shannon entropy of the two distributions, but its estimations in general differ. The expression M(ρ; σ) = S(ρ) + S(σ) − S(π) seems to be the least biased; however, in the absence of a unique consistent prior over the joint distribution, it is not guaranteed to minimize the mean-squared error.
[23] Although we have not proved that the solution β⋆ is unique, it seems reasonable that it is and, indeed, our simulations suggest that, even for N ≪ K, if a finite β⋆ exists, it is unique.
[24] Averaging over multiple runs is preferable in order to highlight the scaling behaviors of the estimators while mitigating the effects of outliers (for example, very singular distributions or samples).
[25] This scenario corresponds to an experiment in which some states are not observable.
[26] Xiao Zhang, Travis Martin, and Mark EJ Newman. Identification of core-periphery structure in networks. Physical Review E, 91(3):032803, 2015.
[27] Mark EJ Newman. Power laws, Pareto distributions and Zipf's law. Contemporary Physics, 46(5):323–351, 2005.
[28] In Refs. [12, 13] a rigorous definition of atypicality is provided, related to the shape of the tails of a Zipf's distribution.
[29] Gregory Valiant and Paul Valiant. Estimating the unseen: improved estimators for entropy and other properties. Journal of the ACM (JACM), 64(6):1–41, 2017.
[30] David H Wolpert and Simon DeDeo. Estimating functions of distributions defined over spaces of unknown size. Entropy, 15(11):4668–4699, 2013.
Appendix A: Derivation of results (Eq. (12) in main text)

Let us suppose that we have K different categories (or types of random events) and that we observe N independent random events distributed in the K categories, n = {n_i; i = 1, . . . , K}, with \sum_i n_i = N. We also assume that the probabilities ρ_i of observing counts in each category are distributed according to a Dirichlet prior with the same hyperparameter β for all ρ = {ρ_i; i = 1, . . . , K}, so that

    p(\rho|\beta) = \frac{1}{B_K(\beta)} \prod_{i=1}^{K} \rho_i^{\beta-1}, \qquad B_K(\beta) = \frac{\Gamma(\beta)^K}{\Gamma(\beta K)}.   (A1)
Our goal is to compute the most likely value of β given the observed counts {n_i}. To that end, we need to compute the conditional probability p(β|n). We can do this by marginalizing over the possible combinations of ρ = {ρ_i} as follows:

    p(\beta|n) = \frac{p(\beta)}{p(n)} \, p(n|\beta), \qquad p(n|\beta) = \int d\rho \; p(n|\beta, \rho) \, p(\rho|\beta).   (A2)

Since the probability of observing an event in category i is ρ_i, the probability of observing n_i events of type i is ρ_i^{n_i}. Therefore, for the integral in Eq. (A2) we have that

    p(n|\beta, \rho) = \prod_{i=1}^{K} \rho_i^{n_i},   (A3)

so that

    p(n|\beta) = \frac{1}{B_K(\beta)} \int d\rho \prod_{i=1}^{K} \rho_i^{n_i+\beta-1},   (A4)

where we have used Eq. (A1) for p(ρ|β), and the integral is over the simplex that satisfies the condition \sum_{i=1}^{K} \rho_i = 1.
To perform the integrals above, we first use the normalization condition to set \rho_K = 1 - R_{K-1}, with R_{K-1} = \sum_{i=1}^{K-1} \rho_i, so that for ρ_{K−1} we have the following integral:

    I_{K-1} = \int_0^{1-R_{K-2}} d\rho_{K-1} \; \rho_{K-1}^{n_{K-1}+\beta-1} \, (1 - \rho_{K-1} - R_{K-2})^{n_K+\beta-1}.   (A5)

To evaluate this integral we use the fact that

    \int_0^{1-R} dx \; x^a (1 - x - R)^b = \frac{\Gamma(a+1)\Gamma(b+1)}{\Gamma(a+b+2)} \, (1-R)^{a+b+1} \quad \text{if } \operatorname{Re}(R) < 1 \text{ and } \operatorname{Im}(R) = 0,   (A6)

so that

    I_{K-1} = \frac{\Gamma(n_{K-1}+\beta)\,\Gamma(n_K+\beta)}{\Gamma(n_K+n_{K-1}+2\beta)} \, (1-R_{K-2})^{n_K+n_{K-1}+2\beta-1},   (A7)

which gives for ρ_{K−2} the following integral:

    I_{K-2} = \int_0^{1-R_{K-3}} d\rho_{K-2} \; \rho_{K-2}^{n_{K-2}+\beta-1} \, (1 - \rho_{K-2} - R_{K-3})^{n_K+n_{K-1}+2\beta-1}   (A8)
            = \frac{\Gamma(n_{K-2}+\beta)\,\Gamma(n_K+n_{K-1}+2\beta)}{\Gamma(n_K+n_{K-1}+n_{K-2}+3\beta)} \, (1-R_{K-3})^{n_K+n_{K-1}+n_{K-2}+3\beta-1},   (A9)

which we have evaluated using Eq. (A6). If we iterate this for all ρ_i we end up with

    \int d\rho \prod_i \rho_i^{n_i+\beta-1} = \prod_{i=1}^{K} I_i = \frac{\prod_{i=1}^{K} \Gamma(n_i+\beta)}{\Gamma(N+K\beta)}.   (A10)
Thus, we obtain the following expression for p(n|β):

    p(n|\beta) = \frac{1}{B_K(\beta)} \frac{\prod_i \Gamma(n_i+\beta)}{\Gamma(N+K\beta)} = \frac{\Gamma(K\beta)}{\Gamma(\beta)^K} \frac{\prod_i \Gamma(n_i+\beta)}{\Gamma(N+K\beta)}.   (A11)
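As a sketch (our own illustration, not code from the paper), Eq. (A11) can be evaluated stably in log space with `scipy.special.gammaln`; for small K and N one can check the normalization by summing over all ordered outcomes:

```python
from math import comb, exp
from scipy.special import gammaln

def log_evidence(counts, beta):
    """log p(n|beta) from Eq. (A11), computed with log-Gamma for stability."""
    K, N = len(counts), sum(counts)
    return (gammaln(K * beta) - K * gammaln(beta)
            + sum(gammaln(ni + beta) for ni in counts)
            - gammaln(N + K * beta))

# Consistency check for K = 2, N = 3: p(n|beta) is the probability of one
# ordered sequence with counts n, so summing over count vectors weighted
# by the multinomial coefficient must give 1.
total = sum(comb(3, n1) * exp(log_evidence([n1, 3 - n1], 0.7)) for n1 in range(4))
print(total)  # ≈ 1.0
```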
Our goal is to find the β⋆ that maximizes p(β|n) = [p(β)/p(n)] p(n|β). To that end we take the derivative of log p(β|n),

    \log p(\beta|n) = \log\Gamma(K\beta) - K\log\Gamma(\beta) + \sum_i \log\Gamma(n_i+\beta) - \log\Gamma(N+K\beta) + \log p(\beta) - \log p(n),   (A12)

so that β⋆ is the value that satisfies the condition

    \left. \frac{d \log p(\beta|n)}{d\beta} \right|_{\beta=\beta^\star} = 0.   (A13)

To evaluate this equation we use the following definitions and properties of the log-Gamma function:

1.  \left( \frac{d}{dx} \right)^{m+1} \log\Gamma(x) = \psi_m(x),   (A14)

2.  \psi_0(x+n) = \sum_{m=0}^{n-1} \frac{1}{x+m} + \psi_0(x).   (A15)

Using the expressions above and the consideration that p(β) = const., we obtain

    \frac{d \log p(\beta|n)}{d\beta} = K\psi_0(K\beta) - K\psi_0(\beta) + \sum_i \psi_0(n_i+\beta) - K\psi_0(N+K\beta)   (A16)
    = \sum_{i=1}^{K} \sum_{m=0}^{n_i-1} \frac{1}{m+\beta} - \sum_{m=0}^{N-1} \frac{K}{m+K\beta}.   (A17)

Therefore the condition that gives β⋆ is

    \sum_{i=1}^{K} \sum_{m=0}^{n_i-1} \frac{1}{m+\beta^\star} - \sum_{m=0}^{N-1} \frac{K}{m+K\beta^\star} = 0,   (A18)
that is, Eq. (12) in the main text for the uniform hyperprior (13). If instead we consider a prior for β that results in a close-to-uniform distribution of the Shannon entropy, as in Nemenman et al. [12, 13], then

    p_{\mathrm{NSB}}(\beta) = \frac{d\bar{S}}{d\beta},   (A19)

with \bar{S} = E[S|n_i = 0, \beta] = \psi_0(K\beta + 1) - \psi_0(\beta + 1), the average entropy of the distributions generated from a Dirichlet prior p(ρ|β). Note that this prior is already normalized, since \int_0^\infty (d\bar{S}/d\beta) \, d\beta = \bar{S}(\infty; K) - \bar{S}(0; K) = 1. The derivative of the logarithm of this prior with respect to β is then

    \frac{d \log p_{\mathrm{NSB}}(\beta)}{d\beta} = \frac{1}{p_{\mathrm{NSB}}(\beta)} \frac{dp_{\mathrm{NSB}}(\beta)}{d\beta} = \frac{1}{d\bar{S}/d\beta} \frac{d^2\bar{S}}{d\beta^2} = \frac{K^2\psi_2(K\beta + 1) - \psi_2(\beta + 1)}{K\psi_1(K\beta + 1) - \psi_1(\beta + 1)},

which is the formula (12) in main text. The condition for the β⋆ that maximizes p(β|n) is in this case:

    \frac{d \log p(\beta|n)}{d\beta} = K\psi_0(K\beta) - K\psi_0(\beta) + \sum_i \psi_0(n_i+\beta) - K\psi_0(N+K\beta) + \frac{1}{d\bar{S}/d\beta} \frac{d^2\bar{S}}{d\beta^2}
    = \sum_{i=1}^{K} \sum_{m=0}^{n_i-1} \frac{1}{m+\beta^\star} - \sum_{m=0}^{N-1} \frac{K}{m+K\beta^\star} + \frac{K^2\psi_2(K\beta^\star + 1) - \psi_2(\beta^\star + 1)}{K\psi_1(K\beta^\star + 1) - \psi_1(\beta^\star + 1)} = 0.
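Condition (A18), the flat-hyperprior case, is a one-dimensional root-finding problem. A minimal sketch (our illustration; function names are ours) uses the digamma identity of Eq. (A15) to write the left-hand side in closed form and brackets the root numerically; note that a finite β⋆ need not exist for near-uniform counts, in which case the bracketing fails:

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def dlogp(beta, counts):
    """d log p(beta|n)/d beta for a flat hyperprior, Eqs. (A16)-(A17)."""
    counts = np.asarray(counts, dtype=float)
    K, N = len(counts), counts.sum()
    return (np.sum(digamma(counts + beta)) - K * digamma(beta)
            - K * digamma(N + K * beta) + K * digamma(K * beta))

def beta_star(counts, lo=1e-4, hi=1e4):
    """Solve Eq. (A18) by bracketing; raises ValueError if no sign change
    occurs in [lo, hi] (e.g. when no finite maximizer exists)."""
    return brentq(dlogp, lo, hi, args=(counts,))

counts = [50, 20, 10, 5, 5, 3, 3, 2, 1, 1] + [0] * 10  # K = 20, N = 100
bs = beta_star(counts)
```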
Appendix B: Analytical moments of the Shannon entropy posterior

In the specific case of S(ρ), instead of solving p(F|n) = \int d\rho \, \delta(F - F(\rho)) \, p(\rho|n) (Eq. (2) in main text) directly, it is possible to obtain closed-form expressions for all the moments of the posterior [16–18]. Here we report the first two: the mean,

    E[S|n, \beta] = \int d\rho \, S(\rho) \, p(\rho|n) = \psi_0(N + K\beta + 1) - \sum_{i=1}^{K} \frac{n_i + \beta}{N + K\beta} \, \psi_0(n_i + \beta + 1),   (B1)

and the second moment,

    E[S^2|n, \beta] = \int d\rho \, S(\rho)^2 \, p(\rho|n) = \sum_{i \neq j}^{K} \frac{(n_i + \beta)(n_j + \beta)}{(N + K\beta + 1)(N + K\beta)} \, I_{i,j} + \sum_{i=1}^{K} \frac{(n_i + \beta + 1)(n_i + \beta)}{(N + K\beta + 1)(N + K\beta)} \, J_i,   (B2)

with

    I_{i,j} = \left[ \psi_0(n_i + \beta + 1) - \psi_0(N + K\beta + 2) \right] \cdot \left[ \psi_0(n_j + \beta + 1) - \psi_0(N + K\beta + 2) \right] - \psi_1(N + K\beta + 2);
    J_i = \left[ \psi_0(n_i + \beta + 2) - \psi_0(N + K\beta + 2) \right]^2 + \psi_1(n_i + \beta + 2) - \psi_1(N + K\beta + 2);   (B3)

from which the standard deviation is in turn calculated as the square root of the variance, Var(S|n, β) = E[S²|n, β] − E[S|n, β]².
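Eqs. (B1)–(B3) translate directly into code; the sketch below (our own, using SciPy's polygamma functions) returns the posterior mean and standard deviation of S in nats. For empty counts (all n_i = 0) the mean reduces to ψ_0(Kβ + 1) − ψ_0(β + 1), the prior-average entropy quoted in Appendix A.

```python
import numpy as np
from scipy.special import digamma, polygamma

def entropy_posterior_moments(counts, beta):
    """Posterior mean and std of the Shannon entropy (nats), Eqs. (B1)-(B3)."""
    a = np.asarray(counts, dtype=float) + beta   # a_i = n_i + beta
    A = a.sum()                                  # A = N + K*beta
    mean = digamma(A + 1) - np.sum(a / A * digamma(a + 1))

    d = digamma(a + 1) - digamma(A + 2)
    Iij = np.outer(d, d) - polygamma(1, A + 2)                       # I_{i,j}
    Ji = (digamma(a + 2) - digamma(A + 2))**2 \
         + polygamma(1, a + 2) - polygamma(1, A + 2)                 # J_i
    w = np.outer(a, a) / ((A + 1) * A)
    off_diag = (w * Iij).sum() - (np.diag(w) * np.diag(Iij)).sum()   # i != j
    diag = np.sum((a + 1) * a / ((A + 1) * A) * Ji)
    second = off_diag + diag
    return mean, np.sqrt(max(second - mean**2, 0.0))
```

As a sanity check, for two states with counts (500, 500) the posterior mean approaches ln 2, and the posterior standard deviation shrinks as N grows.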