jackkuo committed on
Commit bd539b3 · verified · 1 Parent(s): 6dccbd3

Add files using upload-large-folder tool

Files changed (50)
  1. -NFRT4oBgHgl3EQfrTfI/content/tmp_files/2301.13620v1.pdf.txt +1801 -0
  2. -NFRT4oBgHgl3EQfrTfI/content/tmp_files/load_file.txt +0 -0
  3. .gitattributes +51 -0
  4. 09AzT4oBgHgl3EQfDPrL/content/2301.00974v1.pdf +3 -0
  5. 09AzT4oBgHgl3EQfDPrL/vector_store/index.faiss +3 -0
  6. 09AzT4oBgHgl3EQfDPrL/vector_store/index.pkl +3 -0
  7. 1NFPT4oBgHgl3EQfUTQQ/content/tmp_files/2301.13056v1.pdf.txt +1506 -0
  8. 1NFPT4oBgHgl3EQfUTQQ/content/tmp_files/load_file.txt +0 -0
  9. 1tFIT4oBgHgl3EQf4CvP/vector_store/index.faiss +3 -0
  10. 39AyT4oBgHgl3EQf1_mj/content/2301.00744v1.pdf +3 -0
  11. 39AyT4oBgHgl3EQf1_mj/vector_store/index.pkl +3 -0
  12. 39E2T4oBgHgl3EQfjgft/vector_store/index.pkl +3 -0
  13. 3NFAT4oBgHgl3EQflB25/content/2301.08615v1.pdf +3 -0
  14. 3NFAT4oBgHgl3EQflB25/vector_store/index.pkl +3 -0
  15. 4dE1T4oBgHgl3EQf6QUq/content/2301.03520v1.pdf +3 -0
  16. 4dE1T4oBgHgl3EQf6QUq/vector_store/index.pkl +3 -0
  17. 59AyT4oBgHgl3EQfcfe1/content/tmp_files/2301.00285v1.pdf.txt +1073 -0
  18. 59AyT4oBgHgl3EQfcfe1/content/tmp_files/load_file.txt +451 -0
  19. 59E1T4oBgHgl3EQfTAPi/content/tmp_files/2301.03074v1.pdf.txt +1442 -0
  20. 59E1T4oBgHgl3EQfTAPi/content/tmp_files/load_file.txt +0 -0
  21. 5dFIT4oBgHgl3EQf7yth/content/tmp_files/2301.11399v1.pdf.txt +1893 -0
  22. 5dFIT4oBgHgl3EQf7yth/content/tmp_files/load_file.txt +0 -0
  23. 8NE5T4oBgHgl3EQfQg5R/content/tmp_files/2301.05513v1.pdf.txt +1532 -0
  24. 9tE0T4oBgHgl3EQffwDm/content/tmp_files/2301.02410v1.pdf.txt +1798 -0
  25. 9tE0T4oBgHgl3EQffwDm/content/tmp_files/load_file.txt +0 -0
  26. A9AyT4oBgHgl3EQf3_rL/content/2301.00780v1.pdf +3 -0
  27. A9AyT4oBgHgl3EQf3_rL/vector_store/index.pkl +3 -0
  28. BdAzT4oBgHgl3EQfh_0J/vector_store/index.faiss +3 -0
  29. CNAzT4oBgHgl3EQfTvwq/vector_store/index.faiss +3 -0
  30. CdE0T4oBgHgl3EQfyQI7/content/tmp_files/2301.02656v1.pdf.txt +0 -0
  31. CdE0T4oBgHgl3EQfyQI7/content/tmp_files/load_file.txt +0 -0
  32. ENAyT4oBgHgl3EQfSPf9/vector_store/index.faiss +3 -0
  33. FdE1T4oBgHgl3EQfqwVD/vector_store/index.pkl +3 -0
  34. FtE1T4oBgHgl3EQfEwPV/content/tmp_files/2301.02895v1.pdf.txt +2479 -0
  35. FtE1T4oBgHgl3EQfEwPV/content/tmp_files/load_file.txt +0 -0
  36. GdAzT4oBgHgl3EQfHfsT/content/tmp_files/2301.01044v1.pdf.txt +1583 -0
  37. GdAzT4oBgHgl3EQfHfsT/content/tmp_files/load_file.txt +0 -0
  38. GtAzT4oBgHgl3EQfUvxQ/vector_store/index.faiss +3 -0
  39. I9E1T4oBgHgl3EQfGAMz/content/tmp_files/2301.02908v1.pdf.txt +1168 -0
  40. I9E1T4oBgHgl3EQfGAMz/content/tmp_files/load_file.txt +0 -0
  41. IdE2T4oBgHgl3EQfowif/content/tmp_files/2301.04022v1.pdf.txt +2070 -0
  42. IdE2T4oBgHgl3EQfowif/content/tmp_files/load_file.txt +0 -0
  43. JdE0T4oBgHgl3EQfSACX/content/tmp_files/2301.02216v1.pdf.txt +1950 -0
  44. JdE0T4oBgHgl3EQfSACX/content/tmp_files/load_file.txt +0 -0
  45. K9E0T4oBgHgl3EQfSgAu/content/2301.02222v1.pdf +3 -0
  46. K9E0T4oBgHgl3EQfSgAu/vector_store/index.pkl +3 -0
  47. K9E0T4oBgHgl3EQfigH5/content/tmp_files/2301.02448v1.pdf.txt +2484 -0
  48. K9E0T4oBgHgl3EQfigH5/content/tmp_files/load_file.txt +0 -0
  49. KNFOT4oBgHgl3EQfyzQ_/content/tmp_files/2301.12929v1.pdf.txt +2263 -0
  50. KNFOT4oBgHgl3EQfyzQ_/content/tmp_files/load_file.txt +0 -0
-NFRT4oBgHgl3EQfrTfI/content/tmp_files/2301.13620v1.pdf.txt ADDED
@@ -0,0 +1,1801 @@
A Maximum Principle for Optimal Control Problems involving Sweeping Processes with a Nonsmooth Set

M. d. R. de Pinho, M. Margarida A. Ferreira∗ and Georgi Smirnov†

February 1, 2023
Abstract

We generalize a Maximum Principle for optimal control problems involving sweeping systems, previously derived in [14], to cover the case where the moving set may be nonsmooth. Notably, we consider problems with a constrained end point. A remarkable feature of our work is that we rely upon an ingenious smooth approximating family of standard differential equations, in the vein of the one used in [10].

Keywords: Sweeping Process Optimal Control, Maximum Principle, Approximations
1 Introduction

In recent years, there has been a surge of interest in optimal control problems involving the controlled sweeping process of the form

    ẋ(t) ∈ f(t, x(t), u(t)) − N_{C(t)}(x(t)),  u(t) ∈ U,
    x(0) ∈ C0.                                          (1.1)

In this respect, we refer to, for example, [3], [4], [5], [8], [9], [16], [23], [10] (see also the accompanying correction [11]), [6], [15] and [14]. Sweeping processes first appeared in the seminal paper [18] by J. J. Moreau as a mathematical framework for problems in plasticity and friction theory. They have proved of interest to tackle problems in mechanics, engineering, economics and crowd motion; to name but a few, see [1], [5], [16], [17] and [21]. In the last decades, systems of the form (1.1) have caught the attention and interest of the optimal control community. Such interest resides not only in the range of applications but also in the remarkable challenge they raise concerning the derivation of necessary conditions. This is due to the presence of the normal cone N_{C(t)}(x(t)) in the dynamics. Indeed, the normal cone makes the right-hand side of the differential inclusion in (1.1) discontinuous, destroying a regularity property central to many known optimal control results.

Lately, there have been several successful attempts to derive necessary conditions for optimal control problems involving (1.1). Assuming that the set C is time independent, necessary conditions for optimal control problems with free end point have been derived under different assumptions and using different techniques. In [10], the set C has the form C = {x : ψ(x) ≤ 0} and an approximating sequence of optimal control problems is used, where (1.1) is approximated by the differential equation

    ẋ_{γk}(t) = f(t, x_{γk}(t), u(t)) − γk e^{γk ψ(x_{γk}(t))} ∇ψ(x_{γk}(t)),      (1.2)

for some positive sequence γk → +∞. Similar techniques are also applied to somewhat more general problems in [23]. A useful feature of those approximations is explored in [12] to define numerical schemes to solve such problems.
∗ M. d. R. de Pinho and M. M. A. Ferreira are at Faculdade de Engenharia da Universidade do Porto, DEEC, SYSTEC, Portugal; mrpinho, [email protected]
† G. Smirnov is at Universidade do Minho, Dep. Matemática, Physics Center of Minho and Porto Universities (CF-UM-UP), Campus de Gualtar, Braga, Portugal; [email protected]

arXiv:2301.13620v1 [math.OC] 31 Jan 2023
More recently, an adaptation of the family of approximating systems (1.2) is used in [14] to generalize the results in [10] to cover problems with additional end point constraints and with a moving set of the form C(t) = {x : ψ(t, x) ≤ 0}.

In this paper we generalize the Maximum Principle proved in [14] to cover problems with possibly nonsmooth sets. Our problem of interest is

(P)  Minimize φ(x(T))
     over processes (x, u) such that
       ẋ(t) ∈ f(t, x(t), u(t)) − N_{C(t)}(x(t)),  a.e. t ∈ [0, T],
       u(t) ∈ U,  a.e. t ∈ [0, T],
       (x(0), x(T)) ∈ C0 × CT ⊂ C(0) × C(T),

where T > 0 is fixed, φ : Rn → R, f : [0, T] × Rn × Rm → Rn, U ⊂ Rm and

    C(t) := { x ∈ Rn : ψi(t, x) ≤ 0, i = 1, . . . , I }      (1.3)

for some functions ψi : [0, T] × Rn → R, i = 1, . . . , I.

The case where I = 1 in (1.3) and ψ1 is C2 is covered in [14]. Here, we assume I > 1 and that the functions ψi are also C2. Although going from I = 1 to I > 1 in (1.3) may be seen as a small generalization, it demands a significant revision of the technical approach and, moreover, the introduction of a constraint qualification. This is because the set (1.3) may be nonsmooth. We focus on sets (1.3) satisfying a certain constraint qualification, introduced in assumption (A1) in section 2 below. This is, indeed, a restriction on the nonsmoothness of (1.3). A similar problem with a nonsmooth moving set is considered in [15]. Our results cannot be obtained from the results of [15] and do not generalize them.

This paper is organized in the following way. In section 2, we introduce the main notation and we state and discuss the assumptions under which we work. In the same section, we also introduce the family of approximating systems to ẋ(t) ∈ f(t, x(t), u(t)) − N_{C(t)}(x(t)) and establish a crucial convergence result, Theorem 2.2. In section 3, we dwell on the approximating family of optimal control problems to (P) and we state the associated necessary conditions. The Maximum Principle for (P) is then deduced and stated in Theorem 4.1, covering, additionally, problems in the form of (P) where the end point constraint x(T) ∈ CT is absent. Before finishing, we present an illustrative example of our main result, Theorem 4.1.
2 Preliminaries

In this section, we introduce a summary of the notation and state the assumptions on the data of (P) enforced throughout. Furthermore, we extract information from the assumptions, establishing relations crucial for the forthcoming analysis.

Notation

For a set S ⊂ Rn, ∂S, cl S and int S denote the boundary, closure and interior of S. If g : Rp → Rq, ∇g represents the derivative and ∇²g the second derivative. If g : R × Rp → Rq, then ∇x g represents the derivative w.r.t. x ∈ Rp and ∇²x g the second derivative, while ∂t g(t, x) represents the derivative w.r.t. t ∈ R.

The Euclidean norm or the induced matrix norm on Rp×q is denoted by |·|. We denote by Bn the closed unit ball in Rn centered at the origin. The inner product of x and y is denoted by ⟨x, y⟩. For some A ⊂ Rn, d(x, A) denotes the distance between x and A. We denote the support function of A at z by S(z, A) = sup{⟨z, a⟩ | a ∈ A}.

The space L∞([a, b]; Rp) (or simply L∞ when the domains are clearly understood) is the Lebesgue space of essentially bounded functions h : [a, b] → Rp. We say that h ∈ BV([a, b]; Rp) if h is a function of bounded variation. The space of continuous functions is denoted by C([a, b]; Rp).
Standard concepts from nonsmooth analysis will also be used. These can be found in [7], [19] or [22], to name but a few. The Mordukhovich normal cone to a set S at s ∈ S is denoted by NS(s), and ∂f(s) is the Mordukhovich subdifferential of f at s (also known as the limiting subdifferential). For any set A ⊂ Rn, cone A is the cone generated by the set A.

We now turn to problem (P). We first state the definition of admissible processes for (P) and then we describe the assumptions under which we will derive our main results.

Definition 2.1 A pair (x, u) is called an admissible process for (P) when x is an absolutely continuous function and u is a measurable function satisfying the constraints of (P).
Assumptions on the data of (P)

A1: The functions ψi, i = 1, . . . , I, are C2. The graph of C(·) is compact and it is contained in the interior of a ball rBn+1, for some r > 0. There exist constants β > 0, η > 0 and ρ ∈ ]0, 1[ such that

    ψi(t, x) ∈ [−β, β] =⇒ |∇xψi(t, x)| > η  for all (t, x) ∈ [0, T] × Rn,      (2.1)

and, for I(t, x) = {i = 1, . . . , I | ψi(t, x) ∈ ]−2β, β]},

    ⟨∇xψi(t, x), ∇xψj(t, x)⟩ ≥ 0,  i, j ∈ I(t, x).      (2.2)

Moreover, if i ∈ I(t, x), then

    Σ_{j∈I(t,x)\{i}} |⟨∇xψi(t, x), ∇xψj(t, x)⟩| ≤ ρ |∇xψi(t, x)|²      (2.3)

and

    ψi(t, x) ≤ −2β =⇒ ∇ψi(t, x) = 0  for i = 1, . . . , I.      (2.4)

A2: The function f is continuous, and x → f(t, x, u) is continuously differentiable for all (t, u) ∈ [0, T] × Rm. The constant M > 0 is such that |f(t, x, u)| ≤ M and |∇xf(t, x, u)| ≤ M for all (t, x, u) ∈ rBn+1 × U.

A3: For each (t, x), the set f(t, x, U) is convex.

A4: The set U is compact.

A5: The sets C0 and CT are compact.

A6: There exists a constant Lφ such that |φ(x) − φ(x′)| ≤ Lφ|x − x′| for all x, x′ ∈ Rn.
Assumption (A1) concerns the functions ψi defining the set C and it plays a crucial role in the analysis. All ψi are assumed to be smooth, with gradients bounded away from the origin when ψi takes values in a neighborhood of zero. Moreover, the boundary of C may be nonsmooth at the intersection points of the level sets {x : ψi(t, x) = 0}. However, nonsmoothness at those corner points is restricted by (2.2), which excludes the cases where the angle between the two gradients of the functions defining the boundary of C is obtuse; see Figure 1.

On the other hand, (2.3) guarantees that the Gramian matrix of the gradients of the functions taking values near the boundary of C(t) is diagonally dominant and, hence, the gradients are linearly independent.

In many situations, as in the example we present in the last section, we can guarantee the fulfillment of (A1), in particular (2.4), by replacing the function ψi by

    ˜ψi(t, x) = h ◦ ψi(t, x),      (2.5)
[Figure 1: Examples of two different sets C. On the left, a set that does not satisfy (2.2); on the right, a nonsmooth set C that fulfils (2.2).]
where

    h(z) =  z        if z > −β,
            hs(z)    if −2β ≤ z ≤ −β,
            −2β      if z < −2β.

Here, h is a C2 function, with hs an increasing function defined on [−2β, −β]. For example, hs may be a cubic polynomial with positive derivative on the interval ]−2β, −β[. For all t ∈ [0, T], set

    ˜C(t) := { x ∈ Rn : ˜ψi(t, x) ≤ 0, i = 1, . . . , I }.

It is then a simple matter to see that

    C(t) = ˜C(t)  for all t ∈ [0, T]

and that the functions ˜ψi(·) satisfy assumption (A1).
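As a concrete sketch of such a bridge function, the snippet below builds hs on [−2β, −β] by cubic Hermite interpolation for the illustrative choice β = 1 (the data points and helper names are ours, not from the text; matching values and first derivatives yields a C1 junction, while the C2 regularity asked of h would require a higher-degree polynomial):

```python
# Sketch: bridge h_s on [-2*beta, -beta] joining h(z) = z (for z >= -beta)
# to the constant -2*beta (for z <= -2*beta), via cubic Hermite interpolation.
beta = 1.0

def h_s(z):
    # Map z in [-2*beta, -beta] to t in [0, 1].
    t = (z + 2.0 * beta) / beta
    # Hermite data: h_s(-2b) = -2b, h_s'(-2b) = 0, h_s(-b) = -b, h_s'(-b) = 1.
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return (h00 * (-2 * beta) + h10 * beta * 0.0
            + h01 * (-beta) + h11 * beta * 1.0)

def h(z):
    if z >= -beta:
        return z
    if z <= -2.0 * beta:
        return -2.0 * beta
    return h_s(z)

# Junction values match and h_s is increasing on the bridge interval.
assert abs(h(-beta) - (-beta)) < 1e-12
assert abs(h(-2 * beta) - (-2 * beta)) < 1e-12
zs = [-2 * beta + 0.01 * i * beta for i in range(101)]
vals = [h(z) for z in zs]
assert all(v2 >= v1 for v1, v2 in zip(vals, vals[1:]))
```

For this interpolant, h_s'(t) = t(4 − 3t)/β ≥ 0 on the bridge, so h is nondecreasing everywhere, as required of hs.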
The assumption that the graph of C(·) is compact and contained in the interior of a ball is introduced to avoid technicalities in our forthcoming analysis. In applied problems, this may be easily sidestepped by considering the intersection of the graph of C(·) with a tube around the optimal trajectory.

We now proceed by introducing an approximating family of controlled systems for (1.1). Let x(·) be a solution to the differential inclusion

    ẋ(t) ∈ f(t, x(t), U) − N_{C(t)}(x(t)).

Under our assumptions, measurable selection theorems assert the existence of measurable functions u and ξi such that u(t) ∈ U, ξi(t) ≥ 0 a.e. t ∈ [0, T], ξi(t) = 0 if ψi(t, x(t)) < 0, and

    ẋ(t) = f(t, x(t), u(t)) − Σ_{i=1}^{I} ξi(t) ∇xψi(t, x(t))  a.e. t ∈ [0, T].

Considering the trajectory x, some observations are called for. Let µ be such that

    max{ |∇xψi(t, x)||f(t, x, u)| + |∂tψi(t, x)| + 1 : t ∈ [0, T], u ∈ U, x ∈ C(t) + Bn, i = 1, . . . , I } ≤ µ.

The properties of the graph of C(·) in (A1) guarantee the existence of such a maximum.
Consider now some t such that, for some j ∈ {1, . . . , I}, ψj(t, x(t)) = 0 and ẋ(t) exists. Since the trajectory x is always in C, we have (see (2.2))

    0 = (d/dt) ψj(t, x(t)) = ⟨∇xψj(t, x(t)), ẋ(t)⟩ + ∂tψj(t, x(t))
      = ⟨∇xψj(t, x(t)), f(t, x(t), u(t))⟩ − ξj(t)|∇xψj(t, x(t))|²
        − Σ_{i∈I(t,x(t))\{j}} ξi(t)⟨∇xψi(t, x(t)), ∇xψj(t, x(t))⟩ + ∂tψj(t, x(t))
      ≤ ⟨∇xψj(t, x(t)), f(t, x(t), u(t))⟩ − ξj(t)|∇xψj(t, x(t))|² + ∂tψj(t, x(t)),

and, hence (see (2.1)),

    ξj(t) ≤ (1/|∇xψj(t, x(t))|²) ( ⟨∇xψj(t, x(t)), f(t, x(t), u(t))⟩ + ∂tψj(t, x(t)) ) ≤ µ/η².
Define the function

    µ(γ) = (1/γ) log( µ/(η²γ) ),  γ > 0,

consider a sequence {σk} such that σk ↓ 0, and choose another sequence {γk} with γk ↑ +∞ and

    C(t) ⊂ int Ck(t) = int { x : ψi(t, x) − σk ≤ µk, i = 1, . . . , I },

where

    µk = µ(γk).
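This choice of µ(γ) is tailored so that the penalty weight saturates exactly at the bound µ/η² obtained above for ξj; a one-line check, included here for the reader's convenience:

```latex
e^{\gamma\,\mu(\gamma)}
= e^{\log\left(\frac{\mu}{\eta^{2}\gamma}\right)}
= \frac{\mu}{\eta^{2}\gamma}
\quad\Longrightarrow\quad
\gamma\, e^{\gamma\,\mu(\gamma)} = \frac{\mu}{\eta^{2}} .
```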
Let xk be a solution to the differential equation

    ẋk(t) = f(t, xk(t), uk(t)) − Σ_{i=1}^{I} γk e^{γk(ψi(t,xk(t))−σk)} ∇xψi(t, xk(t))      (2.6)

for some uk(t) ∈ U a.e. t ∈ [0, T]. Take any t ∈ [0, T] such that ẋk(t) exists and ψj(t, xk(t)) − σk = µk.
Assume k is such that j ∈ I(t, xk(t)). Then, whenever γk is sufficiently large, we have

    (d/dt) ψj(t, xk(t)) = ⟨∇xψj(t, xk(t)), f(t, xk(t), uk(t))⟩
        − γk e^{γk(ψj(t,xk(t))−σk)} |∇xψj(t, xk(t))|²
        − Σ_{i∈I(t,xk(t))\{j}} γk e^{γk(ψi(t,xk(t))−σk)} ⟨∇xψi(t, xk(t)), ∇xψj(t, xk(t))⟩
        − Σ_{i∉I(t,xk(t))} γk e^{γk(ψi(t,xk(t))−σk)} ⟨∇xψi(t, xk(t)), ∇xψj(t, xk(t))⟩
        + ∂tψj(t, xk(t))
    ≤ ⟨∇xψj(t, xk(t)), f(t, xk(t), uk(t))⟩
        − γk e^{γk(ψj(t,xk(t))−σk)} |∇xψj(t, xk(t))|²
        − Σ_{i∉I(t,xk(t))} γk e^{γk(ψi(t,xk(t))−σk)} ⟨∇xψi(t, xk(t)), ∇xψj(t, xk(t))⟩
        + ∂tψj(t, xk(t))
    ≤ ⟨∇xψj(t, xk(t)), f(t, xk(t), uk(t))⟩
        − γk e^{γk(ψj(t,xk(t))−σk)} |∇xψj(t, xk(t))|²
        + Σ_{i∉I(t,xk(t))} γk e^{γk(−2β−σk)} |⟨∇xψi(t, xk(t)), ∇xψj(t, xk(t))⟩|
        + ∂tψj(t, xk(t))
    ≤ µ − 1/2 − η² γk e^{γk µk} = −1/2.

Above, we have used the definition of µ and the inequality

    Σ_{i∉I(t,xk(t))} γk e^{γk(−2β−σk)} |⟨∇xψi(t, xk(t)), ∇xψj(t, xk(t))⟩| ≤ 1/2,

which holds for γk sufficiently large.
Now, if xk(0) ∈ Ck(0), we can ensure that xk(t) ∈ Ck(t) for all t ∈ [0, T], and

    γk e^{γk(ψj(t,xk(t))−σk)} ≤ γk e^{γk µk} = µ/η².      (2.7)

It follows that, for k sufficiently large, we have

    |ẋk(t)| ≤ (const).

We are now in a position to state and prove our first result, Theorem 2.2 below. It is in the vein of Theorem 4.1 in [23] (see also Lemma 1 in [10] when ψ is independent of t and convex), deviating from it insofar as the approximating sequence of control systems (2.6) differs from the one introduced in [10]¹. The proof of Theorem 2.2 relies on (2.7).
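The confinement mechanism behind (2.7) can be seen in a toy one-dimensional sketch (all data below, f ≡ 1, ψ(x) = x − 1, γ = 50, σ = 0.01, are hypothetical choices of ours): a forward-Euler integration of the penalized equation (2.6) keeps the trajectory inside a small enlargement of C = {x : x ≤ 1}, even though the drift pushes outward.

```python
import math

# Toy scalar instance of the penalized dynamics (2.6):
#   x'(t) = f - gamma * exp(gamma * (psi(x) - sigma)) * psi'(x),
# with f = 1 (constant outward push) and psi(x) = x - 1, so C = (-inf, 1].
gamma, sigma = 50.0, 0.01
f = 1.0

def rhs(x):
    return f - gamma * math.exp(gamma * ((x - 1.0) - sigma))

# Forward Euler on [0, 2] from x(0) = 0, a point inside C.
x, dt = 0.0, 1e-3
traj_max = x
for _ in range(2000):
    x += dt * rhs(x)
    traj_max = max(traj_max, x)

# The trajectory never leaves a small enlargement of C ...
assert traj_max <= 1.0 + 0.05
# ... and settles near the state x* = 1 + sigma - log(gamma)/gamma,
# where the exponential penalty exactly balances the drift f.
x_star = 1.0 + sigma - math.log(gamma) / gamma
assert abs(x - x_star) < 1e-2
```

At the balance point the penalty weight γ e^{γ(ψ(x*)−σ)} equals 1, mirroring the saturation of the weight at µ/η² in (2.7).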
Theorem 2.2 Let {(xk, uk)}, with uk(t) ∈ U a.e., be a sequence of solutions of the Cauchy problems

    ẋk(t) = f(t, xk(t), uk(t)) − Σ_{i=1}^{I} γk e^{γk(ψi(t,xk(t))−σk)} ∇xψi(t, xk(t)),
    xk(0) = bk ∈ Ck(0).      (2.8)

If bk → x0, then there exists a subsequence {xk} (we do not relabel) converging uniformly to x, a unique solution to the Cauchy problem

    ẋ(t) ∈ f(t, x(t), u(t)) − N_{C(t)}(x(t)),  x(0) = x0,      (2.9)

where u is a measurable function such that u(t) ∈ U a.e. t ∈ [0, T].

If, moreover, all the controls uk are equal, i.e., uk = u, then the subsequence converges to a unique solution of (2.9), i.e., any solution of

    ẋ(t) ∈ f(t, x(t), U) − N_{C(t)}(x(t)),  x(0) = x0 ∈ C(0)      (2.10)

can be approximated by solutions of (2.8).

¹ See also Theorem 2.2 in [14].
Proof. Consider the sequence {xk}, where (xk, uk) solves (2.8). Recall that xk(t) ∈ Ck(t) for all t ∈ [0, T], and

    |ẋk(t)| ≤ (const)  and  ξ^i_k(t) = γk e^{γk(ψi(t,xk(t))−σk)} ≤ (const).      (2.11)

Then there exist subsequences (we do not relabel) weakly-∗ converging in L∞ to some v and ξi. Hence

    xk(t) = x0 + ∫₀ᵗ ẋk(s) ds −→ x(t) = x0 + ∫₀ᵗ v(s) ds,  ∀ t ∈ [0, T],

for an absolutely continuous function x. Obviously, x(t) ∈ C(t) for all t ∈ [0, T]. Considering the sequence {xk}, recall that

    ẋk(t) ∈ f(t, xk(t), U) − Σ_{i=1}^{I} ξ^i_k(t) ∇xψi(t, xk(t)).      (2.12)

Inclusion (2.12) is equivalent to

    ⟨z, ẋk(t)⟩ ≤ S(z, f(t, xk(t), U)) − Σ_{i=1}^{I} ξ^i_k(t)⟨z, ∇xψi(t, xk(t))⟩,  ∀ z ∈ Rn.

Integrating this inequality, we get

    ⟨z, (xk(t + τ) − xk(t))/τ⟩
      ≤ (1/τ) ∫ₜ^{t+τ} [ S(z, f(s, xk(s), U)) − Σ_{i=1}^{I} ξ^i_k(s)⟨z, ∇xψi(s, xk(s))⟩ ] ds
      = (1/τ) ∫ₜ^{t+τ} [ S(z, f(s, xk(s), U)) − Σ_{i=1}^{I} ξ^i_k(s)⟨z, ∇xψi(s, x(s))⟩
          + Σ_{i=1}^{I} ξ^i_k(s)⟨z, ∇xψi(s, x(s)) − ∇xψi(s, xk(s))⟩ ] ds.      (2.13)

Passing to the limit as k → ∞, we obtain

    ⟨z, (x(t + τ) − x(t))/τ⟩ ≤ (1/τ) ∫ₜ^{t+τ} [ S(z, f(s, x(s), U)) − Σ_{i=1}^{I} ξi(s)⟨z, ∇xψi(s, x(s))⟩ ] ds.      (2.14)
Let t ∈ [0, T] be a Lebesgue point of x and ξ. Passing in the last inequality to the limit as τ ↓ 0 leads to

    ⟨z, ẋ(t)⟩ ≤ S(z, f(t, x(t), U)) − Σ_{i=1}^{I} ξi(t)⟨z, ∇xψi(t, x(t))⟩.

Since z ∈ Rn is an arbitrary vector and the set f(t, x(t), U) is convex, we conclude that

    ẋ(t) ∈ f(t, x(t), U) − Σ_{i=1}^{I} ξi(t) ∇xψi(t, x(t)).

By the Filippov lemma, there exists a measurable control u(t) ∈ U such that

    ẋ(t) = f(t, x(t), u(t)) − Σ_{i=1}^{I} ξi(t) ∇xψi(t, x(t)).

Furthermore, observe that ξi is zero if ψi(t, x(t)) < 0. If, for some u such that u(t) ∈ U a.e., uk = u for all k, then the sequence xk converges to the solution of

    ẋ(t) = f(t, x(t), u(t)) − Σ_{i=1}^{I} ξi(t) ∇xψi(t, x(t)).

Indeed, to see this, it suffices to pass to the limit as k → ∞, and then as τ ↓ 0, in the equality

    (xk(t + τ) − xk(t))/τ = (1/τ) ∫ₜ^{t+τ} [ f(s, xk(s), u(s)) − Σ_{i=1}^{I} ξ^i_k(s) ∇xψi(s, xk(s)) ] ds.
We now prove the uniqueness of the solution. We follow the proof of Theorem 4.1 in [23]. Notice, however, that we now consider a special case and not the general case treated in [23]. Suppose that there exist two different solutions of (2.9): x1 and x2. We have

    (1/2)(d/dt)|x1(t) − x2(t)|² = ⟨x1(t) − x2(t), ẋ1(t) − ẋ2(t)⟩
      = ⟨x1(t) − x2(t), f(t, x1(t), u(t)) − f(t, x2(t), u(t))⟩
        − ⟨ x1(t) − x2(t), Σ_{i=1}^{I} ξ^i_1(t)∇ψi(t, x1(t)) − Σ_{i=1}^{I} ξ^i_2(t)∇ψi(t, x2(t)) ⟩.      (2.15)

If, for all i, ψi(t, x1(t)) < 0 and ψi(t, x2(t)) < 0, then ξ^i_1(t) = ξ^i_2(t) = 0 and we obtain

    (1/2)(d/dt)|x1(t) − x2(t)|² ≤ Lf |x1(t) − x2(t)|².

Suppose that ψj(t, x1(t)) = 0. Then, by the Taylor formula, we get

    ψj(t, x2(t)) = ψj(t, x1(t)) + ⟨∇xψj(t, x1(t)), x2(t) − x1(t)⟩
        + (1/2)⟨x2(t) − x1(t), ∇²xψj(t, θx2(t) + (1 − θ)x1(t))(x2(t) − x1(t))⟩,      (2.16)

where θ ∈ [0, 1]. Since ψj(t, x2(t)) ≤ 0, we have

    ⟨∇xψj(t, x1(t)), x2(t) − x1(t)⟩
      ≤ −(1/2)⟨x2(t) − x1(t), ∇²xψj(t, θx2(t) + (1 − θ)x1(t))(x2(t) − x1(t))⟩
      ≤ (const)|x1(t) − x2(t)|².      (2.17)

Now, if ψj(t, x2(t)) = 0, we deduce in the same way that

    ⟨∇xψj(t, x2(t)), x1(t) − x2(t)⟩ ≤ (const)|x1(t) − x2(t)|².

Thus we have

    (1/2)(d/dt)|x1(t) − x2(t)|² ≤ (const)|x1(t) − x2(t)|².

Hence |x1(t) − x2(t)| = 0. □
3 Approximating Family of Optimal Control Problems

In this section we define an approximating family of optimal control problems to (P) and we state the corresponding necessary conditions.

Let (ˆx, ˆu) be a global solution to (P) and consider sequences {γk} and {σk} as defined above. Let ˆxk(·) be the solution to

    ẋ(t) = f(t, x(t), ˆu(t)) − Σ_{i=1}^{I} γk e^{γk(ψi(t,x(t))−σk)} ∇xψi(t, x(t)),
    x(0) = ˆx(0).      (3.1)

Set ϵk = |ˆxk(T) − ˆx(T)|. It follows from Theorem 2.2 that ϵk ↓ 0. Take α > 0 and define the problem

(P^α_k)  Minimize φ(x(T)) + |x(0) − ˆx(0)|² + α ∫₀ᵀ |u(t) − ˆu(t)| dt
         over processes (x, u) such that
           ẋ(t) = f(t, x(t), u(t)) − Σ_{i=1}^{I} ∇x e^{γk(ψi(t,x(t))−σk)}  a.e. t ∈ [0, T],
           u(t) ∈ U  a.e. t ∈ [0, T],
           x(0) ∈ C0,
           x(T) ∈ CT + ϵkBn.
595
+ Clearly, the problem (P α
596
+ k ) has admissible solutions. Consider the space
597
+ W = {(c, u) | c ∈ C0, u ∈ L∞ with u(t) ∈ U}
598
+ and the distance
599
+ dW ((c1, u1), (c2, u2)) = |c1 − c2|+
600
+ � T
601
+ 0
602
+ |u1(t) − u2(t)|dt.
603
+ Endowed with dW , W is a complete metric space. Take any (c, u) ∈ W and a solution y to the Cauchy
604
+ problem
605
+
606
+
607
+
608
+
609
+
610
+ ˙y(t)
611
+ =
612
+ f(t, y(t), u(t)) −
613
+ I
614
+
615
+ i=1
616
+ ∇xeγk(ψi(t,y(t))−σk) a.e. t ∈ [0, T],
617
+ y(0)
618
+ =
619
+ c.
620
+ Under our assumptions, the function
621
+ (c, u) → φ(y(T)) + |c − ˆx(0)|2+α
622
+ � T
623
+ 0
624
+ |u − ˆu| dt
625
+ 9
626
+
is continuous on (W, dW) and bounded below. Appealing to Ekeland's Theorem, we deduce the existence of a pair (xk, uk) solving the following problem:

(APk)  Minimize Φ(x, u) = φ(x(T)) + |x(0) − ˆx(0)|² + α ∫₀ᵀ |u(t) − ˆu(t)| dt
           + ϵk ( |x(0) − xk(0)| + ∫₀ᵀ |u(t) − uk(t)| dt ),
       over processes (x, u) such that
         ẋ(t) = f(t, x(t), u(t)) − Σ_{i=1}^{I} ∇x e^{γk(ψi(t,x(t))−σk)}  a.e. t ∈ [0, T],
         u(t) ∈ U  a.e. t ∈ [0, T],
         x(0) ∈ C0,
         x(T) ∈ CT + ϵkBn.
Lemma 3.1 Take γk → ∞, σk → 0 and ϵk → 0 as defined above. For each k, let (xk, uk) be the solution to (APk). Then there exists a subsequence (we do not relabel) such that

    uk(t) → ˆu(t) a.e.,  xk → ˆx uniformly in [0, T].

Proof. We deduce from Theorem 2.2 that {xk} converges uniformly to an admissible solution ˜x to (P). Since U and C0 are compact, we have U ⊂ KBm and C0 ⊂ KBn. Without loss of generality, uk weakly-∗ converges to a function ˜u ∈ L∞([0, T], U). Hence it weakly converges to ˜u in L1. From the optimality of the processes (xk, uk) we have

    φ(xk(T)) + |xk(0) − ˆx(0)|² + α ∫₀ᵀ |uk(t) − ˆu(t)| dt
      ≤ φ(ˆxk(T)) + ϵk ( |ˆxk(0) − xk(0)| + ∫₀ᵀ |uk(t) − ˆu(t)| dt )
      ≤ φ(ˆxk(T)) + 2K(1 + T)ϵk.

Since (ˆx, ˆu) is a global solution of the problem, passing to the limit, we get

    φ(˜x(T)) + |˜x(0) − ˆx(0)|² + α ∫₀ᵀ |˜u(t) − ˆu(t)| dt
      ≤ lim_{k→∞} ( φ(xk(T)) + |xk(0) − ˆx(0)|² ) + α lim inf_{k→∞} ∫₀ᵀ |uk(t) − ˆu(t)| dt
      ≤ lim_{k→∞} φ(ˆxk(T)) = φ(ˆx(T)) ≤ φ(˜x(T)).

Hence ˜x(0) = ˆx(0), ˜u = ˆu a.e., uk converges to ˆu in L1, and some subsequence (we do not relabel) converges to ˆu almost everywhere. □
723
We now finish this section with the statement of the necessary conditions of optimality for the family of problems (APk). These can be seen as a direct consequence of Theorem 6.2.1 in [22].

Proposition 3.2 For each k, let (x_k, u_k) be a solution to (APk). Then there exist absolutely continuous functions p_k and scalars λ_k ≥ 0 such that

(a) (nontriviality condition)

  λ_k + |p_k(T)| = 1,   (3.2)

(b) (adjoint equation)

  ṗ_k = −(∇_x f_k)* p_k + Σ_{i=1}^I γ_k e^{γ_k(ψ_k^i − σ_k)} ∇²_x ψ_k^i p_k + Σ_{i=1}^I γ_k² e^{γ_k(ψ_k^i − σ_k)} ∇_x ψ_k^i ⟨∇_x ψ_k^i, p_k⟩,   (3.3)

where the superscript * stands for transpose,

(c) (maximization condition)

  max_{u ∈ U} { ⟨f(t, x_k, u), p_k⟩ − αλ_k |u − û| − ϵ_k λ_k |u − u_k| }   (3.4)

is attained at u_k(t), for almost every t ∈ [0, T],

(d) (transversality condition)

  (p_k(0), −p_k(T)) ∈ λ_k ( 2(x_k(0) − x̂(0)) + ϵ_k B_n, ∂φ(x_k(T)) ) + N_{C_0}(x_k(0)) × N_{C_T + ϵ_k B_n}(x_k(T)).   (3.5)
To simplify the notation above, we drop the t dependence in p_k, ṗ_k, x_k, u_k, x̂ and û. Moreover, in (b), we write ψ_k instead of ψ(t, x_k(t)) and f_k instead of f(t, x_k(t), u_k(t)). The same convention holds for the derivatives of ψ and f.

4 Maximum Principle for (P)

In this section, we establish our main result, a Maximum Principle for (P). This is done by taking limits of the conclusions of Proposition 3.2, following closely the analysis in the proof of [10, Theorem 2].
Observe that

  (1/2) d/dt |p_k(t)|² = −⟨∇_x f_k p_k, p_k⟩ + Σ_{i=1}^I γ_k e^{γ_k(ψ_k^i − σ_k)} ⟨∇²_x ψ_k^i p_k, p_k⟩ + Σ_{i=1}^I γ_k² e^{γ_k(ψ_k^i − σ_k)} ⟨∇_x ψ_k^i, p_k⟩²
    ≥ −⟨∇_x f_k p_k, p_k⟩ + Σ_{i=1}^I γ_k e^{γ_k(ψ_k^i − σ_k)} ⟨∇²_x ψ_k^i p_k, p_k⟩
    ≥ −M |p_k|² + Σ_{i=1}^I γ_k e^{γ_k(ψ_k^i − σ_k)} ⟨∇²_x ψ_k^i p_k, p_k⟩,

where M is the constant of (A2). Taking into account hypothesis (A1) and (2.7), we deduce the existence of a constant K_0 > 0 such that

  (1/2) d/dt |p_k(t)|² ≥ −K_0 |p_k(t)|².

This last inequality leads to

  |p_k(t)|² ≤ e^{2K_0(T−t)} |p_k(T)|² ≤ e^{2K_0 T} |p_k(T)|².

Since, by (a) of Proposition 3.2, |p_k(T)| ≤ 1, we deduce from the above that there exists M_0 > 0 such that

  |p_k(t)| ≤ M_0.   (4.1)
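The estimate (4.1) is a Gronwall-type bound obtained by integrating the differential inequality backwards from t = T. As an illustration (not part of the proof), the sketch below integrates a generic linear adjoint-type equation ṗ = A(t)p backwards from the terminal time and checks the inequality |p(t)| ≤ e^{K₀(T−t)}|p(T)|; the coefficient matrix, horizon and bound K₀ are hypothetical choices, not the data of the paper.

```python
import math

# Hedged numerical illustration of the Gronwall-type estimate behind (4.1):
# for pdot = A(t) p with ||A(t)|| <= K0, one has |p(t)| <= e^{K0 (T - t)} |p(T)|.
# The coefficient matrix below is a made-up example, not the paper's adjoint.

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def A(t):
    # hypothetical bounded time-varying coefficients; operator norm <= K0 below
    return [[0.3 * math.sin(t), -0.5], [0.5, -0.3 * math.cos(t)]]

K0 = 1.0       # an upper bound for the norm of A(t) on [0, T]
T, n = 1.0, 10000
dt = T / n

# integrate backwards from the terminal condition p(T)
p = [1.0, 0.5]
pT_norm = norm(p)
ok = True
t = T
for _ in range(n):
    # explicit Euler step backwards in time
    Ap = [sum(A(t)[i][j] * p[j] for j in range(2)) for i in range(2)]
    p = [p[i] - dt * Ap[i] for i in range(2)]
    t -= dt
    # check the Gronwall bound (small tolerance for discretization error)
    if norm(p) > math.exp(K0 * (T - t)) * pT_norm * (1 + 1e-6):
        ok = False
```

The same mechanism, applied with K₀ from (A1), (A2) and (2.7), gives the uniform bound M₀ in (4.1).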
Now, we claim that the sequence {ṗ_k} is uniformly bounded in L^1. To prove our claim, we need to establish bounds for the three terms in (3.3). Following [10] and [14], we start by deducing some inequalities that will be of help.
+
815
+ Denote Ik = I(t, xk(t)) and Sj
816
+ k = sign
817
+
818
+ ⟨∇xψj
819
+ k, pk⟩
820
+
821
+ . We have
822
+ I
823
+
824
+ j=1
825
+ d
826
+ dt
827
+ ���⟨∇xψj
828
+ k, pk⟩
829
+ ���
830
+ =
831
+ I
832
+
833
+ j=1
834
+
835
+ ⟨∇2
836
+ xψj
837
+ k ˙xk, pk⟩ + ⟨∂t∇xψj
838
+ k, pk⟩ + ⟨∇xψj
839
+ k, ˙pk⟩
840
+
841
+ Sj
842
+ k
843
+ =
844
+ I
845
+
846
+ j=1
847
+
848
+ ⟨pk, ∇2
849
+ xψj
850
+ kfk⟩ −
851
+ I
852
+
853
+ i=1
854
+ γkeγk(ψi
855
+ k−σk)⟨pk, ∇2ψj
856
+ k∇xψi
857
+ k⟩
858
+
859
+ Sj
860
+ k
861
+ +
862
+ I
863
+
864
+ j=1
865
+
866
+ ⟨∂t∇xψj
867
+ k, pk⟩ − ⟨∇xψj
868
+ k, (∇xfk)∗pk⟩
869
+
870
+ Sj
871
+ k
872
+ +
873
+ I
874
+
875
+ j=1
876
+ � I
877
+
878
+ i=1
879
+ γkeγk(ψi
880
+ k−σk)⟨∇xψj
881
+ k, ∇2
882
+ xψi
883
+ kpk⟩
884
+
885
+ Sj
886
+ k
887
+ +
888
+ I
889
+
890
+ i=1
891
+ I
892
+
893
+ j=1
894
+ γ2
895
+ keγk(ψi
896
+ k−σk)⟨∇xψj
897
+ k, ∇xψi
898
+ k⟩⟨∇xψi
899
+ k, pk⟩Sj
900
+ k
901
Observe that (see (2.3) and (2.4))

  Σ_{i=1}^I Σ_{j=1}^I γ_k² e^{γ_k(ψ_k^i − σ_k)} ⟨∇_x ψ_k^j, ∇_x ψ_k^i⟩ ⟨∇_x ψ_k^i, p_k⟩ S_k^j
    = Σ_{i=1}^I Σ_{j∈I_k} γ_k² e^{γ_k(ψ_k^i − σ_k)} ⟨∇_x ψ_k^j, ∇_x ψ_k^i⟩ ⟨∇_x ψ_k^i, p_k⟩ S_k^j
    = Σ_{i∉I_k} γ_k² e^{γ_k(ψ_k^i − σ_k)} Σ_{j∈I_k} ⟨∇_x ψ_k^j, ∇_x ψ_k^i⟩ ⟨∇_x ψ_k^i, p_k⟩ S_k^j
      + Σ_{i∈I_k} γ_k² e^{γ_k(ψ_k^i − σ_k)} ( |∇_x ψ_k^i|² + Σ_{j∈I_k\{i}} ⟨∇_x ψ_k^j, ∇_x ψ_k^i⟩ S_k^j S_k^i ) |⟨∇_x ψ_k^i, p_k⟩|
    = Σ_{i∈I_k} γ_k² e^{γ_k(ψ_k^i − σ_k)} ( |∇_x ψ_k^i|² + Σ_{j∈I_k\{i}} ⟨∇_x ψ_k^j, ∇_x ψ_k^i⟩ S_k^j S_k^i ) |⟨∇_x ψ_k^i, p_k⟩|
    ≥ (1 − ρ) Σ_{i∈I_k} γ_k² e^{γ_k(ψ_k^i − σ_k)} |∇_x ψ_k^i|² |⟨∇_x ψ_k^i, p_k⟩|
    = (1 − ρ) Σ_{i=1}^I γ_k² e^{γ_k(ψ_k^i − σ_k)} |∇_x ψ_k^i|² |⟨∇_x ψ_k^i, p_k⟩|.

Using this and integrating the previous equality, we deduce the existence of M_1 > 0 such that

  ∫_0^T Σ_{i=1}^I γ_k² e^{γ_k(ψ_k^i − σ_k)} |∇_x ψ_k^i|² |⟨∇_x ψ_k^i, p_k⟩| dt ≤ M_1.   (4.2)
We are now in a position to show that

  ∫_0^T Σ_{i=1}^I γ_k² e^{γ_k(ψ_k^i − σ_k)} |∇_x ψ_k^i| |⟨∇_x ψ_k^i, p_k⟩| dt

is bounded. For simplicity, set L_k^i(t) = γ_k² e^{γ_k(ψ_k^i − σ_k)} |∇_x ψ_k^i| |⟨∇_x ψ_k^i, p_k⟩|. Notice that

  Σ_{i=1}^I ∫_0^T L_k^i(t) dt = Σ_{i=1}^I [ ∫_{{t : |∇_x ψ_k^i| < η}} L_k^i(t) dt + ∫_{{t : |∇_x ψ_k^i| ≥ η}} L_k^i(t) dt ].

Using (A1) and (4.2), we deduce that

  Σ_{i=1}^I ∫_0^T L_k^i(t) dt
    ≤ Σ_{i=1}^I [ γ_k² e^{−γ_k(β + σ_k)} η² max_t |p_k(t)| ] + Σ_{i=1}^I [ γ_k² ∫_{{t : |∇_x ψ_k^i| ≥ η}} e^{γ_k(ψ_k^i − σ_k)} ( |∇_x ψ_k^i|² / |∇_x ψ_k^i| ) |⟨∇_x ψ_k^i, p_k⟩| dt ]
    ≤ γ_k² I e^{−γ_k(β + σ_k)} η² M_0 + (1/η) Σ_{i=1}^I [ ∫_0^T γ_k² e^{γ_k(ψ_k^i − σ_k)} |∇_x ψ_k^i|² |⟨∇_x ψ_k^i, p_k⟩| dt ]
    ≤ η² M_0 I + M_1/η,

for k large enough. Summarizing, there exists M_2 > 0 such that

  Σ_{i=1}^I γ_k² ∫_0^T e^{γ_k(ψ_k^i − σ_k)} |∇ψ_k^i| |⟨∇ψ_k^i, p_k⟩| dt ≤ M_2.   (4.3)

Mimicking the analysis conducted in Steps 1 b) and c) of the proof of Theorem 2 in [10] and taking into account (b) of Proposition 3.2, we conclude that there exists a constant N_1 > 0 such that

  ∫_0^T |ṗ_{γ_k}(t)| dt ≤ N_1,   (4.4)

for k sufficiently large, proving our claim.
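Since the estimate η²M₀I + M₁/η holds for every fixed η > 0 (with k large depending on η), a convenient choice of M₂ is the minimum of this expression over η, attained at η = (M₁/(2M₀I))^{1/3} by elementary calculus. A quick numerical sanity check of that minimization, with placeholder constants in place of the actual M₀, M₁, I of the proof:

```python
# The bound eta^2 * M0 * I + M1 / eta holds for every eta > 0, so one may take
# M2 = min over eta, attained at eta* = (M1 / (2 M0 I))^(1/3).
# The constants below are arbitrary placeholders, not the proof's constants.

M0, M1, I = 2.0, 5.0, 3

def f(eta):
    return eta ** 2 * M0 * I + M1 / eta

eta_star = (M1 / (2 * M0 * I)) ** (1.0 / 3.0)
M2 = f(eta_star)

# f is convex on (0, inf) (f'' = 2 M0 I + 2 M1/eta^3 > 0), so the critical
# point is the global minimum; check it beats a grid of alternatives
grid_ok = all(f(eta_star) <= f(0.05 + 0.01 * j) + 1e-12 for j in range(500))
```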
Before proceeding, observe that it is a simple matter to assert the existence of a constant N_2 such that

  Σ_{i=1}^I ∫_0^T γ_k² e^{γ_k(ψ_k^i − σ_k)} |⟨∇ψ_k^i, p_{γ_k}⟩| dt ≤ N_2.   (4.5)

This inequality will be of help in what follows.
Let us now recall that

  ξ_k^i(t) = γ_k e^{γ_k(ψ^i(t, x_k(t)) − σ_k)}

and that the second inequality in (2.11) holds. We turn to the analysis of Step 2 in the proof of Theorem 2 in [10] (see also [14]). Adapting those arguments, we can conclude the existence of a function p ∈ BV([0, T], R^n) and, for i = 1, …, I, of functions ξ^i ∈ L^∞([0, T], R) with ξ^i(t) ≥ 0 a.e. t and ξ^i(t) = 0 for t ∈ I_b^i, where

  I_b^i = { t ∈ [0, T] : ψ^i(t, x̂(t)) < 0 },

and of finite signed Radon measures η^i, null in I_b^i, such that, for any z ∈ C([0, T], R^n),

  ∫_0^T ⟨z, dp⟩ = −∫_0^T ⟨z, (∇f̂)* p⟩ dt + Σ_{i=1}^I [ ∫_0^T ξ^i ⟨z, ∇²ψ̂^i p⟩ dt + ∫_0^T ⟨z, ∇ψ̂^i(t)⟩ dη^i ],

where ∇ψ̂^i(t) = ∇ψ^i(t, x̂(t)). The finite signed Radon measures η^i are weak-∗ limits of

  γ_k² e^{γ_k(ψ_k^i − σ_k)} ⟨∇ψ_k^i, p_k(t)⟩ dt.

Observe that the measures

  ⟨∇ψ^i(t, x̂(t)), p(t)⟩ dη^i(t)   (4.6)

are nonnegative.
For each i = 1, …, I, the sequence ξ_k^i is weakly-∗ convergent in L^∞ to ξ^i ≥ 0. Following [14], we deduce from (4.5) that, for each i = 1, …, I,

  ∫_0^T |ξ^i ⟨∇_x ψ̂^i, p⟩| dt = lim_{k→∞} ∫_0^T |ξ_k^i ⟨∇_x ψ̂^i, p⟩| dt
    ≤ lim_{k→∞} [ ∫_0^T ξ_k^i |⟨∇_x ψ̂^i, p⟩ − ⟨∇_x ψ_k^i, p_k⟩| dt + ∫_0^T ξ_k^i |⟨∇_x ψ_k^i, p_k⟩| dt ]
    ≤ lim_{k→∞} [ ‖ξ_k^i‖_{L^∞} ‖⟨∇_x ψ̂^i, p⟩ − ⟨∇_x ψ_k^i, p_k⟩‖_{L^1} + N_2/γ_k ] = 0.

It turns out that

  ξ^i ⟨∇_x ψ̂^i, p⟩ = 0 a.e.   (4.7)
Consider now the sequence of scalars {λ_k}. It is an easy matter to show that there exists a subsequence of {λ_k} converging to some λ ≥ 0. This, together with the convergence of p_k to p, allows us to take limits in (a) and (c) of Proposition 3.2 to deduce that

  λ + |p(T)| = 1

and

  ⟨p(t), f(t, x̂(t), u)⟩ − αλ |u − û(t)| ≤ ⟨p(t), f(t, x̂(t), û(t))⟩ for all u ∈ U, a.e. t ∈ [0, T].

It remains to take limits of the transversality conditions (d) in Proposition 3.2. First, observe that

  C_T + ϵ_k B_n = { x : d(x, C_T) ≤ ϵ_k }.

From the basic properties of the Mordukhovich normal cone and subdifferential (see [19], Section 1.3.3) we have

  N_{C_T + ϵ_k B_n}(x_k(T)) ⊂ cl cone ∂d(x_k(T), C_T)

and

  N_{C_T}(x̂(T)) = cl cone ∂d(x̂(T), C_T).

Passing to the limit as k → ∞, we get

  (p(0), −p(T)) ∈ N_{C_0}(x̂(0)) × N_{C_T}(x̂(T)) + {0} × λ ∂φ(x̂(T)).

Finally, mimicking Step 3 in the proof of Theorem 2 in [10], we remove the dependence of the conditions on the parameter α. This is done by taking further limits, this time considering a sequence α_j ↓ 0.
We then summarize our conclusions in the following theorem.

Theorem 4.1 Let (x̂, û) be the optimal solution to (P). Suppose that assumptions A1–A6 are satisfied. For i = 1, …, I, set

  I_b^i = { t ∈ [0, T] : ψ^i(t, x̂(t)) < 0 }.

There exist λ ≥ 0, p ∈ BV([0, T], R^n), finite signed Radon measures η^i, null in I_b^i, for i = 1, …, I, and functions ξ^i ∈ L^∞([0, T], R), i = 1, …, I, with ξ^i(t) ≥ 0 a.e. t and ξ^i(t) = 0 for t ∈ I_b^i, such that

a) λ + |p(T)| ≠ 0,

b) dx̂/dt (t) = f(t, x̂(t), û(t)) − Σ_{i=1}^I ξ^i(t) ∇_x ψ̂^i(t),

c) for any z ∈ C([0, T]; R^n),

  ∫_0^T ⟨z(t), dp(t)⟩ = −∫_0^T ⟨z(t), (∇_x f̂(t))* p(t)⟩ dt + Σ_{i=1}^I [ ∫_0^T ξ^i(t) ⟨z(t), ∇²_x ψ̂^i(t) p(t)⟩ dt + ∫_0^T ⟨z(t), ∇_x ψ̂^i(t)⟩ dη^i ],

where ∇f̂(t) = ∇_x f(t, x̂(t), û(t)), ∇ψ̂^i(t) = ∇ψ^i(t, x̂(t)) and ∇²ψ̂^i(t) = ∇²ψ^i(t, x̂(t)),

d) ξ^i(t) ⟨∇_x ψ^i(t, x̂(t)), p(t)⟩ = 0, a.e. t, for all i = 1, …, I,

e) for all i = 1, …, I, the measures ⟨∇ψ^i(t, x̂(t)), p(t)⟩ dη^i(t) are nonnegative,

f) ⟨p(t), f(t, x̂(t), u)⟩ ≤ ⟨p(t), f(t, x̂(t), û(t))⟩ for all u ∈ U, a.e. t,

g) (p(0), −p(T)) ∈ N_{C_0}(x̂(0)) × N_{C_T}(x̂(T)) + {0} × λ ∂φ(x̂(T)).

It is noteworthy that condition e) is not considered in any of our previous works.
We now turn to the free end point case, i.e., to the problem

(Pf)  Minimize φ(x(T))
      over processes (x, u) such that
        ẋ(t) ∈ f(t, x(t), u(t)) − N_{C(t)}(x(t)), a.e. t ∈ [0, T],
        u(t) ∈ U, a.e. t ∈ [0, T],
        x(0) ∈ C_0 ⊂ C(0).

Problem (Pf) differs from (P) in that x(T) is not constrained to take values in C_T. We apply Theorem 4.1 to (Pf). Since x(T) is free, we deduce from g) in the above theorem that −p(T) ∈ λ ∂φ(x̂(T)). Suppose that λ = 0. Then p(T) = 0, contradicting the nontriviality condition a) of Theorem 4.1. Without loss of generality, we then conclude that the conditions of Theorem 4.1 hold with λ = 1. We summarize our findings in the following corollary.
Corollary 4.2 Let (x̂, û) be the optimal solution to (Pf). Suppose that assumptions A1–A6 are satisfied. For i = 1, …, I, set

  I_b^i = { t ∈ [0, T] : ψ^i(t, x̂(t)) < 0 }.

There exist p ∈ BV([0, T], R^n), finite signed Radon measures η^i, null in I_b^i, for i = 1, …, I, and functions ξ^i ∈ L^∞([0, T], R), i = 1, …, I, with ξ^i(t) ≥ 0 a.e. t and ξ^i(t) = 0 for t ∈ I_b^i, such that

a) dx̂/dt (t) = f(t, x̂(t), û(t)) − Σ_{i=1}^I ξ^i(t) ∇_x ψ̂^i(t),

b) for any z ∈ C([0, T]; R^n),

  ∫_0^T ⟨z(t), dp(t)⟩ = −∫_0^T ⟨z(t), (∇_x f̂(t))* p(t)⟩ dt + Σ_{i=1}^I [ ∫_0^T ξ^i(t) ⟨z(t), ∇²_x ψ̂^i(t) p(t)⟩ dt + ∫_0^T ⟨z(t), ∇_x ψ̂^i(t)⟩ dη^i ],

where ∇f̂(t) = ∇_x f(t, x̂(t), û(t)), ∇ψ̂^i(t) = ∇ψ^i(t, x̂(t)) and ∇²ψ̂^i(t) = ∇²ψ^i(t, x̂(t)),

c) ξ^i(t) ⟨∇_x ψ^i(t, x̂(t)), p(t)⟩ = 0 for a.e. t and for all i = 1, …, I,

d) for all i = 1, …, I, the measures ⟨∇ψ^i(t, x̂(t)), p(t)⟩ dη^i(t) are nonnegative,

e) ⟨p(t), f(t, x̂(t), u)⟩ ≤ ⟨p(t), f(t, x̂(t), û(t))⟩ for all u ∈ U, a.e. t,

f) (p(0), −p(T)) ∈ N_{C_0}(x̂(0)) × {0} + {0} × ∂φ(x̂(T)).
5 Example

Let us consider the following problem:

  Minimize −x(T)
  over processes ((x, y, z), u) such that
    (ẋ(t), ẏ(t), ż(t)) ∈ (σy, u, 0) − N_C(x, y, z),
    u ∈ [−1, 1],
    (x, y, z)(0) = (x_0, y_0, z_0),
    (x, y, z)(T) ∈ C_T,

where

• 0 < σ ≪ 1,
• C = { (x, y, z) : x² + y² + (z + h)² ≤ 1, x² + y² + (z − h)² ≤ 1 }, with 2h² < 1,
• (x_0, y_0, z_0) ∈ int C, with x_0 < −δ, y_0 = 0 and z_0 > 0,
• C_T = { (x, y, z) : x ≤ 0, y ≥ 0, δy − y_2 x ≤ δ y_2 } ∩ C, where

  δ < y_2 |x_0| / y_1, with y_1 = √(1 − x_0² − (z_0 + h)²) and y_2 = √(1 − h²).
We choose T > 0 small and, nonetheless, sufficiently large to guarantee that, when σ = 0, the system can reach the interior of C_T but not the segment {(x, 0, 0) : x ∈ [−δ, 0]}. Since σ and T are small, it follows that the optimal trajectory should reach C_T at the face δy − y_2 x = δ y_2 of C_T.

To significantly increase the value of x(T), the optimal trajectory needs to stay on the boundary of C for some interval of time. Before reaching and after leaving the boundary of C, the optimal trajectory lies in the interior of C. Since δ is small, the trajectory cannot reach C_T from any point of the sphere x² + y² + (z + h)² = 1 with z > 0. This means that, while on the boundary of C, the trajectory should move on the sphere x² + y² + (z + h)² = 1 until reaching the plane z = 0, and then move on the intersection of the two spheres.

While in the interior of C, the control can change sign from −1 to 1 or from 1 to −1. Certainly, the control should be 1 right before reaching the boundary and −1 right before arriving at C_T. Changes of the control from 1 to −1 or from −1 to 1 before reaching the boundary translate into wasted time and lead to smaller values of x(T). It then follows that the optimal control should be of the form

  u(t) = 1 for t ∈ [0, t̃],  u(t) = −1 for t ∈ ]t̃, T],   (5.1)

for some value t̃ ∈ ]0, T[.
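The qualitative behaviour just described can be explored numerically. The sketch below simulates the sweeping dynamics with a catching-up-type scheme (an Euler step for the drift, followed by a projection back onto C, here approximated by alternating projections onto the two balls). All parameter values, including the switching time, are illustrative choices, not the optimal ones.

```python
import math

# Illustrative simulation of the example's dynamics; sigma, h, the initial
# point, T and the switching time are hypothetical values for demonstration.

def project_C(p, h):
    """Approximate projection onto C, the intersection of the balls
    x^2+y^2+(z+h)^2 <= 1 and x^2+y^2+(z-h)^2 <= 1, by alternating projections."""
    x, y, z = p
    for _ in range(50):
        d = math.sqrt(x * x + y * y + (z + h) ** 2)   # ball centred at (0,0,-h)
        if d > 1.0:
            x, y, z = x / d, y / d, (z + h) / d - h
        d = math.sqrt(x * x + y * y + (z - h) ** 2)   # ball centred at (0,0,h)
        if d > 1.0:
            x, y, z = x / d, y / d, (z - h) / d + h
    return x, y, z

def simulate(t_switch, sigma=0.05, h=0.5, p0=(-0.4, 0.0, 0.3), T=2.0, n=2000):
    x, y, z = p0
    dt = T / n
    for i in range(n):
        u = 1.0 if i * dt <= t_switch else -1.0
        # unconstrained Euler step of xdot = sigma*y, ydot = u, zdot = 0
        x, y, z = x + dt * sigma * y, y + dt * u, z
        # the normal-cone term acts here as a projection back onto C
        x, y, z = project_C((x, y, z), h)
    return x, y, z

xT, yT, zT = simulate(t_switch=1.2)
```

Scanning t_switch over ]0, T[ and maximizing xT reproduces the one-switch structure (5.1) numerically.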
After the modification (2.5), the data of the problem satisfy the conditions under which Theorem 4.1 holds. We now show that the conclusions of Theorem 4.1 completely identify the structure (5.1) of the optimal control.

From Theorem 4.1 we deduce the existence of λ ≥ 0, of p, q, r ∈ BV([0, T], R), of finite signed Radon measures η_1 and η_2, null respectively in

  I_b^1 = { (x, y, z) : x² + y² + (z + h)² − 1 < 0 }  and  I_b^2 = { (x, y, z) : x² + y² + (z − h)² − 1 < 0 },

and of ξ_i ∈ L^∞([0, T], R), i = 1, 2, with ξ_i(t) ≥ 0 a.e. t and ξ_i(t) = 0 for t ∈ I_b^i, such that
(i)  (ẋ, ẏ, ż) = (σy, u, 0) − 2ξ_1 (x, y, z + h) − 2ξ_2 (x, y, z − h),

(ii)  d(p, q, r) = (0, −σp, 0) dt + 2(ξ_1 + ξ_2)(p, q, r) dt + 2(x, y, z + h) dη_1 + 2(x, y, z − h) dη_2,

(iii)  (p, q, r)(T) = (λ, 0, 0) + μ (y_2, −δ, 0), where μ ≥ 0,

(iv)  ξ_1 (xp + yq + (z + h)r) = 0 and ξ_2 (xp + yq + (z − h)r) = 0,

(v)  the measures (xp + yq + (z + h)r) dη_1 and (xp + yq + (z − h)r) dη_2 are nonnegative,

(vi)  max_{u ∈ [−1, 1]} uq = ûq,

where û is the optimal control.
Let t_1 be the instant of time when the trajectory reaches the sphere x² + y² + (z + h)² = 1, t_2 the instant of time when the trajectory reaches the intersection of the two spheres, and t_3 the instant of time when the trajectory leaves the boundary of C. We have 0 < t_1 < t_2 < t_3 < T.

Next we show that the multiplier q changes sign only once, thereby identifying the structure (5.1) of the optimal control in a unique way. We start by looking at the case t = T. We have

  (p, q)(T) = (λ, 0) + μ (y_2, −δ).
Starting from t = T, let us go backwards in time until the instant t_3 when the trajectory leaves the boundary of C. If q(T) = 0, then p(T) = λ > 0 and we would have q(t) > 0 for t ∈ ]t_3, T[ (see (ii) above), which is impossible. We then have p(T) > 0 and q(T) < 0 and, in ]t_3, T[, since σ is small, the vector (p(t), q(t)) does not change much. At t = t_3, the vector (p, q) has a jump, and such a jump can only occur along the vector (x(t_3), y(t_3)). Therefore, we have p(t_3 − 0) > 0 and q(t_3 − 0) < 0.

Let us now consider t ∈ ]t_2, t_3[. We have the following:

1. when t ∈ [t_2, t_3], we have z = 0;

2. condition (i) implies that ξ_1 = ξ_2 = ξ with ξ > 0, since otherwise the motion along x² + y² = 1 − h² would not be possible;

3. from 0 = d/dt (x² + y²) = 2σxy − 8ξx² + 2uy − 8ξy² we get ξ = (σxy + uy) / (4(1 − h²));

4. condition (iv) implies that r = 0, leading to xp + yq = 0; since x < 0 and y > 0, q = 0 implies p = 0;

5. condition (ii) implies that dη_1 = dη_2 = dη;

6. 0 = d(xp + yq) = uq dt + 4(1 − h²) dη, whence dη/dt = −uq / (4(1 − h²));

7. from the above analysis we deduce that

  ṗ = ((σxy + uy) / (1 − h²)) p − xuq / (1 − h²),
  q̇ = −σp + (σxy / (1 − h²)) q.

Thus (p, q) is a solution to a linear system and can never be equal to zero. It follows that q cannot vanish, because q = 0 implies p = 0. Since q ≠ 0, we have q > 0.
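The nonvanishing claim in item 7 rests on a basic fact about linear systems: if (p, q) solves a linear ODE with coefficients bounded by L, then |(p, q)(t)| ≥ e^{−Lt} |(p, q)(0)|, so a solution that is nonzero at one instant can never reach zero. A quick numerical illustration with hypothetical bounded coefficients (stand-ins for the σ-dependent terms on the boundary arc, not the actual trajectory data):

```python
import math

# Illustration of the fact used in the text: solutions of pdot = A(t) p cannot
# vanish in finite time, since |p(t)| >= e^{-L t} |p(0)| whenever ||A(t)|| <= L.
# The coefficients below are hypothetical stand-ins.

def norm2(v):
    return math.sqrt(v[0] ** 2 + v[1] ** 2)

sigma = 0.05

def A(t):
    # mimics the structure p' = a(t) p + b(t) q, q' = -sigma p + c(t) q
    a = 0.2 * math.sin(3 * t)
    b = 0.1 * math.cos(t)
    c = 0.05 * math.sin(t)
    return [[a, b], [-sigma, c]]

L = 0.5          # crude upper bound for the norm of A(t) on [0, T]
T, n = 1.0, 20000
dt = T / n

p = [0.0, 1.0]   # nonzero initial condition (q > 0, p = 0)
p0_norm = norm2(p)
min_norm = p0_norm
t = 0.0
for _ in range(n):
    Ap = [A(t)[0][0] * p[0] + A(t)[0][1] * p[1],
          A(t)[1][0] * p[0] + A(t)[1][1] * p[1]]
    p = [p[0] + dt * Ap[0], p[1] + dt * Ap[1]]
    t += dt
    min_norm = min(min_norm, norm2(p))
```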
Let us consider the case t = t_2. We claim that

  (p(t_2 − 0), q(t_2 − 0)) ≠ (0, 0).

Seeking a contradiction, assume that (p(t_2 − 0), q(t_2 − 0)) = (0, 0). Then we have

  (p(t_2 + 0), q(t_2 + 0)) = (0, 0) + 2(x(t_2), y(t_2))(dη_1 + dη_2),

and such a jump has to be normal to (x(t_2), y(t_2)), since r(t_2 + 0) = 0 (see (iv)). It follows that (x²(t_2) + y²(t_2))(dη_1 + dη_2) = 0 and, since x²(t_2) + y²(t_2) > 0, we get dη_1 + dη_2 = 0, proving our claim.
We now consider t ∈ ]t_1, t_2[. It is easy to see that ξ_2 = 0 and dη_2 = 0. We also deduce that

1. 0 = d/dt (x² + y² + (z + h)²) = 2σxy + 2uy − 4ξ_1 y² − 4ξ_1 x² − 4ξ_1 (z + h)², which implies that ξ_1 = (σxy + uy)/2;

2. also 0 = d(xp + yq + (z + h)r) = uq dt + 2 dη_1 implies that dη_1/dt = −uq/2;

3. from the above we deduce that

  ṗ = (σxy + uy) p − xuq,
  q̇ = −σp + σxy q.

Thus (p, q) is a solution to a linear system and is never equal to zero. The second equation implies that if q = 0 then q̇ ≠ 0. Hence q > 0.
Now we need to consider t = t_1. We claim that

  (p(t_1 − 0), q(t_1 − 0), r(t_1 − 0)) ≠ (0, 0, 0).

Let us then assume that (p(t_1 − 0), q(t_1 − 0), r(t_1 − 0)) = (0, 0, 0). It then follows that

  (p(t_1 + 0), q(t_1 + 0), r(t_1 + 0)) = (0, 0, 0) + (2x(t_1) dη_1, 2y(t_1) dη_1, 2(z(t_1) + h) dη_1).

We now show that there is no such jump. Set r(t_1 − 0) = r_0. Then it follows from (iv) that x(t_1)·0 + y(t_1)·0 + (z(t_1) + h) r_0 = 0, which implies that r_0 = 0. We also have (x²(t_1) + y²(t_1) + (z(t_1) + h)²) dη_1 = 0 from (v). But this implies that dη_1 = 0. Consequently, the multipliers do not exhibit a jump at t_1.
From the previous analysis we deduce that q should be positive almost everywhere on the boundary. It then follows that, to find the optimal solution, we have to analyze admissible trajectories with controls of the form (5.1) and choose the optimal value of t̃.

Acknowledgements

The authors gratefully acknowledge the support of the Portuguese Foundation for Science and Technology (FCT) in the framework of the Strategic Funding UIDB/04650/2020. We also acknowledge the support of the ERDF - European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020, INCO.2030, under the Portugal 2020 Partnership Agreement, and of National Funds, Norte 2020, through CCDRN and FCT, within the projects To Chair (POCI-01-0145-FEDER-028247), Upwind (PTDC/EEI-AUT/31447/2017 - POCI-01-0145-FEDER-031447) and the Systec R&D unit (UIDB/00147/2020).
References

[1] Addy K, Adly S, Brogliato B, Goeleven D, A method using the approach of Moreau and Panagiotopoulos for the mathematical formulation of non-regular circuits in electronics, Nonlinear Anal. Hybrid Syst., vol. 1, 30–43 (2007), https://doi.org/10.1016/j.nahs.2006.04.00.

[2] Arroud C, Colombo G, Necessary conditions for a nonclassical control problem with state constraints, 20th IFAC World Congress, Toulouse, France, July 9–14, 2017, https://doi.org/10.1016/j.ifacol.2017.08.110.

[3] Arroud C, Colombo G, A maximum principle for the controlled sweeping process, Set-Valued Var. Anal. 26, 607–629 (2018), https://doi.org/10.1007/s11228-017-0400-4.

[4] Brokate M, Krejčí P, Optimal control of ODE systems involving a rate independent variational inequality, Disc. Cont. Dyn. Syst. Ser. B, vol. 18 (2), 331–348 (2013), https://doi.org/10.3934/dcdsb.2013.18.331.

[5] Cao TH, Mordukhovich B, Optimality conditions for a controlled sweeping process with applications to the crowd motion model, Disc. Cont. Dyn. Syst. Ser. B, vol. 22, 267–306 (2017).

[6] Cao TH, Colombo G, Mordukhovich B, Nguyen D, Optimization of fully controlled sweeping processes, Journal of Differential Equations, vol. 295, 138–186 (2021), https://doi.org/10.1016/j.jde.2021.05.042.

[7] Clarke F, Optimization and Nonsmooth Analysis, John Wiley, New York (1983).

[8] Colombo G, Palladino M, The minimum time function for the controlled Moreau's sweeping process, SIAM J. Control Optim., vol. 54, no. 4, 2036–2062 (2016), https://doi.org/10.1137/15M1043364.

[9] Colombo G, Henrion R, Hoang ND, Mordukhovich BS, Optimal control of the sweeping process over polyhedral controlled sets, Journal of Differential Equations, vol. 260 (4), 3397–3447 (2016), https://doi.org/10.1016/j.jde.2015.10.039.

[10] de Pinho MdR, Ferreira MMA, Smirnov G, Optimal control involving sweeping processes, Set-Valued Var. Anal. 27, 523–548 (2019), https://doi.org/10.1007/s11228-018-0501-8.

[11] de Pinho MdR, Ferreira MMA, Smirnov G, Correction to: Optimal control involving sweeping processes, Set-Valued Var. Anal. 27, 1025–1027 (2019), https://doi.org/10.1007/s11228-019-00520-5.

[12] de Pinho MdR, Ferreira MMA, Smirnov G, Optimal control with sweeping processes: numerical method, J. Optim. Theory Appl. 185, 845–858 (2020), https://doi.org/10.1007/s10957-020-01670-5.

[13] de Pinho MdR, Ferreira MMA, Smirnov G, Optimal control involving sweeping processes with end point constraints, 2021 60th IEEE Conference on Decision and Control (CDC), 96–101 (2021), doi: 10.1109/CDC45484.2021.9683291.

[14] de Pinho MdR, Ferreira MMA, Smirnov G, Necessary conditions for optimal control problems with sweeping systems and end point constraints, Optimization, to appear (2022).

[15] Hermosilla C, Palladino M, Optimal control of the sweeping process with a non-smooth moving set, SIAM J. Control Optim., to appear (2022).

[16] Kunze M, Monteiro Marques MDP, An introduction to Moreau's sweeping process, in: Impacts in Mechanical Systems, Lecture Notes in Physics, vol. 551, 1–60 (2000).

[17] Maury B, Venel J, A discrete contact model for crowd motion, ESAIM: M2AN 45 (1), 145–168 (2011).

[18] Moreau JJ, On unilateral constraints, friction and plasticity, in: Capriz G, Stampacchia G (eds.), New Variational Techniques in Mathematical Physics, CIME ciclo Bressanone 1973, Edizioni Cremonese, Rome, 171–322 (1974).

[19] Mordukhovich B, Variational Analysis and Generalized Differentiation I. Basic Theory, Fundamental Principles of Mathematical Sciences 330, Springer-Verlag, Berlin (2006).

[20] Mordukhovich B, Variational Analysis and Generalized Differentiation II. Applications, Fundamental Principles of Mathematical Sciences 331, Springer-Verlag, Berlin (2006).

[21] Thibault L, Moreau sweeping process with bounded truncated retraction, J. Convex Anal., vol. 23, 1051–1098 (2016).

[22] Vinter RB, Optimal Control, Birkhäuser, Systems and Control: Foundations and Applications, Boston, MA (2000).

[23] Zeidan V, Nour C, Saoud H, A nonsmooth maximum principle for a controlled nonconvex sweeping process, Journal of Differential Equations, vol. 269 (11), 9531–9582 (2020), https://doi.org/10.1016/j.jde.2020.06.053.
-NFRT4oBgHgl3EQfrTfI/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
.gitattributes CHANGED
@@ -4291,3 +4291,54 @@ ENAyT4oBgHgl3EQfSPf9/content/2301.00085v1.pdf filter=lfs diff=lfs merge=lfs -tex
 GtAzT4oBgHgl3EQfUvxQ/content/2301.01271v1.pdf filter=lfs diff=lfs merge=lfs -text
 ltFPT4oBgHgl3EQf3TXn/content/2301.13190v1.pdf filter=lfs diff=lfs merge=lfs -text
 XtE3T4oBgHgl3EQf1gsc/content/2301.04746v1.pdf filter=lfs diff=lfs merge=lfs -text
+XtE3T4oBgHgl3EQf1gsc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+etE1T4oBgHgl3EQfLwNP/content/2301.02980v1.pdf filter=lfs diff=lfs merge=lfs -text
+h9E2T4oBgHgl3EQfHwY9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ndE2T4oBgHgl3EQfewdN/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ONE1T4oBgHgl3EQftwWv/content/2301.03381v1.pdf filter=lfs diff=lfs merge=lfs -text
+ltFJT4oBgHgl3EQfZSy-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+wdE2T4oBgHgl3EQfgwdJ/content/2301.03940v1.pdf filter=lfs diff=lfs merge=lfs -text
+stE5T4oBgHgl3EQfmQ9N/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+wdAzT4oBgHgl3EQfCPqo/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+u9E4T4oBgHgl3EQfWwwm/content/2301.05035v1.pdf filter=lfs diff=lfs merge=lfs -text
+39AyT4oBgHgl3EQf1_mj/content/2301.00744v1.pdf filter=lfs diff=lfs merge=lfs -text
+l9FPT4oBgHgl3EQfIDTd/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+cNFPT4oBgHgl3EQfBjTq/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+XdFRT4oBgHgl3EQfNjdq/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+A9AyT4oBgHgl3EQf3_rL/content/2301.00780v1.pdf filter=lfs diff=lfs merge=lfs -text
+ddFJT4oBgHgl3EQfSCyU/content/2301.11498v1.pdf filter=lfs diff=lfs merge=lfs -text
+ENAyT4oBgHgl3EQfSPf9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+CNAzT4oBgHgl3EQfTvwq/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+MtFRT4oBgHgl3EQfGDcg/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+xtE2T4oBgHgl3EQfhQc5/content/2301.03945v1.pdf filter=lfs diff=lfs merge=lfs -text
+WNE5T4oBgHgl3EQfBw7L/content/2301.05390v1.pdf filter=lfs diff=lfs merge=lfs -text
+1tFIT4oBgHgl3EQf4CvP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+09AzT4oBgHgl3EQfDPrL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+vdE2T4oBgHgl3EQf2gj_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+W9AzT4oBgHgl3EQfmf0I/content/2301.01562v1.pdf filter=lfs diff=lfs merge=lfs -text
+vdE2T4oBgHgl3EQf2gj_/content/2301.04163v1.pdf filter=lfs diff=lfs merge=lfs -text
+09AzT4oBgHgl3EQfDPrL/content/2301.00974v1.pdf filter=lfs diff=lfs merge=lfs -text
+iNE0T4oBgHgl3EQfpgG_/content/2301.02541v1.pdf filter=lfs diff=lfs merge=lfs -text
+w9E0T4oBgHgl3EQftAEr/content/2301.02585v1.pdf filter=lfs diff=lfs merge=lfs -text
+fNE3T4oBgHgl3EQffQos/content/2301.04550v1.pdf filter=lfs diff=lfs merge=lfs -text
+BdAzT4oBgHgl3EQfh_0J/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+3NFAT4oBgHgl3EQflB25/content/2301.08615v1.pdf filter=lfs diff=lfs merge=lfs -text
+K9E0T4oBgHgl3EQfSgAu/content/2301.02222v1.pdf filter=lfs diff=lfs merge=lfs -text
+4dE1T4oBgHgl3EQf6QUq/content/2301.03520v1.pdf filter=lfs diff=lfs merge=lfs -text
+GtAzT4oBgHgl3EQfUvxQ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ONE1T4oBgHgl3EQftwWv/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ddE2T4oBgHgl3EQfbAcC/content/2301.03879v1.pdf filter=lfs diff=lfs merge=lfs -text
+ddE2T4oBgHgl3EQfbAcC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+KtE4T4oBgHgl3EQf7w7r/content/2301.05343v1.pdf filter=lfs diff=lfs merge=lfs -text
+RNFRT4oBgHgl3EQfKzd9/content/2301.13500v1.pdf filter=lfs diff=lfs merge=lfs -text
+u9E4T4oBgHgl3EQfWwwm/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+_dAzT4oBgHgl3EQfFvqG/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+btAzT4oBgHgl3EQfLfs5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+WNE5T4oBgHgl3EQfBw7L/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+V9E2T4oBgHgl3EQfuAiM/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+mtAzT4oBgHgl3EQfqP1R/content/2301.01625v1.pdf filter=lfs diff=lfs merge=lfs -text
+tNAzT4oBgHgl3EQfPft0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+_dAzT4oBgHgl3EQfFvqG/content/2301.01016v1.pdf filter=lfs diff=lfs merge=lfs -text
+L9E0T4oBgHgl3EQfjAEs/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+SdAyT4oBgHgl3EQf8PoD/content/2301.00851v1.pdf filter=lfs diff=lfs merge=lfs -text
+lNE0T4oBgHgl3EQf7wKH/content/2301.02780v1.pdf filter=lfs diff=lfs merge=lfs -text
09AzT4oBgHgl3EQfDPrL/content/2301.00974v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:81d01afc6fea4cf2ba1b667e6c839109ee9bca215294aae14a8ff4dc56a743e1
+ size 2258073
09AzT4oBgHgl3EQfDPrL/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:51e25964dfb8953a122c5989e7109455ed14795b4d7007bfcebe83e3589a3030
+ size 9175085
09AzT4oBgHgl3EQfDPrL/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:49964442a7d70d5e496ea4162197df4e7518a7d05209a38ec81ff4056a4463ff
+ size 265408
1NFPT4oBgHgl3EQfUTQQ/content/tmp_files/2301.13056v1.pdf.txt ADDED
@@ -0,0 +1,1506 @@
+ arXiv:2301.13056v1 [math.AG] 30 Jan 2023
+ EQUIVARIANT ORIENTED HOMOLOGY OF THE AFFINE GRASSMANNIAN
+ CHANGLONG ZHONG
+ Abstract. We generalize the property of small-torus equivariant K-homology of the affine Grass-
+ mannian to general oriented (co)homology theories in the sense of Levine and Morel. The main tool
+ we use is the formal affine Demazure algebra associated to the affine root system. More precisely, we
+ prove that the small-torus equivariant oriented cohomology of the affine Grassmannian satisfies the
+ GKM condition. We also show that its dual, the small-torus equivariant homology, is isomorphic to
+ the centralizer of the equivariant oriented cohomology of a point in the formal affine Demazure
+ algebra.
+ 0. Introduction
+ Let $h$ be an oriented cohomology theory in the sense of Levine and Morel. Let $G$ be a semi-simple
+ linear algebraic group over $\mathbb{C}$ with maximal torus $T$ and a Borel subgroup $B$. Let $\mathrm{Gr}_G$ be the affine
+ Grassmannian of $G$. Here $T$ is called the small torus, in contrast to the big torus $T_a$ of $\mathrm{Gr}_G$. The theory
+ $h_{T_a}(\mathrm{Gr}_G)$, when $h$ is equivariant cohomology or K-theory, was studied by Kostant and Kumar
+ in [KK86, KK90]. It is dual to the so-called affine nil-Hecke algebra (in the equivariant cohomology case)
+ or the affine 0-Hecke algebra (in the K-theory case). Alternatively, the affine nil-Hecke algebra and the
+ affine 0-Hecke algebra can be called the equivariant homology and the equivariant K-homology theory.
+ The small-torus equivariant homology $H_T(\mathrm{Gr}_G)$ of the affine Grassmannian was first
+ studied by Peterson [P97]. Moreover, he raised a conjecture (without a proof) saying that $H_T(\mathrm{Gr}_G)$
+ is isomorphic to the quantum cohomology $QH_T(G/B)$ of $G/B$. This conjecture, together with its
+ partial flag variety version, was proved by Lam-Shimozono in [LS10]. One key step is the identification
+ of $H_T(\mathrm{Gr}_G)$ with the centralizer of $H_T(\mathrm{pt})$ in $H_T(G_a/B_a)$, where $G_a$ is the Kac-Moody group
+ associated to the affine root system and $B_a$ is its Borel subgroup.
+ For K-theory, a similar property was expected to hold. In [LSS10], the authors study the K-
+ theoretic Peterson subalgebra, i.e., the centralizer of the ring $K_T(\mathrm{pt})$ in the small-torus affine
+ 0-Hecke algebra, i.e., the equivariant K-homology $K_T(G_a/B_a)$. It is proved that this algebra is iso-
+ morphic to $K_T(\mathrm{Gr}_G)$. One of the main tools is the small-torus GKM condition of the $T$-equivariant
+ K-cohomology. In [LLMS18], some evidence was provided supporting the K-theoretic Peterson
+ conjecture. In [K18], using the study of semi-infinite flag varieties, Kato proved this conjecture.
+ More precisely, he embeds the quantum K-theory of the flag variety and a certain localization of the Peter-
+ son subalgebra into the $T$-equivariant K-theory of the semi-infinite flag variety, and proves that their
+ images coincide.
+ In all the work mentioned above, the Peterson subalgebra plays a key role. In this paper, we
+ generalize the construction of the Peterson subalgebra to a general oriented cohomology theory
+ $h$. Associated to such a theory, there is a formal group law $F$ over the coefficient ring $R = h(\mathrm{pt})$.
+ Associated to $F$ and a Kac-Moody root system, in [CZZ16, CZZ19, CZZ15, CZZ20], the author
+ generalized Kostant-Kumar's construction and defined the formal affine Demazure algebra (FADA).
+ It is a non-commutative algebra generated by the divided difference operators. Its dual gives an
+ algebraic model for $h_{T_a}(G_a/B_a)$. Since Levine-Morel's oriented cohomology theory is only defined
+ for smooth projective varieties, in this paper we do not intend to generalize the geometric theory.
+ Instead, we only work with the algebraic model, i.e., the FADA associated to $h$.
+ Following the same idea as the work mentioned above in cohomology and K-theory, we look at
+ the small-torus (the torus $T$) version, which is very similar to the big-torus case $T_a$. We define
+ the small-torus FADA, $D_{W_a}$. In this paper, our first main result (Theorem 4.3) shows that the
+ algebraic models for $h_T(G_a/B_a)$ and $h_T(\mathrm{Gr}_G)$, i.e., $D^*_{W_a}$ and $(D^*_{W_a})^W$, satisfy the small-torus GKM
+ condition. Based on that, we prove the second main result (Theorem 5.5), which shows that the
+ dual of $h_T(\mathrm{Gr}_G)$, denoted by $D_{Q^\vee}$ ($Q^\vee$ being the coroot lattice), coincides with the centralizer of
+ $h_T(\mathrm{pt})$ in the FADA $D_{W_a}$. This defines the Peterson subalgebra associated to $h$.
+ Our result generalizes and extends properties of equivariant cohomology and K-theory. More-
+ over, our method is uniform and does not rely on the specific oriented cohomology theory. As
+ an application of this construction, we define actions of the FADA (of the big and the small torus) on
+ the algebraic models for $h_{T_a}(\mathrm{Gr}_G)$ and $h_T(\mathrm{Gr}_G)$. This is called the left Hecke action. For the finite
+ flag variety case it was studied in [MNS22] using geometric arguments (see also [B97, K03, T09, LZZ20]).
+ For connective K-theory (which specializes to cohomology and K-theory), we compute the recursive
+ formulas for certain bases of $h_T(\mathrm{Gr}_G)$ (Theorem 2.3).
+ It is natural to consider generalizing Kato's construction to this case, that is, inverting the Schubert
+ classes in $D_{Q^\vee}$ corresponding to $t_\lambda \in Q^\vee_<$. This localization for K-theory was proved to be isomor-
+ phic to $QK_T(G/B)$. For $h$ beyond singular cohomology and K-theory, however, the first obstruction
+ is that there is no `quantum' oriented cohomology theory defined. The other obstruction is that
+ the divided difference operators do not satisfy the braid relations, which was a key step in Kato's
+ construction (see [K18, Theorem 1.7]). The author plans to investigate this in a future paper.
+ This paper is organized as follows: In §1 we recall the construction of the FADA for the big torus
+ $T_a$, and in §2 we compute the recursive formulas via the left Hecke action. In §3 we repeat the
+ construction for the small torus and indicate the differences from the big-torus case. In §4 we prove
+ that the dual of the small-torus FADA satisfies the small-torus GKM condition, and in §5 we define
+ the Peterson subalgebra and show that it coincides with the centralizer of $h_T(\mathrm{pt})$. In the appendix
+ we provide some computational results in the $\hat A_1$ case.
+ Notations. Let $G \supset B \supset T$ be such that $G$ is a simple, simply connected algebraic group over $\mathbb{C}$
+ with a Borel subgroup $B$ and a maximal torus $T$. Let $G_a \supset B_a \supset T_a$, where $G_a$ is the affine
+ Kac-Moody group with Borel subgroup $B_a$ and affine torus $T_a$. Let $P$ be the maximal parabolic group
+ scheme so that $G_a/P = \mathrm{Gr}_G$ is the affine Grassmannian. Let $T^*$ (resp. $T^*_a$) be the group of
+ characters of $T$ (resp. $T_a$); then $T^*_a = T^* \oplus \mathbb{Z}\delta$.
+ Let $W$ be the Weyl group of $G$, $I = \{\alpha_1, ..., \alpha_n\}$ the set of simple roots, $Q = \oplus_i \mathbb{Z}\alpha_i \subset T^*$ the root
+ lattice, $Q^\vee = \oplus_i \mathbb{Z}\alpha^\vee_i$ the coroot lattice, $\theta$ the highest root, $\delta$ the null root, and $\alpha_0 = -\theta + \delta$
+ the extra simple root. Denote $I_a = \{\alpha_0, ..., \alpha_n\}$. For each $\lambda \in Q^\vee$, let $t_\lambda$ be the translation
+ acting on $Q$. We then have $t_{\lambda_1}t_{\lambda_2} = t_{\lambda_1+\lambda_2}$ and $wt_\lambda w^{-1} = t_{w(\lambda)}$, $w \in W$. Let $Q^\vee_\le$ be the set of
+ antidominant coroots and $Q^\vee_<$ the set of strictly antidominant coroots (i.e., $(\lambda, \alpha_i) < 0$ for all $i \in I$).
+ Let $W_a = W \ltimes Q^\vee$ be the affine Weyl group, $\ell$ the length function on $W_a$, and $w_0 \in W$ the
+ longest element.
+ Let $\Phi$ be the set of roots of $W$, $\Phi_a = \mathbb{Z}\delta + \Phi$ the set of real affine roots, and $\Phi^\pm_a$, $\Phi^\pm$
+ the corresponding sets of positive/negative roots of the corresponding systems. Let $\mathrm{inv}(w) = w^{-1}\Phi^+_a \cap \Phi^-_a$.
+ We have
+ $$\Phi^+_a = \{\alpha + k\delta \mid \alpha \in \Phi^+,\ k = 0, \text{ or } \alpha \in \Phi,\ k > 0\}.$$
+ Let $W^-_a$ be the set of minimal length representatives of $W_a/W$. There is a bijection
+ $$W^-_a \to Q^\vee, \qquad w \mapsto \lambda, \quad \text{if } wW = t_\lambda W.$$
+ Moreover, $W^-_a \cap Q^\vee = \{t_\lambda \mid \lambda \in Q^\vee_\le\}$. The action of $\alpha + k\delta$ on $\mu + m\delta \in Q \oplus \mathbb{Z}\delta$ is given by
+ $$s_{\alpha+k\delta}(\mu + m\delta) = \mu + m\delta - \langle\mu, \alpha^\vee\rangle(\alpha + k\delta).$$
+ In particular, for $\lambda \in Q^\vee$, $w \in W$, $\mu \in Q$, we have $s_{\alpha+k\delta} = s_\alpha t_{k\alpha^\vee}$ and $wt_\lambda(\mu) = w(\mu)$.
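As a quick illustration of the formulas above (an added example, not part of the original text), take the $\hat A_1$ root system with simple root $\alpha$ and extra simple root $\alpha_0 = -\alpha + \delta$:

```latex
% Apply s_{\alpha+\delta} to \mu = \alpha, using \langle\alpha,\alpha^\vee\rangle = 2:
s_{\alpha+\delta}(\alpha)
  = \alpha - \langle\alpha,\alpha^\vee\rangle(\alpha+\delta)
  = \alpha - 2(\alpha+\delta)
  = -\alpha - 2\delta.
% Modulo \mathbb{Z}\delta this is s_\alpha(\alpha) = -\alpha, which is consistent with
% s_{\alpha+\delta} = s_\alpha t_{\alpha^\vee} and with w t_\lambda(\mu) = w(\mu) on Q.
```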
+ We say that a set of reduced sequences $I_w$, $w \in W_a$, is $W$-compatible if $I_w = I_u \cup I_v$ for $w = uv$,
+ $u \in W^-_a$, $v \in W$.
+ 1. FADA for the big torus
+ In this section, we recall the construction of the formal affine Demazure algebra (FADA) for the
+ affine root system. All of the constructions can be found in [CZZ20].
+ 1.1.
+ Let $F$ be a one-dimensional formal group law over a domain $R$ of characteristic 0. It follows
+ from [LM07] that there is an oriented cohomology theory $h$ whose associated formal group law is $F$. In
+ this paper we will not need any geometric property of this $h$, since our treatment is purely algebraic
+ and self-contained.
+ Example 1.1. Let $F = F_c = x + y - cxy$ be the connective formal group law (for connective
+ K-theory) over $R = \mathbb{Z}[c]$. Specializing to $c = 0$ or $c = 1$, one obtains the additive or the multiplicative
+ formal group law. One of the simplest formal group laws beyond $F_c$ is the hyperbolic formal group
+ law considered in [LZZ20]:
+ $$F(x, y) = \frac{x + y - cxy}{1 + axy}, \qquad R = \mathbb{Z}[c, a].$$
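For later use (e.g., in §2.1), the formal inverse for $F_c$ can be computed explicitly; the following one-line check is added here for the reader's convenience:

```latex
% Solve F_c(x, \iota(x)) = 0 for the formal inverse \iota(x):
x + \iota(x) - c\,x\,\iota(x) = 0
\;\Longrightarrow\;
\iota(x)\,(1 - cx) = -x
\;\Longrightarrow\;
\iota(x) = \frac{-x}{1 - cx} = \frac{x}{cx - 1}.
```

In the notation used below, this is the identity $x_{-\alpha} = \frac{x_\alpha}{cx_\alpha - 1}$.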
+ Let $\hat S$ be the formal group algebra of $T^*_a$ defined in [CPZ13]. That is,
+ $$\hat S = R[[x_\mu \mid \mu \in T^*_a]]/J_F,$$
+ where $J_F$ is the closure of the ideal generated by $x_0$ and $x_{\mu_1+\mu_2} - F(x_{\mu_1}, x_{\mu_2})$, $\mu_1, \mu_2 \in T^*_a$. Indeed,
+ after fixing a basis of $T^*_a \cong \mathbb{Z}^{n+1}$, $\hat S$ is isomorphic to the power series ring $R[[x_1, ..., x_{n+1}]]$.
+ Remark 1.2. If $F = F_c$ is the connective formal group law, one can just replace $\hat S$ by
+ $R[x_\mu \mid \mu \in T^*_a]/J_F$. In other words, in this case one can use the polynomial ring instead of the power series
+ ring. For instance, if $c = 0$, then $\hat S \cong \mathrm{Sym}_R(T^*_a)$, $x_\mu \mapsto \mu$. If $c \in R^\times$, then $\hat S \cong R[T^*_a]$, $x_\mu \mapsto c^{-1}(1 - e^{-\mu})$.
+ Throughout this paper, whenever we specialize to $F_c$, we assume that $\hat S$ is the polynomial version.
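A short added verification that the assignment $x_\mu \mapsto c^{-1}(1 - e^{-\mu})$ of Remark 1.2 respects the defining relation $x_{\mu_1+\mu_2} = F_c(x_{\mu_1}, x_{\mu_2})$:

```latex
x_{\mu_1} + x_{\mu_2} - c\,x_{\mu_1}x_{\mu_2}
  = c^{-1}\Big[(1 - e^{-\mu_1}) + (1 - e^{-\mu_2}) - (1 - e^{-\mu_1})(1 - e^{-\mu_2})\Big]
  = c^{-1}\big(1 - e^{-\mu_1-\mu_2}\big)
  = x_{\mu_1+\mu_2}.
```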
+ 1.2.
+ Define $\hat Q = \hat S[\frac{1}{x_\alpha}, \alpha \in \Phi_a]$. The Weyl group $W_a$ acts on $\hat Q$, so we can define the twisted group
+ algebra $\hat Q_{W_a} := \hat Q \rtimes R[W_a]$, which is a free left $\hat Q$-module with basis denoted by $\eta_w$, $w \in W_a$, and
+ with product $c\eta_w \cdot c'\eta_{w'} = c\,w(c')\,\eta_{ww'}$, $c, c' \in \hat Q$.
+ For each $\alpha \in \Phi_a$, define $\kappa_\alpha = \frac{1}{x_\alpha} + \frac{1}{x_{-\alpha}} \in \hat S$. If $F = F_c$, then $\kappa_\alpha = c$. For each simple root
+ $\alpha_i$, we define the Demazure element $\hat X_{\alpha_i} = \frac{1}{x_{\alpha_i}}(1 - \eta_{s_i})$. It is easy to check that $\hat X_\alpha^2 = \kappa_\alpha \hat X_\alpha$. For
+ simplicity, denote $\eta_i = \eta_{\alpha_i} = \eta_{s_i}$, $x_{\pm i} = x_{\pm\alpha_i}$, $\hat X_i = \hat X_{\alpha_i}$, $i \in I_a$. If $I_w = (i_1, ..., i_k)$, $i_j \in I_a$, is a
+ reduced sequence of $w \in W_a$, we define $\hat X_{I_w}$ correspondingly. It is well known that these depend
+ on the choice of $I_w$, unless $F = F_c$.
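The claimed identity $\hat X_\alpha^2 = \kappa_\alpha \hat X_\alpha$ follows in one line (an added check), using $\eta_\alpha \frac{1}{x_\alpha} = \frac{1}{x_{-\alpha}}\eta_\alpha$ and $\eta_\alpha^2 = 1$:

```latex
\hat X_\alpha^2
 = \frac{1}{x_\alpha}(1 - \eta_\alpha)\,\frac{1}{x_\alpha}(1 - \eta_\alpha)
 = \frac{1}{x_\alpha}\Big(\frac{1}{x_\alpha} - \frac{1}{x_{-\alpha}}\eta_\alpha\Big)(1 - \eta_\alpha)
 = \frac{1}{x_\alpha}\Big(\frac{1}{x_\alpha} + \frac{1}{x_{-\alpha}}\Big)(1 - \eta_\alpha)
 = \kappa_\alpha\,\hat X_\alpha.
```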
+ Write
+ $$\hat X_{I_w} = \sum_{v \le w} \hat a_{I_w,v}\,\eta_v, \qquad \eta_w = \sum_{v \le w} \hat b_{w,I_v}\,\hat X_{I_v}, \qquad \hat a_{I_w,v} \in \hat Q,\ \hat b_{w,I_v} \in \hat S, \tag{1}$$
+ then we have $\hat b_{w,I_w} = \prod_{\alpha \in \mathrm{inv}(w)} x_\alpha = \frac{1}{\hat a_{I_w,w}}$.
+ Let $\hat D_{W_a}$ be the subalgebra of $\hat Q_{W_a}$ generated by $\hat S$ and $\hat X_i$, $i \in I_a$. This is called the formal
+ affine Demazure algebra (FADA) for the big torus. It is easy to see that $\hat X_{I_w}$, $w \in W_a$, is a $\hat Q$-basis
+ of $\hat Q_{W_a}$, and it is proved in [CZZ20] that it is also a basis of the left $\hat S$-module $\hat D_{W_a}$. Note that
+ $W \subset \hat D_{W_a}$ via the map $s_i \mapsto \eta_i = 1 - x_i\hat X_i \in \hat D_{W_a}$.
+ Remark 1.3. It is not difficult to derive a residue description of the coefficients in
+ the expression of elements of $\hat D_{W_a}$ as linear combinations of $\eta_w$. Such a description was first given in
+ [GKV97]. See [ZZ17] for more details.
+ 1.3.
+ We define the duals of the left modules:
+ $$\hat Q^*_{W_a} = \mathrm{Hom}_{\hat Q}(\hat Q_{W_a}, \hat Q) = \mathrm{Hom}(W_a, \hat Q), \qquad \hat D^*_{W_a} = \mathrm{Hom}_{\hat S}(\hat D_{W_a}, \hat S).$$
+ Dual to the elements $\eta_w, \hat X_{I_w} \in \hat D_{W_a} \subset \hat Q_{W_a}$, we have $\hat f_w, \hat X^*_{I_w} \in \hat D^*_{W_a} \subset \hat Q^*_{W_a}$. The product
+ structure on $\hat Q^*_{W_a}$ is defined by $\hat f_w \hat f_v = \delta_{w,v}\hat f_w$, with the unit given by $1 = \sum_{w \in W_a} \hat f_w$. Note that
+ sums here may have (possibly) infinitely many terms, as well as finitely many terms.
+ Lemma 1.4. We have
+ $$\hat D^*_{W_a} = \{\hat f \in \hat Q^*_{W_a} \mid \hat f(\hat D_{W_a}) \subset \hat S\}.$$
+ Proof. Denote the RHS by $Z_1$. It is clear that $\hat D^*_{W_a}$ is contained in $Z_1$, since the $\hat X_{I_v}$ generate $\hat D_{W_a}$,
+ the $\hat X^*_{I_w}$ generate $\hat D^*_{W_a}$, and $\hat X^*_{I_w}(\hat X_{I_v}) = \delta_{w,v}$. Conversely, let $\hat f = \sum_{\ell(w) \ge k} c_w \hat f_w \in Z_1$. If $\ell(u) = k$,
+ then from (1), we have
+ $$\hat f(\hat X_{I_u}) = \sum_{\ell(w) \ge k} c_w \hat f_w\Big(\sum_{v \le u} \hat a_{I_u,v}\,\eta_v\Big) = c_u \hat a_{I_u,u} \in \hat S.$$
+ Denote $\hat f' := \hat f - \sum_{\ell(u)=k} c_u \hat a_{I_u,u}\hat X^*_{I_u}$. Note that $\hat X^*_{I_u} = \sum_{w \in W_a} \hat b_{w,I_u}\hat f_w$ and $\hat b_{u,I_u}\hat a_{I_u,u} = 1$, so for any
+ $u$ with $\ell(u) = k$, we have $\hat f'(\eta_u) = c_u - c_u \hat a_{I_u,u}\hat X^*_{I_u}(\eta_u) = c_u - c_u = 0$, so $\hat f'$ is a linear combination
+ of $\hat f_w$, $\ell(w) \ge k + 1$. Repeating this process, we get that $\hat f \in \hat D^*_{W_a}$. □
+ 1.4.
+ There is a $\hat Q$-linear action of $\hat Q_{W_a}$ on $\hat Q^*_{W_a}$, defined by
+ $$(z \bullet \hat f)(z') = \hat f(z'z), \qquad z, z' \in \hat Q_{W_a},\ \hat f \in \hat Q^*_{W_a}.$$
+ This is called the right Hecke action. We have
+ $$c\eta_w \bullet c'\hat f_{w'} = c'\,w'w^{-1}(c)\,\hat f_{w'w^{-1}}, \qquad c, c' \in \hat Q.$$
+ It follows from Lemma 1.4 and the same reasoning as in [CZZ19, §10] that this induces an action of $\hat D_{W_a}$
+ on $\hat D^*_{W_a}$. Moreover, it induces an action of $W \subset \hat D_{W_a}$ on $\hat Q^*_{W_a}$ and $\hat D^*_{W_a}$. By definition it is easy to
+ get
+ $$\hat X_\alpha \bullet \sum_{w \in W_a} c_w \hat f_w = \sum_{w \in W_a} \frac{c_w - c_{s_{w(\alpha)}w}}{x_{w(\alpha)}}\,\hat f_w. \tag{2}$$
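Formula (2) is immediate from the definition of the $\bullet$-action; the computation is spelled out here as an added step. For $\hat f = \sum_w c_w \hat f_w$,

```latex
(\hat X_\alpha \bullet \hat f)(\eta_w)
 = \hat f\big(\eta_w \hat X_\alpha\big)
 = \hat f\Big(\frac{1}{w(x_\alpha)}\big(\eta_w - \eta_{w s_\alpha}\big)\Big)
 = \frac{c_w - c_{w s_\alpha}}{x_{w(\alpha)}},
\qquad \text{and} \quad w s_\alpha = s_{w(\alpha)}\,w .
```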
+ The following proposition is proved in the finite case in [CZZ19, Lemma 10.2, Theorem 10.7].
+ Proposition 1.5. The subset $\hat D^*_{W_a} \subset \hat Q^*_{W_a}$ satisfies the following (big-torus) GKM condition:
+ $$\hat D^*_{W_a} = \{\hat f \in \hat Q^*_{W_a} \mid \hat f(\eta_w) \in \hat S \text{ and } \hat f(\eta_w - \eta_{s_\alpha w}) \in x_\alpha \hat S,\ \forall \alpha \in \Phi_a\}.$$
+ Proof. Denote the RHS by $Z_2$. Let $\hat f \in \hat D^*_{W_a}$; we know $\hat X_\alpha \bullet \hat f \in \hat D^*_{W_a}$. Then (2) implies that $\hat f$
+ satisfies the condition defining $Z_2$, so $\hat D^*_{W_a} \subset Z_2$.
+ For the other direction, we first show that $\hat D^*_{W_a}$ is a maximal $\hat D_{W_a}$-submodule of $\hat S^*_{W_a} :=
+ \mathrm{Hom}(W_a, \hat S)$. This can be proved as follows: if $M \subset \hat S^*_{W_a}$ is a $\hat D_{W_a}$-module, then for any $\hat f \in M$,
+ we have $\hat X_I \bullet \hat f \in M \subset \hat S^*_{W_a}$, so $(\hat X_I \bullet \hat f)(\eta_e) = \hat f(\hat X_I) \in \hat S$, so $\hat f \in \hat D^*_{W_a}$. One can then show that
+ the subset $Z_2$ is a $\hat D_{W_a}$-module, which follows from the same proof as in the finite case in [CZZ19,
+ Theorem 10.2]. Since $\hat D^*_{W_a}$ is a maximal submodule, we have $Z_2 \subset \hat D^*_{W_a}$. The proof is finished. □
+ 1.5.
+ We can similarly define the non-commutative ring $\hat Q_{Q^\vee} = \hat Q \rtimes R[Q^\vee]$ with a $\hat Q$-basis $\eta_{t_\lambda}$, $\lambda \in Q^\vee$.
+ Then there is a canonical map of left $\hat Q$-modules:
+ $$\mathrm{pr} : \hat Q_{W_a} \to \hat Q_{Q^\vee}, \qquad c\eta_{t_\lambda w} \mapsto c\eta_{t_\lambda}, \qquad w \in W,\ \lambda \in Q^\vee,\ c \in \hat Q.$$
+ Define $\hat D_{W_a/W} = \mathrm{pr}(\hat D_{W_a}) \subset \hat Q_{Q^\vee}$. Indeed, this is the same as the relative Demazure module
+ defined in [CZZ19, §11].
+ We can also consider the $\hat Q$-dual $\hat Q^*_{Q^\vee}$ and the $\hat S$-dual $\hat D^*_{W_a/W}$. The elements dual to $\eta_{t_\lambda} \in \hat Q_{Q^\vee}$
+ are denoted by $\hat f_{t_\lambda}$. The projection $\mathrm{pr}$ then induces embeddings $\mathrm{pr}^* : \hat Q^*_{Q^\vee} \hookrightarrow \hat Q^*_{W_a}$ and
+ $\mathrm{pr}^* : \hat D^*_{W_a/W} \hookrightarrow \hat D^*_{W_a}$. It is easy to see that
+ $$\mathrm{pr}^*(\hat f_{t_\lambda}) = \sum_{v \in W} \hat f_{t_\lambda v}.$$
+ Moreover, similarly to the finite case [CZZ19, Lemma 11.7], we have
+ $$\mathrm{pr}^*(\hat Q^*_{Q^\vee}) = (\hat Q^*_{W_a})^W, \qquad \mathrm{pr}^*(\hat D^*_{W_a/W}) = (\hat D^*_{W_a})^W.$$
+ Indeed, elements of $\mathrm{pr}^*(\hat Q^*_{Q^\vee}) = (\hat Q^*_{W_a})^W$ are precisely the elements $\hat f \in \hat Q^*_{W_a}$ satisfying
+ $\hat f(\eta_{t_\lambda w} - \eta_{t_\lambda}) = 0$ for any $\lambda \in Q^\vee$, $w \in W$. It follows by a similar argument to [CZZ19, Corollary 8.5,
+ Lemma 11.5] that if $I_w$, $w \in W_a$, is $W$-compatible, then $\hat b_{uv,I_w} = \hat b_{u,I_w}$ for any $v \in W$. We then have
+ Lemma 1.6. Assume the sequences $I_w$, $w \in W_a$, are $W$-compatible. Then $\mathrm{pr}(\hat X_{I_w})$, $w \in W^-_a$, is a basis
+ of $\hat D_{W_a/W}$, and $\{\hat X^*_{I_w}, w \in W^-_a\}$ is a $\hat Q$-basis of $(\hat Q^*_{W_a})^W$ and an $\hat S$-basis of $(\hat D^*_{W_a})^W$.
+ Note that $(\hat D^*_{W_a})^W$ is the algebraic model for $h_{T_a}(\mathrm{Gr}_G)$, and the embedding $(\hat D^*_{W_a})^W \subset \hat D^*_{W_a}$ is
+ the algebraic model for the pull-back $h_{T_a}(\mathrm{Gr}_G) \to h_{T_a}(G_a/B_a)$.
+ 1.6.
+ Similarly to the finite case in [LZZ20, §3], there is another action of $\hat Q_{W_a}$ on $\hat Q^*_{W_a}$, given by
+ $$a\eta_v \odot b\hat f_w = a\,v(b)\,\hat f_{vw}, \qquad a, b \in \hat Q,\ w, v \in W_a.$$
+ This is called the left Hecke action. It is easy to see that it commutes with the $\bullet$-action. Note,
+ however, that the $\odot$-action is not $\hat Q$-linear.
+ Lemma 1.7. The $\odot$-action of $\hat Q_{W_a}$ on $\hat Q^*_{W_a}$ induces an action of $\hat D_{W_a}$ on $\hat D^*_{W_a}$.
+ Proof. We have
+ $$\hat X_\alpha \odot \sum_w c_w \hat f_w = \frac{1}{x_\alpha}(1 - \eta_\alpha) \odot \sum_w c_w \hat f_w = \sum_w \frac{c_w - s_\alpha(c_{s_\alpha w})}{x_\alpha}\,\hat f_w.$$
+ Let $d_{w,\alpha} = \frac{c_w - s_\alpha(c_{s_\alpha w})}{x_\alpha}$. We show that the $d_{w,\alpha}$ satisfy the big-torus GKM condition, that is,
+ $d_{w,\alpha} - d_{s_\beta w,\alpha} \in x_\beta \hat S$ for any $\beta$.
+ Denote $c_w - c_{s_\alpha w} = x_\alpha p$, $p \in \hat S$, and $x_{-\alpha} = -x_\alpha + x_\alpha^2 q$, $q \in \hat S$. If $\beta = \alpha$, then we have
+ $$d_{w,\alpha} - d_{s_\beta w,\alpha} = \frac{c_w - s_\alpha(c_{s_\alpha w}) - c_{s_\alpha w} + s_\alpha(c_w)}{x_\alpha} = \frac{x_\alpha p + s_\alpha(c_w) - s_\alpha(c_w - x_\alpha p)}{x_\alpha} = \frac{x_\alpha p + x_{-\alpha}s_\alpha(p)}{x_\alpha} = p - s_\alpha(p) + x_\alpha q\,s_\alpha(p),$$
+ which is clearly a multiple of $x_\alpha$, since $p - s_\alpha(p) \in x_\alpha \hat S$. If $\beta \ne \alpha$, then
+ $$d_{w,\alpha} - d_{s_\beta w,\alpha} = \frac{c_w - s_\alpha(c_{s_\alpha w}) - c_{s_\beta w} + s_\alpha(c_{s_\alpha s_\beta w})}{x_\alpha}.$$
+ Since $x_\alpha, x_\beta$ are coprime [CZZ20, Lemma 2.2], it suffices to prove that the numerator is divisible by
+ $x_\beta$. Note that $c_w - c_{s_\beta w}$ is already divisible by $x_\beta$. Furthermore, $c_{s_\alpha w} - c_{s_\alpha s_\beta w} = c_{s_\alpha w} - c_{s_{s_\alpha(\beta)}s_\alpha w}$, so
+ it is divisible by $x_{s_\alpha(\beta)}$. Therefore, $-s_\alpha(c_{s_\alpha w}) + s_\alpha(c_{s_\alpha s_\beta w})$ is divisible by $s_\alpha(x_{s_\alpha(\beta)}) = x_\beta$. The proof is
+ finished. □
+ Consequently, the $\odot$-action of $\hat D_{W_a}$ on $\hat D^*_{W_a}$ restricts to an action on $(\hat D^*_{W_a})^W$.
+ 1.7.
+ Indeed, there is a characteristic map
+ $$c : \hat S \to \hat D^*_{W_a}, \qquad z \mapsto z \bullet 1,$$
+ whose geometric model is the map sending a character of the torus to the first Chern class of the
+ associated line bundle over the flag variety [CZZ15, §10]. We then have a map
+ $$\varphi : \hat S \otimes_{\hat S^{W_a}} \hat S \to \hat D^*_{W_a}, \qquad a \otimes b \mapsto a\,c(b) = \sum_w a\,w(b)\,\hat f_w.$$
+ This is proved to be an isomorphism in some cases. It is easy to see that for any $z \in \hat D_{W_a}$, there
+ are the following commutative diagrams:
+ $$\begin{CD}
+ \hat S \otimes_{\hat S^{W_a}} \hat S @>{\varphi}>> \hat D^*_{W_a}\\
+ @V{z\cdot\,\otimes\,\mathrm{id}}VV @VV{z\odot}V\\
+ \hat S \otimes_{\hat S^{W_a}} \hat S @>{\varphi}>> \hat D^*_{W_a}
+ \end{CD}
+ \qquad\qquad
+ \begin{CD}
+ \hat S \otimes_{\hat S^{W_a}} \hat S @>{\varphi}>> \hat D^*_{W_a}\\
+ @V{\mathrm{id}\,\otimes\,z\cdot}VV @VV{z\bullet}V\\
+ \hat S \otimes_{\hat S^{W_a}} \hat S @>{\varphi}>> \hat D^*_{W_a}
+ \end{CD}$$
+ 2. Equivariant connective K-theory of the affine Grassmannian
+ As an application of the left Hecke action, we derive the recursive formulas for this action on
+ bases in the connective K-theory of $\mathrm{Gr}_G$. In this section only, assume $F = F_c$. Our results specialize
+ to equivariant K-theory (resp. equivariant cohomology) by letting $c = 1$ (resp. $c = 0$). In both
+ cases, our results were previously known only for flag varieties of finite root systems. Since the $\hat X_i$ do
+ not satisfy the braid relations, the results of this section do not generalize to general $F$.
+ 2.1.
+ Denote $\epsilon_w = (-1)^{\ell(w)}$ and $c_w = c^{\ell(w)}$. We have $x_{-\alpha} = \frac{x_\alpha}{cx_\alpha - 1}$ and $\kappa_\alpha = c$ for any $\alpha$, and $\hat X_{I_w}$
+ can be denoted by $\hat X_w$.
+ Note that there is another operator $\hat Y_i = \hat Y_{\alpha_i} = c - \hat X_{\alpha_i}$, such that $\hat Y_{\alpha_i}^2 = c\hat Y_{\alpha_i}$ and the braid
+ relations are satisfied. This is the algebraic model of the composition $h_{T_a}(G_a/B_a) \to h_{T_a}(G_a/P_i) \to
+ h_{T_a}(G_a/B_a)$, where $P_i$ is the minimal parabolic subgroup corresponding to $\alpha_i \in I_a$. Moreover, we
+ have
+ $$\hat X_w = \sum_{v \le w} \epsilon_v c_w c_v^{-1}\,\hat Y_v.$$
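The relation $\hat Y_{\alpha_i}^2 = c\hat Y_{\alpha_i}$ is a direct consequence of $\hat X_i^2 = \kappa_{\alpha_i}\hat X_i = c\hat X_i$ (a short added check):

```latex
\hat Y_i^2 = (c - \hat X_i)^2
           = c^2 - 2c\hat X_i + \hat X_i^2
           = c^2 - 2c\hat X_i + c\hat X_i
           = c\,(c - \hat X_i)
           = c\,\hat Y_i.
```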
+ Most properties of $\hat X_w$ are also satisfied by $\hat Y_w$, except for Lemma 1.6. Indeed, $\hat Y^*_w$, $w \in W^-_a$, is not
+ $W$-invariant.
+ Denote $x_\Phi = \prod_{\alpha \in \Phi^-} x_\alpha$. It is well known that $\hat Y_{w_0} = \sum_{w \in W} \eta_w \frac{1}{x_\Phi}$. Moreover, the map
+ $\hat Y_{w_0}\bullet : \hat D^*_{W_a} \to (\hat D^*_{W_a})^W$ is the algebraic model for the map $h_{T_a}(G_a/B_a) \to h_{T_a}(\mathrm{Gr}_G)$. We first compute
+ the images of the two bases under this map.
+ Lemma 2.1. Let $F = F_c$. For any $w \in W_a$ and $u = u_1u_2$, $u_1 \in W^-_a$, $u_2 \in W$, we have
+ $$\hat Y_{w_0} \bullet \hat X^*_{u_1u_2} = \epsilon_{u_2} c_{w_0} c_{u_2}^{-1}\,\hat X^*_{u_1}, \qquad \hat Y_{w_0} \bullet \hat Y^*_w = \sum_{v_1v_2 \ge w,\ v_1 \in W^-_a,\ v_2 \in W} \epsilon_w \epsilon_{v_2} c_{v_1w_0} c_w^{-1}\,\hat X^*_{v_1}.$$
+ In particular, $\hat Y_{w_0} \bullet \hat Y^*_w$, $w \in W^-_a$, is a basis of $(\hat D^*_{W_a})^W$ if and only if $c \in R^\times$.
+ Proof. For each $v \in W_a$, write $v = v_1v_2$, $v_1 \in W^-_a$, $v_2 \in W$. From $\hat X_w \hat Y_{w_0} = 0$, $e \ne w \in W$, we have
+ $$(\hat Y_{w_0} \bullet \hat X^*_{u_1u_2})(\hat X_{v_1v_2}) = \hat X^*_{u_1u_2}(\hat X_{v_1v_2}\hat Y_{w_0}) = \delta_{v_2,e}\,\hat X^*_{u_1u_2}\Big(\hat X_{v_1}\sum_{w' \le w_0} \epsilon_{w'} c_{w_0} c_{w'}^{-1}\hat X_{w'}\Big) = \delta_{v_2,e}\,\delta_{v_1,u_1}\,\epsilon_{u_2} c_{w_0} c_{u_2}^{-1}.$$
+ This proves the first identity. For the second one, it is easy to see that $\hat Y^*_w = \sum_{v \ge w} \epsilon_w c_v c_w^{-1}\hat X^*_v$. So
+ $$\hat Y_{w_0} \bullet \hat Y^*_w = \hat Y_{w_0} \bullet \sum_{v \ge w} \epsilon_w c_v c_w^{-1}\hat X^*_v = \sum_{v_1v_2 \ge w,\ v_1 \in W^-_a,\ v_2 \in W} \epsilon_w \epsilon_{v_2} c_{v_1w_0} c_w^{-1}\,\hat X^*_{v_1}.$$
+ This proves the second identity.
+ The transition matrix between $\hat X^*_v$, $v \in W^-_a$, and $\hat Y_{w_0} \bullet \hat Y^*_w$, $w \in W^-_a$, is upper triangular with
+ diagonal entries $\epsilon_w c_{w_0}$, so the last statement follows. □
+ 2.2.
+ Before computing the $\odot$-action, we need to prove some identities in $\hat D_{W_a}$.
+ Lemma 2.2. Let $F = F_c$. Writing $\eta_u = \sum_{v \le u} \hat b_{u,v}\hat X_v = \sum_{v \le u} \hat b^Y_{u,v}\hat Y_v$, then
+ $$\hat b_{s_iu,v} = \begin{cases} s_i(\hat b_{u,v}), & s_iv > v;\\ (1 - cx_i)s_i(\hat b_{u,v}) - x_i s_i(\hat b_{u,s_iv}), & s_iv < v, \end{cases}
+ \qquad
+ \hat b^Y_{s_iu,v} = \begin{cases} (1 - cx_i)s_i(\hat b^Y_{u,v}), & s_iv > v;\\ x_i s_i(\hat b^Y_{u,s_iv}) + s_i(\hat b^Y_{u,v}), & s_iv < v. \end{cases}$$
+ Proof. We prove the first one, and the second one follows similarly. Denote $^iW_a = \{v \in W_a \mid s_iv > v\}$.
+ We have
+ $$\begin{aligned}
+ \eta_{s_iu} &= \eta_i\eta_u = \eta_i\sum_{v \in {}^iW_a} \big(\hat b_{u,v}\hat X_v + \hat b_{u,s_iv}\hat X_{s_iv}\big) = \sum_{v \in {}^iW_a} \big(s_i(\hat b_{u,v})\,\eta_i\hat X_v + s_i(\hat b_{u,s_iv})\,\eta_i\hat X_{s_iv}\big)\\
+ &= \sum_{v \in {}^iW_a} \big(s_i(\hat b_{u,v})(1 - x_i\hat X_i)\hat X_v + s_i(\hat b_{u,s_iv})(1 - x_i\hat X_i)\hat X_{s_iv}\big)\\
+ &= \sum_{v \in {}^iW_a} \big(s_i(\hat b_{u,v})(\hat X_v - x_i\hat X_{s_iv}) + s_i(\hat b_{u,s_iv})\hat X_{s_iv} - cx_i\,s_i(\hat b_{u,s_iv})\hat X_{s_iv}\big)\\
+ &= \sum_{v \in {}^iW_a} \Big(s_i(\hat b_{u,v})\hat X_v + \big(s_i(\hat b_{u,s_iv})(1 - cx_i) - x_i s_i(\hat b_{u,v})\big)\hat X_{s_iv}\Big).
+ \end{aligned}$$
+ The conclusion then follows. □
+ Note that if $v \in W^-_a$ and $s_iv < v$, then $s_iv \in W^-_a$. We have the following recursive formula,
+ whose proof follows from the definition and Lemma 2.2.
+ Theorem 2.3. For $F = F_c$ and $i \in I_a$, we have
+ $$\hat X_{-i} \odot \hat X^*_v = \begin{cases} 0, & s_iv > v,\\ c\hat X^*_v + \hat X^*_{s_iv}, & s_iv < v, \end{cases}
+ \qquad
+ \hat Y_{-i} \odot \hat Y^*_v = \begin{cases} 0, & s_iv > v,\\ c\hat Y^*_v + \hat Y^*_{s_iv}, & s_iv < v. \end{cases}$$
+ Here
+ $$\hat X_{-i} = \eta_{w_0}\hat X_i\eta_{w_0} = \frac{1}{x_{-i}}(1 - \eta_i), \qquad \hat Y_{-i} = \eta_{w_0}\hat Y_i\eta_{w_0} = \frac{1}{x_i} + \frac{1}{x_{-i}}\eta_i.$$
+ Consequently, if $v \in W^-_a$, we have
+ $$\hat Y_{-i} \odot (\hat Y_{w_0} \bullet \hat Y^*_v) = \begin{cases} 0, & s_iv > v,\\ c(\hat Y_{w_0} \bullet \hat Y^*_v) + (\hat Y_{w_0} \bullet \hat Y^*_{s_iv}), & s_iv < v. \end{cases}$$
+ Proof. We have
+ $$\hat X_{-i} \odot \hat X^*_v = \Big(\frac{1}{x_{-i}} - \frac{1}{x_{-i}}\eta_i\Big) \odot \sum_{u \ge v} \hat b_{u,v}\hat f_u = \sum_u \frac{\hat b_{u,v}}{x_{-i}}\hat f_u - \sum_u \frac{s_i(\hat b_{u,v})}{x_{-i}}\hat f_{s_iu} = \sum_u \frac{\hat b_{u,v} - s_i(\hat b_{s_iu,v})}{x_{-i}}\hat f_u.$$
+ Plugging in the formula from Lemma 2.2, we obtain the first formula.
+ The formula for $\hat Y_{-i} \odot \hat Y^*_v$ follows similarly. From the commutativity of the two actions $\bullet$ and $\odot$,
+ one obtains the last statement. □
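The two expressions for $\hat Y_{-i}$ in Theorem 2.3 are consistent with $\hat Y_{-i} = \eta_{w_0}(c - \hat X_i)\eta_{w_0} = c - \hat X_{-i}$; indeed, since $\kappa_{\alpha_i} = \frac{1}{x_i} + \frac{1}{x_{-i}} = c$ for $F = F_c$, one checks (an added verification):

```latex
c - \hat X_{-i}
 = c - \frac{1}{x_{-i}}(1 - \eta_i)
 = \Big(c - \frac{1}{x_{-i}}\Big) + \frac{1}{x_{-i}}\eta_i
 = \frac{1}{x_i} + \frac{1}{x_{-i}}\eta_i
 = \hat Y_{-i}.
```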
+ 3. FADA for the small torus
+ We repeat the construction of the FADA for the small torus, which is very similar to the big-torus case above.
+ 3.1.
+ Let $S$ be the formal group algebra associated to $T^*$; that is, it is (non-canonically) isomorphic to
+ a power series ring in $n$ variables. When the formal group law $F = F_c$, we can again take the polynomial
+ version; see Remark 1.2. Let $Q = S[\frac{1}{x_\alpha}, \alpha \in \Phi]$, $Q_{W_a} = Q \rtimes R[W_a]$, $Q_{Q^\vee} = Q \rtimes R[Q^\vee]$. For
+ any $\alpha \in \Phi$, let $\kappa_\alpha = \frac{1}{x_\alpha} + \frac{1}{x_{-\alpha}}$ and $\kappa_{\alpha_0} = \frac{1}{x_{-\theta}} + \frac{1}{x_\theta}$. We have the projection
+ $$\mathrm{pr} : Q_{W_a} \to Q_{Q^\vee}, \qquad \eta_{t_\lambda w} \mapsto \eta_{t_\lambda}, \qquad w \in W.$$
+ Define
+ $$X_\alpha = \frac{1}{x_\alpha}(1 - \eta_\alpha), \qquad X_{\alpha_0} = \frac{1}{x_{-\theta}}(1 - \eta_{s_0}), \qquad \alpha \in \Phi.$$
+ For simplicity, denote $x_{\pm i} = x_{\pm\alpha_i}$, $X_i = X_{\alpha_i}$, $\eta_i = \eta_{s_i}$, $X_0 = X_{\alpha_0}$. They satisfy relations similar
+ to those of the $\hat X_i$. One can define $X_{I_w}$ for any reduced sequence $I_w$ of $w$, which depends only on $w$ if
+ $F = F_c$.
+ Remark 3.1. Consider K-theory, in which case $F = F_c$ with $c = 1$. Our $-X_{-\alpha_i}$ is the $T_i$ in
+ [LSS10, LLMS18]. Our $1 - X_{\alpha_i}$ coincides with the $D_i$ in [K18]. For cohomology, $c = 0$, $\kappa_\alpha = 0$,
+ and our $X_i$ is the $A_i$ in [P97, Proposition 2.11] and [L06].
+ Lemma 3.2. We have $\mathrm{pr}(zX_i) = 0$ if $z \in Q_{W_a}$, $i \in I$.
+ Proof. Let $z = p\eta_w$, $p \in Q$, $w \in W_a$; then
+ $$\mathrm{pr}(zX_i) = \mathrm{pr}(p\eta_w X_i) = \mathrm{pr}\Big(\frac{p}{w(x_i)}(\eta_w - \eta_{ws_i})\Big) = \frac{p}{w(x_i)}\big(\mathrm{pr}(\eta_w) - \mathrm{pr}(\eta_{ws_i})\big) = 0. \qquad \Box$$
+ Define $D_{W_a}$ to be the subalgebra of $Q_{W_a}$ generated by $S$ and $X_i$, $i \in I_a$, and $D_{W_a/W} = \mathrm{pr}(D_{W_a})$.
+ Then $D_{W_a}$ is a free left $S$-module with basis $X_{I_w}$, $w \in W_a$. Denote $X_{I_w} = \mathrm{pr}(X_{I_w})$, $w \in W^-_a$.
+ Lemma 3.3. If $I_w$, $w \in W_a$, are $W$-compatible, then the set $\{X_{I_w} \mid w \in W^-_a\}$ is a basis of the left
+ $S$-module $D_{W_a/W}$.
+ Proof. This follows easily from Lemma 3.2. See [CZZ19, Lemma 11.3]. □
+ The projection $p : T^*_a \to T^*$, $\mu + k\delta \mapsto \mu$, induces projections $\hat S \to S$, $\hat Q \to Q$ and $\hat Q_{W_a} \to Q_{W_a}$.
+ Clearly $p(\hat X_{\alpha_i}) = X_{\alpha_i}$ and $p(\hat X_{I_w}) = X_{I_w}$, so $p(\hat D_{W_a}) = D_{W_a}$. More explicitly, we have
+ $$\hat X_{I_w} = \sum_{v \le w} \hat a_{I_w,v}\,\eta_v \in \hat Q_{W_a}, \qquad X_{I_w} = \sum_{v \le w} a_{I_w,v}\,\eta_v \in Q_{W_a}, \qquad p(\hat a_{I_w,v}) = a_{I_w,v} \in Q,$$
+ $$\eta_w = \sum_{v \le w} \hat b_{w,I_v}\,\hat X_{I_v} \in \hat Q_{W_a}, \qquad \eta_w = \sum_{v \le w} b_{w,I_v}\,X_{I_v} \in Q_{W_a}, \qquad p(\hat b_{w,I_v}) = b_{w,I_v} \in S.$$
+ Note that the embedding $i : Q \to \hat Q$ induces a section $Q_{W_a} \to \hat Q_{W_a}$ of $p$. However, it does not map
+ $D_{W_a}$ to $\hat D_{W_a}$. For example, $X_0$ is mapped to $\frac{x_{-\theta+\delta}}{x_{-\theta}}\hat X_0$, which does not belong to $\hat D_{W_a}$.
734
+ 3.2. As before, we can take the duals, which give us Q-modules Q^*_{W_a}, Q^*_{Q^∨}, and S-modules
+ D^*_{W_a}, D^*_{W_a/W}. The elements dual to
+ \eta_w,\ X_{I_w} \in D_{W_a} \subset Q_{W_a}, \qquad \eta_{t_\lambda},\ X_{I_w} \in D_{W_a/W} \subset Q_{Q^\vee},
+ are denoted by
+ f_w,\ X^*_{I_w} \in D^*_{W_a} \subset Q^*_{W_a}, \qquad f_{t_\lambda},\ X^*_{I_w} \in D^*_{W_a/W} \subset Q^*_{Q^\vee},
+ correspondingly. Note that the notation f_{t_λ} can be thought of as living in Q^*_{W_a} and in Q^*_{Q^∨}, just as η_{t_λ} can
+ be thought of as living in Q_{W_a} and in Q_{Q^∨}. Similarly to Proposition 1.5, we have
+ (3) \qquad D^*_{W_a} = \{ f \in Q^*_{W_a} \mid f(D_{W_a}) \subset S \}.
+ Moreover, by definition, the dual map pr^* : Q^*_{Q^∨} → Q^*_{W_a} satisfies
+ \mathrm{pr}^*(f_{t_\lambda}) = \sum_{w\in W} f_{t_\lambda w}.
+ Following from the definition, we have
+ \hat X^*_{I_w} = \sum_{v\ge w} \hat b_{v,I_w}\,\hat f_v \in \hat D^*_{W_a}, \qquad X^*_{I_w} = \sum_{v\ge w} b_{v,I_w}\,f_v \in D^*_{W_a}.
+ Since p(\hat b_{v,I_w}) = b_{v,I_w}, the map q : ˆQ^*_{W_a} → Q^*_{W_a}, \sum_w a_w \hat f_w \mapsto \sum_w p(a_w) f_w, induces a map
+ q : ˆD^*_{W_a} → D^*_{W_a} such that q( ˆX^*_{I_w}) = X^*_{I_w}. Moreover, since
+ p^*(X^*_{I_w})(\hat X_{I_v}) = X^*_{I_w}(p(\hat X_{I_v})) = X^*_{I_w}(X_{I_v}) = \delta_{w,v},
+ we have p^*(X^*_{I_w}) = ˆX^*_{I_w}. Note that neither q nor p^* is an isomorphism, since the domains and targets are
+ modules over different rings.
+ Similarly to Lemma 1.6, we have
+ Lemma 3.4. If I_w, w ∈ W_a, are W-compatible, then the set X^*_{I_w}, w ∈ W^-_a, forms a basis of (Q^*_{W_a})^W
+ and of (D^*_{W_a})^W, respectively.
+
+ 10
+ C. ZHONG
+ Lemma 3.5. Assume that {I_w, w ∈ W_a} is W-compatible. For any w ∈ W, u ∈ W^-_a, we have
+ X^*_{I_u} = \sum_{\lambda\in Q^\vee} b_{t_\lambda w, I_u}\, f_{t_\lambda} \in Q^*_{Q^\vee}.
+ Proof. For any λ ∈ Q^∨, write
+ \eta_{t_\lambda w} = \sum_{u\in W^-_a,\ v\in W} b_{t_\lambda w,\, I_u\cup I_v}\, X_{I_u\cup I_v}.
+ By Lemma 3.2, we have
+ \eta_{t_\lambda} = \mathrm{pr}(\eta_{t_\lambda w}) = \sum_{u\in W^-_a,\ v\in W} b_{t_\lambda w,\, I_u\cup I_v}\, \mathrm{pr}(X_{I_u\cup I_v}) = \sum_{u\in W^-_a} b_{t_\lambda w, I_u}\, \mathrm{pr}(X_{I_u}) = \sum_{u\in W^-_a} b_{t_\lambda w, I_u}\, X_{I_u}.
+ Therefore,
+ X^*_{I_u} = \sum_{\lambda\in Q^\vee} b_{t_\lambda w, I_u}\, f_{t_\lambda} \in Q^*_{Q^\vee}. \qquad \square
+ This lemma implies that pr^*(X^*_{I_u}) = X^*_{I_u} for u ∈ W^-_a.
+ 3.3. There is a •-action of Q_{W_a} on Q^*_{W_a}, defined similarly to the big-torus case.
+ Lemma 3.6. The •-action of Q_{W_a} on Q^*_{W_a} restricts to an action of D_{W_a} on D^*_{W_a}.
+ Proof. Since D_{W_a} is an S-module with basis X_{I_u}, u ∈ W_a, for any w, v ∈ W_a and i ∈ I_a we have
+ X_{I_v} X_i = \sum_u c_{I_v\cup s_i,\, I_u}\, X_{I_u} with c_{I_v\cup s_i,\, I_u} \in S. We have
+ (X_i \bullet X^*_{I_w})(X_{I_v}) = X^*_{I_w}(X_{I_v} X_i) = c_{I_v\cup s_i,\, I_w} \in S.
+ By (3), X_i \bullet X^*_{I_w} \in D^*_{W_a}. \qquad \square
+ Lemma 3.7. We have
+ D^*_{W_a} \subset \{ f \in Q^*_{W_a} \mid f(\eta_w) \in S \ \text{and}\ f(\eta_w - \eta_{s_\alpha w}) \in x_\alpha S,\ \forall\,\alpha\in\Phi,\ w\in W_a \}.
+ One of the main results of this paper is to study how different the two sets are, that is, to derive
+ the small-torus GKM condition.
+ Proof. Since η_w ∈ D_{W_a}, it follows from (3) that f(η_w) ∈ S. Let i ∈ I and f = \sum_{w\in W_a} a_w f_w ∈
+ D^*_{W_a} with a_w = f(η_w) ∈ S. We have
+ X_i \bullet f = \frac{1}{x_i}(1-\eta_i)\bullet \sum_w a_w f_w = \sum_w \frac{a_w}{w(x_i)} f_w - \sum_w \frac{a_w}{w s_i(x_i)} f_{w s_i} = \sum_w \frac{a_w - a_{w s_i}}{w(x_i)} f_w = \sum_w \frac{a_w - a_{s_{w(\alpha_i)} w}}{x_{w(\alpha_i)}} f_w.
+ By Lemma 3.6, X_i \bullet f \in D^*_{W_a}, so (a_w - a_{s_\beta w})/x_\beta \in S, that is, f(\eta_w - \eta_{s_\beta w}) \in x_\beta S for any \beta\in\Phi. \qquad \square
+ 3.4. We can similarly define the ⊙-action
+ a\eta_w \odot b f_v = a\,w(b)\,f_{wv}, \qquad w, v \in W_a,\ a, b \in Q.
+ It is easy to see that the ⊙- and •-actions commute with each other.
+ Lemma 3.8. For any ˆz ∈ ˆQ_{W_a}, ˆf ∈ ˆQ^*_{W_a}, we have
+ p(\hat z) \odot q(\hat f) = q(\hat z \odot \hat f).
+ In particular, the ⊙-action of Q_{W_a} on Q^*_{W_a} induces an action of D_{W_a} on D^*_{W_a}.
+ Proof. Write ˆz = ˆa η_v, ˆf = ˆb ˆf_w with ˆa, ˆb ∈ ˆQ and w, v ∈ W_a, and suppose p(ˆa) = a, p(ˆb) = b. Then
+ p(\hat z)\odot q(\hat f) = a\eta_v \odot b f_w = a\,v(b)\,f_{vw} = q(\hat a\,v(\hat b)\,\hat f_{vw}) = q(\hat a\eta_v \odot \hat b\hat f_w) = q(\hat z \odot \hat f).
+ For the second part, note that p : ˆD_{W_a} → D_{W_a} and q : ˆD^*_{W_a} → D^*_{W_a} are both surjective. Given
+ z ∈ D_{W_a} and f ∈ D^*_{W_a}, suppose z = p(ˆz) and f = q( ˆf) for some ˆz ∈ ˆD_{W_a} and ˆf ∈ ˆD^*_{W_a}. Then
+ z \odot f = p(\hat z)\odot q(\hat f) = q(\hat z\odot\hat f) \in q(\hat D^*_{W_a}) = D^*_{W_a}. \qquad \square
+ Remark 3.9. If F = F_c, then all results in §2 hold for X^*_w and the corresponding Y^*_w.
+ 4. The small-torus GKM condition
+ In this section, we study the small-torus GKM condition on the equivariant oriented cohomology
+ of the affine flag variety and of the affine Grassmannian.
+ 4.1. For each α ∈ Φ, we define
+ Z_\alpha = \frac{1}{x_{-\alpha}}\,(1 - \eta_{t_{\alpha^\vee}}) \in Q_{W_a}.
+ Lemma 4.1. For each α ∈ Φ, we have Z_α ∈ D_{W_a}.
+ Proof. It suffices to show that Z_α is contained in the subalgebra of D_{W_a} generated by S and X_α. So
+ we may assume the root system is the affine root system of SL_2 with simple roots α_1 = α, α_0 = −α + δ.
+ Then t_{α^∨} = s_0 s_1. We have η_{s_1} = 1 − x_α X_1 and η_{s_0} = 1 − x_{−α} X_0, so η_{s_0 s_1} = 1 − x_{−α}X_0 − x_{−α}X_1 + x_{−α}^2 X_0 X_1. Therefore,
+ Z_\alpha = \frac{1}{x_{-\alpha}}\,(1 - \eta_{s_0 s_1}) = X_0 + X_1 - x_{-\alpha} X_0 X_1 \in D_{W_a}. \qquad \square
+ Example 4.2. Suppose the root system is ˆA_1 with two simple roots α_1 = α, α_0 = −α + δ.
+ (1) If F = F_c with c = 0, then we have Z_α = X_0 + X_1 + αX_0X_1.
+ (2) If F = F_c with c = 1, then we have Z_α = X_0 + X_1 + (e^α − 1)X_0X_1.
+ Since D_{W_a} acts on D^*_{W_a}, we know that Z_α acts on D^*_{W_a}. Note that
+ Z_\alpha^k = \frac{1}{x_{-\alpha}^k}\,(1 - \eta_{t_{\alpha^\vee}})^k.
+ 4.2. We are now ready to prove the first main result of this paper.
+ Theorem 4.3.
+ (1) The subset D^*_{W_a} ⊂ Q^*_{W_a} consists of the elements satisfying the following small-torus GKM condition:
+ f\big((1-\eta_{t_{\alpha^\vee}})^d \eta_w\big) \in x_\alpha^d S \quad\text{and}\quad f\big((1-\eta_{t_{\alpha^\vee}})^{d-1}(1-\eta_{s_\alpha})\eta_w\big) \in x_\alpha^d S, \qquad \forall\,\alpha\in\Phi,\ w\in W_a,\ d\ge 1.
+ (2) The subset (D^*_{W_a})^W ⊂ (Q^*_{W_a})^W consists of the elements satisfying the following small-torus
+ Grassmannian condition:
+ f\big((1-\eta_{t_{\alpha^\vee}})^d \eta_w\big) \in x_\alpha^d S, \qquad \forall\,\alpha\in\Phi,\ w\in W_a,\ d\ge 1.
+ Our proof follows that of [LSS10, Theorem 4.3]. The key improvement is that we
+ do not need to prove Propositions 4.4 and 4.5 of loc. cit., since we can use the operators Z_α. However,
+ for the convenience of the reader, we include an appendix, which gives all the coefficients b_{w,I_v} in
+ the ˆA_1 case. They can be used to show that the X^*_{I_w} satisfy the small-torus GKM condition.
+ Proof. (1). We prove that elements of D^*_{W_a} satisfy the small-torus GKM condition.
+ Let f = \sum_w c_w f_w \in D^*_{W_a}. We have
+ Z_\alpha \bullet \sum_w c_w f_w = \sum_w\Big(\frac{c_w}{w(x_{-\alpha})}\, f_w - \frac{c_w}{w t_{-\alpha^\vee}(x_{-\alpha})}\, f_{w t_{-\alpha^\vee}}\Big) = \sum_w \frac{c_w - c_{t_{w(\alpha^\vee)} w}}{x_{-w(\alpha)}}\, f_w \in D^*_{W_a}.
+ Note that x_{-w(\alpha)}/x_{w(\alpha)} is invertible in S. Therefore, denoting w(α) = β, by (3), we have f((1 − η_{t_{β^∨}})η_w) ∈
+ x_β S for any β ∈ Φ.
+ Moreover, denote d_w = \frac{c_w - c_{w t_{\alpha^\vee}}}{x_{-w(\alpha)}}; then d_{w t_{\alpha^\vee}} = \frac{c_{w t_{\alpha^\vee}} - c_{w t_{\alpha^\vee} t_{\alpha^\vee}}}{x_{-w t_{\alpha^\vee}(\alpha)}} = \frac{c_{w t_{\alpha^\vee}} - c_{w t_{2\alpha^\vee}}}{x_{-w(\alpha)}}. Therefore, we have
+ Z_\alpha^2 \bullet f = Z_\alpha \bullet Z_\alpha \bullet \sum_w c_w f_w = \sum_w \frac{d_w - d_{w t_{\alpha^\vee}}}{w(x_{-\alpha})}\, f_w = \sum_w \frac{c_w - 2 c_{w t_{\alpha^\vee}} + c_{w t_{2\alpha^\vee}}}{w(x_{-\alpha})^2}\, f_w
+ = \sum_w \frac{c_w - 2 c_{t_{w(\alpha^\vee)} w} + c_{t_{2w(\alpha^\vee)} w}}{x_{-w(\alpha)}^2}\, f_w = \sum_w \frac{1}{x_{-w(\alpha)}^2}\, f\big((1-\eta_{t_{w(\alpha^\vee)}})^2 \eta_w\big)\, f_w.
+ Denoting w(α) = β, we see that f((1 − η_{t_{β^∨}})^2 η_w) ∈ x_β^2 S. Inductively, we see that f((1 − η_{t_{α^∨}})^d η_w) ∈
+ x_α^d S for all d ≥ 1.
+ Similarly, applying Z_α^{d−1} X_α ∈ D_{W_a} to f, which gives Z_α^{d−1} X_α • f ∈ D^*_{W_a}, one sees that
+ f satisfies the second condition.
+ The rest of the proof, and that of (2), is identical to that of [LSS10, Theorem 4.3], so it is skipped. \qquad \square
+ Corollary 4.4. The subset D^*_{W_a/W} ⊂ Q^*_{Q^∨} consists of the elements satisfying the following small-torus
+ Grassmannian condition:
+ f\big((1-\eta_{t_{\alpha^\vee}})^d \eta_{t_\lambda}\big) \in x_\alpha^d S, \qquad \forall\,\alpha\in\Phi,\ d\ge 1,\ \lambda\in Q^\vee.
+ Proof. This follows from the identity \mathrm{pr}^*(f_{t_\lambda}) = \sum_{v\in W} f_{t_\lambda v}. \qquad \square
+ 5. The Peterson subalgebra
+ In this section, we embed D_{W_a/W} into D_{W_a} and show that it coincides with the centralizer of S
+ in D_{W_a}. This is called the Peterson subalgebra, and it gives the algebraic model for the equivariant
+ oriented ‘homology’ of the affine Grassmannian.
+ 5.1. We have a canonical ring embedding (and also a Q-module embedding)
+ k : Q_{Q^\vee} \to Q_{W_a}, \qquad p\,\eta_{t_\lambda} \mapsto p\,\eta_{t_\lambda},
+ such that pr ∘ k = id_{Q_{Q^∨}}. It is easy to see that the dual map k^* : Q^*_{W_a} → Q^*_{Q^∨} satisfies
+ (4) \qquad k^*(f_{t_\lambda u}) = \delta_{u,e}\, f_{t_\lambda}, \qquad u \in W.
+ For K-theory, our map k is the map k : K_T(Gr_G) → K of [LSS10, §5.2], and k^* is the wrong-way
+ map ϖ of [LSS10, §4.4].
+ The following lemma generalizes [P97] and [L06, Theorem 4.4] in the cohomology case, and [LSS10,
+ Lemma 4.6] in the K-theory case.
+ Lemma 5.1. The map k^* induces a map k^* : D^*_{W_a} → D^*_{W_a/W}. Consequently, the map k induces a
+ map k : D_{W_a/W} → D_{W_a}.
+ Proof. Given f ∈ D^*_{W_a}, f satisfies the small-torus GKM condition of Theorem 4.3, that is,
+ f\big((1-\eta_{t_{\alpha^\vee}})^d \eta_{t_\lambda u}\big) \in x_\alpha^d S, \qquad \forall\,u\in W,\ \lambda\in Q^\vee,\ \alpha\in\Phi,\ d\ge 1.
+ Therefore,
+ k^*(f)\big((1-\eta_{t_{\alpha^\vee}})^d \eta_{t_\lambda}\big) = f\big(k((1-\eta_{t_{\alpha^\vee}})^d \eta_{t_\lambda})\big) = f\big((1-\eta_{t_{\alpha^\vee}})^d \eta_{t_\lambda}\big) \in x_\alpha^d S.
+ Therefore, by Corollary 4.4, k^*(f) ∈ D^*_{W_a/W}. \qquad \square
+ Remark 5.2. It would be interesting to find a direct proof of the fact that k maps D_{W_a/W} into
+ D_{W_a}. One possible approach is to find a small-torus residue condition for D_{W_a} similar to the residue
+ condition of [GKV97] (see [ZZ17]).
+ Example 5.3. Note that this result does not hold in the big-torus case; that is, k( ˆD_{W_a/W}) is not
+ contained in ˆD_{W_a}. For example, in the ˆA_1 case, we have
+ \mathrm{pr}(X_0) = \mathrm{pr}\Big(\frac{1}{x_{-\alpha+\delta}}(1-\eta_{t_{\alpha^\vee} s_1})\Big) = \frac{1}{x_{-\alpha+\delta}}(1-\eta_{t_{\alpha^\vee}}) \in \hat D_{W_a/W},
+ and
+ k(\mathrm{pr}(X_0)) = \frac{1}{x_{-\alpha+\delta}}(1-\eta_{t_{\alpha^\vee}}) = \frac{1}{x_{-\alpha+\delta}}(1-\eta_{s_0 s_1}) \notin \hat D_{W_a}.
+ Lemma 5.4. If I_w, w ∈ W_a, is W-compatible, then k^*(X^*_{I_u}) = X^*_{I_u} for any u ∈ W^-_a.
+ Proof. By (4) and Lemma 3.5, we have
+ k^*(X^*_{I_u}) = k^*\Big(\sum_{\lambda\in Q^\vee,\, w\in W} b_{t_\lambda w, I_u}\, f_{t_\lambda w}\Big) = \sum_{\lambda\in Q^\vee} b_{t_\lambda, I_u}\, f_{t_\lambda} = X^*_{I_u}. \qquad \square
+ 5.2. Let C_{D_{W_a}}(S) be the centralizer of S in D_{W_a}. Our second main result is the following, which
+ generalizes [LSS10, Lemma 5.2] in the K-theory case and [P97, §9.3] in the cohomology case (proved
+ in [LS10, Theorem 6.2]).
+ Theorem 5.5. We have C_{D_{W_a}}(S) = k(Q_{Q^∨}) ∩ D_{W_a} = k(D_{W_a/W}).
+ Proof. We first prove the first identity. Since t_λ(p) = p for any p ∈ S, it is clear that k(Q_{Q^∨}) ∩ D_{W_a} ⊂
+ C_{D_{W_a}}(S). Conversely, let z = \sum_{w\in W_a} c_w \eta_w \in C_{D_{W_a}}(S); then for any μ ∈ T^*, we have
+ 0 = x_\mu z - z x_\mu = \sum_{w\in W_a} c_w (x_\mu - x_{w(\mu)})\, \eta_w.
+ Therefore, for any c_w ≠ 0, we have μ = w(μ) for all μ ∈ T^*. We can take μ to be W-regular, which
+ shows that c_w ≠ 0 only when w = t_λ for some λ ∈ Q^∨. So z ∈ k(Q_{Q^∨}). The first identity is proved.
+ We now prove the second identity. It follows from Lemma 5.1 that k(D_{W_a/W}) ⊂ k(Q_{Q^∨}) ∩ D_{W_a}.
+ For the other inclusion, note that the η_{t_λ}, λ ∈ Q^∨, form a Q-basis of k(Q_{Q^∨}). Given any z = \sum_{\lambda\in Q^\vee} p_\lambda \eta_{t_\lambda} ∈
+ k(Q_{Q^∨}) ∩ D_{W_a} with p_λ ∈ Q, we have pr(z) ∈ pr(D_{W_a}) = D_{W_a/W}, and
+ k \circ \mathrm{pr}(z) = k\circ\mathrm{pr}\Big(\sum_\lambda p_\lambda \eta_{t_\lambda}\Big) = k\Big(\sum_\lambda p_\lambda \eta_{t_\lambda}\Big) = \sum_\lambda p_\lambda \eta_{t_\lambda} = z.
+ Therefore, k(Q_{Q^∨}) ∩ D_{W_a} ⊂ k(D_{W_a/W}). The second identity is proved. \qquad \square
+ Definition 5.6. We define the Peterson subalgebra to be D_{Q^∨} = k(D_{W_a/W}).
+ Let I_w, w ∈ W_a, be W-compatible. Since D_{W_a/W} is a free S-module with basis X_{I_w}, w ∈ W^-_a, the
+ k(X_{I_w}) form a basis of D_{Q^∨}. This is the algebraic model for the oriented homology of the affine
+ Grassmannian Gr_G. The following result generalizes [LSS10, Theorem 5.3] in K-theory.
+ Theorem 5.7. The ring D_{Q^∨} is a Hopf algebra, and the embedding D_{Q^∨} → Q_{Q^∨} is a Hopf-algebra
+ homomorphism.
+ Proof. The coproduct structure on Q_{W_a} is defined as △ : Q_{W_a} → Q_{W_a} ⊗_Q Q_{W_a}, η_w ↦ η_w ⊗ η_w. It is
+ easy to see that this induces a coproduct structure on Q_{Q^∨}, and by [CZZ16], it induces a coproduct
+ structure on D_{W_a}. Therefore, it induces a coproduct structure on D_{Q^∨}. The product structure is
+ induced by that of Q_{Q^∨}, and the antipode is s : Q_{Q^∨} → Q_{Q^∨}, η_{t_λ} ↦ η_{t_{−λ}}. It is then routine to check
+ that D_{Q^∨} is a Hopf algebra and that the embedding into Q_{Q^∨} is an embedding of Hopf algebras. \qquad \square
+ Remark 5.8. For K-theory, we know that the affine Hecke algebra is contained in D_{W_a}. It is proved by
+ Berenstein-Kazhdan [BK19] that a certain localization of the Hecke algebra is a Hopf algebra. It
+ is not difficult to see that this is compatible with the Hopf algebra structure of D_{Q^∨}.
+ 5.3. The following theorem generalizes [LSS10, Theorem 5.4] in the K-theory case and [LS10,
+ Theorem 6.2] in the cohomology case.
+ Theorem 5.9. Assume I_w, w ∈ W_a, is W-compatible. If u ∈ W^-_a, then we have
+ k(X_{I_u}) = X_{I_u} + \sum_{v\in W_a\setminus W_a^-} c_{I_u,I_v}\, X_{I_v}, \qquad c_{I_u,I_v}\in S.
+ Proof. If w ∈ W^-_a, by Lemma 5.4 we have
+ X^*_{I_w}(k(X_{I_u})) = k^*(X^*_{I_w})(X_{I_u}) = X^*_{I_w}(X_{I_u}) = \delta_{w,u}.
+ Therefore,
+ k(X_{I_u}) = \sum_{v\in W_a} X^*_{I_v}(k(X_{I_u}))\, X_{I_v} = X_{I_u} + \sum_{v\in W_a\setminus W_a^-} c_{I_u,I_v}\, X_{I_v}. \qquad \square
+ Example 5.10. Consider the ˆA_1 case; there are two simple roots α_1 = α, α_0 = −α + δ. By
+ direct computation, we have
+ (1) k(X_0) = X_0 + X_1 − x_{−α}X_{01}.
+ (2) k(X_{10}) = X_{10} − (x_{−α}/x_α)\, X_{01}.
+ (3) k(X_{010}) = X_{010} + X_{101} − x_{−α}X_{1010}.
+ Corollary 5.11. Assume I_w, w ∈ W_a, is W-compatible. Let u, v ∈ W^-_a. Write
+ X_{I_u} X_{I_v} = \sum_{w\in W_a} d^{I_w}_{I_u,I_v}\, X_{I_w} \in D_{W_a}, \qquad X_{I_u} X_{I_v} = \sum_{w\in W_a^-} d^{I_w}_{I_u,I_v}\, X_{I_w} \in D_{W_a/W},
+ then
+ d^{I_{w_3}}_{I_u,I_v} = \sum_{w_2\in W_a} c_{I_u,I_{w_2}}\, d^{I_{w_3}}_{I_{w_2},I_v}.
+ Proof. We have
+ k\Big(\sum_{w\in W_a^-} d^{I_w}_{I_u,I_v}\, X_{I_w}\Big) = k(X_{I_u} X_{I_v}) = k(X_{I_u})\,k(X_{I_v}) = k(X_{I_u}) \sum_{w_1\in W_a} c_{I_v,I_{w_1}}\, X_{I_{w_1}}
+ = \sum_{w_1\in W_a} c_{I_v,I_{w_1}}\, k(X_{I_u})\, X_{I_{w_1}} = \sum_{w_1,w_2\in W_a} c_{I_v,I_{w_1}} c_{I_u,I_{w_2}}\, X_{I_{w_2}} X_{I_{w_1}}.
+ Let w_3 ∈ W^-_a. By [CZZ19, Theorem 8.2], we know that X^*_{I_{w_3}}(X_{I_{w_2}} X_{I_{w_1}}) = 0 unless w_1 ∈ W^-_a, in
+ which case c_{I_v,I_{w_1}} = δ^{Kr}_{v,w_1} by Theorem 5.9. Therefore, applying X^*_{I_{w_3}}, w_3 ∈ W^-_a, and using Lemma
+ 5.4, we get
+ d^{I_{w_3}}_{I_u,I_v} = X^*_{I_{w_3}}\Big(\sum_{w\in W_a^-} d^{I_w}_{I_u,I_v}\, X_{I_w}\Big) = k^*(X^*_{I_{w_3}})\Big(\sum_{w\in W_a^-} d^{I_w}_{I_u,I_v}\, X_{I_w}\Big)
+ = X^*_{I_{w_3}}\Big(k\Big(\sum_{w\in W_a^-} d^{I_w}_{I_u,I_v}\, X_{I_w}\Big)\Big) = X^*_{I_{w_3}}\Big(\sum_{w_1,w_2\in W_a} c_{I_v,I_{w_1}} c_{I_u,I_{w_2}}\, X_{I_{w_2}} X_{I_{w_1}}\Big)
+ = \sum_{w_2\in W_a} c_{I_u,I_{w_2}}\, X^*_{I_{w_3}}(X_{I_{w_2}} X_{I_v}) = \sum_{w_2\in W_a} c_{I_u,I_{w_2}}\, d^{I_{w_3}}_{I_{w_2},I_v}. \qquad \square
+ 6. Appendix: Restriction formula in the ˆA_1 case
+ In this appendix, we perform some computations in the ˆA_1 case.
+ 6.1. In this case, there are two simple roots, α_1 = α, α_0 = −α + δ, and any w ∈ W_a has a unique
+ reduced decomposition, so X_{I_w}, Y_{I_w} can be denoted by X_w, Y_w, respectively. Moreover, X_i^2 = κ_α X_i.
+ We use the notation of [LSS10, §4.3]. Let
+ σ_0 = e, \quad σ_{2i} = (s_1 s_0)^i = t_{−iα^∨}, \quad σ_{−2i} = (s_0 s_1)^i = t_{iα^∨}, \quad σ_{2i+1} = s_0 σ_{2i}, \quad σ_{−(2i+1)} = s_1 σ_{−2i}, \quad i ≥ 1,
+ and W^-_a = {σ_i | i ≥ 0}. Denote μ = −x_{−1}/x_1, so if F = F_c with c = 0, then μ = 1, and if F = F_c with
+ c = 1, then μ = e^α if one identifies x_α with 1 − e^{−α}.
+ Let S_{≤a} be the sum h_0 + h_1 + · · · + h_a of complete homogeneous symmetric functions, and denote by S^i_{≤a}(x)
+ the value S_{≤a}(x, x, · · · , x) with i copies of x. For instance, S^3_{≤3}(x) = 1 + 3x + 6x^2 + 10x^3. We
+ have the following identities:
+ S^i_{\le a}(x) = x\,S^i_{\le a-1}(x) + S^{i-1}_{\le a}(x), \qquad S^i_{\le a}(x) = \sum_{j=0}^{a} \binom{j+i-1}{i-1}\, x^j.
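+ As a quick sanity check of the two identities above, note that h_j evaluated at i equal variables x is C(j+i−1, i−1)x^j, the number of degree-j monomials in i variables. The following Python sketch (the helper name `S` is ours, not from the paper) verifies both identities on small cases:

```python
from math import comb

def S(i, a):
    """Coefficient list of S^i_{<=a}(x) = sum_{j=0}^{a} C(j+i-1, i-1) x^j,
    i.e. h_0 + h_1 + ... + h_a evaluated at i equal variables x."""
    if i == 0:                       # zero variables: only h_0 = 1 survives
        return [1] + [0] * a
    return [comb(j + i - 1, i - 1) for j in range(a + 1)]

# Example from the text: S^3_{<=3}(x) = 1 + 3x + 6x^2 + 10x^3
assert S(3, 3) == [1, 3, 6, 10]

# Recurrence S^i_{<=a}(x) = x * S^i_{<=a-1}(x) + S^{i-1}_{<=a}(x)
for i in range(1, 7):
    for a in range(1, 7):
        shifted = [0] + S(i, a - 1)                      # multiply by x
        assert S(i, a) == [u + v for u, v in zip(shifted, S(i - 1, a))]
```

+ (The recurrence is Pascal's rule C(j+i−2, i−1) + C(j+i−2, i−2) = C(j+i−1, i−1) coefficient by coefficient.)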
+ Then the following identities can be verified by direct computation for small k and continued
+ by induction:
+ \eta_{\sigma_{2k}} = 1 + x_1^{2k} X_{\sigma_{2k}} + \sum_{1\le j\le k-1} x_1^{2j}\big(S^{2j}_{\le k-j}(\mu^{-1})X_{\sigma_{2j}} + S^{2j}_{\le k-j-1}(\mu^{-1})X_{\sigma_{-2j}}\big)
+ \qquad - \sum_{1\le i\le k} x_1^{2i-1}\, S^{2i-1}_{\le k-i}(\mu^{-1})\big(X_{\sigma_{2i-1}} + X_{\sigma_{-2i+1}}\big),
+ \eta_{\sigma_{-2k}} = 1 + x_{-1}^{2k} X_{\sigma_{-2k}} + \sum_{1\le j\le k-1} x_{-1}^{2j}\big(S^{2j}_{\le k-j-1}(\mu)X_{\sigma_{2j}} + S^{2j}_{\le k-j}(\mu)X_{\sigma_{-2j}}\big)
+ \qquad - \sum_{1\le i\le k} x_{-1}^{2i-1}\, S^{2i-1}_{\le k-i}(\mu)\big(X_{\sigma_{2i-1}} + X_{\sigma_{-2i+1}}\big),
+ \eta_{\sigma_{-2k-1}} = 1 - x_1^{2k+1} X_{\sigma_{-2k-1}} + \sum_{1\le j\le k} x_1^{2j}\, S^{2j}_{\le k-j}(\mu^{-1})\big(X_{\sigma_{2j}} + X_{\sigma_{-2j}}\big)
+ \qquad - \sum_{1\le i\le k} x_1^{2i-1}\big(S^{2i-1}_{\le k-i}(\mu^{-1})X_{\sigma_{2i-1}} + S^{2i-1}_{\le k-i+1}(\mu^{-1})X_{\sigma_{-2i+1}}\big),
+ \eta_{\sigma_{2k+1}} = 1 - x_{-1}^{2k+1} X_{\sigma_{2k+1}} + \sum_{1\le j\le k} x_{-1}^{2j}\, S^{2j}_{\le k-j}(\mu)\big(X_{\sigma_{2j}} + X_{\sigma_{-2j}}\big)
+ \qquad - \sum_{1\le i\le k} x_{-1}^{2i-1}\big(S^{2i-1}_{\le k-i+1}(\mu)X_{\sigma_{2i-1}} + S^{2i-1}_{\le k-i}(\mu)X_{\sigma_{-2i+1}}\big).
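+ These restriction formulas can be spot-checked by machine in the additive case F = F_c, c = 0 (so μ = 1, x_1 = α, x_{−1} = −α), where Q_{W_a} for affine A_1 over the small torus has a concrete model: w = t_{kα^∨} s_1^e acts on the coefficient ring through its finite part only. The Python sketch below is ours (all names are ad hoc, and the sign conventions it checks are our reading of the extraction-damaged formulas); it encodes elements of Q_{W_a} as dictionaries and verifies the k = 1 cases of the identities above, together with η_{σ_3}:

```python
from fractions import Fraction

# Laurent polynomials in alpha: {degree: Fraction}
def padd(p, q):
    r = dict(p)
    for d, c in q.items():
        r[d] = r.get(d, Fraction(0)) + c
    return {d: c for d, c in r.items() if c}

def pmul(p, q):
    r = {}
    for d1, c1 in p.items():
        for d2, c2 in q.items():
            r[d1 + d2] = r.get(d1 + d2, Fraction(0)) + c1 * c2
    return {d: c for d, c in r.items() if c}

def flip(p):  # alpha -> -alpha (action of s_1; small-torus translations act trivially)
    return {d: (c if d % 2 == 0 else -c) for d, c in p.items()}

ONE = {0: Fraction(1)}

# Elements of Q_{W_a}: {(k, e): poly} with w = t_{k alpha^vee} s_1^e and group law
# (k, e)(k', e') = (k + (-1)^e k', e + e' mod 2).
def emul(x, y):
    r = {}
    for (k, e), p in x.items():
        for (k2, e2), q in y.items():
            w = (k + (1 - 2 * e) * k2, (e + e2) % 2)
            r[w] = padd(r.get(w, {}), pmul(p, q if e == 0 else flip(q)))
    return {w: p for w, p in r.items() if p}

def lin(*terms):  # Q-linear combination of elements: (poly, element) pairs
    r = {}
    for p, x in terms:
        for w, q in x.items():
            r[w] = padd(r.get(w, {}), pmul(p, q))
    return {w: p for w, p in r.items() if p}

E, S1, S0 = {(0, 0): ONE}, {(0, 1): ONE}, {(1, 1): ONE}   # eta_e, eta_{s1}, eta_{s0}
A, NEG_A, INV = {1: Fraction(1)}, {1: Fraction(-1)}, {-1: Fraction(1)}
X1 = lin((INV, E), ({-1: Fraction(-1)}, S1))   # (1/x_alpha)(1 - eta_{s1}),   x_alpha   = alpha
X0 = lin(({-1: Fraction(-1)}, E), (INV, S0))   # (1/x_{-alpha})(1 - eta_{s0}), x_{-alpha} = -alpha

X01, X10 = emul(X0, X1), emul(X1, X0)          # X_{sigma_{-2}}, X_{sigma_2}
X010 = emul(X0, X10)                           # X_{sigma_3}
A2, A3 = pmul(A, A), pmul(pmul(A, A), A)

# k = 1 cases: eta_{sigma_{-2}} = eta_{t_{alpha^vee}}, eta_{sigma_2} = eta_{t_{-alpha^vee}}
assert lin((ONE, E), (A2, X01), (A, X0), (A, X1)) == {(1, 0): ONE}
assert lin((ONE, E), (A2, X10), (NEG_A, X0), (NEG_A, X1)) == {(-1, 0): ONE}
# eta_{sigma_3} = eta_{s0 s1 s0}: note the coefficient S^1_{<=1}(1) = 2 on X_{sigma_1} = X_0
assert lin((ONE, E), (A3, X010), (A2, X10), (A2, X01),
           ({1: Fraction(2)}, X0), (A, X1)) == {(2, 1): ONE}
```

+ The first two assertions recover the SL_2 computation in the proof of Lemma 4.1, and the third matches the k = 1 specialization of the formula for η_{σ_{2k+1}}.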
+ For F = F_c with c = 1, that is, in the K-theory case, these identities specialize to the corresponding
+ ones in [LSS10, (4.5), (4.6)] after identifying our −X_{−α_i} with the T_i of [LSS10] (see Remark 3.1). By
+ using these identities and following the same idea as in [LSS10, §4.3], one can prove that the X^*_{I_w} satisfy
+ the small-torus GKM conditions of Theorem 4.3.
+ Acknowledgements. The author would like to thank Cristian Lenart, Changzheng Li and Gufang Zhao
+ for helpful discussions.
+ References
+ [BK19] A. Berenstein, D. Kazhdan, Hecke-Hopf algebras, Advances in Mathematics, 353 (2019), 312-395.
+ [CPZ13] B. Calmès, V. Petrov, K. Zainoulline, Invariants, torsion indices and oriented cohomology of complete flags, Annales scientifiques de l'École normale supérieure (4) 46 (2013), no. 3, 405-448.
+ [CZZ16] B. Calmès, K. Zainoulline, and C. Zhong, A coproduct structure on the formal affine Demazure algebra, Mathematische Zeitschrift, 282 (2016), no. 3, 1191-1218.
+ [CZZ19] B. Calmès, K. Zainoulline, and C. Zhong, Push-pull operators on the formal affine Demazure algebra and its dual, Manuscripta Mathematica, 160 (2019), no. 1-2, 9-50.
+ [CZZ15] B. Calmès, K. Zainoulline, and C. Zhong, Equivariant oriented cohomology of flag varieties, Documenta Mathematica, Extra Volume: Alexander S. Merkurjev's Sixtieth Birthday (2015), 113-144.
+ [CZZ20] B. Calmès, K. Zainoulline, and C. Zhong, Formal affine Demazure and Hecke algebras associated to Kac-Moody root systems, Algebras and Representation Theory, 23 (2020), no. 3, 1031-1050.
+ [B97] M. Brion, Equivariant Chow groups for torus actions, Transformation Groups, 2 (1997), no. 3, 225-267.
+ [GKV97] V. Ginzburg, M. Kapranov, and E. Vasserot, Residue construction of Hecke algebras, Advances in Mathematics, 128 (1997), no. 1, 1-19.
+ [K18] S. Kato, Loop structure on equivariant K-theory of semi-infinite flag manifolds, arXiv:1805.01718.
+ [KK86] B. Kostant and S. Kumar, The nil Hecke ring and cohomology of G/P for a Kac-Moody group G, Advances in Mathematics, 62 (1986), no. 3, 187-237.
+ [KK90] B. Kostant and S. Kumar, T-equivariant K-theory of generalized flag varieties, Journal of Differential Geometry, 32 (1990), 549-603.
+ [K03] A. Knutson, A Schubert calculus recurrence from the noncomplex W-action on G/B, arXiv:0306304.
+ [L06] T. Lam, Schubert polynomials for the affine Grassmannian, Journal of the American Mathematical Society, 21 (1).
+ [LLMS18] T. Lam, C. Li, L. Mihalcea, M. Shimozono, A conjectural Peterson isomorphism in K-theory, Journal of Algebra, 513 (2018), 326-343.
+ [LSS10] T. Lam, A. Schilling, M. Shimozono, K-theory Schubert calculus of the affine Grassmannian, Compositio Mathematica, 146 (2010), no. 4, 811-852.
+ [LS10] T. Lam, M. Shimozono, Quantum cohomology of G/P and homology of affine Grassmannian, Acta Mathematica, 204 (2010), no. 1, 49-90.
+ [LZZ20] C. Lenart, K. Zainoulline, C. Zhong, Parabolic Kazhdan-Lusztig basis, Schubert classes and equivariant oriented cohomology, Journal of the Institute of Mathematics of Jussieu, 19 (2020), no. 6, 1889-1929.
+ [LM07] M. Levine and F. Morel, Algebraic cobordism, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2007.
+ [MNS22] L. C. Mihalcea, H. Naruse, C. Su, Left Demazure-Lusztig operators on equivariant (quantum) cohomology and K-theory, International Mathematics Research Notices, 2022, no. 16, 12096-12147.
+ [P97] D. Peterson, Quantum cohomology of G/P, Lecture at MIT, 1997.
+ (4) 53 (2020), no. 3, 663–711.
+ [T09] J. Tymoczko, Divided difference operators for partial flag varieties, arXiv:0912.2545.
+ [ZZ17] G. Zhao and C. Zhong, Geometric representations of the formal affine Hecke algebra, Advances in Mathematics, 317 (2017), 50-90.
+ State University of New York at Albany, 1400 Washington Ave, CK399, Albany, NY, 12222
+ Email address: [email protected]
1NFPT4oBgHgl3EQfUTQQ/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
1tFIT4oBgHgl3EQf4CvP/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eba1906329f5014822f7b8c4f574e98549a3da67301346ed8de5847492208f9d
+ size 3604525
39AyT4oBgHgl3EQf1_mj/content/2301.00744v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a9fa71ebab765acbd6c0f7b88f3182064eb09b523011bb1dbd61324943c7327
+ size 317769
39AyT4oBgHgl3EQf1_mj/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac23c5fd1355e98c7eda00c0a10cec00643eccf6dc3c86b0003bf02385b1a96b
+ size 149070
39E2T4oBgHgl3EQfjgft/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0e779b28d51c85e7ddae2816050e3310179c7b9c154e5a7996c00082b4c3ca44
+ size 204023
3NFAT4oBgHgl3EQflB25/content/2301.08615v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b1a165f32c6be8aefc680e448f9585007d0353979bfdb7e4dd2902901de5276e
+ size 226120
3NFAT4oBgHgl3EQflB25/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ba3325a573fdbac36f4e555a4e00a597e7218f3ceb984c03b7743ba4cdccaa90
+ size 91746
4dE1T4oBgHgl3EQf6QUq/content/2301.03520v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7d22bbe448eabb391848cee387a27a3c1e4b09da9679f2f000809270ef00ef7a
+ size 158196
4dE1T4oBgHgl3EQf6QUq/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7eff9b9395d83b26e28f79f3ad1630d9a13ffe33778920579c6f3aa23881c4f6
+ size 70856
59AyT4oBgHgl3EQfcfe1/content/tmp_files/2301.00285v1.pdf.txt ADDED
@@ -0,0 +1,1073 @@
+ On the Smoothness of the Solution to the Two-Dimensional Radiation Transfer Equation
+
+ Dean Wang
+
+ The Ohio State University
+ 201 West 19th Avenue, Columbus, Ohio 43210
+ [email protected]
+
+ ABSTRACT
+
+ In this paper, we deal with the differential properties of the scalar flux φ(x) defined over a
+ two-dimensional bounded convex domain, as a solution to the integral radiation transfer
+ equation. Estimates for the derivatives of φ(x) near the boundary of the domain are given
+ based on Vainikko's regularity theorem. A numerical example is presented to demonstrate the
+ implication of the solution smoothness on the convergence behavior of the diamond difference
+ method.
+
+ KEYWORDS: Integral Equation, Radiation Transfer, Regularity, Numerical Convergence
+
+ 1. INTRODUCTION
+
+ Consider the integral equation of the second kind
+
+ \phi(x) = \int_G K(x,y)\,\phi(y)\,dy + f(x), \qquad x \in G, \qquad (1)
+
+ where G ⊂ ℝ^n, n ≥ 1, is an open bounded domain and the kernel K(x, y) is weakly singular, i.e.,
+ |K(x, y)| ≤ C|x − y|^{−ν}, 0 ≤ ν < n. Weakly singular integral equations arise in many physical applications
+ such as elliptic boundary problems and particle transport.
+
+ The standard integro-differential equation of radiation transfer can be reformulated as a weakly singular
+ integral equation. The one-group radiation transfer problem in a three-dimensional (3D) convex domain
+ reads as follows: find a function φ : Ḡ × Ω → ℝ₊ such that
+
+ \Omega\cdot\nabla\phi(x,\Omega) + \sigma(x)\,\phi(x,\Omega) = \frac{\sigma_s(x)}{4\pi}\int_{\Omega'} s(x,\Omega,\Omega')\,\phi(x,\Omega')\,d\Omega' + f(x,\Omega), \quad x \in G, \qquad (2)
+
+ \phi(x,\Omega) = \phi_{in}(x,\Omega), \quad x \in \partial G,\ \Omega\cdot\hat{n}(x) < 0, \qquad (3)
+
+ where Ω denotes the direction of radiation transfer, ∂G is the boundary of the domain G ⊂ ℝ³, σ is the
+ extinction coefficient (or macroscopic total cross section in neutron transport), σ_s is the scattering
+ coefficient (or macroscopic scattering cross section), s is the phase function of scattering with
+ ∫_{Ω'} s(x, Ω, Ω′) dΩ′ = 4π, f is the external source function, and n̂ is the unit normal vector of the domain
+ surface. Note that σ_s(x) ≤ σ(x) and s(x, Ω, Ω′) = s(x, Ω′, Ω) under physical conditions.
+
+ Assuming isotropic scattering, i.e., s(x, Ω, Ω′) = 1, f(x, Ω) = f(x)/4π, and φ_in(x, Ω) = φ_in(x)/4π, we can
+ obtain the so-called Peierls integral equation of radiation transfer for the scalar flux φ(x) as follows:
+
+ \phi(x) = \frac{1}{4\pi}\int_G \frac{\sigma_s(y)\,e^{-\tau(x,y)}}{|x-y|^2}\,\phi(y)\,dy + \frac{1}{4\pi}\int_G \frac{e^{-\tau(x,y)}}{|x-y|^2}\,f(y)\,dy
+ \qquad\qquad + \frac{1}{4\pi}\int_{\partial G} \frac{e^{-\tau(x,y)}}{|x-y|^2}\,\Big|\frac{x-y}{|x-y|}\cdot\hat{n}(y)\Big|\,\phi_{in}(y)\,dS_y, \qquad (4)
+
+ \tau(x,y) = \int_0^{|x-y|} \sigma(x - \xi\Omega)\,d\xi, \qquad \Omega = \frac{x-y}{|x-y|}, \qquad (5)
+
+ where dS is the differential element of the domain surface and τ(x, y) is the optical path between x and y. One
+ can find a detailed derivation in [6,7].
+
+ For simplicity, we assume σ and σ_s are constant over the domain. Then Eq. (4) can be simplified as
+
+ \phi(x) = \int_G K(x,y)\,\phi(y)\,dy + \frac{1}{\sigma_s}\int_G K(x,y)\,f(y)\,dy + \frac{1}{\sigma_s}\int_{\partial G} K(x,y)\,\Big|\frac{x-y}{|x-y|}\cdot\hat{n}(y)\Big|\,\phi_{in}(y)\,dS_y, \qquad (6)
+
+ where the 3D radiation kernel is given as
+
+ K(x,y) = \frac{\sigma_s\,e^{-\sigma|x-y|}}{4\pi|x-y|^2}. \qquad (7)
+
+ The boundary integral term in the above equation can produce singularities in the solution. We omit its
+ discussion in this paper. In other words, we only consider the problem with the vacuum boundary condition,
+ i.e., φ_in = 0. Thus, Eq. (6) can be treated as a weakly singular integral equation of the second kind.
+
+ Since K(x, y) has a singularity at x = y, the solution of a weakly singular integral equation is generally not a smooth
+ function, and its derivatives at the boundary become unbounded from a certain order on. There has been
+ extensive research on the smoothness (regularity) properties of the solutions to weakly singular integral equations
+ [1,2], especially the early work in neutron transport theory done in the former Soviet Union [3,4]. It is
+ believed that Vladimirov first proved that the scalar flux φ(x) possesses the property |φ(x + h) −
+ φ(x)| ~ h log h for the one-group transport problem with isotropic scattering in a bounded domain [3].
+ Germogenova analyzed the local regularity of the angular flux φ(x, Ω) in a neighborhood of the
+ discontinuity interface and obtained an estimate of the first derivative, which has a singularity near the
+ interface [4]. Pitkäranta derived a local singular resolution showing explicitly the behavior of φ(x) near
+ the smooth portion of the boundary [5]. Vainikko introduced weighted spaces and obtained sharp estimates
+ of pointwise derivatives near the smooth boundary for multidimensional weakly singular integral equations
+ [6].
+
+ There exists some previous research work on the regularity of the integral radiation transfer solutions [7,8].
+ However, the 2D kernel used in those studies is physically incorrect. In this paper, we rederive the 2D
+ kernel by directly integrating the 3D kernel with respect to the third dimension. We examine the differential
+ properties of the new 2D kernel and provide estimates of the pointwise derivatives of the scalar flux according
+ to Vainikko's regularity theorem for weakly singular integral equations of the second kind.
+
+ The remainder of the paper is organized as follows. In Sect. 2, we derive the 2D kernel for the integral
+ radiation transfer equation. In Sect. 3, we examine the derivatives of the kernel and show that they satisfy the
+ boundedness condition of Vainikko's regularity theorem; the estimates of the local regularity
+ of the scalar flux near the boundary of the domain are then given. Sect. 4 presents numerical results to
+ demonstrate that the rate of convergence of numerical methods can be affected by the smoothness of the
+ exact solution. Concluding remarks are given in Sect. 5.
+
+ 2. TWO-DIMENSIONAL RADIATION TRANSFER EQUATION
+
192
In this section, we derive the 2D integral radiation transfer equation from its 3D form, Eq. (6). In 3D, $dy = dy_1\,dy_2\,dy_3$ and $|x-y| = \sqrt{(x_1-y_1)^2+(x_2-y_2)^2+(x_3-y_3)^2}$. Let $\rho = \sqrt{(x_1-y_1)^2+(x_2-y_2)^2}$; then $|x-y| = \sqrt{\rho^2+(x_3-y_3)^2}$. In a 2D domain $G \subset \mathbb{R}^2$, the solution function $\phi(x)$ depends only on $x_1$ and $x_2$ in Cartesian coordinates. Therefore, we only need to find the 2D radiation kernel, which can be obtained by integrating out $y_3$ as follows:

K(x, y) = \int_{-\infty}^{\infty} \frac{\sigma_s e^{-\sigma|x-y|}}{4\pi|x-y|^2}\, dy_3 = \frac{\sigma_s}{4\pi} \int_{-\infty}^{\infty} \frac{e^{-\sigma\sqrt{\rho^2+(x_3-y_3)^2}}}{\rho^2+(x_3-y_3)^2}\, dy_3 .  (8)

To proceed, we introduce the variables $t = \sigma\sqrt{\rho^2+(x_3-y_3)^2}$ and $z = y_3 - x_3$. Then we substitute $dy_3 = dz = \frac{t}{\sigma\sqrt{t^2-\sigma^2\rho^2}}\,dt$ into the above equation to have

K(x, y) = \frac{\sigma_s}{4\pi} \int_{-\infty}^{\infty} e^{-t}\frac{\sigma^2}{t^2}\, dz = \frac{\sigma_s}{2\pi} \int_{0}^{\infty} e^{-t}\frac{\sigma^2}{t^2}\, dz
        = \frac{\sigma_s}{2\pi} \int_{\sigma\rho}^{\infty} e^{-t}\frac{\sigma^2}{t^2}\,\frac{t}{\sigma\sqrt{t^2-\sigma^2\rho^2}}\, dt
        = \frac{\sigma_s\sigma}{2\pi} \int_{\sigma\rho}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\sigma^2\rho^2}}\, dt
        = \frac{\sigma_s\sigma}{2\pi} \int_{\sigma|x-y|}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\sigma^2|x-y|^2}}\, dt .  (9)

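The chain of substitutions leading to Eq. (9) can be spot-checked numerically. The sketch below (my own illustration, with assumed constant coefficients σ = 1, σ_s = 0.8) evaluates the 2D kernel via the substitution t = σρ cosh θ, which removes the endpoint singularity, and compares it with the 3D kernel integrated over the third coordinate:

```python
import numpy as np
from scipy.integrate import quad

sigma, sigma_s = 1.0, 0.8  # assumed illustrative coefficients, not from the paper

def kernel_2d(rho):
    # Eq. (9): (sigma_s*sigma/2pi) * int_{sigma*rho}^inf e^{-t}/(t*sqrt(t^2 - sigma^2 rho^2)) dt,
    # rewritten with t = sigma*rho*cosh(theta) to avoid the lower-endpoint singularity
    a = sigma * rho
    val, _ = quad(lambda th: np.exp(-a * np.cosh(th)) / np.cosh(th), 0.0, np.inf)
    return sigma_s * sigma / (2.0 * np.pi) * val / a

def kernel_3d_marginal(rho):
    # int_{-inf}^{inf} sigma_s e^{-sigma r} / (4 pi r^2) dy3  with  r = sqrt(rho^2 + z^2)
    def f(z):
        r = np.hypot(rho, z)
        return sigma_s * np.exp(-sigma * r) / (4.0 * np.pi * r**2)
    val, _ = quad(f, -np.inf, np.inf)
    return val

for rho in (0.1, 0.5, 1.0):
    print(rho, kernel_2d(rho), kernel_3d_marginal(rho))  # the two columns agree
```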
Note that the 2D radiation kernel is always positive. Replacing the 3D kernel of Eq. (7) with the one above turns Eq. (6) into the 2D integral radiation transfer equation. Notice that the surface integral in the last term on the right-hand side of Eq. (6) must then be replaced with a line integral along the boundary of the 2D domain.

Now we show that the 2D kernel K(x, y) has a singularity at ρ = 0 (i.e., x = y) as follows:

K(x, y) = \frac{\sigma_s\sigma}{2\pi} \int_{\sigma\rho}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\sigma^2\rho^2}}\, dt > \frac{\sigma_s\sigma}{2\pi} \int_{\sigma\rho}^{\infty} \frac{e^{-t}}{t^2}\, dt
        = \frac{\sigma_s\sigma}{2\pi} \left[ \frac{e^{-\sigma\rho}}{\sigma\rho} - \Gamma(0, \sigma\rho) \right] ,  (10)

where $\Gamma(0, a) = \int_{a}^{\infty} \frac{e^{-t}}{t}\, dt$ is the incomplete gamma function. The singular behavior of K(x, y) near ρ = 0 is dominated by the first term $e^{-\sigma\rho}/(\sigma\rho)$ in the brackets, since the gamma function tends to infinity much more slowly (logarithmically) as ρ → 0.
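The inequality in Eq. (10) and the dominance of the $e^{-\sigma\rho}/(\sigma\rho)$ term can be verified numerically; a sketch with assumed coefficients follows, using `scipy.special.exp1`, which equals $\Gamma(0, a)$ here:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1  # exp1(a) = Gamma(0, a) = int_a^inf e^{-t}/t dt

sigma, sigma_s = 1.0, 0.8  # assumed illustrative coefficients

def kernel_2d(rho):
    # exact kernel of Eq. (9) via the substitution t = sigma*rho*cosh(theta)
    a = sigma * rho
    val, _ = quad(lambda th: np.exp(-a * np.cosh(th)) / np.cosh(th), 0.0, np.inf)
    return sigma_s * sigma / (2.0 * np.pi) * val / a

for rho in (1e-3, 1e-2, 1e-1):
    a = sigma * rho
    lower = sigma_s * sigma / (2.0 * np.pi) * (np.exp(-a) / a - exp1(a))  # RHS of Eq. (10)
    print(rho, kernel_2d(rho), lower, exp1(a) / (np.exp(-a) / a))  # last ratio shrinks as rho -> 0
```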

Dean Wang

Remark 2.1. It should be noted that the 2D kernel defined by Eq. (9) is equivalent to the more conventional one defined in terms of the Bickley-Naylor functions [9]. Johnson and Pitkaranta derived a 2D kernel for neutron transport by reformulating the standard integro-differential equation on the 2D plane [7]. The kernel obtained is $K(x, y) = \frac{e^{-|x-y|}}{|x-y|}$ (assuming σ = 1), which is mathematically well defined but physically incorrect. Hennebach et al. also used the same 2D kernel for analyzing radiation transfer solutions [8]. In addition, the integral equations in other geometries, such as slab or sphere, can be obtained by following the same approach; they can be found in [10].

Applying Banach's fixed-point theorem, we can prove the existence and uniqueness of the solution in the 2D domain by showing that $\int_G K(x, y)\, dy$ remains below unity, as follows.

\int_G K(x, y)\, dy = \int_G \left[ \frac{\sigma_s\sigma}{2\pi} \int_{\sigma\rho}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\sigma^2\rho^2}}\, dt \right] dy
        = \frac{\sigma_s\sigma}{2\pi} \int_G dy \int_{\sigma\rho}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\sigma^2\rho^2}}\, dt
        = \frac{\sigma_s\sigma}{2\pi} \int_G \rho\, d\varphi\, d\rho \int_{\sigma\rho}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\sigma^2\rho^2}}\, dt ,  (11)

where 𝜑 is the azimuthal angle. By extending the above bounded domain to the whole space, we have

\int_G K(x, y)\, dy < \frac{\sigma_s\sigma}{2\pi} \int_{0}^{\infty} 2\pi\rho\, d\rho \int_{\sigma\rho}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\sigma^2\rho^2}}\, dt
        = \sigma_s\sigma \int_{0}^{\infty} \rho\, d\rho \int_{\sigma\rho}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\sigma^2\rho^2}}\, dt .  (12)

Denoting ζ = σρ, Eq. (12) is simplified as

\int_G K(x, y)\, dy < \frac{\sigma_s}{\sigma} \int_{0}^{\infty} \zeta\, d\zeta \int_{\zeta}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\, dt
        = \frac{\sigma_s}{\sigma} \int_{0}^{\infty} \int_{\zeta}^{\infty} \frac{e^{-t}}{t}\,\frac{\zeta}{\sqrt{t^2-\zeta^2}}\, dt\, d\zeta = \frac{\sigma_s}{\sigma} \int_{0}^{\infty} \frac{e^{-t}}{t}\, dt \int_{0}^{t} \frac{\zeta}{\sqrt{t^2-\zeta^2}}\, d\zeta
        = \frac{\sigma_s}{\sigma} \int_{0}^{\infty} e^{-t}\, dt
        = \frac{\sigma_s}{\sigma} \le 1 .  (13)

Notice that we have changed the order of integration to evaluate the integral, using $\int_0^t \frac{\zeta}{\sqrt{t^2-\zeta^2}}\, d\zeta = t$. It is apparent that for a unique solution to exist, the physical condition $\sigma_s \le \sigma$ must be satisfied.
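The whole-plane integral in Eq. (13) can be checked numerically. Using polar coordinates and the substitution t = σρ cosh θ, the integral reduces to a smooth double integral equal to σ_s/σ; the sketch below uses assumed coefficients:

```python
import numpy as np
from scipy.integrate import dblquad

sigma, sigma_s = 2.0, 1.5  # assumed illustrative coefficients

# int_{R^2} K(x, y) dy = sigma_s * int_0^inf int_0^inf e^{-sigma*rho*cosh(th)}/cosh(th) dth drho,
# obtained from 2*pi*int_0^inf K(rho)*rho d rho with t = sigma*rho*cosh(theta)
total, _ = dblquad(lambda th, rho: sigma_s * np.exp(-sigma * rho * np.cosh(th)) / np.cosh(th),
                   0.0, np.inf, 0.0, np.inf)
print(total, sigma_s / sigma)  # both approximately 0.75
```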

3. SMOOTHNESS OF THE SOLUTIONS

We first introduce Vainikko's regularity theorem [6], which provides a sharp characterization of singularities for the general weakly singular integral equation of the second kind. Then we analyze the differential properties of the 2D radiation kernel and show that its derivatives are properly bounded. Finally, Vainikko's theorem is used to give the estimates of pointwise derivatives of the radiation solution.

3.1. Vainikko's Regularity Theorem

Before we state the theorem, we introduce the definition of the weighted spaces $\mathbb{C}^{m,\nu}(G)$ [6].

Weighted space $\mathbb{C}^{m,\nu}(G)$. For a λ ∈ ℝ, introduce a weight function

w_\lambda(x) = \begin{cases} 1 , & \lambda < 0 \\ (1 + |\log \varrho(x)|)^{-1} , & \lambda = 0 \\ \varrho(x)^{\lambda} , & \lambda > 0 \end{cases} , \qquad x \in G ,  (14)

where $G \subset \mathbb{R}^n$ is an open bounded domain and $\varrho(x) = \inf_{y \in \partial G} |x - y|$ is the distance from x to the boundary ∂G. Let m ∈ ℕ, ν ∈ ℝ, and ν < n. Define the space $\mathbb{C}^{m,\nu}(G)$ as the set of all m times continuously differentiable functions $\phi: G \to \mathbb{R}$ such that

\|\phi\|_{m,\nu} = \sum_{|\alpha| \le m} \sup_{x \in G} \left\{ w_{|\alpha|-(n-\nu)}(x) \left| D^{\alpha}\phi(x) \right| \right\} < \infty .  (15)

In other words, an m times continuously differentiable function $\phi$ on G belongs to $\mathbb{C}^{m,\nu}(G)$ if the growth of its derivatives near the boundary can be estimated as follows:

\left| D^{\alpha}\phi(x) \right| \le c \begin{cases} 1 , & |\alpha| < n-\nu \\ 1 + |\log \varrho(x)| , & |\alpha| = n-\nu \\ \varrho(x)^{n-\nu-|\alpha|} , & |\alpha| > n-\nu \end{cases} , \qquad x \in G, \ |\alpha| \le m ,  (16)

where c is a constant. The space $\mathbb{C}^{m,\nu}(G)$, equipped with the norm $\|\cdot\|_{m,\nu}$, is a complete Banach space.

After defining the weighted space, we introduce the smoothness assumption on the kernel in the following form: the kernel K(x, y) is m times continuously differentiable on $(G \times G)\setminus\{x = y\}$, and there exists a real number ν ∈ (−∞, n) such that the estimate

\left| D_x^{\alpha} D_{x+y}^{\beta} K(x, y) \right| \le c \begin{cases} 1 , & \nu + |\alpha| < 0 \\ 1 + \left| \log|x-y| \right| , & \nu + |\alpha| = 0 \\ |x-y|^{-\nu-|\alpha|} , & \nu + |\alpha| > 0 \end{cases} , \qquad x, y \in G ,  (17)

where

D_x^{\alpha} = \left( \frac{\partial}{\partial x_1} \right)^{\alpha_1} \cdots \left( \frac{\partial}{\partial x_n} \right)^{\alpha_n} ,  (18)

D_{x+y}^{\beta} = \left( \frac{\partial}{\partial x_1} + \frac{\partial}{\partial y_1} \right)^{\beta_1} \cdots \left( \frac{\partial}{\partial x_n} + \frac{\partial}{\partial y_n} \right)^{\beta_n} ,  (19)

holds for all multi-indices $\alpha = (\alpha_1, \cdots, \alpha_n) \in \mathbb{Z}_+^n$ and $\beta = (\beta_1, \cdots, \beta_n) \in \mathbb{Z}_+^n$ with |α| + |β| ≤ m. Here the following usual conventions are adopted: $|\alpha| = \alpha_1 + \cdots + \alpha_n$ and $|x| = \sqrt{x_1^2 + \cdots + x_n^2}$.

Now we present Vainikko's theorem characterizing the regularity properties of a solution to the weakly singular integral equation of the second kind [6].

Theorem 3.1. Let $G \subset \mathbb{R}^n$ be an open bounded domain, let $f \in \mathbb{C}^{m,\nu}(G)$, and let the kernel K(x, y) satisfy condition (17). If the integral equation (1) has a solution $\phi \in L^{\infty}(G)$, then $\phi \in \mathbb{C}^{m,\nu}(G)$.

Remark 3.1. The solution does not improve its properties near the boundary ∂G, remaining only in $\mathbb{C}^{m,\nu}(G)$, even if ∂G is of class $\mathbb{C}^{\infty}$ and $f \in \mathbb{C}^{\infty}(G)$. A proof can be found in [6]. More precisely, for any n and ν (ν < n) there are kernels K(x, y) satisfying (17) such that Eq. (1) is uniquely solvable and, for a suitable $f \in \mathbb{C}^{\infty}(G)$, the normal derivatives of order k of the solution behave near ∂G as $\log \varrho(x)$ if k = n − ν, and as $\varrho(x)^{n-\nu-k}$ for k > n − ν.

3.2. Smoothness of the Radiation Transfer Solution

To apply the results of Theorem 3.1 to the 2D integral radiation transfer equation, we need to analyze the kernel K(x, y) and show that it satisfies condition (17), i.e., $\left| D_x^{\alpha} D_{x+y}^{\beta} K(x, y) \right| \le c|x-y|^{-1-|\alpha|}$. We can simply set |β| = 0 without loss of generality for our problem.

|α| = 0:

K(x, y) = \frac{\sigma_s\sigma}{2\pi} \int_{\sigma\rho}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\sigma^2\rho^2}}\, dt < \frac{\sigma_s\sigma}{2\pi} \int_{\sigma\rho}^{\infty} \frac{e^{-\sigma\rho}}{t\sqrt{t^2-\sigma^2\rho^2}}\, dt
        = \frac{\sigma_s\sigma e^{-\sigma\rho}}{2\pi} \int_{\sigma\rho}^{\infty} \frac{dt}{t\sqrt{t^2-\sigma^2\rho^2}} = \frac{\sigma_s\sigma e^{-\sigma\rho}}{2\pi}\,\frac{\pi}{2\sigma\rho} = \frac{\sigma_s e^{-\sigma|x-y|}}{4|x-y|}
        \le c|x-y|^{-1} .  (20)

|α| = 1: Let $\zeta = \sigma\rho = \sigma|x-y| = \sigma\sqrt{(x_1-y_1)^2+(x_2-y_2)^2}$; then $K(x, y) = \frac{\sigma_s\sigma}{2\pi} \int_{\zeta}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\, dt$, and

\left| D_x K(x, y) \right| = \left| \frac{\partial K(x, y)}{\partial \zeta}\,\frac{\partial \zeta}{\partial x} \right| = \left| \frac{\partial K(x, y)}{\partial \zeta} \right| \left| \frac{\partial \zeta}{\partial x} \right| ,  (21)

where

\left| \frac{\partial \zeta}{\partial x_1} \right| = \sigma \left| \frac{x_1-y_1}{\sqrt{(x_1-y_1)^2+(x_2-y_2)^2}} \right| \le \sigma ,  (22)

\left| \frac{\partial \zeta}{\partial x_2} \right| = \sigma \left| \frac{x_2-y_2}{\sqrt{(x_1-y_1)^2+(x_2-y_2)^2}} \right| \le \sigma .  (23)

Apparently, we only need to show that $\left| \frac{\partial}{\partial \zeta} \int_{\zeta}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\, dt \right| \le c\zeta^{-2}$, which is done in the following. First, we simplify the integral $\int_{\zeta}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\, dt$ as

\int_{\zeta}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\, dt = \frac{1}{\zeta^2} \int_{\zeta}^{\infty} \frac{t e^{-t}}{\sqrt{t^2-\zeta^2}}\, dt - \frac{1}{\zeta^2} \int_{\zeta}^{\infty} \frac{e^{-t}\sqrt{t^2-\zeta^2}}{t}\, dt
        = \frac{K_1(\zeta)}{\zeta} - \frac{1}{\zeta^2} \int_{\zeta}^{\infty} \frac{e^{-t}\sqrt{t^2-\zeta^2}}{t}\, dt ,  (24)

where $K_1(\zeta)$ is the modified Bessel function of the second kind, and $K_1(\zeta) \sim 1/\zeta$ as ζ → 0 [11].
\frac{\partial}{\partial \zeta} \int_{\zeta}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\, dt = -\frac{K_1(\zeta)}{\zeta^2} + \frac{K_1'(\zeta)}{\zeta} + \frac{2}{\zeta^3} \int_{\zeta}^{\infty} \frac{e^{-t}\sqrt{t^2-\zeta^2}}{t}\, dt + \frac{1}{\zeta} \int_{\zeta}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\, dt .  (25)

Notice that the third term on the right-hand side of Eq. (25) satisfies $\int_{\zeta}^{\infty} \frac{e^{-t}\sqrt{t^2-\zeta^2}}{t}\, dt \to 1$ as ζ → 0, and it is not difficult to find that the first three terms cancel out as ζ → 0. Then we obtain

\left| \frac{\partial}{\partial \zeta} \int_{\zeta}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\, dt \right| \le \left| \frac{1}{\zeta} \int_{\zeta}^{\infty} \frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\, dt \right| \le \frac{\pi}{2}\,\frac{e^{-\zeta}}{\zeta^2} = \frac{\pi}{2\sigma^2}\,\frac{e^{-\sigma|x-y|}}{|x-y|^2} .  (26)

Notice that here we have used the upper bound of Eq. (20). Now we arrive at the desired result for |α| = 1:

\left| D_x K(x, y) \right| \le c|x-y|^{-2} .  (27)
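The $O(\zeta^{-2})$ scaling asserted by Eq. (27) can be illustrated numerically. Writing the Eq. (24) integral as $F(\zeta)/\zeta$ with $F(\zeta) = \int_0^{\infty} e^{-\zeta\cosh\theta}/\cosh\theta\, d\theta$, its derivative is $-F(\zeta)/\zeta^2 - K_0(\zeta)/\zeta$ (since $F'(\zeta) = -K_0(\zeta)$); the sketch below, with assumed sample values of ζ, shows that $\zeta^2$ times this derivative stays bounded:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

def F(zeta):
    # F(zeta) = int_0^inf e^{-zeta*cosh(theta)}/cosh(theta) dtheta (the Eq. (24) integral times zeta)
    val, _ = quad(lambda th: np.exp(-zeta * np.cosh(th)) / np.cosh(th), 0.0, np.inf)
    return val

for zeta in (0.3, 0.1, 0.03, 0.01):
    gprime = -F(zeta) / zeta**2 - k0(zeta) / zeta  # derivative of F(zeta)/zeta
    print(zeta, zeta**2 * abs(gprime))  # bounded near pi/2, i.e. the derivative is O(zeta^-2)
```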
|α| = 2 (and larger): we can follow the same procedure to find $\left| D_x^{\alpha} K(x, y) \right| \le c|x-y|^{-1-|\alpha|}$.

Finally, we conclude that the 2D radiation kernel satisfies condition (17) with ν = 1. Therefore, by Theorem 3.1, the estimates of the derivatives of the scalar flux $\phi(x)$ for radiation transfer are the same as for the general weakly singular integral equation of the second kind:

\left| D^{\alpha}\phi(x) \right| \le c \begin{cases} 1 , & |\alpha| < 1 \\ 1 + |\log \varrho(x)| , & |\alpha| = 1 \\ \varrho(x)^{1-|\alpha|} , & |\alpha| > 1 \end{cases} , \qquad x \in G .  (28)

Remark 3.2. The first derivative of the solution $\phi(x)$ behaves as $\log \varrho(x)$ and becomes unbounded on approaching the boundary. The derivatives of order k behave as $\varrho(x)^{1-k}$ for k > 1. As mentioned in Remark 3.1, these pointwise estimates cannot be improved by imposing stronger smoothness on the data and the domain boundary. We point out that the lack of smoothness in the exact solution can adversely affect the convergence rate of spatial discretization schemes for solving the radiation transfer equation [12-14]. According to the regularity results, the asymptotic convergence rate of the spatial discretization error of finite difference methods is expected to be around 1 in the $L^{\infty}$ or $L^1$ norm.

4. NUMERICAL RESULTS

In this section, we demonstrate how the regularity of the exact solution impacts the numerical convergence rate by solving the SN neutron transport equation in its original integro-differential form, using the classic second-order diamond difference (DD) method. The model problem is a 1 cm × 1 cm square with the vacuum boundary condition, so there is no complication from the boundary condition. The S12 level-symmetric quadrature set is used for angular discretization.
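The per-direction update at the heart of the DD method can be sketched in one dimension. The code below is a hedged, minimal 1D analog (not the paper's 2D S12 solver): cell balance combined with the diamond closure $\psi_{i+1/2} = 2\psi_i - \psi_{i-1/2}$, marched along one positive direction:

```python
import numpy as np

# Minimal 1D diamond-difference sweep for mu*dpsi/dx + sigma_t*psi = q with mu > 0.
# Illustrative analog of the per-direction DD update, not the paper's 2D implementation.
def dd_sweep(mu, sigma_t, q, h, psi_in=0.0):
    psi = np.empty_like(q)
    psi_edge = psi_in  # incoming edge flux (vacuum boundary: 0)
    for i in range(len(q)):
        # cell balance mu*(psi_R - psi_L)/h + sigma_t*psi_i = q_i with psi_i = (psi_L + psi_R)/2
        psi[i] = (q[i] * h + 2.0 * mu * psi_edge) / (2.0 * mu + sigma_t * h)
        psi_edge = 2.0 * psi[i] - psi_edge  # diamond closure: psi_R = 2*psi_i - psi_L
    return psi

# pure absorber with unit source: psi(x) approaches q/sigma_t deep inside the slab
psi = dd_sweep(mu=1.0, sigma_t=1.0, q=np.ones(1000), h=0.01)
print(psi[-1])  # close to 1.0
```

Note that the update stays positive only for h < 2μ/σ_t, which is the stability condition referenced later for the pure absorption cases.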
We analyze the following four cases: Case 1: $\Sigma_t = 1$, $\Sigma_s = 0$; Case 2: $\Sigma_t = 1$, $\Sigma_s = 0.8$; Case 3: $\Sigma_t = 10$, $\Sigma_s = 0$; and Case 4: $\Sigma_t = 10$, $\Sigma_s = 0.9$. For all cases, the external source f = 1 is infinitely differentiable, i.e., $f \in \mathbb{C}^{\infty}(G)$. Cases 1 and 3 are pure absorption problems, with Case 3 optically thicker; it is interesting to note that for these two cases the solutions are determined by the external source alone. Cases 2 and 4 include scattering effects, with Case 4 optically thicker and more diffusive; both the scattering and the external source contribute to the solution. The flux L1 errors as a function of mesh size and the rates of convergence are summarized in Table I. The error distributions on the 160 × 160 mesh are plotted in Fig. 1. The reference solution for each case is obtained on a very fine mesh, 5120 × 5120.

Table I. Flux L1 errors and convergence rates.

Mesh (N×N)    Case 1 Error  Rate   Case 2 Error  Rate   Case 3 Error  Rate   Case 4 Error  Rate
10×10         2.87E-03      --     3.59E-03      --     2.31E-03      --     9.29E-03      --
20×20         7.95E-04      1.85   1.01E-03      1.83   8.12E-04      1.51   2.56E-03      1.86
40×40         2.90E-04      1.45   3.73E-04      1.44   2.31E-04      1.82   5.89E-04      2.12
80×80         1.14E-04      1.35   1.44E-04      1.37   5.19E-05      2.15   1.37E-04      2.10
160×160       5.04E-05      1.17   6.32E-05      1.19   1.32E-05      1.97   3.53E-05      1.96
320×320       2.46E-05      1.03   3.06E-05      1.04   3.61E-06      1.87   9.39E-06      1.91
640×640       1.31E-05      0.91   1.63E-05      0.91   1.11E-06      1.71   2.70E-06      1.80
1280×1280     6.26E-06      1.07   7.76E-06      1.07   3.87E-07      1.51   8.51E-07      1.66

[Figure 1 consists of four surface plots, one per case (Cases 1-4), of the flux L1 error (on a 10^-4 scale) over the 160 × 160 spatial mesh; the axis-tick residue from the extracted plots is omitted here.]

Figure 1. Flux error distribution on the mesh 160 × 160.

It is evident that the convergence rate decreases as the mesh is refined, and the errors are much larger at the boundary. The "noisier" distributions in Cases 1 and 2 are due to the ray effects of the discrete ordinates (SN) method, which are more pronounced in the optically thin problems. The convergence behavior is similar between the cases with and without scattering, indicating that the source term plays a significant role in defining the irregularity of the solution. Cases 3 and 4 show improved convergence rates compared to Cases 1 and 2 because the exponential factor $e^{-\sigma|x-y|}$ makes the kernel less singular as the total cross section σ increases. In addition, Case 4 has a slightly better rate of convergence than Case 3 on fine meshes (e.g., 1.80 vs. 1.71 on the 640 × 640 mesh in Table I), because the transport problem becomes more like an elliptic diffusion problem [17], and the diffusion solution in general has better regularity. It should be pointed out that in Case 3 the convergence rate is only 1.51 on the coarse mesh; this is because, for the pure absorption case, the DD method becomes unstable when the mesh size is larger than $2\mu_n/\sigma$, where $\mu_n$ is the direction cosine of the radiation transfer direction. The method is more stable for the scattering cases.

Remark 4.1. The error of the DD method can be estimated by $|\phi_j - \phi_j^h| \le C h_j^2 \|\phi''\|_{\infty}$, where $\phi_j$ is the exact solution at cell j, $\phi_j^h$ is its numerical result, and $h_j$ is the mesh size [15]. Although this optimal error estimate is obtained for the 1D slab geometry, one can expect the same to hold in two dimensions. As given by Eq. (28), the second derivative $\phi''$ is bounded in the interior of the domain, while it behaves as $\phi'' \sim h_j^{-1}$ near the boundary. Therefore, the convergence rate of the DD method is expected to decrease as the mesh is refined, tending asymptotically to O(h). If the solution is sufficiently smooth (e.g., a manufactured smooth solution), the DD method maintains its second order of accuracy on any mesh size [16].
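The observed orders in Table I can be reproduced directly from the tabulated L1 errors: the mesh size halves between rows, so each rate is log2 of the consecutive error ratio. A short check using the Case 1 column:

```python
import math

# Case 1 flux L1 errors from Table I, meshes 10x10 through 1280x1280 (h halves each row)
errors = [2.87e-3, 7.95e-4, 2.90e-4, 1.14e-4, 5.04e-5, 2.46e-5, 1.31e-5, 6.26e-6]
rates = [math.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print([round(r, 2) for r in rates])  # drifts from ~1.85 down toward O(h), matching Table I
```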
Remark 4.2. The scattering does not appear to play a role in defining the smoothness of the solution. For a problem without the external source, if there is a nonsmooth incoming flux on the boundary, the scattering may not be able to regularize the solution either, since the irregularity caused by the incoming flux, which is defined by the surface integral term of Eq. (4), has nothing to do with the scattering or the solution flux $\phi$.

5. CONCLUSIONS

We have derived the two-dimensional integral radiation transfer equation and examined the differential properties of the integral kernel to verify the boundedness conditions of Vainikko's theorem. We use the theorem to estimate the derivatives of the radiation transfer solution near the boundary of the domain. It is noted that the first derivative of the scalar flux $\phi(x)$ becomes unbounded when approaching the boundary. The derivatives of order k behave as $\varrho(x)^{1-k}$ for k > 1, where $\varrho(x)$ is the distance to the boundary. A numerical example is presented to demonstrate that the irregularity of the exact solution reduces the rate of convergence of numerical solutions. The convergence rate improves as the optical thickness of the problem increases. It is interesting to note that the scattering does not help smooth the solution. However, it does play a crucial role in transforming the transport problem into an elliptic diffusion problem in the asymptotic diffusion limit. We are currently extending the analysis to the boundary integral transport problem to account for nonzero incoming boundary conditions and corner effects. In addition, it would be interesting to study the convergence behavior of weak solutions.

REFERENCES

1. S. G. Mikhlin and S. Prossdorf, Singular Integral Operators, Springer-Verlag (1986).
2. S. G. Mikhlin, Multidimensional Singular Integrals and Integral Equations, Pergamon Press, Oxford (1965).
3. V. S. Vladimirov, Mathematical Problems in the One-Velocity Theory of Particle Transport (translated from Transactions of the V. A. Steklov Mathematical Institute, 61, 1961), Atomic Energy of Canada Limited (1963).
4. T. A. Germogenova, "Local properties of the solution of the transport equation," Dokl. Akad. Nauk SSSR, 187(5), pp. 978-981 (1969).
5. J. Pitkaranta, "Estimates for the Derivatives of Solutions to Weakly Singular Fredholm Integral Equations," SIAM J. Math. Anal., 11(6), pp. 952-968 (1980).
6. G. Vainikko, Multidimensional Weakly Singular Integral Equations, Springer-Verlag, Berlin Heidelberg (1993).
7. C. Johnson and J. Pitkaranta, "Convergence of a Fully Discrete Scheme for Two-Dimensional Neutron Transport," SIAM J. Numer. Anal., 20(5), pp. 951-966 (1983).
8. E. Hennebach, P. Junghanns, and G. Vainikko, "Weakly Singular Integral Equations with Operator-Valued Kernels and an Application to Radiation Transfer Problems," Integr. Equat. Oper. Th., 22, pp. 37-64 (1995).
9. E. E. Lewis and W. F. Miller, Jr., Computational Methods of Neutron Transport, American Nuclear Society (1993).
10. G. I. Bell and S. Glasstone, Nuclear Reactor Theory, Van Nostrand Reinhold Company, New York (1970).
11. M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables, Dover, New York (1970).
12. N. K. Madsen, "Convergence of Singular Difference Approximations for the Discrete Ordinate Equations in x-y Geometry," Math. Comput., 26(117), pp. 45-50 (1972).
13. E. W. Larsen, "Spatial Convergence Properties of the Diamond Difference Method in x, y Geometry," Nucl. Sci. Eng., 80, pp. 710-713 (1982).
14. Y. Wang and J. C. Ragusa, "On the Convergence of DGFEM Applied to the Discrete Ordinates Transport Equation for Structured and Unstructured Triangular Meshes," Nucl. Sci. Eng., 163, pp. 56-72 (2009).
15. D. Wang, "Error Analysis of Numerical Methods for Thick Diffusive Neutron Transport Problems on Shishkin Mesh," Proceedings of International Conference on Physics of Reactors 2022 (PHYSOR 2022), Pittsburgh, PA, USA, May 15-20, 2022, pp. 977-986 (2022).
16. D. Wang et al., "Solving the SN Transport Equation Using High Order Lax-Friedrichs WENO Fast Sweeping Methods," Proceedings of International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering 2019 (M&C 2019), Portland, OR, USA, August 25-29, 2019, pp. 61-72 (2019).
17. D. Wang and T. Byambaakhuu, "A New Proof of the Asymptotic Diffusion Limit of the SN Neutron Transport Equation," Nucl. Sci. Eng., 195, pp. 1347-1358 (2021).
+ page_content=' We examine the differential properties of the new 2D kernel and provide estimates of pointwise derivatives of the scalar flux according to Vainikko’s regularity theorem for the weakly integral equation of the second kind.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
43
Smoothness of the Radiation Transfer Solution

The remainder of the paper is organized as follows. In Sect. 2, we derive the 2D kernel for the integral radiation transfer equation. We examine the derivatives of the kernel and show that they satisfy the boundedness condition of Vainikko's regularity theorem in Sect. 3. Then the estimates of local regularity of the scalar flux near the boundary of the domain are given. Sect. 4 presents numerical results to demonstrate that the rate of convergence of numerical methods can be affected by the smoothness of the exact solution. Concluding remarks are given in Sect. 5.

2. TWO-DIMENSIONAL RADIATION TRANSFER EQUATION

In this section, we derive the 2D integral radiation transfer equation from its 3D form, Eq. (6). In 3D, $dy=dy_1dy_2dy_3$ and $|x-y|=\sqrt{(x_1-y_1)^2+(x_2-y_2)^2+(x_3-y_3)^2}$. Let $\rho=\sqrt{(x_1-y_1)^2+(x_2-y_2)^2}$; then $|x-y|=\sqrt{\rho^2+(x_3-y_3)^2}$. In a 2D domain $G\subset\mathbb{R}^2$, the solution function $\phi(x)$ only depends on $x_1$ and $x_2$ in Cartesian coordinates. Therefore, we only need to find the 2D radiation kernel, which can be obtained by integrating out $y_3$ as follows:
$$K(x,y)=\int_{-\infty}^{\infty}\frac{\sigma_s\,e^{-\sigma|x-y|}}{4\pi|x-y|^2}\,dy_3=\frac{\sigma_s}{4\pi}\int_{-\infty}^{\infty}\frac{e^{-\sigma\sqrt{\rho^2+(x_3-y_3)^2}}}{\rho^2+(x_3-y_3)^2}\,dy_3\,. \qquad (8)$$
To proceed, we introduce the variables $t=\sigma\sqrt{\rho^2+(x_3-y_3)^2}$ and $z=y_3-x_3$. Then we substitute $dy_3=dz=\frac{t}{\sigma\sqrt{t^2-\sigma^2\rho^2}}\,dt$ into the above equation to obtain
$$K(x,y)=\frac{\sigma_s}{4\pi}\int_{-\infty}^{\infty}\frac{\sigma^2 e^{-t}}{t^2}\,dz=\frac{\sigma_s}{2\pi}\int_{0}^{\infty}\frac{\sigma^2 e^{-t}}{t^2}\,dz=\frac{\sigma_s}{2\pi}\int_{\sigma\rho}^{\infty}\frac{\sigma^2 e^{-t}}{t^2}\,\frac{t}{\sigma\sqrt{t^2-\sigma^2\rho^2}}\,dt$$
$$=\frac{\sigma_s\sigma}{2\pi}\int_{\sigma\rho}^{\infty}\frac{e^{-t}}{t\sqrt{t^2-\sigma^2\rho^2}}\,dt=\frac{\sigma_s\sigma}{2\pi}\int_{\sigma|x-y|}^{\infty}\frac{e^{-t}}{t\sqrt{t^2-\sigma^2|x-y|^2}}\,dt\,. \qquad (9)$$
Note that the 2D radiation kernel is always positive.
By replacing the 3D kernel of Eq. (7) with the above one, Eq. (6) becomes the 2D integral radiation transfer equation. Notice that the surface integral in the last term on the right-hand side of Eq. (6) should be replaced with a line integral in the 2D domain. Now we show that the 2D kernel $K(x,y)$ has a singularity at $\rho=0$ (i.e., $x=y$) as follows:
$$K(x,y)=\frac{\sigma_s\sigma}{2\pi}\int_{\sigma\rho}^{\infty}\frac{e^{-t}}{t\sqrt{t^2-\sigma^2\rho^2}}\,dt>\frac{\sigma_s\sigma}{2\pi}\int_{\sigma\rho}^{\infty}\frac{e^{-t}}{t^2}\,dt=\frac{\sigma_s\sigma}{2\pi}\left[\frac{e^{-\sigma\rho}}{\sigma\rho}-\Gamma(0,\sigma\rho)\right], \qquad (10)$$
where $\Gamma(0,a)=\int_a^{\infty}\frac{e^{-t}}{t}\,dt$ is the incomplete gamma function. The singular behavior of $K(x,y)$ near $\rho=0$ is dominated by the first term $\frac{e^{-\sigma\rho}}{\sigma\rho}$ in the brackets, since the gamma function tends to infinity much more slowly.
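Both the strict inequality in Eq. (10) and the $1/(\sigma\rho)$ dominance can be illustrated numerically. The sketch below uses sample values $\sigma=1$, $\sigma_s=0.9$; the substitutions $t=\sigma\rho/\cos\theta$ for the kernel and $t=a/u$ for $\Gamma(0,a)$ are ours.

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

SIG, SIG_S = 1.0, 0.9  # sample cross sections

def Ki1(z):
    # Bickley function: int_0^{pi/2} exp(-z/cos(theta)) dtheta
    return simpson(lambda th: math.exp(-z / math.cos(th)) if math.cos(th) > 1e-12 else 0.0,
                   0.0, math.pi / 2)

def K2d(rho):
    # Eq. (9) in Bickley form
    zeta = SIG * rho
    return SIG_S * SIG / (2 * math.pi) * Ki1(zeta) / zeta

def gamma0(a):
    # Gamma(0, a) = int_a^inf exp(-t)/t dt, mapped to (0, 1] via t = a/u
    return simpson(lambda u: math.exp(-a / u) / u if u > 1e-12 else 0.0, 0.0, 1.0)

for rho in (0.05, 0.2, 1.0):
    z = SIG * rho
    lower = SIG_S * SIG / (2 * math.pi) * (math.exp(-z) / z - gamma0(z))
    assert K2d(rho) > lower  # the strict inequality of Eq. (10)

# the kernel blows up like 1/(sigma*rho): rho*K(rho) approaches sigma_s/4
assert abs(0.01 * K2d(0.01) - SIG_S / 4) / (SIG_S / 4) < 0.1
```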
Dean Wang

Remark 2.1. It should be noted that the 2D kernel defined by Eq. (9) is equivalent to the more conventional one defined by the Bickley–Naylor functions [9]. Johnson and Pitkäranta derived a 2D kernel for neutron transport by reformulating the standard integro-differential equation on the 2D plane [7]. The kernel obtained is $K(x,y)=\frac{e^{-|x-y|}}{|x-y|}$ (assuming $\sigma=1$), which is mathematically correct but physically incorrect. Hennebach et al. also used the same 2D kernel for analyzing the radiation transfer solutions [8]. In addition, the integral equations in other geometries such as slab or sphere can be obtained by following the same approach, and they can be found in [10].
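To make the remark concrete, one can check numerically that no constant rescaling turns the kernel of Eq. (9) into the Johnson–Pitkäranta kernel: their ratio varies strongly with $|x-y|$. This is a sketch under our own sample choice $\sigma=\sigma_s=1$, with the cosine substitution for Eq. (9) as before.

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def K2d(rho, sig=1.0, sig_s=1.0):
    # Eq. (9) via t = sig*rho/cos(theta)
    zeta = sig * rho
    ki1 = simpson(lambda th: math.exp(-zeta / math.cos(th)) if math.cos(th) > 1e-12 else 0.0,
                  0.0, math.pi / 2)
    return sig_s * sig / (2 * math.pi) * ki1 / zeta

def K_jp(rho):
    # Johnson-Pitkaranta 2D kernel (sigma = 1): exp(-|x-y|)/|x-y|
    return math.exp(-rho) / rho

r_small = K2d(0.2) / K_jp(0.2)
r_large = K2d(2.0) / K_jp(2.0)
# if the kernels differed only by a constant factor, the ratios would match
assert abs(r_small - r_large) / r_small > 0.2
```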
Applying Banach's fixed-point theorem, we can prove the existence and uniqueness of the solution in the 2D domain by showing that $\int_G K(x,y)\,dy$ is bounded below unity, as follows:
$$\int_G K(x,y)\,dy=\int_G\left[\frac{\sigma_s\sigma}{2\pi}\int_{\sigma\rho}^{\infty}\frac{e^{-t}}{t\sqrt{t^2-\sigma^2\rho^2}}\,dt\right]dy=\frac{\sigma_s\sigma}{2\pi}\int_G \rho\,d\varphi\,d\rho\int_{\sigma\rho}^{\infty}\frac{e^{-t}}{t\sqrt{t^2-\sigma^2\rho^2}}\,dt\,, \qquad (11)$$
where $\varphi$ is the azimuthal angle. By extending the above bounded domain to the whole space, we have
$$\int_G K(x,y)\,dy<\frac{\sigma_s\sigma}{2\pi}\int_{0}^{\infty}2\pi\rho\,d\rho\int_{\sigma\rho}^{\infty}\frac{e^{-t}}{t\sqrt{t^2-\sigma^2\rho^2}}\,dt=\sigma_s\sigma\int_{0}^{\infty}\rho\,d\rho\int_{\sigma\rho}^{\infty}\frac{e^{-t}}{t\sqrt{t^2-\sigma^2\rho^2}}\,dt\,. \qquad (12)$$
Denoting $\zeta=\sigma\rho$, Eq. (12) is simplified as
$$\int_G K(x,y)\,dy<\frac{\sigma_s}{\sigma}\int_{0}^{\infty}\zeta\,d\zeta\int_{\zeta}^{\infty}\frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\,dt=\frac{\sigma_s}{\sigma}\int_{0}^{\infty}\frac{e^{-t}}{t}\,dt\int_{0}^{t}\frac{\zeta}{\sqrt{t^2-\zeta^2}}\,d\zeta=\frac{\sigma_s}{\sigma}\int_{0}^{\infty}e^{-t}\,dt=\frac{\sigma_s}{\sigma}\le 1\,. \qquad (13)$$
Notice that we have changed the order of integration to solve the integral. It is apparent that for a unique solution to exist, the physical condition $\sigma_s\le\sigma$ must be satisfied.
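The whole-space value $\sigma_s/\sigma$ obtained in Eq. (13) can be reproduced by quadrature: after the angular integration, $\int_{\mathbb{R}^2}K(x,y)\,dy$ collapses to a 1D integral of the Bickley form of Eq. (9). A sketch with sample values $\sigma=1$, $\sigma_s=0.9$ (the cosine substitution is ours):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

SIG, SIG_S = 1.0, 0.9  # sample cross sections

def Ki1(z):
    # Bickley function: int_0^{pi/2} exp(-z/cos(theta)) dtheta
    return simpson(lambda th: math.exp(-z / math.cos(th)) if math.cos(th) > 1e-12 else 0.0,
                   0.0, math.pi / 2, 1000)

# 2*pi*rho*K(rho) = SIG_S * Ki1(SIG*rho), so the plane integral is one-dimensional
total = simpson(lambda r: SIG_S * Ki1(SIG * r), 0.0, 30.0, 1500)
assert abs(total - SIG_S / SIG) < 5e-3
```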
3. SMOOTHNESS OF THE SOLUTIONS

We first introduce Vainikko's regularity theorem [6], which provides a sharp characterization of the singularities of solutions of the general weakly singular integral equation of the second kind. Then we analyze the differential properties of the 2D radiation kernel and show that its derivatives are properly bounded. Finally, Vainikko's theorem is used to give the estimates of pointwise derivatives of the radiation solution.
3.1. Vainikko's Regularity Theorem

Before we state the theorem, we introduce the definition of the weighted spaces $\mathbb{C}^{m,\nu}(G)$ [6].

Weighted space $\mathbb{C}^{m,\nu}(G)$. For $\lambda\in\mathbb{R}$, introduce a weight function
$$w_{\lambda}(x)=\begin{cases}1\,, & \lambda<0\\ (1+|\log\varrho(x)|)^{-1}\,, & \lambda=0\\ \varrho(x)^{\lambda}\,, & \lambda>0\end{cases}\,,\qquad x\in G\,, \qquad (14)$$
where $G\subset\mathbb{R}^n$ is an open bounded domain and $\varrho(x)=\inf_{y\in\partial G}|x-y|$ is the distance from $x$ to the boundary $\partial G$. Let $m\in\mathbb{N}$, $\nu\in\mathbb{R}$, and $\nu<n$. Define the space $\mathbb{C}^{m,\nu}(G)$ as the set of all $m$ times continuously differentiable functions $\phi: G\to\mathbb{R}$ such that
$$\|\phi\|_{m,\nu}=\sum_{|\alpha|\le m}\sup_{x\in G}\left\{w_{|\alpha|-(n-\nu)}(x)\,|D^{\alpha}\phi(x)|\right\}<\infty\,. \qquad (15)$$
In other words, an $m$ times continuously differentiable function $\phi$ on $G$ belongs to $\mathbb{C}^{m,\nu}(G)$ if the growth of its derivatives near the boundary can be estimated as follows:
$$|D^{\alpha}\phi(x)|\le c\begin{cases}1\,, & |\alpha|<n-\nu\\ 1+|\log\varrho(x)|\,, & |\alpha|=n-\nu\\ \varrho(x)^{n-\nu-|\alpha|}\,, & |\alpha|>n-\nu\end{cases}\,,\qquad x\in G\,,\ |\alpha|\le m\,, \qquad (16)$$
where $c$ is a constant. The space $\mathbb{C}^{m,\nu}(G)$, equipped with the norm $\|\cdot\|_{m,\nu}$, is a complete Banach space.

After defining the weighted space, we introduce the smoothness assumption about the kernel in the following form: the kernel $K(x,y)$ is $m$ times continuously differentiable on $(G\times G)\setminus\{x=y\}$, and there exists a real number $\nu\in(-\infty,n)$ such that the estimate
$$\left|D_x^{\alpha}D_{x+y}^{\beta}K(x,y)\right|\le c\begin{cases}1\,, & \nu+|\alpha|<0\\ 1+\big|\log|x-y|\big|\,, & \nu+|\alpha|=0\\ |x-y|^{-\nu-|\alpha|}\,, & \nu+|\alpha|>0\end{cases}\,,\qquad x,y\in G\,, \qquad (17)$$
where
$$D_x^{\alpha}=\left(\frac{\partial}{\partial x_1}\right)^{\alpha_1}\cdots\left(\frac{\partial}{\partial x_n}\right)^{\alpha_n}\,, \qquad (18)$$
$$D_{x+y}^{\beta}=\left(\frac{\partial}{\partial x_1}+\frac{\partial}{\partial y_1}\right)^{\beta_1}\cdots\left(\frac{\partial}{\partial x_n}+\frac{\partial}{\partial y_n}\right)^{\beta_n}\,, \qquad (19)$$
holds for all multi-indices $\alpha=(\alpha_1,\cdots,\alpha_n)\in\mathbb{Z}_+^n$ and $\beta=(\beta_1,\cdots,\beta_n)\in\mathbb{Z}_+^n$ with $|\alpha|+|\beta|\le m$. Here the following usual conventions are adopted: $|\alpha|=\alpha_1+\cdots+\alpha_n$ and $|x|=\sqrt{x_1^2+\cdots+x_n^2}$.

Now we present Vainikko's theorem characterizing the regularity properties of a solution of the weakly singular integral equation of the second kind [6].

Theorem 3.1. Let $G\subset\mathbb{R}^n$ be an open bounded domain, let $f\in\mathbb{C}^{m,\nu}(G)$, and let the kernel $K(x,y)$ satisfy the condition (17). If the integral equation (1) has a solution $\phi\in L^{\infty}(G)$, then $\phi\in\mathbb{C}^{m,\nu}(G)$.
Remark 3.1. The solution does not improve its properties near the boundary $\partial G$, remaining only in $\mathbb{C}^{m,\nu}(G)$, even if $\partial G$ is of class $\mathbb{C}^{\infty}$ and $f\in\mathbb{C}^{\infty}(\bar{G})$. A proof can be found in [6]. More precisely, for any $n$ and $\nu$ ($\nu<n$) there are kernels $K(x,y)$ satisfying (17) and such that Eq. (1) is uniquely solvable and, for a suitable $f\in\mathbb{C}^{\infty}(\bar{G})$, the normal derivatives of order $k$ of the solution behave near $\partial G$ as $\log\varrho(x)$ if $k=n-\nu$, and as $\varrho(x)^{n-\nu-k}$ for $k>n-\nu$.
3.2. Smoothness of the Radiation Transfer Solution

To apply the results of Theorem 3.1 to the 2D integral radiation transfer equation, we need to analyze the kernel $K(x,y)$ and show that it satisfies the condition (17), i.e., $\left|D_x^{\alpha}D_{x+y}^{\beta}K(x,y)\right|\le c|x-y|^{-1-|\alpha|}$. We can simply set $|\beta|=0$ without loss of generality for our problem.

$|\alpha|=0$:
$$K(x,y)=\frac{\sigma_s\sigma}{2\pi}\int_{\sigma\rho}^{\infty}\frac{e^{-t}}{t\sqrt{t^2-\sigma^2\rho^2}}\,dt<\frac{\sigma_s\sigma}{2\pi}\int_{\sigma\rho}^{\infty}\frac{e^{-\sigma\rho}}{t\sqrt{t^2-\sigma^2\rho^2}}\,dt=\frac{\sigma_s\sigma\,e^{-\sigma\rho}}{2\pi}\,\frac{\pi}{2\sigma\rho}=\frac{\sigma_s\,e^{-\sigma|x-y|}}{4|x-y|}\le c|x-y|^{-1}\,. \qquad (20)$$
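The chain of inequalities in Eq. (20) can be spot-checked numerically; a sketch with sample values $\sigma=1$, $\sigma_s=0.9$ (the cosine substitution for the kernel integral is ours):

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

SIG, SIG_S = 1.0, 0.9  # sample cross sections

def K2d(rho):
    # Eq. (9) via t = SIG*rho/cos(theta)
    zeta = SIG * rho
    ki1 = simpson(lambda th: math.exp(-zeta / math.cos(th)) if math.cos(th) > 1e-12 else 0.0,
                  0.0, math.pi / 2)
    return SIG_S * SIG / (2 * math.pi) * ki1 / zeta

for rho in (0.1, 0.5, 2.0):
    bound = SIG_S * math.exp(-SIG * rho) / (4 * rho)  # right-hand side of Eq. (20)
    assert K2d(rho) < bound, (rho, K2d(rho), bound)
```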
$|\alpha|=1$: Let $\zeta=\sigma\rho=\sigma|x-y|=\sigma\sqrt{(x_1-y_1)^2+(x_2-y_2)^2}$; then $K(x,y)=\frac{\sigma_s\sigma}{2\pi}\int_{\zeta}^{\infty}\frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\,dt$, and
$$|D_x K(x,y)|=\left|\frac{\partial}{\partial\zeta}K(x,y)\,\frac{\partial\zeta}{\partial x_i}\right|=\left|\frac{\partial}{\partial\zeta}K(x,y)\right|\left|\frac{\partial\zeta}{\partial x_i}\right|\,, \qquad (21)$$
where
$$\left|\frac{\partial\zeta}{\partial x_1}\right|=\frac{\sigma|x_1-y_1|}{\sqrt{(x_1-y_1)^2+(x_2-y_2)^2}}\le\sigma\,, \qquad (22)$$
$$\left|\frac{\partial\zeta}{\partial x_2}\right|=\frac{\sigma|x_2-y_2|}{\sqrt{(x_1-y_1)^2+(x_2-y_2)^2}}\le\sigma\,. \qquad (23)$$
Apparently, we only need to find the upper bound $\left|\frac{\partial}{\partial\zeta}\int_{\zeta}^{\infty}\frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\,dt\right|\le c\zeta^{-2}$, which is shown in the following. First, we simplify the integral $\int_{\zeta}^{\infty}\frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\,dt$ as
$$\int_{\zeta}^{\infty}\frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\,dt=\frac{1}{\zeta^2}\int_{\zeta}^{\infty}\frac{t\,e^{-t}}{\sqrt{t^2-\zeta^2}}\,dt-\frac{1}{\zeta^2}\int_{\zeta}^{\infty}\frac{e^{-t}\sqrt{t^2-\zeta^2}}{t}\,dt=\frac{K_1(\zeta)}{\zeta}-\frac{1}{\zeta^2}\int_{\zeta}^{\infty}\frac{e^{-t}\sqrt{t^2-\zeta^2}}{t}\,dt\,, \qquad (24)$$
where $K_1(\zeta)$ is the modified Bessel function of the second kind, and $K_1(\zeta)\sim\frac{1}{\zeta}$ as $\zeta\to 0$ [11].
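The splitting in Eq. (24) and the small-argument behavior $K_1(\zeta)\sim 1/\zeta$ can both be verified by quadrature. In the sketch below the substitutions are ours: the left-hand side is evaluated through $t=\zeta/\cos\theta$, while the two pieces use $t=\zeta\cosh u$, which turns the first piece into the standard integral representation $K_1(\zeta)=\int_0^{\infty}e^{-\zeta\cosh u}\cosh u\,du$.

```python
import math

def simpson(f, a, b, n=3000):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def I(zeta):
    # left-hand side of Eq. (24), via t = zeta/cos(theta)
    ki1 = simpson(lambda th: math.exp(-zeta / math.cos(th)) if math.cos(th) > 1e-12 else 0.0,
                  0.0, math.pi / 2)
    return ki1 / zeta

def besselK1(zeta):
    # K1(zeta) = int_0^inf exp(-zeta*cosh u) cosh u du (tail truncated at u = 12)
    return simpson(lambda u: math.cosh(u) * math.exp(-zeta * math.cosh(u)), 0.0, 12.0)

def J(zeta):
    # int_zeta^inf exp(-t) sqrt(t^2 - zeta^2)/t dt, via t = zeta*cosh(u)
    return simpson(lambda u: zeta * math.sinh(u) ** 2 / math.cosh(u)
                   * math.exp(-zeta * math.cosh(u)), 0.0, 12.0)

for zeta in (0.5, 1.0, 2.0):
    lhs = I(zeta)
    rhs = besselK1(zeta) / zeta - J(zeta) / zeta ** 2
    assert abs(lhs - rhs) / lhs < 1e-3, (zeta, lhs, rhs)

# K1(zeta) ~ 1/zeta as zeta -> 0, so zeta*K1(zeta) is close to 1 for small zeta
assert abs(0.05 * besselK1(0.05) - 1.0) < 0.02
```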
+ page_content=" ' 'E ∫ :/4 ABA,–E, 𝑑𝑡 ?" metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
194
+ page_content=' E Smoothness of the Radiation Transfer Solution = − P6(E) E, + P68(E) E + > E3 ∫ :/4BA,–E, A 𝑑𝑡 ?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
195
+ page_content=' E − 0 E ∫ :/4 ABA,–E, 𝑑𝑡 E ?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
196
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
197
Notice that in the third term on the right-hand side of Eq. (25), $\int_\zeta^\infty \frac{\sqrt{t^2-\zeta^2}}{t}\,e^{-t}\,dt \to 1$ as $\zeta \to 0$. It is not difficult to find that the first three terms cancel out when $\zeta \to 0$.
201
Then we obtain
$$\left|\frac{d}{d\zeta}\int_\zeta^\infty \frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\,dt\right| \le \frac{1}{\zeta}\int_\zeta^\infty \frac{e^{-t}}{t\sqrt{t^2-\zeta^2}}\,dt \le \frac{c\,e^{-\zeta/2}}{\zeta^2} = \frac{c}{\sigma^2}\,\frac{e^{-\sigma|x-y|/2}}{|x-y|^2} \,. \quad (26)$$
Notice that here we have used the upper bound of Eq. (20).
206
+ page_content=' Now we arrive at the desired result for |𝛼| = 1: |𝐷*K(𝑥, 𝑦)| ≤ 𝑐|𝑥– 𝑦|#> .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
207
+ page_content=' (27) |𝜶| = 𝟐 (and larger): we can follow the same procedure to find |𝐷*LK(𝑥, 𝑦)| ≤ 𝑐|𝑥 − 𝑦|#0#|L|.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
208
Finally, we conclude that the 2D radiation kernel satisfies condition (17). Therefore, by Theorem 3.1, the estimates of the derivatives of the scalar flux $\phi(x)$ for radiation transfer are the same as for the general weakly singular integral equation of the second kind:
$$|D^\alpha \phi(x)| \le c \begin{cases} 1 \,, & |\alpha| < 1 \\ 1 + |\log \varrho(x)| \,, & |\alpha| = 1 \\ \varrho(x)^{1-|\alpha|} \,, & |\alpha| > 1 \end{cases} \,, \qquad x \in G \,. \quad (28)$$
211
Remark 3.2. The first derivative of the solution $\phi(x)$ behaves as $\log \varrho(x)$ and becomes unbounded as it approaches the boundary. The derivatives of order $k$ behave as $\varrho(x)^{1-k}$ for $k > 1$. As mentioned in Remark 3.1, these pointwise estimates cannot be improved by imposing stronger smoothness on the data and the domain boundary. We point out that the lack of smoothness in the exact solution could adversely affect the convergence rate of spatial discretization schemes for solving the radiation transfer equation [12-14]. According to the regularity results, the asymptotic convergence rate of the spatial discretization error of finite difference methods is expected to be around 1 in the $L^\infty$ or $L^1$ norm.
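This prediction can be compared against observed orders computed from errors on successively halved meshes, $\mathrm{rate} = \log_2(e_h / e_{h/2})$. A minimal sketch (not from the paper) using the Case 1 flux errors reported in Table I:

```python
import math

# Case 1 flux L1 errors from Table I, meshes 10x10 through 1280x1280
errors = [2.87e-3, 7.95e-4, 2.90e-4, 1.14e-4, 5.04e-5, 2.46e-5, 1.31e-5, 6.26e-6]
# observed order between each pair of successively refined meshes
rates = [math.log2(coarse / fine) for coarse, fine in zip(errors, errors[1:])]
print([round(r, 2) for r in rates])
```

The computed sequence decays from near 2 toward 1, matching the tabulated rates and the asymptotic first-order behavior predicted by the regularity estimate (28).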
220
4. NUMERICAL RESULTS

In this section, we demonstrate how the regularity of the exact solution impacts the numerical convergence rate by solving the SN neutron transport equation in its original integro-differential form, using the classic second-order diamond difference (DD) method. The model problem is a 1 cm × 1 cm square with the vacuum boundary condition. Thus, there is no complication from the boundary condition. The S12 level-symmetric quadrature set is used for angular discretization. We analyze the following four cases: Case 1: $\Sigma_t = 1$, $\Sigma_s = 0$; Case 2: $\Sigma_t = 1$, $\Sigma_s = 0.8$; Case 3: $\Sigma_t = 10$, $\Sigma_s = 0$; and Case 4: $\Sigma_t = 10$, $\Sigma_s = 0.9$. For all the cases, the external source $f = 1$ is infinitely differentiable, i.e., $f \in C^\infty(G)$. Cases 1 and 3 are pure absorption problems, with Case 3 optically thicker. It is interesting to note that the solutions of these two cases are determined solely by the external source. Cases 2 and 4 include scattering effects, with Case 4 optically thicker and more diffusive. Both the scattering and the external source contribute to the solution.
238
The flux L1 errors as a function of mesh size and the rates of convergence are summarized in Table I. The error distributions on the 160 × 160 mesh are plotted in Fig. 1. The reference solution for each case is obtained on a very fine mesh, 5120 × 5120.
242
Table I. Flux L1 errors and convergence rates.

| Mesh (N × N) | Case 1 Error | Rate | Case 2 Error | Rate | Case 3 Error | Rate | Case 4 Error | Rate |
| 10 × 10      | 2.87E-03     | -    | 3.59E-03     | -    | 2.31E-03     | -    | 9.29E-03     | -    |
| 20 × 20      | 7.95E-04     | 1.85 | 1.01E-03     | 1.83 | 8.12E-04     | 1.51 | 2.56E-03     | 1.86 |
| 40 × 40      | 2.90E-04     | 1.45 | 3.73E-04     | 1.44 | 2.31E-04     | 1.82 | 5.89E-04     | 2.12 |
| 80 × 80      | 1.14E-04     | 1.35 | 1.44E-04     | 1.37 | 5.19E-05     | 2.15 | 1.37E-04     | 2.10 |
| 160 × 160    | 5.04E-05     | 1.17 | 6.32E-05     | 1.19 | 1.32E-05     | 1.97 | 3.53E-05     | 1.96 |
| 320 × 320    | 2.46E-05     | 1.03 | 3.06E-05     | 1.04 | 3.61E-06     | 1.87 | 9.39E-06     | 1.91 |
| 640 × 640    | 1.31E-05     | 0.91 | 1.63E-05     | 0.91 | 1.11E-06     | 1.71 | 2.70E-06     | 1.80 |
| 1280 × 1280  | 6.26E-06     | 1.07 | 7.76E-06     | 1.07 | 3.87E-07     | 1.51 | 8.51E-07     | 1.66 |

Figure 1. Flux error distribution on the mesh 160 × 160 (panels: Case 1, Case 2, Case 3, Case 4).
306
It is evident that the convergence rate decreases as the mesh refines, and the errors are much larger at the boundary.
307
The "noisier" distributions in Cases 1 and 2 are due to the ray effects of the discrete ordinates (SN) method, which are more pronounced in the optically thin problem. The convergence behavior is similar between the cases with and without scattering, indicating that the source term plays a significant role in defining the irregularity of the solution. Cases 3 and 4 show improved convergence rates compared to Cases 1 and 2 because the exponential function $e^{-\sigma|x-y|}$ makes the kernel less singular as the total cross section $\sigma$ increases. In addition, Case 4 has a slightly better rate of convergence than Case 3 on fine meshes (e.g., 1.84 vs. 1.75 on 640 × 640), because the transport problem becomes more like an elliptic diffusion problem [17], and the diffusion solution in general has better regularity. It should be pointed out that in Case 3 the convergence rate is only 1.51 on the coarse mesh. This is because, for the pure absorption case, the DD method becomes unstable when the mesh size is larger than $2\mu_n/\sigma$, where $\mu_n$ is the direction cosine of the radiation transfer direction. However, it is more stable for the scattering case.
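The positivity threshold $h < 2\mu/\sigma$ can be illustrated with a one-direction 1D diamond-difference sweep. The following is an illustrative sketch, not the paper's 2D solver; the cell balance and diamond closure used are the standard DD relations:

```python
def dd_sweep(sigma, h, mu, n, psi_in=1.0, q=0.0):
    """1D diamond-difference transport sweep for a single direction mu > 0."""
    psi_edge = psi_in          # incoming angular flux at the left boundary
    avg = []
    for _ in range(n):
        # cell balance: mu*(psi_R - psi_L)/h + sigma*(psi_R + psi_L)/2 = q
        psi_right = (q + (mu / h - sigma / 2) * psi_edge) / (mu / h + sigma / 2)
        avg.append(0.5 * (psi_edge + psi_right))  # diamond (cell-average) flux
        psi_edge = psi_right
    return avg

mu, sigma = 0.5, 10.0                          # threshold: 2*mu/sigma = 0.1
fine = dd_sweep(sigma, h=0.05, mu=mu, n=20)    # h < 0.1: flux stays positive
coarse = dd_sweep(sigma, h=0.25, mu=mu, n=4)   # h > 0.1: sign oscillations
print(min(fine) > 0, min(coarse) < 0)          # True True
```

For a pure absorber the edge-flux multiplier per cell is $(\mu/h - \sigma/2)/(\mu/h + \sigma/2)$, which turns negative once $h > 2\mu/\sigma$ and produces the oscillatory, sign-changing cell averages seen on the coarse mesh.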
321
Remark 4.1. The error of the DD method can be estimated by $\|\phi_j - \phi_j^h\| \le C h_j^2 \|\phi''\|_\infty$, where $\phi_j$ is the exact solution at cell $j$, $\phi_j^h$ is its numerical result, and $h_j$ is the mesh size [15]. Although this optimal error estimate is obtained for the 1D slab geometry, one can expect the same to be true in two dimensions. As given by Eq. (28), the second derivative $\phi''$ is bounded in the interior of the domain, while it behaves as $\phi'' \sim h_j^{-1}$ near the boundary. Therefore, it is expected that the convergence rate of the DD method decreases as the mesh is refined, asymptotically tending to $O(h)$. If the solution is sufficiently smooth (e.g., a manufactured smooth solution), the DD method maintains its second order of accuracy on any mesh size [16].
332
Remark 4.2. The scattering does not appear to play a role in defining the smoothness of the solution. For the problem without the external source, if there exists a nonsmooth incoming flux on the boundary, the scattering may not be able to regularize the solution either, since the irregularity caused by the incoming flux, which is defined by the surface integral term of Eq. (4), has nothing to do with the scattering and the solution flux $\phi$.
337
5. CONCLUSIONS

We have derived the two-dimensional integral radiation transfer equation and examined the differential properties of the integral kernel to verify the boundedness conditions of Vainikko's theorem. We use the theorem to estimate the derivatives of the radiation transfer solution near the boundary of the domain. It is noted that the first derivative of the scalar flux $\phi(x)$ becomes unbounded when approaching the boundary. The derivatives of order $k$ behave as $\varrho(x)^{1-k}$ for $k > 1$, where $\varrho(x)$ is the distance to the boundary. A numerical example is presented to demonstrate that the irregularity of the exact solution reduces the rate of convergence of numerical solutions. The convergence rate improves as the optical thickness of the problem increases. It is interesting to note that the scattering does not help smoothen the solution. However, it does play a crucial role in transforming the transport problem into an elliptic diffusion problem in the asymptotic diffusion limit. We are currently extending the analysis to the boundary integral transport problem, considering nonzero incoming boundary conditions and corner effects. In addition, it would be interesting to study the convergence behavior of weak solutions.
348
REFERENCES

1. S. G. Mikhlin, S. Prossdorf, Singular Integral Operators, Springer-Verlag (1986).
2. S. G. Mikhlin, Multidimensional Singular Integrals and Integral Equations, Pergamon Press, Oxford (1965).
3. V. S. Vladimirov, Mathematical Problems in the One-Velocity Theory of Particle Transport, (Translated from Transactions of the V. A. Steklov Mathematical Institute, 61, 1961), Atomic Energy of Canada Limited (1963).
4. T. A. Germogenova, "Local properties of the solution of the transport equation," Dokl. Akad. Nauk SSSR, 187(5), pp. 978-981 (1969).
5. J. Pitkaranta, "Estimates for the Derivatives of Solutions to Weakly Singular Fredholm Integral Equations," SIAM J. Math. Anal., 11(6), pp. 952-968 (1980).
6. G. Vainikko, Multidimensional Weakly Singular Integral Equations, Springer-Verlag, Berlin Heidelberg (1993).
7. C. Johnson and J.
383
+ page_content=' Pitkaranta, “Convergence of A Fully Discrete Scheme for Two-Dimensional Neutron Transport,” SIAM J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
384
+ page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
385
+ page_content=' Anal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
386
+ page_content=', 20(5), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
387
+ page_content=' 951-966 (1983).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
388
+ page_content=' 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
389
+ page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
390
+ page_content=' Hennebach, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
391
+ page_content=' Junghanns, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
392
+ page_content=' Vainikko, “Weakly Singular Integral Equations with Operator-Valued Kernels and An Application to Radiation Transfer Problems,” Integr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
393
+ page_content=' Equat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
394
+ page_content=' Oper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
395
+ page_content=' Th.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
396
+ page_content=', 22, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
397
+ page_content=' 37-64 (1995).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
398
+ page_content=' 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
399
+ page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
400
+ page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
401
+ page_content=' Lewis and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
402
+ page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
403
+ page_content=' Miller, Jr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
404
+ page_content=', Computational Methods of Neutron Transport, American Nuclear Society (1993).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
405
+ page_content=' 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
406
+ page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
407
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
408
+ page_content=' Bell and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
409
+ page_content=' Glasstone, Nuclear Reactor Theory, Van Nostrand Reinhold Company, New York (1970).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
410
+ page_content=' 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
411
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
412
+ page_content=' Abramowitz and I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
413
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
414
+ page_content=' Stegun, Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables, Dover, New York (1970).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
415
+ page_content=' 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
416
+ page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
417
+ page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
418
+ page_content=' Madsen, “Convergence of Singular Difference Approximations for the Discrete Ordinate Equations in 𝑥– 𝑦 Geometry,” Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
419
+ page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
420
+ page_content=', 26(117), 45-50 (1972).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
421
+ page_content=' 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
422
+ page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
423
+ page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
424
+ page_content=' Larsen, “Spatial Convergence Properties of the Diamond Difference Method in x, y Geometry,” Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
425
+ page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
426
+ page_content=' Eng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
427
+ page_content=', 80, 710-713 (1982).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
428
+ page_content=' 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
429
+ page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
430
+ page_content=' Wang and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
431
+ page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
432
+ page_content=' Ragusa, “On the Convergence of DGFEM Applied to the Discrete Ordinates Transport Equation for Structured and Unstructured Triangular Meshes,” Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
433
+ page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
434
+ page_content=' Eng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
435
+ page_content=', 163, 56-72 (2009).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
436
+ page_content=' 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
437
+ page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
438
+ page_content=' Wang, “Error Analysis of Numerical Methods for Thick Diffusive Neutron Transport Problems on Shishkin Mesh,” Proceedings of International Conference on Physics of Reactors 2022 (PHYSOR 2022), Pittsburgh, PA, USA, May 15-20, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
439
+ page_content=' 977-986 (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
440
+ page_content=' 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
441
+ page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
442
+ page_content=' Wang, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
443
+ page_content=', “Solving the SN Transport Equation Using High Order Lax-Friedrichs WENO Fast Sweeping Methods,” Proceedings of International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering 2019 (M&C 2019), Portland, OR, USA, August 25-29, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
444
+ page_content=' 61-72 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
445
+ page_content=' 17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
446
+ page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
447
+ page_content=' Wang and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
448
+ page_content=' Byambaakhuu, “A New Proof of the Asymptotic Diffusion Limit of the SN Neutron Transport Equation,” Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
449
+ page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
450
+ page_content=' Eng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
451
+ page_content=', 195, 1347-1358 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/59AyT4oBgHgl3EQfcfe1/content/2301.00285v1.pdf'}
59E1T4oBgHgl3EQfTAPi/content/tmp_files/2301.03074v1.pdf.txt ADDED
@@ -0,0 +1,1442 @@
SeedTree: A Dynamically Optimal and Local Self-Adjusting Tree
Arash Pourdamghani (1), Chen Avin (2), Robert Sama (3), Stefan Schmid (1,4)
(1) TU Berlin, Germany  (2) School of Electrical and Computer Engineering, Ben Gurion University of the Negev, Israel
(3) Faculty of Computer Science, University of Vienna, Austria  (4) Fraunhofer SIT, Germany
Abstract—We consider the fundamental problem of designing a self-adjusting tree, which efficiently and locally adapts itself towards the demand it serves (namely accesses to the items stored by the tree nodes), striking a balance between the benefits of such adjustments (enabling faster access) and their costs (reconfigurations). This problem finds applications, among others, in the context of emerging demand-aware and reconfigurable datacenter networks and features connections to self-adjusting data structures. Our main contribution is SeedTree, a dynamically optimal self-adjusting tree which supports local (i.e., greedy) routing, which is particularly attractive under highly dynamic demands. SeedTree relies on an innovative approach which defines a set of unique paths based on randomized item addresses, and uses a small constant number of items per node. We complement our analytical results by showing the benefits of SeedTree empirically, evaluating it on various synthetic and real-world communication traces.

Index Terms—Reconfigurable datacenters, Online algorithms, Self-adjusting data structure
I. INTRODUCTION

This paper considers the fundamental problem of designing self-adjusting trees: trees which adapt themselves towards the demand they serve. Such self-adjusting trees need to strike an efficient tradeoff between the benefits of such adjustments (better performance in the future) and their costs (reconfiguration overheads now). The problem is motivated by the fact that workloads in practice often feature much temporal and spatial structure, which may be exploited by self-adjusting optimizations [1], [2]. Furthermore, such adjustments are increasingly available, as researchers and practitioners are currently making great efforts to render networked and distributed systems more flexible, supporting dynamic reconfigurations, e.g., by leveraging programmability (via software-defined networks) [3], [4], network virtualization [5], or reconfigurable optical communication technologies [6].
In particular, we study the following abstract model (applications will follow): we consider a binary tree which serves access requests, issued at the root of the tree, to the items stored by the nodes. Each node (e.g., server) stores up to c items (e.g., virtual machines), where c is a parameter indicating the capacity of a node. We consider an online perspective where items are requested over time. An online algorithm aims to optimize the tree in order to minimize the cost of future access requests (defined as the path length between root and accessed item), while minimizing the number of items moving up or down in the tree: the reconfigurations. We call each movement a reconfiguration, and keep track of its cost. In particular, the online algorithm, which does not know the future access requests, aims to be competitive with an optimal offline algorithm that knows the entire request sequence ahead of time. In other words, we are interested in an online algorithm with minimum competitive ratio [7] over any (even worst-case) request sequence.

This project has received funding from the European Research Council (ERC) under grant agreement No. 864228 (AdjustNet), 2020-2025.
Self-adjusting trees are not only one of the most fundamental topological structures of their own merit, they also have interesting applications. For example, such trees are a crucial building block for more general self-adjusting networks: Avin et al. [8] recently showed that multiple trees, optimized individually for a single root, can be combined to build general communication networks which provide low degree and low distortion. The design of a competitive self-adjusting tree as studied in this paper is hence a stepping stone.
Self-adjusting trees also feature interesting connections to self-adjusting data structures (see §VI for a detailed discussion), for some of which designing and proving constant-competitive online algorithms is still an open question [9]. Interestingly, a recent result shows that constant-competitive online algorithms exist for self-adjusting balanced binary trees if one maintains a global map of the items in the tree; it was proposed to store such a map centrally, at a logical root [10]. In this paper, we are interested in the question whether this limitation can be overcome, and whether a competitive decentralized solution exists.
Our main contribution is a dynamically optimal self-adjusting tree, SeedTree*, which achieves a constant competitive ratio by keeping recently accessed items closer to the root, ensuring a working set theorem [9]. Our result also implies weaker notions such as key independent optimality [11] (details will follow). SeedTree further supports local (that is, greedy and hence decentralized) routing, which is particularly attractive in dynamic networks, by relying on an innovative and simple routing approach that enables nodes to take local forwarding decisions: SeedTree hashes items to i.i.d. random addresses and defines a set of greedy paths based on these addresses. A main insight from our work is that a constant competitive ratio with the locality property can be achieved if nodes feature small constant capacities, that is, by allowing nodes to store a small constant number of items. Storing more than a single item on a node is often practical, e.g., on a server or a peer [12], and it is common in hashing data structures with collision [13], [14]. We also evaluate SeedTree empirically, both on synthetic traces with varying temporal locality and on data derived from Facebook datacenter networks [1], showing how tuning the parameters of SeedTree can lower the total (and access) cost in various scenarios.

*The name is due to the additional capacity in nodes of the tree, which resembles seeds in fruits of a tree.

arXiv:2301.03074v1 [cs.DS] 8 Jan 2023

Fig. 1: A depiction of SeedTree with capacity 2. Large circles represent nodes of the system, and small circles represent items. The number inside each small circle is the hash of the corresponding item.
The remainder of the paper is organized as follows. §II introduces our model and preliminaries. We present and analyze our online algorithm in §III, and transform it to the matching model of datacenter networks in §IV. After discussing our empirical evaluation results in §V, we review related works in §VI and conclude our contributions in §VII.
II. MODEL AND PRELIMINARIES

This section presents our model and introduces preliminaries used in the design of SeedTree.
Items and nodes. We assume a set of items V = (v_1, ..., v_n), and a set of nodes S = (s_1, ...)† arranged as a binary tree T. We call the node s_1 the root, which is at depth 0 in the tree T, and a node s_j is at depth ⌊log j⌋. Each node can store c items, where c is a parameter indicating the capacity of a node. In our model, we assume that c is a constant. The assignment of items to nodes can change over time. We say a node is full if it contains c items, and empty if it contains no item (see an example in Figure 1). We define the level of item v at time t, level_t(v), as the depth of the node containing v. For example, if item v is at node s_j at time t, we have level_t(v) = ⌊log j⌋.
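As a small sketch of this heap-style numbering (the function name is ours, not from the paper), the depth of node s_j follows directly from the index j:

```python
def node_depth(j):
    # Depth of node s_j under heap-style numbering: s_1 is the root at
    # depth 0, s_2 and s_3 are its children at depth 1, and in general
    # node s_j sits at depth floor(log2(j)).
    return j.bit_length() - 1  # equals floor(log2(j)) for j >= 1
```

This also means a node can compute its own depth locally from its index, with no global state.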
Request Sequence and Working Set. Items are requested over time in an online manner, modeled as a request sequence σ = (σ_1, ..., σ_m), where σ_t = v ∈ V means item v is requested at time t. We are sometimes interested in the recency of item requests, particularly the size of the working set. Formally, we define ws_t(σ, v) as the working set of item v at time t in the request sequence σ. The working set ws_t(σ, v) is the set of unique items requested since the last request to the item v before time t. We define the rank of item v at time t, rank_t(v), as the size of the working set of the item v at time t.

†We assume the set of nodes to be arbitrarily large, as the exact number of nodes will be determined based on their used capacity.
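To make the rank definition concrete, here is a minimal sketch (names ours) that computes rank_t(v) directly from a request sequence; for items with no earlier request we fall back to the number of distinct items seen so far, one possible convention that the definition above leaves open:

```python
def working_set_rank(sigma, t, v):
    # sigma: request sequence as a list (0-indexed); t: current time index.
    # Find the last request to v strictly before time t.
    last = None
    for i in range(t - 1, -1, -1):
        if sigma[i] == v:
            last = i
            break
    if last is None:
        # v was never requested before time t; one convention is to
        # count all distinct items seen so far.
        return len(set(sigma[:t]))
    # Size of the set of unique items requested since the last request to v.
    return len(set(sigma[last + 1:t]))
```

For sigma = ['a', 'b', 'c', 'b'] and a request to 'b' at t = 3, the working set since b's last request is {'c'}, so the rank is 1 under this reading of the definition (some texts also count v itself, shifting ranks by one).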
Costs and Competitive Ratio. We partition costs incurred by an algorithm, ALG, into two parts, the cost of finding an item: the access cost, and the cost of reconfigurations: the reconfiguration cost. The search for any item starts at the root node and ends at the node containing the item. Based on our assumption of constant capacity, we assume the cost of search inside a node to be negligible. Furthermore, assuming the local routing property, we find an item by traversing a single path in our tree; hence the access cost for an access request σ_i, C^A_ALG(σ_i), equals the level at which the item is stored.

In our model, a reconfiguration consists of moving an item one level up or one level down in the tree, plus potentially additional lookups inside a node. We denote the total reconfiguration cost after an access request σ_i by C^R_ALG(σ_i). Hence, the total cost of each access request is C^A_ALG(σ_i) + C^R_ALG(σ_i), and the total cost of the algorithm on the whole request sequence is: C_ALG(σ) = Σ_{i=1}^{m} [C^A_ALG(σ_i) + C^R_ALG(σ_i)]. The objective of SeedTree is to operate at the lowest possible cost, or more specifically, as close as possible to the cost of an optimal offline algorithm, OPT.
Definition 1 (Competitive ratio). Given an online algorithm ALG and an optimal offline algorithm OPT, the (strict) competitive ratio is defined as: ρ_ALG = max_σ C_ALG(σ) / C_OPT(σ).

Furthermore, we say an algorithm has a (strict) access competitive ratio when considering only the access cost of the online algorithm ALG (not including the reconfiguration cost).
In this paper, we prove that SeedTree is dynamically optimal. This means that the cost of our algorithm matches the cost of the optimal offline algorithm asymptotically.

Definition 2 (Dynamic optimality). Algorithm ALG is dynamically optimal if it has a constant competitive ratio, i.e., ρ_ALG = O(1).
MRU trees. We define a specific class of self-adjusting trees, MRU trees. An algorithm maintains an MRU tree if it keeps items at a level similar to their ranks.

Definition 3 (MRU tree). An algorithm has the MRU(0) property if for any item v inside its tree and at any given time t, the equality level_t(v) = ⌊log ⌈rank_t(v)/c⌉⌋ holds.

Similarly, we say an algorithm maintains an MRU(β) tree if it ensures the relaxed bound level_t(v) ≤ ⌊log ⌈rank_t(v)/c⌉⌋ + β for any item v in the tree.
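As a quick sketch of the MRU(0) level formula (the function name is ours), the target level of an item is determined by its rank and the node capacity c:

```python
import math

def mru_level(rank, c):
    # MRU(0) target level: level_t(v) = floor(log2(ceil(rank_t(v) / c))).
    # With capacity c, the c most recently used items fit at the root
    # (level 0), the next 2c items at level 1, the next 4c at level 2, ...
    return math.floor(math.log2(math.ceil(rank / c)))
```

With c = 2, ranks 1-2 map to level 0, ranks 3-6 to level 1, and ranks 7-14 to level 2, matching the 1, 2, 4, ... nodes (and 2, 4, 8, ... item slots) available per level.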
III. ONLINE SEEDTREE

This section presents SeedTree, an online algorithm that is dynamically optimal in expectation. The algorithm builds upon uniformly random generated addresses, and allows for local routing, while ensuring dynamic optimality. The details of the algorithm are as follows: Algorithm 1 always starts from the root node. Upon receiving an access request to an item v, it performs local routing (described in Procedure LocalRouting) based on the uniformly random binary address generated for the item v, which uniquely determines the path of v in the tree. We denote the i-th bit of the address of v by H(v, i). Let us assume that the local routing for item v ends at level ℓ.
(a) Item 001 moves up, node-by-node, until it reaches the root. (b) The first try of push-down failed, because node 100 is full. (c) After finding a non-full node, items are pushed down node-by-node.

Fig. 2: An example of the steps taken in Algorithm 1, starting from the state of SeedTree in Figure 1, which has a capacity equal to 2. In this example, the request is an access request to the item with the hash value 001 (the purple circle). Subfigure 2a shows the move-to-the-root phase, and Subfigures 2b and 2c depict the push-down phase.
Procedure LocalRouting(s, v)
1: if H(v, level(s)) equals 0 then
2:     Return the left child of s.
3: else
4:     Return the right child of s.
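Under the heap-style node numbering from §II, this procedure becomes simple index arithmetic; the following sketch (helper name ours) routes an item from the root for a given number of steps using its address bits H(v, 0), H(v, 1), ...:

```python
def local_route(address_bits, depth):
    # Follow an item's unique path from the root (heap index 1):
    # at depth d, bit d of the item's random address selects the
    # left child (0 -> 2j) or the right child (1 -> 2j + 1).
    j = 1
    for d in range(depth):
        j = 2 * j + address_bits[d]
    return j
```

For address bits [1, 0, 0], the walk visits nodes 1 → 3 → 6 → 12, so routing three steps ends at node s_12, which sits at depth 3 as required.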
Then SeedTree performs the following two-phase reconfiguration. These two phases are designed to ensure that the level of items remains in the same range as their rank (details will follow), and that the number of items remains the same at each level.

1) Move-to-the-root: This phase moves the accessed item to the node at the lowest level possible, the root of the tree. The movement of the item is step-by-step, and it keeps all the other items in their previous nodes (we keep the item in a temporary buffer if a node on the path is full). This phase is depicted in Figure 2a by zig-zagged purple arrows.

2) Push-down: In this phase, our algorithm starts from the root node, selects an item in the node (including the item that has just moved to this node) uniformly at random, and moves this item one level down to the new node selected by the LocalRouting procedure. The same procedure is continued for the new node until reaching level ℓ, the level of the accessed item. If the node at level ℓ was non-full, the re-establishment of balance was successful. Otherwise, if this attempt fails, the algorithm reverses the previous push-downs back to the root, and starts again, until an attempt is successful. As an example, a failed attempt of this phase is depicted by dashed red edges in Figure 2b and the final successful one by curved blue arrows in Figure 2c.
Algorithm 1 always terminates, as there is always the chance that the item which has just been moved to the root is selected among all candidates, and we know that the node from which that item was taken is not full. We now state the main theorem of the paper, which proves the dynamic optimality of SeedTree.

Theorem 1. SeedTree is dynamically optimal for any given capacity c ≥ 1.
Algorithm 1: Online SeedTree
Input: Accessed item v.
1:  Set s as the root.
2:  while s does not contain v do
3:      s = LocalRouting(s, v).
4:  Call the current level of v ℓ.
5:  Set s as the root, and move item v to s.
6:  while balance is not fixed do
7:      Call the current node s.
8:      while level of s is less than ℓ do
9:          Take an item in node s, uniformly at random; call it v.
10:         s = LocalRouting(s, v).
11:         Add item v to the node s.
12:     if the last chosen node is full then
13:         Reverse the push-down back to the root.
299
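+ The two phases of Algorithm 1 can be sketched in Python. This is an
+ illustrative model under our own assumptions (a dict of bounded-capacity
+ nodes, SHA-512-derived routing bits; names such as `local_routing` and
+ `access` are ours), not the authors' implementation:

```python
import hashlib
import random

def address_bit(item, level):
    """One fixed pseudo-random routing bit per level, from the item's hash."""
    digest = hashlib.sha512(str(item).encode("utf-8")).digest()
    return (digest[level // 8] >> (level % 8)) & 1

class SeedTree:
    def __init__(self, capacity, levels):
        self.c = capacity
        # node (level, index) -> list of items; a node is full at c items
        self.nodes = {(l, i): [] for l in range(levels) for i in range(2 ** l)}

    def local_routing(self, level, index, item):
        """Child of (level, index) on the item's hash path."""
        return level + 1, 2 * index + address_bit(item, level)

    def access(self, item):
        # Search along the item's hash path from the root.
        level, index = 0, 0
        while item not in self.nodes[(level, index)]:
            level, index = self.local_routing(level, index, item)
        ell = level
        # Phase 1: move-to-the-root (a full root acts as a temporary buffer).
        self.nodes[(level, index)].remove(item)
        self.nodes[(0, 0)].append(item)
        # Phase 2: push-down, retried until level ell ends at a non-full node.
        while True:
            moves = []  # (src, dst, item) per step, recorded for reversal
            lvl, idx = 0, 0
            for _ in range(ell):
                v = random.choice(self.nodes[(lvl, idx)])
                nxt = self.local_routing(lvl, idx, v)
                self.nodes[(lvl, idx)].remove(v)
                self.nodes[nxt].append(v)
                moves.append(((lvl, idx), nxt, v))
                lvl, idx = nxt
            if ell == 0 or len(self.nodes[(lvl, idx)]) <= self.c:
                return ell  # success: balance re-established
            for src, dst, v in reversed(moves):  # failed attempt: undo it
                self.nodes[dst].remove(v)
                self.nodes[src].append(v)
```

+ The retry loop mirrors lines 6–13 of Algorithm 1: a failed attempt is
+ undone by replaying the recorded moves in reverse.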
+ The proof of Theorem 1 is at the end of the section. The first step towards
+ the proof is showing that the number of items in each level remains the
+ same. This holds because after removing an item at a certain level, the
+ algorithm adds an item to the same level as a result of the push-down phase.
+ Observation 1. SeedTree keeps the number of items the same at each level.
+ The rest of the analysis is based on the assumption that the algorithm was
+ initialized with a fixed fractional occupancy 0 < f < 1 of the capacity of
+ each level, i.e., in level i, the initial tree has exactly ⌊c · f · 2^i⌋
+ items. At the end of this section, we will see that f = 1/2 works best for
+ our analysis. However, we emphasize that any 0 < f < 1 suffices for
+ SeedTree to run properly.
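+ For concreteness, the initial per-level item counts prescribed by this rule
+ can be computed directly (a small illustrative helper; the function name is
+ ours):

```python
import math

def initial_level_counts(c, f, levels):
    """Items placed at each level i of the initial tree: floor(c * f * 2**i)."""
    return [math.floor(c * f * 2 ** i) for i in range(levels)]

counts = initial_level_counts(c=4, f=0.5, levels=5)
# With c = 4 and f = 1/2, every level starts exactly half full: [2, 4, 8, 16, 32].
```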
+ The second observation is a consequence of Observation 1. As the number of
+ items remains the same in each level (by Observation 1), at most a fraction
+ f of all nodes are full. In the lowest level, the number of full nodes
+ might be even lower; hence the probability that a uniformly random node is
+ full is at most f when we move to the next request.
+ Observation 2. Algorithm 1 ensures that the probability of any uniformly
+ randomly chosen node in SeedTree being full, after serving each access
+ request, is at most f.
+ [Figure 2 (panels a–c): nodes are labeled by 3-bit addresses (001–111); the
+ panels illustrate the move-to-the-root phase, a failed push-down attempt,
+ and a successful push-down.]
+ According to Algorithm 1, items are selected uniformly at random inside a
+ node. In the following lemma, we show that a node at a certain level is
+ also selected uniformly at random, which enables the rest of the proof.
+ Lemma 1. Nodes selected on the final path of the push-down phase with a
+ level lower than ℓ are selected uniformly at random.
+ Proof. We claim that the probability of the ℓ′-th node on the path (the
+ node at level ℓ′, denoted by sℓ′) being the selected node is 1/2^ℓ′. Our
+ proof goes by induction. For the basis, ℓ′ = 0, the claim holds since there
+ is only one node, the root. Now consider the final path of the push-down,
+ and the probability of reaching the current node sℓ′. By the induction
+ hypothesis, the parent of sℓ′, the node sℓ′−1, has been selected uniformly
+ at random, with probability 1/2^(ℓ′−1). By Line 9 of Algorithm 1, an item
+ is selected from those inside sℓ′−1 uniformly at random; combined with the
+ independence guarantee of the hash function that generated the address of
+ the selected item, we conclude that the decision to go left or right from
+ sℓ′−1 was also uniformly random. Hence the probability of reaching sℓ′ is
+ 1/2^(ℓ′−1) · 1/2 = 1/2^ℓ′. Note that the above choices are independent of
+ whether or not the descendants of sℓ′−1 are full. Hence the choice is
+ independent of (possible) previous failed attempts of the push-down phase
+ (which might happen due to a full node at level ℓ), i.e., the previous
+ attempts do not affect the probability of choosing the node sℓ′.
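+ The uniformity claim of Lemma 1 can be checked empirically: a descent that
+ goes left or right with probability 1/2 at every level should land on each
+ of the 2^ℓ nodes of level ℓ equally often. A small Monte Carlo sketch
+ (illustrative only, not part of the paper):

```python
import random
from collections import Counter

def random_descent(depth, rng):
    """Walk down from the root, flipping a fair coin per level; return leaf index."""
    index = 0
    for _ in range(depth):
        index = 2 * index + rng.randint(0, 1)
    return index

rng = random.Random(0)
depth, trials = 3, 80_000
counts = Counter(random_descent(depth, rng) for _ in range(trials))
frequencies = {node: counts[node] / trials for node in range(2 ** depth)}
# Each of the 2**3 = 8 nodes is hit with empirical frequency close to 1/8.
```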
+ An essential element of the proof of Theorem 1 is that the rank and level
+ of items are related to each other. Lemma 2 captures one aspect of this
+ relation.
+ Lemma 2. During the execution of SeedTree, for items v and u at time t, if
+ rank_t(v) > rank_t(u) then E[level_t(v)] > E[level_t(u)].
+ Proof. Since rank_t(v) > rank_t(u), we know that u was accessed more
+ recently than v. Consider time t′, the last time u was accessed. Since the
+ rank of v is strictly larger than the rank of u, and as u was moved to the
+ root at time t′, we know that level_t′(v) > level_t′(u).
+ Items u and v might reach the same level after time t′, but they need not.
+ We consider the level at which they first meet as a random variable, L_uv,
+ and set L_uv = −1 if u and v never appear on the same level after time t′.
+ Let us quantify the difference in the expected level of u and v, using the
+ law of total expectation:
+ E[level_t(v)] − E[level_t(u)] =
+   Σ_{k=−1}^{⌊log⌈n/c⌉⌋} Pr(L_uv = k) · (E[level_t(v) | L_uv = k] − E[level_t(u) | L_uv = k])
+ For the case L_uv = −1, items u and v never reached the same level, and the
+ following always holds:
+ E[level_t(v) | L_uv = −1] > E[level_t(u) | L_uv = −1]
+ For k ≥ 0, consider the time t′′ when u and v meet at the same level, i.e.,
+ level_t′′(u) = level_t′′(v). After items u and v meet for the first time,
+ their expected progress is the same. More precisely, consider the subtree
+ of the node containing v at time t′′, and call it T′. Since the item
+ addresses are chosen uniformly at random, the expected number of times that
+ T′ is a subtree of a node containing v equals the expected number of times
+ that T′ is a subtree of a node containing u at the same level. Hence the
+ expected increase in level for both items u and v stays the same from time
+ t′′ onward.
+ Next, we explain why the number of items accessed at a higher level is
+ limited in expectation for any given item.
+ Lemma 3. For a given item v at time t, in expectation at most 2 · rank_t(v)
+ items have been accessed at a higher level since the last time v was
+ accessed.
+ Proof. Given Lemma 2, the proof is along the lines of the proof of Lemma 4
+ from [10]. We omit the details due to space constraints.
+ Now we prove that the items in the tree maintained by the online SeedTree
+ are not placed much farther from their position in a tree that realizes the
+ exact working set property. This in turn allows us to bound the total cost
+ of the online SeedTree in comparison to the optimal offline algorithm with
+ the same capacity. The approximation factor, 2 − log(f), is intuitive: with
+ less capacity in each level (lower values of the levels' fractional
+ occupancy), we need to put items further down.
+ Lemma 4. SeedTree is MRU(2 − log(f)) in expectation.
+ Proof. For any given item v and time t, we show that E[level_t(v)] ≤
+ ⌊log⌈rank_t(v)/c⌉⌋ + 2 − log(f) holds, considering the move-to-the-root
+ and push-down phases. As can be seen in Line 8 of Algorithm 1, the item v
+ might move down if the current level of v is lower than the level of the
+ accessed item.
+ Let us denote the increase in level from time t′ to time t by a random
+ variable D(t′, t). We express this increase in terms of indicator random
+ variables I(t′, t, ℓ), each denoting whether item v went down from level ℓ
+ during [t′, t] or not. We know that:
+ D(t′, t) = Σ_ℓ I(t′, t, ℓ)
+ Let K denote the number of items accessed from a higher level, and write
+ K = k_1 + · · · + k_⌈n/c⌉, where k_ℓ counts the accesses that happened
+ while item v was at level ℓ. For level ℓ, based on Observation 1 and
+ Lemma 1 and the fact that each level contains f · c · 2^ℓ items, we
+ conclude that v is selected after k_ℓ − 1 accesses with probability
+ (1 − 1/(f · c · 2^ℓ))^(k_ℓ − 1) · (1/(f · c · 2^ℓ)). Hence:
+ I(t′, t, ℓ) = min(1, Σ_{k_ℓ=0}^{K} (1 − 1/(f · c · 2^ℓ))^(k_ℓ − 1) · (1/(f · c · 2^ℓ)))
+             = min(1, (1/(f · c · 2^ℓ)) · Σ_{k_ℓ=0}^{K} (1 − 1/(f · c · 2^ℓ))^(k_ℓ − 1))
+             ≤ min(1, K/(f · c · 2^ℓ))
+ Going back to our original goal of finding how many levels an item goes
+ down during a time period [t′, t], we have:
+ E[D(t′, t)] ≤ Σ_ℓ E[min(1, K/(f · c · 2^ℓ))]
+            = log(E[K]/(f · c)) + 1 = log(E[K]) − log(c) − log(f) + 1
+ The last equality comes from the fact that for ℓ = log(E[K]/(f · c)) we
+ have K/(f · c · 2^ℓ) ≤ 1, and for all larger values of ℓ the value
+ decreases exponentially by factors of two. From Lemma 3 we know that the
+ expected value of K is at most 2 · rank_t(v); therefore, the expected
+ increase is:
+ E[D(t′, t)] ≤ log(2 · rank_t(v)) − log(c) − log(f) + 1
+            = log(rank_t(v)) − log(c) + 2 − log(f)
+ The following lemma shows the relation between the total cost of the online
+ SeedTree and the fractional occupancy f. The relation is natural: as f
+ becomes smaller, the chance of finding a non-full node becomes larger, and
+ thus fewer attempts are needed to find one.
+ Lemma 5. The expected cost of SeedTree is at most 2 · (⌈1/(1 − f)⌉ + 1)
+ times the access cost.
+ Proof. Consider the accessed item v at level ℓ. In the first part of the
+ algorithm, the move-to-the-root phase costs the same as the access, which
+ equals traversing ℓ edges. As the probability of a node being non-full is
+ 1 − f based on Observation 2, and as the choice of nodes is uniform based
+ on Observation 1, in expectation only ⌈1/(1 − f)⌉ iterations are needed
+ during the push-down phase to find a non-full node, each at cost 2 · ℓ.
+ Hence, by linearity of expectation, we have:
+ E[C_ALG] = E[C_ALG^Access + C_ALG^Move-to-the-root + C_ALG^Push-down]
+          ≤ 2 · (1 + ⌈1/(1 − f)⌉) · ℓ = 2 · (1 + ⌈1/(1 − f)⌉) · C_ALG^Access
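+ The ⌈1/(1 − f)⌉ term is simply the mean of a geometric random variable:
+ each push-down attempt independently succeeds with probability at least
+ 1 − f. A quick simulation of that expectation for f = 1/2 (an illustrative
+ sketch, not the paper's code):

```python
import random

def attempts_until_nonfull(f, rng):
    """Number of push-down attempts until a non-full node (prob. 1 - f) is hit."""
    attempts = 1
    while rng.random() < f:  # with probability f the landing node is full
        attempts += 1
    return attempts

rng = random.Random(42)
f, trials = 0.5, 100_000
mean = sum(attempts_until_nonfull(f, rng) for _ in range(trials)) / trials
# The geometric mean number of attempts is 1 / (1 - f) = 2 for f = 1/2.
```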
+ We now describe why working-set optimality is enough for dynamic
+ optimality, given that reconfigurations do not cost much (which is proved
+ in Lemma 5). Consequently, any other form of optimality, such as
+ key-independent optimality or finger optimality, is guaranteed
+ automatically [11].
+ Lemma 6. For any given c, an MRU(0) algorithm is (1 + e)-access
+ competitive.
+ Proof. The proof relies on a potential function argument. We define a
+ potential function φ_t at time t and bound the change in potential from
+ time t to t + 1, ∆φ_t→t+1.
+ Our potential function at time t counts the number of items that are
+ misplaced, with regard to their rank, in the tree of the optimal offline
+ algorithm OPT. (As the definition of MRU(0) indicates, there exists no
+ inversion in the online tree; that is why we only focus on the number of
+ inversions in OPT.) Concretely, we say a pair (v, u) is an inversion if
+ rank_t(v) < rank_t(u) but level_t(v) > level_t(u). We denote the number of
+ items that have an inversion with item v at time t by inv_t(v), and define
+ B_t(v) = 1 + inv_t(v)/(c · 2^level_t(v)). Furthermore, define
+ B_t = ∏_{v=1}^n B_t(v). We define the potential function at time t as
+ φ_t = log B_t. We assume that the online SeedTree rearranges its items in
+ the tree before the optimal algorithm's rearrangements. Let us first
+ describe the change in potential due to the rearrangement in the online
+ SeedTree after accessing item σ_t = v. This change has the following
+ effects:
+ 1) The rank of the accessed item, v, has been set to 1.
+ 2) The rank of other items in the tree might have increased by at most 1.
+ Since the relative rank of items other than v does not change because of
+ the second effect, it does not affect the number of inversions and hence
+ the potential function. Therefore, we focus on the first effect. Since OPT
+ has not changed its configuration, for every item u that is stored at a
+ lower level than v in OPT, a single inversion is created; therefore we have
+ B_t+1(u) = B_t(u) + 1/(c · 2^level_t(u)). For the accessed item v, as its
+ rank has changed to one, all of its inversions are deleted. The number of
+ inversions for the other items remains the same. Let us denote the set of
+ items with a lower level than v at time t by L_t(v) and partition the
+ product ∏_{i=1}^n B_t+1(i) into three parts (v, items stored at a lower
+ level than v, and the other items, denoted by the set O_t(v)):
+ ∏_{i=1}^n B_t+1(i) = B_t+1(v) · ∏_{i∈L_t(v)} B_t+1(i) · ∏_{i∈O_t(v)} B_t+1(i)
+ By rewriting B_t+1(i) in terms of B_t(i), we get:
+ ∏_{i=1}^n B_t+1(i) = 1 · ∏_{i∈L_t(v)} (B_t(i) + 1/(c · 2^level_t(i))) · ∏_{i∈O_t(v)} B_t(i)
+ Now let us denote the change in potential due to the first effect from
+ time t to t + 1 by ∆φ¹_t→t+1, and describe it in more detail:
+ ∆φ¹_t→t+1 = log B_t+1 − log B_t = log(B_t+1/B_t)
+           = log(∏_{i=1}^n B_t+1(i) / ∏_{i=1}^n B_t(i))
+           = log((1/B_t(v)) · ∏_{i∈L_t(v)} (B_t(i) + 1/(c · 2^level_t(i))) / ∏_{i∈L_t(v)} B_t(i))
+           ≤ log((1/B_t(v)) · e^level_t(v))
+ in which the last inequality comes from grouping the items of L_t(v) by
+ level: a level ℓ < level_t(v) contains m = c · 2^ℓ items, each contributing
+ the additive term 1/(c · 2^ℓ) = 1/m, and since B_t(i) ≥ 1:
+ ∏_{i=1}^{m} (B_t(i) + 1/m) ≤ ∏_{i=1}^{m} (B_t(i) + B_t(i)/m)
+   = (1 + 1/m)^m · ∏_{i=1}^{m} B_t(i) ≤ e · ∏_{i=1}^{m} B_t(i),
+ so the level_t(v) levels below v together contribute at most the factor
+ e^level_t(v).
+ Now let us focus on B_t(v), and first assume that
+ ⌊log⌈rank_t(v)/c⌉⌋ > level_t(v). We want to lower-bound the number of
+ items that cause an inversion with the accessed item v. Among all
+ c · 2^⌊log⌈rank_t(v)/c⌉⌋ − 1 items that v might have higher rank than, at
+ most c · 2^level_t(v) − 1 have a lower level in the OPT tree. Hence we
+ have:
+ B_t(v) ≥ ((c · 2^⌊log⌈rank_t(v)/c⌉⌋ − 1) − (c · 2^level_t(v) − 1)) / (c · 2^level_t(v))
+        ≥ (2^⌊log⌈rank_t(v)/c⌉⌋ − 1)/2^level_t(v) − 1
+        ≥ 2^⌊log⌈rank_t(v)/c⌉⌋ / 2^(level_t(v)+1)
+        = 2^(⌊log⌈rank_t(v)/c⌉⌋ − level_t(v) − 1)
+ Hence the change in potential due to the first effect is:
+ ∆φ¹_t→t+1 ≤ log((1/2^(⌊log⌈rank_t(v)/c⌉⌋ − level_t(v) − 1)) · e^level_t(v))
+           = log(2^((1 + log e) · level_t(v) − ⌊log⌈rank_t(v)/c⌉⌋))
+           = (1 + log e) · level_t(v) − ⌊log⌈rank_t(v)/c⌉⌋
+ For the case ⌊log⌈rank_t(v)/c⌉⌋ < level_t(v), we use the fact that
+ B_t(v) > 1 in the first inequality below:
+ ∆φ¹_t→t+1 = log((1/B_t(v)) · e^level_t(v))
+           ≤ log(2^(log e · level_t(v))) = log e · level_t(v)
+           ≤ (1 + log e) · level_t(v) − ⌊log⌈rank_t(v)/c⌉⌋
+ Hence, in both cases of ⌊log⌈rank_t(v)/c⌉⌋ being larger or smaller than
+ level_t(v), we have
+ ∆φ¹_t→t+1 ≤ (1 + log e) · level_t(v) − ⌊log⌈rank_t(v)/c⌉⌋.
+ We then bound the change in the potential caused by OPT's reconfiguration.
+ Details of the computations are omitted due to space constraints, but they
+ are similar to the changes in potential due to the rearrangements of the
+ online algorithm ON, and the result is that each movement of OPT costs
+ less than log e. Summing up the changes in the potential after ON's and
+ OPT's reconfigurations, and assuming OPT has done w_t movements at time t,
+ we end up with:
+ ∆φ_t→t+1 = (1 + log e) · level_t(v) − ⌊log⌈rank_t(v)/c⌉⌋ + w_t · log e
+ Hence the cost of the online algorithm MRU(0) at time t is at most:
+ C^t_MRU(0) = C^t_Amortized + ∆φ_t
+   = ⌊log⌈rank_t(v)/c⌉⌋ + (1 + log e) · level_t(v) − ⌊log⌈rank_t(v)/c⌉⌋
+     + w_t · log e
+   ≤ (1 + log e) · (level_t(v) + w_t)
+ Summing up the costs of MRU(0) and OPT over the whole request sequence, we
+ get:
+ C_ON = Σ_t C^t_ON ≤ Σ_t (1 + log e) · (level_t(v) + w_t) = (1 + log e) · C_OPT
+ in which the last equality comes from the fact that OPT also needs to
+ access the item and, as we assumed, performs an additional w_t
+ reconfigurations.
+ As the first application of Lemma 6, we prove a lower bound on the cost of
+ any online algorithm that depends only on the working-set sizes of the
+ accessed items in the sequence.
+ Theorem 2. Any online algorithm maintaining a self-adjusting complete
+ binary tree with capacity c > 1 on a request sequence σ = σ_1, . . . , σ_m
+ requires an access cost of at least
+ (Σ_{i=1}^m ⌊log⌈rank_t(σ_i)/c⌉⌋) / (1 + e).
+ Proof. This proof is an extension and improvement of the proof from [10]
+ for any value of c > 2. A consequence of Lemma 6 is that even an optimal
+ algorithm cannot be better than 1/(1 + e) times the cost of MRU(0),
+ otherwise contradicting Lemma 6. As the cost of each access to the item
+ σ_i is ⌊log⌈rank_t(σ_i)/c⌉⌋ in MRU(0), we conclude that the total cost of
+ any algorithm must be at least (Σ_{i=1}^m ⌊log⌈rank_t(σ_i)/c⌉⌋) / (1 + e).
+ Lemma 7. Any MRU(β) tree is β · (1 + e)-access competitive.
+ Proof. Lemma 6 shows that an MRU(0) tree is (1 + e)-access competitive.
+ Any item which was at level k in MRU(0) is at level k + β in MRU(β). As an
+ MRU(β) algorithm keeps the items of rank at most c at level 0, and because
+ for any k ≥ 1 we have k + β ≤ β · k, we obtain that MRU(β) is
+ β · (1 + e)-access competitive.
+ We conclude this section by proving our main theorem, the dynamic
+ optimality of the online SeedTree.
+ Proof of Theorem 1. Combining Lemma 4, Lemma 5 and Lemma 7 yields the
+ upper bound (1 + e) · (2 · (1 + ⌈1/(1 − f)⌉)) · (2 − log(f)) on the
+ competitive ratio. The fractional occupancy f = 1/2 is the optimal value
+ of f in this formula, which gives us the 43-competitive ratio.
+ We point out that the above calculation is only an upper bound on the
+ competitive ratio. As we will discuss in §V, the best results are usually
+ achieved with a slightly higher value of f, which we hypothesize is due to
+ an overestimation of items' depth in our theoretical analysis.
+ 
+ IV. APPLICATION IN RECONFIGURABLE DATACENTERS
+ SeedTree provides a fundamental self-adjusting structure which is useful in
+ different settings. For example, it may be used to adapt the placement of
+ containers in virtualized settings, in order to reduce communication costs.
+ However, SeedTree can also be applied in reconfigurable networks in which
+ links can be adapted. In the following, we describe such a use case in more
+ detail. In particular, we consider reconfigurable datacenters in which the
+ connectivity between racks, or more specifically Top-of-Rack (ToR)
+ switches, can be adjusted dynamically, e.g., based on optical circuit
+ switches [6]. An optical switch provides a matching between racks, and
+ accordingly, this model is known as the matching model in the literature
+ [15]. In the following, we show how a SeedTree with capacity c and
+ fractional occupancy f = 1/c can be represented by 2 + c matchings, and how
+ reconfigurations can be transformed to the matching model‡. We group these
+ matchings into two sets:
+ • Topological matchings: 2 static matchings, embedding the underlying
+ binary tree of SeedTree. The first matching represents edges between a
+ node and its left child (whose ID is twice the ID of the node), and
+ similarly the second matching represents edges to right children (whose ID
+ is twice the ID of their parent plus one). An example is depicted with
+ solid edges in Figure 3.
+ • Membership matchings: c dynamic matchings, connecting nodes to the items
+ inside them. If a node has more than one item, the assignment of items to
+ matchings is arbitrary. An example is shown with dotted edges in Figure 3.
+ Having the matchings in place, let us briefly discuss how the search and
+ reconfiguration operations are implemented. A search for an item starts at
+ the node with ID 001, the root node. We then check the membership
+ matchings of this node. If one of them maps to the item, we have found the
+ node which contains the item, and our search was successful. Otherwise, we
+ follow the edge determined by the hash of the item, going to the next
+ possible node hosting the item. We repeat the process of checking
+ membership matchings and following topological matchings until we find the
+ item. The item will be found, as it is stored in one of the nodes on the
+ path determined by its hash value. Each step of moving an item can be
+ implemented in the matching model with only one edge removal and one edge
+ addition in the membership matchings.
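+ The search procedure over the two matching sets can be sketched as
+ follows. The data layout and helper names are our assumptions, not the
+ paper's implementation: `membership` maps a node ID to the set of items
+ currently matched to it, and child IDs follow the 2k / 2k + 1 convention
+ described above:

```python
def search(item, membership, hash_bits, root=1):
    """Follow topological matchings from the root along the item's hash bits,
    checking the membership matchings of each visited node."""
    node = root
    for bit in hash_bits:
        if item in membership.get(node, set()):
            return node
        node = 2 * node + bit  # left child 2k, right child 2k + 1
    return node if item in membership.get(node, set()) else None

# Example: 'x' stored two levels below the root, hash path right then left.
membership = {1: {'a'}, 3: {'b'}, 6: {'x'}}
found = search('x', membership, hash_bits=[1, 0])  # visits nodes 1 -> 3 -> 6
```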
+ V. EXPERIMENTAL EVALUATION
+ We complement our analytical results by evaluating SeedTree on multiple
+ datasets. Concretely, we are interested in answering the following
+ questions:
+ Q1 How does the access cost of our algorithm compare to the statically
+ optimal algorithm (optimized based on frequencies) and a demand-oblivious
+ algorithm?
+ ‡The matching model considers perfect matchings only; in practice,
+ however, imperfect matchings can be enforced by ignore rules in switches.
+ Fig. 3: A transformation of the example SeedTree shown in Figure 1, which
+ has capacity c = 2 and fractional occupancy f = 1/2. The disco balls on
+ top represent the reconfigurable switches, and below them are datacenter
+ racks. Solid edges show topological matchings, and dotted edges represent
+ membership matchings.
+ Q2 How does additional capacity improve the performance of the online
+ SeedTree, given a fixed fractional occupancy of each level?
+ Q3 What is the best initial fractional occupancy for the online SeedTree,
+ given a fixed capacity?
+ Answers to these questions help developers tune the parameters of SeedTree
+ based on their requirements and needs. Before going through the results,
+ we describe the setup that we used: Our code is written in Python 3.6, and
+ we used the seaborn 0.11 [16] and Matplotlib 3.5 [17] libraries for
+ visualization. Our programs were executed on a machine with 2x Intel Xeon
+ E5-2697V3 SR1XF CPUs with 2.6 GHz and 14 cores each, and a total of 128 GB
+ DDR4 RAM.
+ A. Input
+ • Real-world dataset: Our real-world dataset consists of communications
+ between servers inside three different Facebook clusters, obtained from
+ [1]. We post-processed this dataset for single-source communications:
+ among all possible sources, we chose the most frequent one.
+ • Synthetic dataset: We use the Markovian model discussed in [1], [18] for
+ generating sequences based on a temporal locality parameter which ranges
+ from 0 (uniform distribution, no locality) to 0.9 (high temporal
+ locality). Our synthetic input consists of 65,535 items and 1 million
+ requests. For generating such a dataset, we start from a random sample of
+ items. We post-process this sequence, overwriting each request with the
+ previous request with the probability determined by our temporal locality
+ parameter. After that, we execute a second post-processing step to ensure
+ that exactly 65,535 items appear in the final trace.
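+ The described generation process can be sketched as follows (our reading
+ of it; the function and parameter names are assumptions, and the final
+ post-processing step that fixes the exact item count is omitted):

```python
import random

def markovian_trace(n_items, n_requests, locality, seed=0):
    """Request trace in which each request repeats its predecessor with
    probability `locality` and otherwise draws a fresh uniform item."""
    rng = random.Random(seed)
    trace = [rng.randrange(n_items)]
    for _ in range(n_requests - 1):
        if rng.random() < locality:
            trace.append(trace[-1])       # temporal locality: repeat
        else:
            trace.append(rng.randrange(n_items))
    return trace

trace = markovian_trace(n_items=1024, n_requests=10_000, locality=0.9)
repeat_rate = sum(a == b for a, b in zip(trace, trace[1:])) / (len(trace) - 1)
```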
+ B. Algorithm setup
+ We use SHA-512 [19] from the hashlib library as the hash function in our
+ implementation, approximating the uniform distribution for generating the
+ addresses of items. In order to store the items in a node we use a linked
+ list, and when we move an item to a node that is already full with other
+ items, the item is stored in a temporary buffer. We start from a tree
+ pre-filled with items, which respects the fractional occupancy parameter.
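+ One way to derive the fixed per-level routing bits from the SHA-512 digest
+ is to read one bit per level (the exact bit-extraction convention here is
+ our assumption; any fixed convention works):

```python
import hashlib

def path_bits(item, depth):
    """Left/right routing decisions for `item`, one bit per level, from SHA-512."""
    digest = hashlib.sha512(str(item).encode("utf-8")).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(depth)]

bits = path_bits("item-42", 10)  # deterministic: same item, same path
```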
+ Fig. 4: Improvements in the performance of SeedTree by fine-tuning
+ parameters. Figures are generated using the synthetic dataset with various
+ locality values. (4a) Comparing the access cost of SeedTree with
+ fractional occupancy f = 1/2 to the best possible static algorithm and
+ the demand-oblivious algorithm, all given capacity c = 4. Access costs are
+ divided by 100,000. (4b) The effect of increasing the capacity of nodes
+ and the temporal locality of the input on the total cost of the algorithm.
+ The fractional occupancy is set to f = 1/2 for all capacities. Total costs
+ are divided by 1 million for this plot. (4c) Tradeoff between the total
+ cost and the fractional occupancy, given a range of temporal localities.
+ The capacity of nodes is set to 12. The number in each cell represents the
+ cost, divided by 1 million.
+ Fig. 5: Improvements in the normalized access cost of the algorithm by
+ changing SeedTree parameters. These results are obtained based on the
+ communications of the most frequent source from three clusters of the
+ real-world dataset. Costs are normalized by the cost of the
+ demand-oblivious algorithm. (5a) Changes in the normalized cost by varying
+ capacity; fractional occupancy is set to f = 1/2. (5b) Changes in the
+ normalized cost by varying fractional occupancy; gray dots show the
+ minimum values, and the capacity of nodes is set to 12.
+ In our experiments, we range the capacity (c) from 2 to 16 and the
+ fractional occupancy (f) from 0.16 to 0.83. Due to the random nature of
+ our algorithms and input generation, we repeat each experiment up to 100
+ times to ensure consistency in our results.
+ C. Results
+ The performance of SeedTree improves significantly with increased temporal
+ locality, as can be seen in Figure 4. Furthermore, we have the following
+ empirical answers to the questions posed at the beginning of this section:
+ A1: SeedTree improves the access cost significantly with increased
+ temporal locality, as shown in Figure 4a, which compares the access cost
+ of SeedTree to the static and demand-oblivious algorithms.
+ A2: As Figures 4b and 5a show, increasing the capacity reduces the cost of
+ the algorithm. However, this improvement slows down beyond capacity 8, and
+ hence this value can be considered the best option for practical purposes.
+ A3: As discussed at the end of §III and seen in Figures 4c and 5b, the
+ lowest cost can sometimes be achieved with fractions higher or lower than
+ 1/2, but f = 1/2 is near-optimal in most scenarios.
+ VI. ADDITIONAL RELATED WORK
+ Self-adjusting lists and trees have been studied intensively in the
+ context of data structures. The pioneering work is by Sleator and Tarjan
+ [20], who initiated the study of the dynamic list update problem and
+ introduced the move-to-front algorithm, inspiring many deterministic [21],
+ [22] and randomized [23]–[26] approaches for data structures, as well as
+ other variations of the problem [27].
+ Self-adjusting binary search trees also aim to keep recently used elements
+ close to the root, similarly to our approach in this paper (a summary of
+ results is given in Table I). However, adjustments in binary search trees
+ are based on rotations rather than the movement of items between different
+ nodes. One of the best-known self-adjusting binary search trees is the
+ splay tree [9], although it is still unknown whether this tree is
+ dynamically optimal; the problem is also open for recent variations such
+ as the Zipper Tree [31], the Multi Splay Tree [32] and Chain Splay [33],
+ which improve the O(log n) competitive ratio of the splay tree to
+ O(log log n). For Tango Trees [29], a matching Ω(log log n) lower bound is
+ known. We also know that if we allow free rotations after access, dynamic
+ Data Structure        | Operation     | Ratio        | Search
+ Splay Tree [9]        | Rotation      | O(log n)     | Yes
+ Greedy Future [28]    | Rotation      | O(log n)     | Yes
+ Tango Tree [29]       | Rotation      | Θ(log log n) | Yes
+ Adaptive Huffman [30] | Subtree swap  | Θ(1)         | No
+ Push-down Tree [10]   | Item swap     | Θ(1)         | No
+ SeedTree              | Item movement | Θ(1)         | Yes
+ TABLE I: Comparison of properties of self-adjusting tree data structures.
+ The best known competitive ratio (to date) is in terms of each data
+ structure's respective cost model and optimal offline algorithm. We note
+ that none of the above trees considers additional capacity, except for our
+ model.
+ optimally becomes possible [34]. We also point out that some
1193
+ of these structures, in particular, multi splay tree and chain
1194
+ splay, benefitted from additional memory as well, however,
1195
+ there it is used differently, namely toward saving additional
1196
+ attributes for each node. Another variation which was first
1197
+ proposed by Lucas [28] in 1988 is called Greedy Future. This
1198
+ tree first received attention as an offline binary search tree
1199
+ algorithm [35], [36], but then an O(log n) amortized time
1200
+ in online settings was suggested by Fox [37]. Greedy Future
1201
+ has motivated researchers to take a geometric view of online
1202
+ binary search trees [36], [38]. We note that in contrast to binary
1203
+ search trees, our local tree does not require an ordering of the
1204
+ items in the left and right subtrees of a node.
1205
+ Self-adjusting trees have also been explored in the context
1206
+ of coding, where for example adaptive Huffman coding [30],
1207
+ [39]–[42] is used to minimize the depth of most frequent items.
1208
+ The reconfiguration cost, however, is different: in adaptive
1209
+ Huffman algorithms, two subtrees might be swapped at the
1210
+ cost of one.
1211
+ A few data structures have tried to achieve a better compet-
1212
+ itive ratio by expanding and altering binary search trees (see
1213
+ Table II for a summary): The first example, PokeTree [43],
1214
+ adds extra pointers between the internal nodes of the tree and
1215
+ achieves an O(log log n) competitive ratio in comparison to
1216
+ an optimal binary search tree. There are also self-adjusting
1217
+ data structures based on skip lists [44], [45], which have been
1218
+ introduced as an alternative for balanced trees that enforce
1219
+ probabilistic balancing instead. A biased version of skip lists
1220
+ was considered in [46], and later on, a statically optimal
1221
+ variation was given in [47] and a dynamic optimal version
1222
+ in a restricted model in [48]. Another example is Iacono’s
1223
+ working set structure [49] which combines a series of self-
1224
+ adjusting balanced binary search trees and deques, achieving
1225
+ a worst-case running time of O(log n), however, it lacks the
1226
+ dynamic optimality property. We are not aware of any work
1227
+ exploring augmentations to improve the competitive ratio of
1228
+ these data structures.
1229
+ Our work is also motivated by emerging self-adjusting
1230
+ datacenter networks. Recent optical communication technolo-
1231
+ gies enable datacenters to be reconfigured quickly and fre-
1232
+ quently [8], [18], [50]–[58], see [59] for a recent survey. The
1233
+ datacenter application mentioned in our paper is based on the
1234
+ matching model proposed by [15]. Recently [60] introduced
1235
+ Data Structure            Structure               Ratio
+ Iacono’s structure [49]   Trees & deques          O(log n)
+ Skip List [44]            Linked lists            O(log n)
+ PokeTree [43]             Tree & dynamic links    O(log log n)
+ SeedTree                  Tree                    θ(1)
1250
+ TABLE II: Comparison with other self-adjusting data struc-
1251
+ tures that support local-search. The best known competitive
1252
+ ratio (to this date) is in terms of the data structure’s respective
1253
+ cost model and optimal offline algorithm. We note that none
1254
+ of the other data structures considers capacity in their design.
1255
+ an online algorithm for constructing self-adjusting networks
1256
+ based on this model; however, the authors do not provide a
+ dynamic optimality proof for their method.
1258
+ It has been shown that demand-aware and self-adjusting
1259
+ datacenter networks can be built from individual trees [61],
1260
+ called ego-trees, which are used in many network designs [8],
1261
+ [50], [62], [63], and also motivate our model. However, until
1262
+ now it was an open problem how to design self-adjusting
1263
+ and constant-competitive trees that support local routing and
1264
+ adjustments, a desirable property in dynamic settings.
1265
+ Last but not least, our work also features interesting con-
1266
+ nections to peer-to-peer networks [12], [64]. It is known that
1267
+ consistent hashing with previously assigned and fixed capaci-
1268
+ ties allows for significantly improved load balancing [13], [14],
1269
+ which has interesting applications and is used, e.g., in Vimeo’s
1270
+ streaming service [65] and in Google’s cloud service [13].
1271
+ Although these approaches benefit from data structures with
+ capacity, they are not demand-aware.
1273
+ VII. CONCLUSION AND FUTURE WORK
1274
+ This paper presented and evaluated a self-adjusting and
1275
+ local tree, SeedTree, which adapts towards the workload in
1276
+ an online, constant-competitive manner. SeedTree supports a
1277
+ capacity augmentation approach, while providing local rout-
1278
+ ing, which can be useful for other self-adjusting structures and
1279
+ applications as well. We showed a transformation of our algo-
1280
+ rithm into the matching model for application in reconfigurable
1281
+ datacenters, and evaluated our algorithm on synthetic and real-
1282
+ world communication traces. The code used for our experi-
1283
+ mental evaluation is available at github.com/inet-tub/SeedTree.
1284
+ We believe that our work opens several interesting avenues
1285
+ for future research. In particular, while we so far focused on
1286
+ randomized approaches, it would be interesting to explore de-
1287
+ terministic variants of SeedTree. Furthermore, while trees are
1288
+ a fundamental building block toward more complex networks
1289
+ (as they, e.g., arise in datacenters today), it remains to design
1290
+ and evaluate networks based on SeedTree.
1291
+ REFERENCES
1292
+ [1] C. Avin, M. Ghobadi, C. Griner, and S. Schmid, “On the complexity of
1293
+ traffic traces and implications,” in ACM SIGMETRICS, 2020.
1294
+ [2] T. Benson, A. Anand, A. Akella, and M. Zhang, “Understanding data
1295
+ center traffic characteristics,” ACM SIGCOMM CCR, 2010.
1296
+ [3] O. Michel, R. Bifulco, G. Retvari, and S. Schmid, “The programmable
1297
+ data plane: Abstractions, architectures, algorithms, and applications,” in
1298
+ ACM CSUR, 2021.
1299
+
1300
+ [4] W. Kellerer, P. Kalmbach, A. Blenk, A. Basta, M. Reisslein, and
1301
+ S. Schmid, “Adaptable and data-driven softwarized networks: Review,
1302
+ opportunities, and challenges,” in IEEE PIEEE, 2019.
1303
+ [5] A. Fischer, J. F. Botero, M. T. Beck, H. de Meer, and X. Hesselbach,
1304
+ “Virtual network embedding: A survey,” IEEE Commun. Surv. Tutor.,
1305
+ 2013.
1306
+ [6] M. N. Hall, K.-T. Foerster, S. Schmid, and R. Durairajan, “A survey of
1307
+ reconfigurable optical networks,” in OSN, 2021.
1308
+ [7] A. Borodin and R. El-Yaniv, Online computation and competitive
+ analysis. Cambridge University Press, 2005.
1311
+ [8] C. Avin, K. Mondal, and S. Schmid, “Demand-aware network designs
1312
+ of bounded degree,” in DISC, 2017.
1313
+ [9] D. D. Sleator and R. E. Tarjan, “Self-adjusting binary search trees,” J.
1314
+ ACM, 1985.
1315
+ [10] C. Avin, K. Mondal, and S. Schmid, “Push-down trees: Optimal self-
1316
+ adjusting complete trees,” in IEEE/ACM, TON, 2022.
1317
+ [11] J. Iacono, “Key-independent optimality,” Algorithmica, 2005.
1318
+ [12] I. Stoica, R. T. Morris, D. Liben-Nowell, D. R. Karger, M. F. Kaashoek,
1319
+ F. Dabek, and H. Balakrishnan, “Chord: a scalable peer-to-peer lookup
1320
+ protocol for internet applications,” IEEE/ACM Trans. Netw., 2003.
1321
+ [13] V. S. Mirrokni, M. Thorup, and M. Zadimoghaddam, “Consistent
1322
+ hashing with bounded loads,” in ACM-SIAM SODA, 2018.
1323
+ [14] A. Aamand, J. B. T. Knudsen, and M. Thorup, “Load balancing with
1324
+ dynamic set of balls and bins,” in ACM SIGACT STOC, 2021.
1325
+ [15] C. Griner, J. Zerwas, A. Blenk, S. Schmid, M. Ghobadi, and C. Avin,
1326
+ “Cerberus: The power of choices in datacenter topology design (a
1327
+ throughput perspective),” in ACM SIGMETRICS, 2021.
1328
+ [16] M. L. Waskom, “seaborn: statistical data visualization,” J. of Open
1329
+ Source Softw., 2021.
1330
+ [17] J. D. Hunter, “Matplotlib: A 2d graphics environment,” Comput. Sci.
1331
+ Eng., 2007.
1332
+ [18] C. Avin, M. Bienkowski, I. Salem, R. Sama, S. Schmid, and P. Schmidt,
1333
+ “Deterministic self-adjusting tree networks using rotor walks,” in IEEE
1334
+ ICDCS, 2022.
1335
+ [19] C. Dobraunig, M. Eichlseder, and F. Mendel, “Analysis of SHA-512/224
1336
+ and SHA-512/256,” IACR Cryptol. ePrint Arch., 2016.
1337
+ [20] D. D. Sleator and R. E. Tarjan, “Amortized efficiency of list update and
1338
+ paging rules,” Commun. ACM, 1985.
1339
+ [21] S. Albers, “A competitive analysis of the list update problem with
1340
+ lookahead,” MFCS, 1994.
1341
+ [22] S. Kamali and A. L´opez-Ortiz, “A survey of algorithms and models for
1342
+ list update,” in LNTCS, 2013.
1343
+ [23] S. Albers and M. Janke, “New bounds for randomized list update in the
1344
+ paid exchange model,” in STACS, 2020.
1345
+ [24] S. Albers, B. Von Stengel, and R. Werchner, “A combined bit and
1346
+ timestamp algorithm for the list update problem,” Inf. Process. Lett.,
1347
+ 1995.
1348
+ [25] T. Garefalakis, “A new family of randomized algorithms for list access-
1349
+ ing,” in ESA, 1997.
1350
+ [26] N. Reingold, J. R. Westbrook, and D. D. Sleator, “Randomized compet-
1351
+ itive algorithms for the list update problem,” Algorithmica, 1994.
1352
+ [27] S. Albers and S. Lauer, “On list update with locality of reference,” in
1353
+ ICALP, 2008.
1354
+ [28] J. M. Lucas, Canonical forms for competitive binary search tree
+ algorithms. Rutgers University, 1988.
1357
+ [29] E. D. Demaine, D. Harmon, J. Iacono, and M. Patrascu, “Dynamic
1358
+ optimality - almost,” in IEEE FOCS, 2004.
1359
+ [30] G. V. Cormack and R. N. Horspool, “Algorithms for adaptive huffman
1360
+ codes,” Inf. Process. Lett., 1984.
1361
+ [31] P. Bose, K. Dou¨ıeb, V. Dujmovi´c, and R. Fagerberg, “An o (log log n)-
1362
+ competitive binary search tree with optimal worst-case access times,” in
1363
+ SWAT, 2010.
1364
+ [32] C. C. Wang, J. Derryberry, and D. D. Sleator, “O (log log n)-competitive
1365
+ dynamic binary search trees,” in ACM-SIAM SODA, 2006.
1366
+ [33] G. F. Georgakopoulos, “Chain-splay trees, or, how to achieve and prove
1367
+ loglogn-competitiveness by splaying,” Inf. Process. Lett., 2008.
1368
+ [34] A. Blum, S. Chawla, and A. Kalai, “Static optimality and dynamic
1369
+ search-optimality in lists and trees,” in ACM-SIAM SODA, 2002.
1370
+ [35] J. I. Munro, “On the competitiveness of linear search,” in ESA, 2000.
1371
+ [36] E. D. Demaine, D. Harmon, J. Iacono, D. M. Kane, and M. Patrascu,
1372
+ “The geometry of binary search trees,” in ACM-SIAM SODA, 2009.
1373
+ [37] K. Fox, “Upper bounds for maximally greedy binary search trees,” in
1374
+ WADS, 2011.
1375
+ [38] J. Iacono, “In pursuit of the dynamic optimality conjecture,” in Space-
1376
+ Efficient Data Structures, Streams, and Algorithms, 2013.
1377
+ [39] D. E. Knuth, “Dynamic huffman coding,” J. Algorithms, 1985.
1378
+ [40] R. L. Milidi´u, E. S. Laber, and A. A. Pessoa, “Bounding the compression
1379
+ loss of the FGK algorithm,” J. Algorithms, 1999.
1380
+ [41] A. Moffat, “Huffman coding,” ACM CSUR, 2019.
1381
+ [42] J. S. Vitter, “Design and analysis of dynamic huffman codes,” J. of the
1382
+ ACM, 1987.
1383
+ [43] J. Kujala and T. Elomaa, “Poketree: A dynamically competitive data
1384
+ structure with good worst-case performance,” in ISAAC, 2006.
1385
+ [44] W. Pugh, “Skip lists: A probabilistic alternative to balanced trees,”
1386
+ Commun. ACM, 1990.
1387
+ [45] C. Avin, I. Salem, and S. Schmid, “Working set theorems for routing in
1388
+ self-adjusting skip list networks,” in IEEE INFOCOM, 2020.
1389
+ [46] A. Bagchi, A. L. Buchsbaum, and M. T. Goodrich, “Biased skip lists,”
1390
+ Algorithmica, 2005.
1391
+ [47] V. Ciriani, P. Ferragina, F. Luccio, and S. Muthukrishnan, “A data
1392
+ structure for a sequence of string accesses in external memory,” ACM
1393
+ Trans. Algorithms, 2007.
1394
+ [48] P. Bose, K. Dou¨ıeb, and S. Langerman, “Dynamic optimality for skip
1395
+ lists and b-trees,” in ACM-SIAM SODA, 2008.
1396
+ [49] J. Iacono, “Alternatives to splay trees with o(log n) worst-case access
1397
+ times,” in ACM-SIAM SODA, 2001.
1398
+ [50] C. Avin, K. Mondal, and S. Schmid, “Demand-aware network design
1399
+ with minimal congestion and route lengths,” in IEEE INFOCOM, 2019.
1400
+ [51] H. Ballani, P. Costa, R. Behrendt, D. Cletheroe, I. Haller, K. Jozwik,
1401
+ F. Karinou, S. Lange et al., “Sirius: A flat datacenter network with
1402
+ nanosecond optical switching,” in ACM SIGCOMM, 2020.
1403
+ [52] K. Chen, A. Singla, A. Singh, K. Ramachandran, L. Xu, Y. Zhang,
1404
+ X. Wen, and Y. Chen, “Osa: An optical switching architecture for data
1405
+ center networks with unprecedented flexibility,” IEEE/ACM TON, 2014.
1406
+ [53] F. Douglis, S. Robertson, E. Van den Berg, J. Micallef, M. Pucci,
1407
+ A. Aiken, M. Hattink, M. Seok, and K. Bergman, “Fleet—fast lanes for
1408
+ expedited execution at 10 terabits: Program overview,” IEEE Internet
1409
+ Comput., 2021.
1410
+ [54] K.-T. Foerster, M. Ghobadi, and S. Schmid, “Characterizing the al-
1411
+ gorithmic complexity of reconfigurable data center architectures,” in
1412
+ ACM/IEEE ANCS, 2018.
1413
+ [55] M. Ghobadi, R. Mahajan, A. Phanishayee, N. Devanur, J. Kulkarni,
1414
+ G. Ranade, P.-A. Blanche, H. Rastegarfar et al., “Projector: Agile
1415
+ reconfigurable data center interconnect,” in ACM SIGCOMM, 2016.
1416
+ [56] J. Kulkarni, S. Schmid, and P. Schmidt, “Scheduling opportunistic links
1417
+ in two-tiered reconfigurable datacenters,” in ACM SPAA, 2021.
1418
+ [57] W. M. Mellette, R. Das, Y. Guo, R. McGuinness, A. C. Snoeren, and
1419
+ G. Porter, “Expanding across time to deliver bandwidth efficiency and
1420
+ low latency,” in USENIX NSDI, 2020.
1421
+ [58] W. M. Mellette, R. McGuinness, A. Roy, A. Forencich, G. Papen, A. C.
1422
+ Snoeren, and G. Porter, “Rotornet: A scalable, low-complexity, optical
1423
+ datacenter network,” in ACM SIGCOMM, 2017.
1424
+ [59] K.-T. Foerster and S. Schmid, “Survey of reconfigurable data center
1425
+ networks: Enablers, algorithms, complexity,” in SIGACT News, 2019.
1426
+ [60] E. Feder, I. Rathod, P. Shyamsukha, R. Sama, V. Aksenov, I. Salem,
1427
+ and S. Schmid, “Lazy self-adjusting bounded-degree networks for the
1428
+ matching model,” in IEEE INFOCOM, 2022.
1429
+ [61] C. Avin and S. Schmid, “Toward demand-aware networking: a theory
1430
+ for self-adjusting networks,” ACM SIGCOMM CCR, 2018.
1431
+ [62] ——, “Renets: Statically-optimal demand-aware networks,” in SIAM
1432
+ APOCS, 2021.
1433
+ [63] B. S. Peres, O. A. de Oliveira Souza, O. Goussevskaia, C. Avin,
1434
+ and S. Schmid, “Distributed self-adjusting tree networks,” in IEEE
1435
+ INFOCOM, 2019.
1436
+ [64] D. R. Karger, E. Lehman, F. T. Leighton, R. Panigrahy, M. S. Levine, and
1437
+ D. Lewin, “Consistent hashing and random trees: Distributed caching
1438
+ protocols for relieving hot spots on the world wide web,” in ACM STOC,
1439
+ 1997.
1440
+ [65] A. Rodland, “Improving load balancing with a new consistent-hashing
1441
+ algorithm,” Vimeo Engineering Blog, Medium, 2016.
1442
+
59E1T4oBgHgl3EQfTAPi/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
5dFIT4oBgHgl3EQf7yth/content/tmp_files/2301.11399v1.pdf.txt ADDED
@@ -0,0 +1,1893 @@
 
1
+ Distributional outcome regression and its
2
+ application to modelling continuously
3
+ monitored heart rate and physical activity
4
+ Rahul Ghosal1, Sujit K. Ghosh2, Jennifer A. Schrack3, Vadim Zipunnikov4
5
+ 1 Department of Epidemiology and Biostatistics, University of South Carolina
6
+ 2Department of Statistics, North Carolina State University
7
+ 3 Department of Epidemiology, Johns Hopkins Bloomberg
8
+ School of Public Health
9
+ 4 Department of Biostatistics, Johns Hopkins Bloomberg
10
+ School of Public Health
11
+ January 30, 2023
12
+ Abstract
13
+ We propose a distributional outcome regression (DOR) with scalar and distribu-
14
+ tional predictors. Distributional observations are represented via quantile functions
15
+ and the dependence on predictors is modelled via functional regression coefficients.
16
+ DOR expands existing literature with three key contributions: handling both scalar
+ and distributional predictors, ensuring a jointly monotone regression structure with-
+ out enforcing monotonicity on individual functional regression coefficients, and pro-
+ viding statistical inference for estimated functional coefficients. Bernstein poly-
20
+ nomial bases are employed to construct a jointly monotone regression structure
21
+ without over-restricting individual functional regression coefficients to be mono-
22
+ tone. Asymptotic projection-based joint confidence bands and a statistical test of
23
+ global significance are developed to quantify uncertainty for estimated functional
24
+ regression coefficients. Simulation studies illustrate a good performance of DOR
25
+ model in accurately estimating the distributional effects. The method is applied to
26
+ continuously monitored heart rate and physical activity data of 890 participants of
27
+ Baltimore Longitudinal Study of Aging. Daily heart rate reserve, quantified via a
28
+ subject-specific distribution of minute-level heart rate, is modelled additively as a
29
+ function of age, gender, and BMI with an adjustment for the daily distribution of
30
+ minute-level physical activity counts. Findings provide novel scientific insights in
31
+ epidemiology of heart rate reserve.
32
+ Keywords: Distributional Data Analysis; Distribution-on-distribution regression; Quan-
33
+ tile function-on-scalar Regression; BLSA; Physical Activity; Heart Rate.
34
+ 1
35
+ arXiv:2301.11399v1 [stat.ME] 26 Jan 2023
36
+
37
+ 1 Introduction
39
+ Distributional data analysis is an emerging area of research with diverse applications in
40
+ digital medicine and health (Augustin et al., 2017; Matabuena et al., 2021; Ghosal et al.,
41
+ 2021; Matabuena and Petersen, 2021), radiomics (Yang et al., 2020), neuroimaging (Tang
42
+ et al., 2020) among many others. With the advent of modern medical devices and wear-
43
+ ables, many studies collect subject-specific high frequency or high density observations
44
+ including heart rate, physical activity (steps, activity counts), continuously monitored
45
+ blood glucose, functional and structured brain images, and others. The central idea of
46
+ distributional data analysis is to capture the distributional aspect in this data and model
47
+ it within regression frameworks. Thus, distributional data analysis inherently deals with
48
+ data objects which are distributions typically represented via histograms, densities, quan-
49
+ tile functions or other distributional representations. Petersen et al. (2021) provide an
50
+ in-depth overview of recent developments in this area.
51
+ Similar to functional regression models, depending on whether the outcome or the
52
+ predictor is distributional, there are various types of distributional regression models.
53
+ Petersen and M¨uller (2016) and Hron et al. (2016) developed functional compositional
54
+ methods to analyze samples of densities. For scalar outcome and distributional predictors
55
+ represented via densities, a common idea has been to transform densities by mapping
56
+ them to a proper Hilbert space L2 and then use existing functional regression approaches
57
+ for modelling scalar outcomes. Petersen and M¨uller (2016) used a log-quantile density
58
+ transformation, whereas Talsk´a et al. (2021) used a centered log-ratio transformation.
59
+ Other approaches for modelling scalar outcomes and distributional predictors include
60
+ scalar-on-quantile function regression (Ghosal et al., 2021), kernel-based approaches using
61
+ quantile functions (Matabuena and Petersen, 2021) and many others (see Petersen et al.
62
+ (2021), Chen et al. (2021) and references therein).
63
+ In parallel, there was also a substantial work on developing models with distributional
64
+ outcome and scalar predictors.
65
+ Yang et al. (2020) developed a quantile function-on-
66
+ scalar (QFOSR) regression model, where subject-specific quantile functions of data were
67
+ modelled via scalar predictors of interest using a function-on-scalar regression approach
68
+ (Ramsay and Silverman, 2005), which makes use of data-driven basis functions called
+ quantlets. One limitation of the approach is that it does not guarantee monotonicity
+ of the predicted quantile functions. To address this, Yang (2020) extended this approach
71
73
+ using I-splines (Ramsay et al., 1988) or Beta CDFs which enforce monotonicity at the
74
+ estimation step. One important limitation of this approach is that it enforces a jointly
+ monotone (non-decreasing) regression structure by enforcing monotonicity on each
+ individual functional regression coefficient. As we demonstrate in our application, this
+ assumption can be too restrictive in real-world settings.
79
+ Distribution-on-distribution regression models when both outcome and predictors are
80
+ distributions have been studied by Verde and Irpino (2010); Irpino and Verde (2013);
81
+ Chen et al. (2021); Ghodrati and Panaretos (2021); Pegoraro and Beraha (2022). These
82
+ models aim to understand the association between distributions within a pre-specified,
83
+ often linear, regression structure.
84
+ Verde and Irpino (2010); Irpino and Verde (2013)
85
+ used an ordinary least squares approach based on the squared L2 Wasserstein distance
86
+ between distributions. Outcome quantile function QiY (p) was modelled as a non-negative
87
+ linear combination of other quantile functions QiXj(p)s using a multiple linear regression.
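For one-dimensional distributions, the squared L2 Wasserstein distance used in this line of work equals the integrated squared difference between the two quantile functions, which is easy to approximate from samples. The sketch below is only illustrative (the function name and grid size are our choices), not code from any of the cited papers:

```python
import numpy as np

def wasserstein2_sq(x, y, grid=1000):
    """Approximate the squared 2-Wasserstein distance between two 1-D
    samples as the L2 distance between their empirical quantile
    functions: W2^2(F, G) = integral_0^1 (Q_F(p) - Q_G(p))^2 dp."""
    p = (np.arange(grid) + 0.5) / grid           # midpoint rule on (0, 1)
    qx = np.quantile(np.asarray(x, dtype=float), p)
    qy = np.quantile(np.asarray(y, dtype=float), p)
    return float(np.mean((qx - qy) ** 2))
```

As a sanity check, shifting a distribution by a constant c yields W2^2 = c^2, since the quantile function shifts by c at every level p.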
88
+ This model, although useful and adequate for some applications, may not be flexible
89
+ enough as it assumes a linear association between the distribution valued response and
90
+ predictors, which are additionally assumed to be constant across all quantile levels p ∈
91
+ (0, 1). Chen et al. (2021) used a geometric approach taking distributional valued outcome
92
+ and predictor to a tangent space, where regular tools of function-on-function regression
93
+ (Ramsay and Silverman, 2005; Yao et al., 2005) were applied.
94
+ Pegoraro and Beraha
95
+ (2022) used an approximation of the Wasserstein space using monotone B-spline and
96
+ developed methods for PCA and regression for distributional data. Recently, Ghodrati
97
+ and Panaretos (2021) developed a shape-constrained approach linking Frechet mean of
98
+ the outcome distribution to the predictor distribution via an optimal transport map that
99
+ was estimated by means of isotonic regression.
100
+ Many of above-mentioned methods mainly focused on dealing with constraints en-
101
+ forced by a specific functional representation. Developing inferential tools is a somewhat
+ under-developed area of distributional data analysis. Chen et al. (2021) derived the asymp-
103
+ totic convergence rates for the estimated regression operator in their proposed method
104
+ for Wasserstein regression. Yang et al. (2020) developed joint credible bands for distribu-
+ tional effects, but monotonicity of the quantile function was not imposed. Yang (2020)
106
+ developed a global statistical test for estimated functional coefficients in the distribu-
+ tional outcome regression; however, no confidence bands were proposed to identify and
108
+ 3
109
+
110
+ test local quantile effects.
111
+ In this paper, we propose a distributional outcome regression that expands exist-
112
+ ing literature in three main directions. First, our model includes both scalar and dis-
113
+ tributional predictors.
114
+ Second, it ensures jointly monotone (non-decreasing) additive
115
+ regression structure without enforcing monotonicity of individual functional regression
116
+ coefficients.
117
+ Third, it provides a toolbox of statistical inference tools for estimated
118
+ functional coefficients including asymptotic projection-based joint confidence bands and
119
+ a statistical test of global significance. We capture distributional aspect in outcome and
120
+ predictors via quantile functions and construct a jointly monotone regression model via
121
+ a specific shape-restricted functional regression model. The distributional effects of the
122
+ scalar covariates are captured via functional coefficient βj(p)’s varying over quantile lev-
123
+ els and the effect of the distributional predictor is captured via a monotone function
124
+ h(·), similar to an optimal transport approach in Ghodrati and Panaretos (2021). In the
125
+ special case, when there is no distributional predictor, the model resembles a quantile
126
+ function-on-scalar regression model, but with much more flexible constraints compared
127
+ to Yang (2020). In the absence of scalar predictors the model reduces to a distribution
128
+ on distribution regression model, where the monotone function representing the optimal
129
+ transport map is estimated by a non-parametric functional regression model under shape
130
+ constraints. We use Bernstein polynomial (BP) basis functions to model the distribu-
131
+ tional effects βj(p)s and the monotone map h(·), which are known to enjoy attractive
132
+ and optimal shape-preserving properties (Lorentz, 2013; Carnicer and Pena, 1993). Ad-
133
+ ditionally, BP is instrumental in constructing and enforcing a jointly monotone regression
134
+ structure without over-restricting individual functional regression coefficients to be mono-
135
+ tone. Finally, inferential tools are developed including joint asymptotic confidence bands
136
+ for distributional functional effects and p-values for testing the distributional effects of
137
+ predictors.
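As background on this choice, the Bernstein basis of degree N consists of B_{k,N}(p) = C(N, k) p^k (1 − p)^{N−k}, and a coefficient function built as a linear combination with non-decreasing basis coefficients is itself non-decreasing, which is what makes a jointly monotone structure convenient to enforce. A small illustrative numpy sketch (function and variable names are ours, not the authors' implementation):

```python
import numpy as np
from math import comb

def bernstein_basis(p, N):
    """Evaluate the N + 1 Bernstein basis polynomials of degree N at
    quantile levels p in [0, 1]: B_{k,N}(p) = C(N, k) p^k (1 - p)^(N - k)."""
    p = np.atleast_1d(np.asarray(p, dtype=float))
    return np.column_stack(
        [comb(N, k) * p**k * (1 - p) ** (N - k) for k in range(N + 1)]
    )

# The basis is a partition of unity, so a coefficient function
# beta(p) = sum_k c_k B_{k,N}(p) always stays between min(c) and max(c).
B = bernstein_basis(np.linspace(0, 1, 101), N=3)
```

A regression coefficient function beta_j(p) is then the matrix-vector product of this basis with a vector of basis coefficients, and monotonicity constraints reduce to simple ordering constraints on those coefficients.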
138
+ As a motivating application, we study continuously monitored heart rate and physical
139
+ activity collected in Baltimore Longitudinal Study of Aging (BLSA). We aim to study the
140
+ association between the distribution of heart rate as a distributional outcome and age, sex
141
+ and body mass index (BMI) while also adjusting for a key confounder, the distribution
142
+ of minute-level physical activity aggregated over 8am-8pm time period. Figure 1 displays
143
+ daily profiles of heart rate and physical activity between 8am-8pm for a BLSA participant
144
+ 4
145
+
146
+ along with the corresponding subject-specific quantile functions.
147
+ [Figure 1 panels: Heartrate vs. Time of Day; Heartrate QF; Activity vs. Time of Day; Activity QF]
199
+ Figure 1: Diurnal profile of heart rate and physical activity between 8 a.m.- 8 p.m. and
200
+ the corresponding subject specific quantile functions for a randomly chosen subject in
201
+ the BLSA.
202
+ The rest of this article is organized as follows. We present our distributional modeling
203
+ framework and illustrate the proposed estimation method in Section 2. In Section 3, we
204
+ perform numerical simulations to evaluate the performance of the proposed method and
205
+ provide comparisons with existing methods for distributional regression. In Section 4, we
206
+ demonstrate application of the proposed method in modelling continuously monitored
207
+ heart rate reserve in BLSA study. Section 5 concludes with a brief discussion of our
208
+ proposed method and some possible extensions of this work.
209
+ 2 Methodology
211
+ 2.1
212
+ Modelling Framework and Distributional Representations
213
We consider the scenario where there are repeated subject-specific measurements of a distributional response Y along with several scalar covariates zj, j = 1, 2, . . . , q, and we also have a distributional predictor X. Let us denote the subject-specific response and covariates as Yik, Xil, zij (k = 1, . . . , n1i, l = 1, . . . , n2i), for subject i = 1, . . . , n. Here n1i, n2i denote the numbers of repeated observations of the distributional response and predictor, respectively, for subject i. Assume Yik (k = 1, . . . , n1i) ∼ FiY (y), a subject-specific cumulative distribution function (cdf), where FiY (y) = P(Yik ≤ y). Then the subject-specific quantile function is defined as QiY (p) = inf{y : FiY (y) ≥ p}, p ∈ [0, 1]. The quantile function completely characterizes the distribution of the individual observations. Given the Yik's, the empirical quantile function can be calculated based on linear interpolation of order statistics (Parzen, 2004) and serves as an estimate of the latent subject-specific quantile function QiY (p) (Yang et al., 2020; Yang, 2020). In particular, for a sample (X1, X2, . . . , Xn), let X(1) ≤ X(2) ≤ . . . ≤ X(n) be the corresponding order statistics. The empirical quantile function, for p ∈ [1/(n + 1), n/(n + 1)], is then given by

ˆQ(p) = (1 − w)X([(n+1)p]) + wX([(n+1)p]+1),    (1)

where w is a weight satisfying (n + 1)p = [(n + 1)p] + w. Based on this formulation and the observations Yik, Xil, we can obtain the subject-specific quantile functions ˆQiY (p) and ˆQiX(p), which are estimators of the true quantile functions QiY (p), QiX(p). The empirical quantile functions are consistent (Parzen, 2004) and are suitable as distributional representations for several attractive mathematical properties (Powley, 2013; Ghosal et al., 2021), without requiring any smoothing parameter selection as in density estimation.
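The interpolation in equation (1) is straightforward to implement. Below is a minimal sketch (not the authors' code; the function name and the boundary handling at p = n/(n + 1) are our own choices) of the empirical quantile estimator based on linear interpolation of order statistics:

```python
import numpy as np

def empirical_quantile(x, p):
    """Empirical quantile function via linear interpolation of order
    statistics (Parzen, 2004), as in equation (1):
    Qhat(p) = (1 - w) X_([(n+1)p]) + w X_([(n+1)p]+1),
    with (n + 1) p = [(n + 1) p] + w, for p in [1/(n+1), n/(n+1)]."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    p = np.atleast_1d(p)
    # clip p to the range where the interpolation formula is defined
    p = np.clip(p, 1.0 / (n + 1), n / (n + 1))
    g = (n + 1) * p
    k = np.floor(g).astype(int)          # [(n+1)p], between 1 and n
    w = g - k                            # fractional weight
    upper = np.minimum(k, n - 1)         # guard the boundary case k = n
    return (1 - w) * x[k - 1] + w * x[upper]   # 0-based indexing

# Example: quantiles of a small sample on an equi-spaced grid
sample = [3.0, 1.0, 2.0, 4.0]
grid = np.array([0.2, 0.4, 0.6, 0.8])
print(empirical_quantile(sample, grid))   # -> [1. 2. 3. 4.]
```

Evaluating this estimator on a common grid of p values is exactly how the subject-specific quantile functions are obtained from the raw observations.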
2.2 Distribution-on-Scalar and Distribution Regression
We assume that the scalar covariates (z1, z2, . . . , zq) ∈ [0, 1]^q without any loss of generality (e.g., achievable by a linear transformation). We posit the following distributional regression model, associating the distributional response QiY (p) with the scalar covariates zij, j = 1, 2, . . . , q, and a distributional predictor QiX(p). We will refer to this as a distribution-on-scalar and distribution regression (DOSDR) model:

QiY (p) = β0(p) + Σ_{j=1}^{q} zij βj(p) + h(QiX(p)) + ϵi(p).    (2)

Here β0(p) is a distributional intercept and the βj(p)'s are the distributional effects of the scalar covariates zj at quantile level p. The unknown nonparametric function h(·) captures the additive effect of the distributional predictor QiX(p). The residual error process ϵi(p) is assumed to be a mean-zero stochastic process with an unknown covariance structure. We make the following flexible and interpretable assumptions on the coefficient functions βj(·), j = 0, 1, . . . , q, and on h(·), which ensure that the predicted value of the response quantile function QiY (p) conditionally on the predictors, E(QiY (p) | zi1, zi2, . . . , ziq, QiX(p)), is non-decreasing.
Theorem 1 Let the following conditions hold in model (2).

1. The distributional intercept β0(p) is non-decreasing.

2. Any additive combination of β0(p) with distributional slopes βj(p) is non-decreasing, i.e., β0(p) + Σ_{k=1}^{r} βjk(p) is non-decreasing for any subset {j1, j2, . . . , jr} ⊂ {1, 2, . . . , q}.

3. h(·) is non-decreasing.

Then E(QY (p) | z1, z2, . . . , zq, qx(p)) is non-decreasing.
Note that E(QY (p) | z1, z2, . . . , zq, qx(p)) is the predicted quantile function under the squared Wasserstein loss function, which is the same as the squared error loss for the quantile functions. The proof is given in Appendix A of the Supplementary Material. Assumptions (1) and (2) are much weaker and more flexible than the monotonicity conditions of the QFOSR model in Yang (2020), where each of the functional coefficients βj(p) is required to be monotone; here, we only impose monotonicity on sums of functional coefficients. This is not just a technical point: the flexibility is important from a practical perspective, as it allows for capturing a possibly non-monotone association between the distributional response and individual scalar predictors zj while still maintaining the required monotonicity of the predicted response quantile function. Condition (3) matches the monotonicity assumption of the distributional regression model in Ghodrati and Panaretos (2021) and, in the absence of any scalar predictors, essentially captures the optimal transport map between the two distributions. Note that this optimal transport map is constructed after adjusting for the scalars of interest; thus, it provides a more general inferential framework than that in Ghodrati and Panaretos (2021). Thus, the above
DOSDR model extends the previous inferential framework for distributional responses on scalars and contains both the QFOSR model and the distribution-on-distribution regression model as submodels. More succinctly, in the absence of a distributional predictor we have

QiY (p) = β0(p) + Σ_{j=1}^{q} zij βj(p) + ϵi(p),    (3)

which is a quantile-function-on-scalar regression (QFOSR) model ensuring monotonicity under conditions (1), (2). Similarly, in the absence of any scalar covariates, we have the distribution-on-distribution regression model

QiY (p) = β0(p) + h(QiX(p)) + ϵi(p).    (4)

Model (4) is slightly more general than the one considered in Ghodrati and Panaretos (2021), as it includes a translational effect β0(p). As a technical note, in models (2) and (4), the function h(·) is identifiable only up to an additive constant; in particular, the estimable quantity is the additive effect β0(p) + h(qx(p)) for a fixed QX(p) = qx(p).
2.3 Estimation in DOSDR
We follow a shape-constrained estimation approach (Ghosal et al., 2022a) for estimating the distributional effects βj(p) and the nonparametric function h(·), which naturally incorporates the constraints (1)-(3) of Theorem 1 in the estimation step. The univariate coefficient functions βj(p) (j = 0, 1, . . . , q) are modelled in terms of univariate expansions of Bernstein basis polynomials as

βj(p) = Σ_{k=0}^{N} βjk bk(p, N), where bk(p, N) = C(N, k) p^k (1 − p)^{N−k}, for 0 ≤ p ≤ 1,    (5)

with C(N, k) denoting the binomial coefficient.
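For concreteness, the Bernstein basis in (5) can be evaluated as follows (a sketch using only the standard-library binomial coefficient; the function name is ours):

```python
import numpy as np
from math import comb

def bernstein_basis(p, N):
    """Evaluate the N + 1 Bernstein basis polynomials b_k(p, N),
    k = 0, ..., N, at points p in [0, 1]; returns shape (len(p), N + 1)."""
    p = np.atleast_1d(np.asarray(p, dtype=float))
    k = np.arange(N + 1)
    coef = np.array([comb(N, kk) for kk in k], dtype=float)
    return coef * p[:, None] ** k * (1 - p[:, None]) ** (N - k)

grid = np.linspace(0, 1, 101)
B = bernstein_basis(grid, 5)
print(B.shape)   # (101, 6)
```

The basis functions are non-negative and sum to one at every p (partition of unity), which is what makes monotonicity expressible as simple linear constraints on the coefficients, as discussed next.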
The number of basis polynomials depends on the degree N of the polynomial basis (which is assumed to be the same for all βj(·) for computational tractability in this paper). The Bernstein polynomials satisfy bk(p, N) ≥ 0 and Σ_{k=0}^{N} bk(p, N) = 1. Wang and Ghosh (2012) and Ghosal et al. (2022a) illustrate that various shape constraints, e.g., monotonicity, convexity, etc., can be reduced to linear constraints on the basis coefficients of the form AN βj ≥ 0, where βj = (βj0, βj1, . . . , βjN)^T and AN is a constraint matrix chosen in a way that guarantees the desired shape restriction. In particular, in our context of DOSDR, we need to choose constraint matrices AN in such a way that they jointly ensure conditions (1), (2) in Theorem 1 and thus guarantee a non-decreasing predicted value of the response quantile function. The nonparametric function h(·) is modelled similarly using a univariate Bernstein polynomial expansion as
h(x) = Σ_{k=0}^{N} θk bk(x, N), where bk(x, N) = C(N, k) x^k (1 − x)^{N−k}, for 0 ≤ x ≤ 1.    (6)
Since the domain of h(·) modelled via the Bernstein basis is [0, 1], the quantile functions of the distributional predictor QX(p) are transformed to the [0, 1] scale using a linear transformation of the observed predictors. We make the assumption here that the distributional predictors are bounded, which is reasonable in the applications we are interested in. Henceforth, we assume QX(p) ∈ [0, 1] without loss of generality. Further, note that h(·) is identified only up to an additive constant, and β0(p) already plays the role of an intercept in the DOSDR model (2); including the constant component of the basis while modelling h(·) would therefore lead to model singularity. Hence we drop the first basis function (i.e., the constant component) while modelling h(·). In particular, h(QiX(p)) is modelled as h(QiX(p)) = Σ_{k=1}^{N} θk bk(QiX(p), N). Note that this is equivalent to imposing the constraint h(0) = θ0 = 0. The non-decreasing condition (3) of Theorem 1 can again be specified as a linear constraint on the basis coefficients of the form Rθ ≥ 0, where θ = (θ1, . . . , θN)^T and R is a constraint matrix. The DOSDR model (2) can then be reformulated in terms of basis expansions as
QiY (p) = Σ_{k=0}^{N} β0k bk(p, N) + Σ_{j=1}^{q} zij Σ_{k=0}^{N} βjk bk(p, N) + Σ_{k=1}^{N} θk bk(QiX(p), N) + ϵi(p)
        = bN(p)^T β0 + Σ_{j=1}^{q} Zij(p)^T βj + bN(QiX(p))^T θ + ϵi(p).    (7)

Here βj = (βj0, βj1, . . . , βjN)^T, bN(p)^T = (b0(p, N), b1(p, N), . . . , bN(p, N)), bN(QiX(p))^T = (b1(QiX(p), N), b2(QiX(p), N), . . . , bN(QiX(p), N)), and Zij(p)^T = zij bN(p)^T. Suppose that we have the quantile functions QiY (p), QiX(p) evaluated on a grid P = {p1, p2, . . . , pm} ⊂ [0, 1]. Denote the stacked values of the quantiles for the ith subject as QiY = (QiY (p1), QiY (p2), . . . , QiY (pm))^T.
The DOSDR model in terms of the Bernstein basis expansion (7) can be reformulated as

QiY = B0 β0 + Σ_{j=1}^{q} Wij βj + Si θ + ϵi,    (8)

where B0 = (bN(p1), bN(p2), . . . , bN(pm))^T, Wij = (Zij(p1), Zij(p2), . . . , Zij(pm))^T, Si = (bN(QiX(p1)), bN(QiX(p2)), . . . , bN(QiX(pm)))^T, and ϵi is the vector of stacked residuals ϵi(p). The parameters
in the above model are the basis coefficients ψ = (β0^T, β1^T, . . . , βq^T, θ^T)^T. For estimation of the parameters, we use a least squares criterion, which reduces to a shape-constrained optimization problem. Namely, we obtain the estimate ˆψ by minimizing the residual sum of squares:

ˆψ = argmin_ψ Σ_{i=1}^{n} ||QiY − B0 β0 − Σ_{j=1}^{q} Wij βj − Si θ||₂²  subject to  Dψ ≥ 0.    (9)
The universal constraint matrix D on the basis coefficients is chosen to ensure conditions (1), (2), (3) in Theorem 1. Later in this section, we illustrate with examples how the constraint matrix is formed in practice. The above optimization problem (9) can be identified as a quadratic programming problem (Goldfarb and Idnani, 1982, 1983). The R package restriktor (Vanbrabant and Rosseel, 2019) can be used for performing the above optimization.
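To illustrate how such a shape-constrained least-squares problem can be solved (the paper uses the R package restriktor; the sketch below is ours and treats only the simplest case of a single monotone coefficient curve), one can reparameterize the ordered coefficients as cumulative sums of non-negative increments, turning the quadratic program (9) into a non-negative least-squares problem:

```python
import numpy as np
from math import comb
from scipy.optimize import nnls

def fit_monotone_bernstein(p_grid, y, N):
    """Least-squares fit of a monotone curve beta(p) = sum_k beta_k b_k(p, N)
    under the order constraint beta_{k+1} >= beta_k (condition (1) of
    Theorem 1).  The constraint is handled by writing beta = C @ eta, with
    C lower-triangular of ones, so monotonicity becomes eta_k >= 0, k >= 1."""
    P = np.atleast_1d(p_grid)[:, None]
    k = np.arange(N + 1)
    c = np.array([comb(N, kk) for kk in k], dtype=float)
    B = c * P ** k * (1 - P) ** (N - k)         # Bernstein design matrix
    C = np.tril(np.ones((N + 1, N + 1)))        # beta = C @ eta
    M = B @ C
    # eta_0 is unconstrained: split it into positive and negative parts
    A = np.column_stack([M[:, :1], -M[:, :1], M[:, 1:]])
    coef, _ = nnls(A, np.asarray(y, dtype=float))
    eta = np.concatenate([[coef[0] - coef[1]], coef[2:]])
    return C @ eta                              # ordered basis coefficients

p = np.linspace(0.01, 0.99, 99)
beta = fit_monotone_bernstein(p, 2 + 3 * p, N=3)   # target is increasing
print(np.all(np.diff(beta) >= -1e-8))              # coefficients are ordered
```

Because linear functions are reproduced exactly by the Bernstein basis, the fit recovers the coefficients (2, 3, 4, 5) here; the full DOSDR criterion with the block constraint Dψ ≥ 0 requires a general quadratic programming solver instead.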
Example 1: Single scalar covariate (q = 1) and a distributional predictor

We consider the case where there is a single scalar covariate z1 (q = 1) and a distributional predictor QX(p). In this case, the DOSDR model (2) is given by QiY (p) = β0(p) + zi1 β1(p) + h(QiX(p)) + ϵi(p). The sufficient conditions (1)-(3) for non-decreasing predicted quantile functions in this case reduce to: (A) the distributional intercept β0(p) is non-decreasing; (B) β0(p) + β1(p) is non-decreasing; (C) h(·) is non-decreasing. Note that the above conditions do not enforce β1(p) to be non-decreasing. Once the coefficient functions are modelled in terms of Bernstein basis expansions as in (5) and (6), conditions (A)-(C) can be enforced via the following linear restrictions on the basis coefficients: AN β0 ≥ 0, [AN AN](β0^T, β1^T)^T ≥ 0, AN−1 θ ≥ 0. Here AN is a constraint matrix which imposes monotonicity on functions fN(x) modelled with Bernstein polynomials as fN(x) = Σ_{k=0}^{N} βk bk(x, N), where bk(x, N) = C(N, k) x^k (1 − x)^{N−k}, for 0 ≤ x ≤ 1. The derivative is given by f′N(x) = N Σ_{k=0}^{N−1} (βk+1 − βk) bk(x, N − 1). Hence, if βk+1 ≥ βk for k = 0, 1, . . . , N − 1, then fN(x) is non-decreasing, which is achieved with the constraint matrix AN. The combined linear restrictions on the parameter ψ = (β0^T, β1^T, θ^T)^T are given by Dψ ≥ 0. The matrices AN, D are given by
        ⎡ −1   1   0  · · ·   0 ⎤
AN ≡    ⎢  0  −1   1   0  · · · ⎥            ⎡ AN   0    0    ⎤
        ⎢  ⋮                    ⎥ ,    D =   ⎢ AN   AN   0    ⎥ .    (10)
        ⎣  0  · · ·   0  −1   1 ⎦            ⎣ 0    0    AN−1 ⎦
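The matrices in (10) are easy to assemble numerically. A sketch (helper names are ours), which also checks that a feasible ψ may contain a decreasing β1(p) as long as β0(p) + β1(p) is non-decreasing:

```python
import numpy as np

def monotone_constraint_matrix(N):
    """A_N: the N x (N+1) first-difference matrix, so that
    A_N @ beta >= 0 encodes beta_{k+1} >= beta_k."""
    return np.diff(np.eye(N + 1), axis=0)

def dosdr_constraint_matrix(N):
    """D for Example 1: psi = (beta_0, beta_1, theta), where theta has
    length N (the constant basis is dropped), stacking the blocks of (10)."""
    A_N = monotone_constraint_matrix(N)          # N x (N+1)
    A_Nm1 = monotone_constraint_matrix(N - 1)    # (N-1) x N
    return np.block([
        [A_N, np.zeros((N, N + 1)), np.zeros((N, N))],
        [A_N, A_N, np.zeros((N, N))],
        [np.zeros((N - 1, N + 1)), np.zeros((N - 1, N + 1)), A_Nm1],
    ])

beta0 = np.array([0.0, 1.0, 2.0, 3.0])      # non-decreasing intercept
beta1 = np.array([1.0, 0.2, -0.3, -0.5])    # decreasing slope, still allowed
theta = np.array([0.1, 0.2, 0.3])           # non-decreasing h
psi = np.concatenate([beta0, beta1, theta])
D = dosdr_constraint_matrix(3)
print(D.shape, bool(np.all(D @ psi >= 0)))  # (8, 11) True
```

Note how the second block of D only constrains the sum β0 + β1: the example's decreasing β1 is feasible because β0 + β1 remains ordered.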
A similar example with two scalar covariates (q = 2) and a distributional predictor is given in Appendix B of the Supplementary Material. Our estimation ensures that the shape restrictions are enforced everywhere, and hence the predicted quantile functions are non-decreasing on the whole domain p ∈ [0, 1], as opposed to only at fixed quantile levels or design points as in Ghodrati and Panaretos (2021). The order N of the Bernstein polynomial basis controls the smoothness of the coefficient functions βj(·) and h(·). We follow a truncated basis approach (Ramsay and Silverman, 2005; Fan et al., 2015), restricting the number of basis functions to ensure that the resulting coefficient functions are smooth. The optimal order of the basis functions is chosen via a V-fold cross-validation method (Wang and Ghosh, 2012) using the cross-validated residual sum of squares, defined as CVSSE = Σ_{v=1}^{V} Σ_{i=1}^{n_v} ||QiY,v − ˆQ^{(−v)}_{iY,v}||₂². Here ˆQ^{(−v)}_{iY,v} is the vector of fitted quantile values for observation i within the vth fold, obtained from the constrained optimization criterion (9) trained on the remaining (V − 1) folds.
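A sketch of the V-fold selection of N (simplified to an unconstrained intercept-only fit so the snippet stays self-contained; the actual DOSDR procedure would replace the lstsq call with the constrained criterion (9), and the helper names are ours):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(3)

def bern_design(p, N):
    """Design matrix of Bernstein basis functions evaluated on the grid p."""
    k = np.arange(N + 1)
    c = np.array([comb(N, kk) for kk in k], dtype=float)
    return c * p[:, None] ** k * (1 - p[:, None]) ** (N - k)

def cv_choose_N(p_grid, Q, candidates, V=5):
    """V-fold cross-validation over subjects for the Bernstein degree N,
    minimising CVSSE = sum_v sum_{i in fold v} ||Q_i - Qhat_i^(-v)||^2.
    Simplified to an intercept-only model Q_i(p) ~ beta0(p)."""
    n = Q.shape[0]
    folds = rng.permutation(n) % V
    scores = []
    for N in candidates:
        B = bern_design(p_grid, N)
        sse = 0.0
        for v in range(V):
            train, test = folds != v, folds == v
            beta, *_ = np.linalg.lstsq(B, Q[train].mean(axis=0), rcond=None)
            sse += np.sum((Q[test] - B @ beta) ** 2)
        scores.append(sse)
    return candidates[int(np.argmin(scores))], scores

p = np.linspace(0.01, 0.99, 50)
Q = 2 + 3 * p + rng.normal(0, 0.05, size=(40, 50))   # noisy linear quantile curves
best_N, scores = cv_choose_N(p, Q, candidates=[1, 2, 3, 5, 8])
print(best_N, len(scores))
```

The degree minimising the cross-validated error is then used for the final constrained fit on the full sample.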
2.4 Uncertainty Quantification and Joint Confidence Bands
To construct confidence intervals, we use the result that the constrained estimator ˆψ in (9) is the projection of the corresponding unconstrained estimator (Ghosal et al., 2022a) onto the restricted space: ˆψr = argmin_{ψ∈ΘR} ||ψ − ˆψur||²_ˆΩ, for a non-singular matrix ˆΩ. The restricted parameter space is given by ΘR = {ψ ∈ R^{Kn} : Dψ ≥ 0}. The DOSDR model (8) can be reformulated as QiY = Ti ψ + ϵi, where Ti = [B0 Wi1 Wi2 . . . Wiq Si]. The unrestricted and restricted estimators are given by

ˆψur = argmin_{ψ∈R^{Kn}} Σ_{i=1}^{n} ||QiY − Ti ψ||₂²,    ˆψr = argmin_{ψ∈ΘR} Σ_{i=1}^{n} ||QiY − Ti ψ||₂².    (11)
Let us denote QY = (Q1Y^T, Q2Y^T, . . . , QnY^T)^T and T = [T1^T, T2^T, . . . , Tn^T]^T. Then we can write

(1/n)||QY − Tψ||₂² = (1/n)||QY − T ˆψur||₂² + (1/n)||T ˆψur − Tψ||₂².

Hence ˆψr = argmin_{ψ∈ΘR} ||ψ − ˆψur||²_ˆΩ, where ˆΩ = (1/n) Σ_{i=1}^{n} Ti^T Ti and Ω = E(ˆΩ) is non-singular. Thus, we can use the projection of the large-sample distribution of √n( ˆψur − ψ0) to approximate the distribution of √n( ˆψr − ψ0).
Now, √n( ˆψur − ψ0) is asymptotically distributed as N(0, ∆) under suitable regularity conditions (Huang et al., 2004, 2002) for general choices of basis functions (this holds for finite sample sizes if ϵ(p) is Gaussian), where ∆ can be estimated by a consistent estimator. In particular, we use a sandwich covariance estimator corresponding to the model QiY = Ti ψ + ϵi for estimating ∆, following a functional principal component analysis (FPCA) approach (Ghosal and Maity, 2022) for estimation of the covariance matrix of the residuals ϵi (i = 1, . . . , n). Details of this estimation procedure are included in Appendix C of the Supplementary Material.
Let us consider the scenario with a single scalar covariate and a distributional predictor, for simplicity of illustration. Let the Bernstein polynomial approximation of β1(p) be given by β1N(p) = Σ_{k=0}^{N} β1k bk(p, N) = ρKn(p)′β1. Algorithm 1 in Appendix D is used to obtain an asymptotic 100(1 − α)% joint confidence band for the true coefficient function β1⁰(p) corresponding to a scalar predictor of interest. Here β1⁰(p) denotes the true distributional coefficient β1(p). The algorithm relies on two steps: (i) use the asymptotic distribution of √n( ˆψr − ψ0) to generate samples from the asymptotic distribution of ˆβ1r(p) (these can be used to get point-wise confidence intervals); (ii) use the generated samples and the supremum test statistic (Meyer et al., 2015; Cui et al., 2022) to obtain a joint confidence band for β1⁰(p). A similar strategy can also be employed for obtaining an asymptotic joint confidence band for the additive effect β0(p) + h(qx(p)), for a fixed value of QX(p) = qx(p). Based on the joint confidence band, it is possible to directly test for the global distributional effects β(p) (or h(x)). The p-value for the test H0 : β(p) = 0 for all p ∈ [0, 1] versus H1 : β(p) ̸= 0 for at least one p ∈ [0, 1] can be obtained from the 100(1 − α)% joint confidence band for β(p). In particular, following Sergazinov et al. (2022), the p-value for the test can be defined as the smallest level α for which at least one of the 100(1 − α)% confidence intervals around β(p) (p ∈ P) does not contain zero. Alternatively, a nonparametric bootstrap procedure for testing the global effects of scalar and distributional predictors is illustrated in Appendix E of the Supplementary Material, which could be useful for finite sample sizes and non-Gaussian error processes.
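To make step (ii) concrete, here is a sketch of how a joint band can be formed from Monte-Carlo draws of the coefficient curve via a supremum statistic (a generic construction in the spirit of Meyer et al. (2015); the names and the toy numbers are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def joint_band(beta_hat, draws, alpha=0.05):
    """Joint (simultaneous) confidence band from Monte-Carlo draws of the
    coefficient curve: find the multiplier q such that the band
    beta_hat +/- q * se(p) covers a whole sampled curve with
    probability 1 - alpha, via the supremum statistic."""
    se = draws.std(axis=0, ddof=1)                       # point-wise s.e.
    sup_stat = np.max(np.abs(draws - beta_hat) / se, axis=1)
    q = np.quantile(sup_stat, 1 - alpha)
    return beta_hat - q * se, beta_hat + q * se

# Toy illustration with a flat estimated curve
grid = np.linspace(0, 1, 50)
beta_hat = np.zeros(50)
draws = rng.normal(beta_hat, 0.1, size=(2000, 50))       # asymptotic draws
lo, hi = joint_band(beta_hat, draws)
print(bool(np.all(lo <= beta_hat) and np.all(beta_hat <= hi)))
```

The resulting multiplier q exceeds the point-wise normal quantile, so the joint band is wider than the point-wise intervals; the band-based global p-value described above is the smallest α at which this band excludes zero somewhere.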
3 Simulation Studies

In this section, we investigate the performance of the proposed estimation and testing method for DOSDR via simulations. To this end, we consider the following data generating scenarios.
3.1 Data Generating Scenarios
Scenario A1: DOSDR, both distributional and scalar predictors

We consider the DOSDR model given by

QiY (p) = β0(p) + zi1 β1(p) + h(QiX(p)) + ϵi(p).    (12)

The distributional effects are taken to be β0(p) = 2 + 3p, β1(p) = sin(πp/2), and h(x) = (x/10)³. The scalar predictor zi1 is generated independently from a U(0, 1) distribution. The distributional predictor QiX(p) is generated as QiX(p) = ci QN(p, 10, 1), where QN(p, 10, 1) denotes the pth quantile of a normal distribution N(10, 1) and ci ∼ U(1, 2). The residual error process ϵi(p) is independently sampled from N(0, 0.1) for each p. Since we do not directly observe the quantile functions QiX(p), QiY (p) in practice, we assume we have the subject-specific observations Xi = {xi1 = QiX(ui1), xi2 = QiX(ui2), . . . , xiLi1 = QiX(uiLi1)} and Yi = {yi1 = QiY (vi1), yi2 = QiY (vi2), . . . , yiLi2 = QiY (viLi2)}, where the uik, vik are independently generated from a U(0, 1) distribution. For simplicity, we assume that Li1 = Li2 = L subject-specific observations are available for both the distributional outcome and the predictor. Based on the observations Xi, Yi, the subject-specific quantile functions QiX(p) and QiY (p) are estimated by empirical quantiles, as illustrated in equation (1), on a grid of p values in [0, 1]. We consider numbers of individual measurements L = 200, 400 and training sample sizes n = 200, 300, 400 for this data generating scenario. The grid P = {p1, p2, . . . , pm} ⊂ [0, 1] is taken to be an equi-spaced grid of length m = 100 in [0.005, 0.995]. A separate sample of size nt = 100 is used as a test set in each of the above cases.
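A sketch of this generator (ours, not the authors' code; we use scipy's standard normal quantile function ndtri for QN, and read N(0, 0.1) as noise with variance 0.1):

```python
import numpy as np
from scipy.special import ndtri   # standard normal quantile function

rng = np.random.default_rng(7)

def simulate_a1(n, L, rng):
    """Sketch of the Scenario A1 generator: for subject i,
    Q_iX(p) = c_i * (10 + ndtri(p)) with c_i ~ U(1, 2), and
    Q_iY(p) = (2 + 3p) + z_i1 sin(pi p / 2) + (Q_iX(p) / 10)^3 + eps(p),
    with z_i1 ~ U(0, 1); the raw observations are the quantile functions
    evaluated at independent U(0, 1) draws."""
    data = []
    for _ in range(n):
        c, z = rng.uniform(1, 2), rng.uniform(0, 1)
        u, v = rng.uniform(size=L), rng.uniform(size=L)  # sampling points
        x = c * (10 + ndtri(u))                  # X_i = Q_iX(u)
        qx_v = c * (10 + ndtri(v))
        eps = rng.normal(0, np.sqrt(0.1), L)     # N(0, 0.1) as variance
        y = (2 + 3 * v) + z * np.sin(np.pi * v / 2) + (qx_v / 10) ** 3 + eps
        data.append((z, x, y))
    return data

data = simulate_a1(n=50, L=200, rng=rng)
print(len(data))   # 50 subjects, each with L observations of X and Y
```

Feeding x and y through the empirical quantile estimator of equation (1) then yields the subject-specific quantile functions used for fitting.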
Scenario A2: DOSDR, testing the effect of a scalar predictor

We consider the data generating scheme (12) in Scenario A1 above and test for the distributional effect of the scalar predictor z1 using the proposed joint-confidence-band-based test of Section 2. To this end, we let β1(p) = d × sin(πp/2), where the parameter d controls the departure from the null hypothesis H0 : β1(p) = 0 for all p ∈ [0, 1] versus H1 : β1(p) ̸= 0 for some p ∈ [0, 1]. The number of subject-specific measurements L is set to 200, and sample sizes n ∈ {200, 300, 400} are considered.
Scenario B: DOSDR, only a distributional predictor

We consider the following distribution-on-distribution regression model:

QiY (p) = h(QiX(p)) + ϵi(p).    (13)

The distributional outcome QiY (p), the distributional predictor QiX(p), and the error process ϵi(p) are generated as in Scenario A1. The number of subject-specific measurements L is set to 200, and sample sizes n ∈ {200, 300, 400} are considered. This scenario is used to compare the performance of the proposed DOSDR method with that of the isotonic regression approach illustrated in Ghodrati and Panaretos (2021).

We consider 100 Monte-Carlo (M.C.) replications from simulation scenarios A1 and B to assess the performance of the proposed estimation method. For Scenario A2, 200 replicated datasets are used to assess the type I error and power of the proposed testing method.
+
687
+ 3.2
688
+ Simulation Results
689
+ Performance under scenario A1:
690
+ We evaluate the performance of our proposed method in terms of integrated mean squared
691
+ error (MSE), integrated squared Bias (Bias2) and integrated variance (Var).
692
+ For the
693
+ distributional effect β1(p), these are defined as MSE =
694
+ 1
695
+ M
696
+ �M
697
+ j=1
698
+ � 1
699
+ 0 {ˆβj
700
+ 1(p) − β1(p)}2dp,
701
+ Bias2 =
702
+ � 1
703
+ 0 {ˆ¯β1(p) − β1(p)}2dp, V ar =
704
+ 1
705
+ M
706
+ �M
707
+ j=1
708
+ � 1
709
+ 0 {ˆβj
710
+ 1(p) − ˆ¯β1(p)}2dp. Here ˆβj
711
+ 1(p) is the
712
+ estimate of β1(p) from the jth replicated dataset and ˆ¯β1(p) =
713
+ 1
714
+ M
715
+ �M
716
+ j=1 ˆβj
717
+ 1(p) is the M.C
718
+ average estimate based on the M replications. Table 1 reports the squared Bias, Variance
719
+ and MSE of the estimates of β1(p) for all cases considered in scenario A1. MSE as well
720
+ as squared Bias and Variance are found to decrease and be negligible as sample size n
721
+ or number of measurements L increase, illustrating satisfactory accuracy of the proposed
722
+ estimator.
723
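These integrated error summaries can be computed directly from the replicated curve estimates; a sketch (trapezoidal integration, helper names ours) that also exhibits the identity MSE = Bias² + Var:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule on a (possibly non-uniform) grid."""
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

def integrated_errors(beta_hats, beta_true, p_grid):
    """Integrated squared bias, variance, and MSE for replicated curve
    estimates (one row of beta_hats per Monte-Carlo replication)."""
    mean_hat = beta_hats.mean(axis=0)
    bias2 = _trapz((mean_hat - beta_true) ** 2, p_grid)
    var = _trapz(((beta_hats - mean_hat) ** 2).mean(axis=0), p_grid)
    mse = _trapz(((beta_hats - beta_true) ** 2).mean(axis=0), p_grid)
    return bias2, var, mse

rng = np.random.default_rng(0)
p = np.linspace(0, 1, 101)
truth = np.sin(np.pi * p / 2)
reps = truth + rng.normal(0, 0.05, size=(100, 101))   # unbiased, noisy estimates
b2, v, m = integrated_errors(reps, truth, p)
print(np.isclose(m, b2 + v))   # True: pointwise decomposition integrates
```

Because the point-wise decomposition MSE(p) = Bias²(p) + Var(p) holds exactly when averaging over replications, the integrated quantities satisfy the same identity, which is a useful sanity check on Tables 1-3.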
Table 1: Integrated squared bias, variance, and mean squared error of estimated β1(p) over 100 Monte-Carlo replications, Scenario A1.

                       L = 200                              L = 400
  β1(p)     Bias²        Var      MSE         Bias²        Var      MSE
  n = 200   0.0001       0.0034   0.0035      2.8 × 10⁻⁵   0.0019   0.0019
  n = 300   1.9 × 10⁻⁵   0.0026   0.0026      1.7 × 10⁻⁵   0.0016   0.0016
  n = 400   2.6 × 10⁻⁵   0.0018   0.0018      5.5 × 10⁻⁶   0.0010   0.0010
Since h(x) is not directly estimable in the DOSDR model (12), we consider estimation of the estimable additive effect γ(p) = β0(p) + h(qx(p)) at qx(p) = (1/n) Σ_{i=1}^{n} QiX(p). The performance of the estimates in terms of squared bias, variance, and MSE is reported in Table 2, which again illustrates the satisfactory performance of the proposed method in capturing the distributional effect of the distributional predictor QX(p).
Table 2: Integrated squared bias, variance, and mean squared error of the estimated additive effect γ(p) = β0(p) + h(qx(p)) at qx(p) = (1/n) Σ_{i=1}^{n} QiX(p) over 100 Monte-Carlo replications, Scenario A1.

                              L = 200                         L = 400
  β0(p) + h(qx(p))   Bias²        Var     MSE      Bias²        Var     MSE
  n = 200            9.9 × 10⁻⁵   0.023   0.023    4.6 × 10⁻⁵   0.023   0.023
  n = 300            7.3 × 10⁻⁵   0.017   0.017    3.2 × 10⁻⁵   0.017   0.017
  n = 400            5.8 × 10⁻⁵   0.013   0.013    4.8 × 10⁻⁵   0.013   0.013
The estimated M.C. means for the distributional effects β1(p) and γ(p), along with their respective 95% point-wise confidence intervals, are displayed in Figure 2 for the case n = 400, L = 400. The M.C. mean estimates are superimposed on the true curves and, along with the narrow confidence intervals, illustrate the low variability and high accuracy of the estimates.
[Figure 2: two panels — left, β1(p) vs. p; right, γ(p) vs. p.]

Figure 2: Left: true distributional effect β1(p) (solid) and estimated ˆβ1(p) averaged over 100 M.C. replications (dashed), along with point-wise 95% confidence intervals (dotted); Scenario A1, n = 400, L = 400. Right: additive effect γ(p) = β0(p) + h(qx(p)) (solid) at qx(p) = (1/n) Σ_{i=1}^{n} QiX(p) and its estimate ˆγ(p) averaged over 100 M.C. replications (dashed), along with point-wise 95% confidence intervals (dotted).
As a measure of out-of-sample prediction performance, we report the average Wasserstein distance between the true quantile functions and the predicted ones in the test set, defined as WD = (1/nt) Σ_{i=1}^{nt} [∫₀¹ {Qi^test(p) − ˆQi^test(p)}² dp]^{1/2}. Supplementary Table S1 reports the summary of the average Wasserstein distance across the 100 Monte-Carlo replications. The low values of the average WD metric and their M.C. standard errors indicate a satisfactory prediction performance of the proposed method. The prediction accuracy improves with an increase in the number of measurements L. The performance of the proposed projection-based joint confidence intervals for β1(p) is investigated in Supplementary Table S2, which reports the coverage and width of the joint confidence bands for β1(p) for various choices of N and for the case L = 200. The nominal coverage of 95% lies within two standard errors of the estimated coverage in all the cases, particularly for the choices of N picked by our proposed cross-validation method.
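On a common grid of p values, the WD metric reduces to an L2 distance between quantile functions; a sketch (ours), where a constant vertical shift of the quantile function recovers exactly that shift as the distance:

```python
import numpy as np

def avg_wasserstein(Q_true, Q_pred, p_grid):
    """Average 2-Wasserstein distance between true and predicted quantile
    functions (rows = subjects), computed as the L2 distance between
    quantile functions with trapezoidal integration over p."""
    sq = (Q_true - Q_pred) ** 2
    w = np.diff(p_grid)
    integrals = ((sq[:, 1:] + sq[:, :-1]) / 2 * w).sum(axis=1)
    return np.sqrt(integrals).mean()

p = np.linspace(0, 1, 101)
Q_true = np.vstack([2 + 3 * p, 1 + 2 * p])   # two subjects' quantile curves
Q_pred = Q_true + 0.1                        # constant shift of 0.1
print(avg_wasserstein(Q_true, Q_pred, p))    # -> 0.1
```

This equivalence between the 2-Wasserstein distance and the L2 distance between quantile functions is what makes the quantile-function representation convenient for both the loss in Theorem 1 and the prediction metric here.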
Performance under Scenario A2:

We assess the performance of the proposed testing method in terms of estimated type I error and power calculated from the Monte-Carlo replications. We set the order of the Bernstein polynomial basis to N = 3 based on the results of the previous section. The estimated power curve is displayed as a function of the parameter d in Supplementary Figure S1, using a nominal level of α = 0.05. At d = 0, the null hypothesis holds and the power corresponds to the type I error of the test. The nominal level α = 0.05 lies within two standard errors of the estimated type I error for all the sample sizes, illustrating that the test maintains proper size. For d > 0, the power quickly increases to 1, showing that the proposed test is able to capture small departures from the null hypothesis successfully.
Performance under Scenario B:

We again consider estimation of the estimable additive effect γ(p) = β0(p) + h(qx(p)) at qx(p) = (1/n) Σ_{i=1}^{n} QiX(p), which can be estimated by both the proposed DOSDR method (2) and the isotonic regression method (Ghodrati and Panaretos, 2021). Note that the true β0(p) = 0, but we nonetheless include a distributional intercept in our DOSDR model, as this information is not available to practitioners. For the isotonic regression method, we directly fit model (13) without any intercept. The performance of the estimates is compared in terms of squared bias, variance, and MSE in Table 3. We observe a performance of the proposed method similar to that of the PAVA-based isotonic regression method.
Table 3: Integrated squared bias, variance, and mean squared error of the estimated additive effect γ(p) = β0(p) + h(qx(p)) at qx(p) = (1/n) Σ_{i=1}^{n} QiX(p) over 100 Monte-Carlo replications, Scenario B, from the DOSDR method and the isotonic regression method with PAVA (Ghodrati and Panaretos, 2021).

                           DOSDR                      PAVA
  β0(p) + h(qx(p))   Bias²    Var    MSE      Bias²        Var    MSE
  n = 200            0.0002   0.022  0.022    2.6 × 10⁻⁵   0.022  0.022
  n = 300            0.0002   0.016  0.016    2.4 × 10⁻⁵   0.016  0.016
  n = 400            0.0002   0.012  0.013    3 × 10⁻⁵     0.012  0.012
The estimated M.C. mean for the distributional effect γ(p), along with its respective 95% point-wise confidence intervals, is displayed in Supplementary Figure S2 for the case n = 400. Again, both methods are observed to capture γ(p) well. The proposed DOSDR method enables estimation of γ(p) = β0(p) + h(qx(p)) on the entire domain p ∈ [0, 1], whereas for the isotonic regression method, interpolation is required from grid-level estimates. The PAVA-based isotonic regression method failed to converge in 5% of the cases for sample size n = 200, whereas this issue was not faced by our proposed method. In terms of model flexibility, the isotonic regression method does not directly accommodate scalar predictors or a distributional intercept; keeping these points in mind, our proposed method provides a unified and flexible approach for modelling a distributional outcome in the presence of both distributional and scalar predictors.
4 Modelling Distribution of Heart Rate in Baltimore Longitudinal Study of Aging
In this section, we apply our proposed framework to continuously monitored heart rate and physical activity data collected in the Baltimore Longitudinal Study of Aging (BLSA), the longest-running scientific study of aging in the United States. Specifically, the distribution of minute-level heart rate is modelled via age, sex, BMI, and the distribution of minute-level activity counts capturing the daily composition of physical activity. We set our study period to be 8 a.m.-8 p.m. and calculate distributional representations of minute-level heart rate and (log-transformed) activity counts of BLSA participants via subject-specific quantile functions QiY(p) (heart rate) and QiX(p) (log-transformed activity counts). For each participant, we consider only their first BLSA visit while obtaining the subject-specific quantile functions QiY(p), QiX(p). Our final sample consists of n = 890 BLSA participants who had heart rate, physical activity, and the other covariates used for the analysis available. Supplementary Table S3 presents the descriptive statistics of the sample.
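The subject-specific quantile functions above are just empirical quantile functions of each participant's minute-level measurements evaluated on a common grid. A minimal sketch of this distributional representation (Python/numpy; the grid choice and function name are illustrative, not the authors', and their implementation is in R):

```python
import numpy as np

def quantile_representation(values, m=100):
    """Distributional representation of one subject's minute-level
    measurements: the empirical quantile function on a grid in (0, 1).
    The grid choice here is illustrative, not the authors'."""
    p_grid = np.linspace(0.05, 0.95, m)
    return p_grid, np.quantile(np.asarray(values, dtype=float), p_grid)

rng = np.random.default_rng(0)
hr = rng.normal(75, 8, size=720)   # toy 12-hour minute-level heart-rate trace
p, q = quantile_representation(hr)
assert np.all(np.diff(q) >= 0)     # quantile functions are non-decreasing
```

Stacking one such row per participant yields the outcome and predictor matrices used in the subsequent model fits.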
963
Supplementary Figure S3 shows the subject-specific quantile functions of heart rate and physical activity (log-transformed, during the 8 a.m.-8 p.m. time period). As a starting point, we study the dependence of mean heart rate on mean activity count, age, sex (Male=1, Female=0), and BMI via the multiple regression model

µH,i = θ0 + θ1 agei + θ2 sexi + θ3 BMIi + θ4 µA,i + ϵi,

where µH,i, µA,i are the subject-specific means of heart rate and activity counts. Supplementary Table S4 reports the results of the model fit. Mean heart rate is found to be negatively associated with age and mean activity, and positively associated with BMI. These results, although useful, do not paint the whole picture of how the distribution of heart rate depends on these biological factors and the distribution of physical activity. Therefore, we use the proposed DOR model
976
QiY(p) = β0(p) + agei βage(p) + BMIi βBMI(p) + sexi βsex(p) + h(QiX(p)) + ϵi(p). (14)
977
The scalar covariates age and BMI, as well as the activity counts, are transformed to the [0, 1] scale using monotone linear transformations. The distributional effects of age, sex (Male=1, Female=0) and BMI on heart rate are captured by βage(p), βsex(p), βBMI(p), respectively. The monotone nonparametric function h(·) is used to link the distribution of heart rate and the distribution of activity counts. We use the proposed estimation method for estimation of the distributional effects βj(p) and h(·) (h(0) = 0 is imposed). The common degree of the Bernstein polynomial basis used to model all the distributional coefficients was chosen via five-fold cross-validation, which resulted in N = 5. The estimated distributional effects along with their asymptotic 95% joint confidence bands using the proposed projection-based method are displayed in Figure 3. The p-values from the joint confidence band based global test for the intercept and the effects of age, BMI, sex, and the distribution of activity counts are found to be 1 × 10−6, 1 × 10−6, 5 × 10−5, 3 × 10−4 and 1 × 10−6, respectively, indicating the significance of all the predictors.
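Each distributional coefficient above is expanded in a common degree-N Bernstein polynomial basis. A minimal sketch of evaluating this basis on a quantile-level grid (Python/numpy sketch of the standard basis; the authors' implementation is in R):

```python
import numpy as np
from math import comb

def bernstein_basis(p, N):
    """Degree-N Bernstein polynomial basis evaluated at the points p;
    one column per basis function b_{k,N}(p) = C(N,k) p^k (1-p)^(N-k)."""
    p = np.asarray(p, float)
    return np.stack([comb(N, k) * p**k * (1 - p)**(N - k)
                     for k in range(N + 1)], axis=1)

B = bernstein_basis(np.linspace(0, 1, 50), N=5)   # N = 5 as chosen by CV
assert B.shape == (50, 6)
assert np.allclose(B.sum(axis=1), 1.0)   # the basis forms a partition of unity
```

The shape-preserving property of this basis (a non-decreasing coefficient vector yields a non-decreasing curve) is what makes the monotonicity constraints of the model linear in the coefficients.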
991
The estimated distributional intercept β̂0(p) is monotone and represents the baseline distribution of heart rate. The estimated distributional effect of age is found to be significant for all p; in particular, β̂age(p) is negative and appears to be decreasing and then stabilizing in p ∈ [0, 1], illustrating that moderate-high levels of heart rate decrease at an accelerated rate with age compared to sedentary levels of activity (Antelmi et al., 2004). The maximal levels of heart rate (p > 0.8) are found to be decreasing with age (βage(p) < 0) (Kostis et al., 1982; Tanaka et al., 2001; Gellish et al., 2007). The distributional effect of BMI β̂BMI(p) is found to be positive and increasing in p (especially at higher quantiles), indicating that a higher maximal heart rate is associated with a higher BMI after adjusting for age, sex and the daily distribution of activity counts (Foy et al., 2018). The estimated effect of sex (Male) β̂sex(p) illustrates that females have a higher heart rate (Antelmi et al., 2004; Prabhavathi et al., 2014) compared to males across all quantile levels after adjusting for age, BMI and PA. The lower heart rate in males compared to females can be attributed to the size of the heart, which is typically smaller in females than in males (Prabhavathi et al., 2014) and thus needs to beat faster to provide the same output. The estimated monotone regression map ĥ(x) between the PA and heart rate distributions (estimated under the constraint h(0) = 0) is found to be highly nonlinear and convex, illustrating a nonlinear dependence of heart rate on physical activity, especially at higher values of PA. The convex nature of the map points to an accelerated increase in the heart rate quantiles with an increase in the corresponding quantile levels of PA (Leary et al., 2002). The estimated distributional effects, especially for age and gender in our analysis, illustrate that the distributional effects need not be non-decreasing, as enforced in the quantile function-on-scalar regression model of Yang (2020), which might lead to wrong conclusions here. The proposed DOR method is more flexible in this regard and enforces the monotonicity of the quantile functions without requiring the distributional effects to be monotone.
1021
We also compare the predictive performance of the proposed DOSDR model with that of the distribution-on-distribution regression model of Ghodrati and Panaretos (2021) based on isotonic regression (DODR-ISO). Supplementary Figure S4 displays the leave-one-out cross-validated (LOOCV) predicted quantile functions of heart rate from both methods. To compare the out-of-sample prediction accuracy of the two methods, we define the LOOCV R-squared as

R²_loocv = 1 − [Σ_{i=1}^N ∫_0^1 {Qi(p) − Q̂i,loocv(p)}² dp] / [Σ_{i=1}^N ∫_0^1 {Qi(p) − Q̄}² dp], where Q̄ = (1/N) Σ_{i=1}^N ∫_0^1 Qi(p) dp.

The R²_loocv values for the DOSDR and the DODR-ISO models are calculated to be 0.60 and 0.49, respectively. This illustrates that the proposed DOSDR method is able to predict the heart rate quantile functions more accurately with the use of additional information from the biological scalar factors age, sex and BMI.
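With quantile functions stored on a common grid, the LOOCV R-squared reduces to ratios of numerically integrated squared deviations. A minimal sketch (Python/numpy; the helper names are hypothetical and the authors' implementation is in R):

```python
import numpy as np

def trapz01(f, p):
    """Trapezoidal approximation of the integral of f over [0, 1],
    taken along the last axis of f."""
    return 0.5 * ((f[..., 1:] + f[..., :-1]) * np.diff(p)).sum(axis=-1)

def r2_loocv(Q, Q_hat, p):
    """LOOCV R-squared of the displayed formula: Q and Q_hat store the true
    and leave-one-out predicted quantile functions, one subject per row."""
    Q, Q_hat = np.asarray(Q, float), np.asarray(Q_hat, float)
    q_bar = trapz01(Q, p).mean()              # scalar grand mean Q-bar
    num = trapz01((Q - Q_hat) ** 2, p).sum()  # integrated prediction error
    den = trapz01((Q - q_bar) ** 2, p).sum()  # integrated total variation
    return 1.0 - num / den

p = np.linspace(0, 1, 101)
Q = np.vstack([p, 2 * p])             # two toy quantile functions
assert r2_loocv(Q, Q, p) == 1.0       # perfect prediction
assert r2_loocv(Q, Q + 0.1, p) < 1.0
```

Note that Q̄ is a scalar (the grand mean of the integrated quantile functions), so the denominator measures total variation around a single constant.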
1049
[Figure 3: five panels plotting, against p, the estimated intercept (beta0_intercept), age effect (beta_age), BMI effect (beta_bmi) and sex effect (beta_sex(M)) on heart rate, and plotting the estimated link function h(x) against x.]

Figure 3: Estimated distributional effects (solid) along with their joint 95% confidence bands (dotted) for age, BMI (both scaled to [0, 1]) and sex (Male) on heart rate, along with the estimated link function h(·) (solid) (under the constraint h(0) = 0) between the distribution of heart rate and physical activity.
1119
5 Discussion

In this article, we have developed a flexible distributional outcome regression. The distributional functional effects are modelled via a Bernstein polynomial basis with appropriate shape constraints to ensure monotonicity of the predicted quantile functions. A novel construction of the BP-based regression structure imposes much less restrictive constraints compared to existing methods for modelling a monotone quantile function outcome. Thus, the proposed framework enables more flexible dependencies between a distributional outcome and scalar and distributional predictors. Inferential tools are developed that include projection-based asymptotic joint confidence bands and a global test of statistical significance for the estimated functional regression coefficients. Numerical analysis using simulations illustrates the accurate performance of the estimation method. The proposed test is also shown to maintain the nominal test size and to have satisfactory power. An additional nonparametric bootstrap test provided in the supplementary material could be particularly useful in finite sample sizes.

Application of DOR is demonstrated in studying the distributional association between heart rate reserve and key demographics while adjusting for physical activity. Our findings provide important insights about age and gender differences in the distribution of heart rate. Beyond the considered epidemiological application, the proposed regression model could be used in other epidemiological studies to more flexibly model the distributional aspect of high-frequency and high-intensity data. Additionally, it can be used for estimation of treatment effects on primary or secondary endpoints quantified via distributions.

There are multiple research directions that remain to be explored based on this current work. In developing our method we have implicitly assumed that there are enough measurements available per subject to accurately estimate quantile functions. Scenarios with only a few sparse measurements pose a practical challenge and will need careful handling. Other aspects of studies collecting distributional data, such as distributional measurements being multilevel (Goldsmith et al., 2015) or incorporating spatio-temporal structure (Yang, 2020; Ghosal et al., 2022b), would be important to consider. Another interesting direction of research could be to extend these models beyond the additive paradigm; for example, the single index model (Jiang et al., 2011) could be employed to accommodate interaction and nonlinear effects of multiple scalar and distributional predictors. Extending the proposed method to such more general and complex models would be computationally challenging, but nonetheless merits future attention because of their potentially diverse applications.
1155
+ their potentially diverse applications.
1156
+ 22
1157
+
1158
+ Supplementary Material
1159
+ Appendix A-E along with the Supplementary Tables and Supplementary Figures refer-
1160
+ enced in this article are available online as Supplementary Material.
1161
+ Software
1162
+ Software implementation via R (R Core Team, 2018) and illustration of the proposed
1163
+ framework is available upon request from the authors.
1164
+ References
1165
+ Antelmi, I., De Paula, R. S., Shinzato, A. R., Peres, C. A., Mansur, A. J., and Grupi,
1166
+ C. J. (2004), “Influence of age, gender, body mass index, and functional capacity on
1167
+ heart rate variability in a cohort of subjects without heart disease,” The American
1168
+ journal of cardiology, 93, 381–385.
1169
+ Augustin, N. H., Mattocks, C., Faraway, J. J., Greven, S., and Ness, A. R. (2017),
1170
+ “Modelling a response as a function of high-frequency count data: The association
1171
+ between physical activity and fat mass,” Statistical methods in medical research, 26,
1172
+ 2210–2226.
1173
+ Carnicer, J. M. and Pena, J. M. (1993), “Shape preserving representations and optimality
1174
+ of the Bernstein basis,” Advances in Computational Mathematics, 1, 173–196.
1175
+ Chen, Y., Lin, Z., and M¨uller, H.-G. (2021), “Wasserstein regression,” Journal of the
1176
+ American Statistical Association, 1–40.
1177
+ Cui, E., Leroux, A., Smirnova, E., and Crainiceanu, C. M. (2022), “Fast univariate
1178
+ inference for longitudinal functional models,” Journal of Computational and Graphical
1179
+ Statistics, 31, 219–230.
1180
+ Fan, Y., James, G. M., and Radchenko, P. (2015), “Functional additive regression,” The
1181
+ Annals of Statistics, 43, 2296–2325.
1182
+ 23
1183
+
1184
+ Foy, A. J., Mandrola, J., Liu, G., and Naccarelli, G. V. (2018), “Relation of obesity
1185
+ to new-onset atrial fibrillation and atrial flutter in adults,” The American journal of
1186
+ cardiology, 121, 1072–1075.
1187
+ Gellish, R. L., Goslin, B. R., Olson, R. E., McDONALD, A., Russi, G. D., and Moudgil,
1188
+ V. K. (2007), “Longitudinal modeling of the relationship between age and maximal
1189
+ heart rate.” Medicine and science in sports and exercise, 39, 822–829.
1190
+ Ghodrati, L. and Panaretos, V. M. (2021), “Distribution-on-Distribution Regression via
1191
+ Optimal Transport Maps,” arXiv preprint arXiv:2104.09418.
1192
+ Ghosal, R., Ghosh, S., Urbanek, J., Schrack, J. A., and Zipunnikov, V. (2022a), “Shape-
1193
+ constrained estimation in functional regression with Bernstein polynomials,” Compu-
1194
+ tational Statistics & Data Analysis, 107614.
1195
+ Ghosal, R. and Maity, A. (2022), “A Score Based Test for Functional Linear Concurrent
1196
+ Regression,” Econometrics and Statistics, 21, 114–130.
1197
+ Ghosal, R., Varma, V. R., Volfson, D., Hillel, I., Urbanek, J., Hausdorff, J. M., Watts,
1198
+ A., and Zipunnikov, V. (2021), “Distributional data analysis via quantile functions
1199
+ and its application to modelling digital biomarkers of gait in Alzheimer’s Disease,”
1200
+ Biostatistics.
1201
+ Ghosal, R., Varma, V. R., Volfson, D., Urbanek, J., Hausdorff, J. M., Watts, A., and
1202
+ Zipunnikov, V. (2022b), “Scalar on time-by-distribution regression and its application
1203
+ for modelling associations between daily-living physical activity and cognitive functions
1204
+ in Alzheimer’s Disease,” Scientific reports, 12, 1–16.
1205
+ Goldfarb, D. and Idnani, A. (1982), “Dual and primal-dual methods for solving strictly
1206
+ convex quadratic programs,” in Numerical analysis, Springer, pp. 226–239.
1207
+ — (1983), “A numerically stable dual method for solving strictly convex quadratic pro-
1208
+ grams,” Mathematical programming, 27, 1–33.
1209
+ Goldsmith, J., Zipunnikov, V., and Schrack, J. (2015), “Generalized multilevel function-
1210
+ on-scalar regression and principal component analysis,” Biometrics, 71, 344–353.
1211
+ 24
1212
+
1213
+ Hron, K., Menafoglio, A., Templ, M., Hruzova, K., and Filzmoser, P. (2016), “Simplicial
1214
+ principal component analysis for density functions in Bayes spaces,” Computational
1215
+ Statistics & Data Analysis, 94, 330–350.
1216
+ Huang, J. Z., Wu, C. O., and Zhou, L. (2002), “Varying-coefficient models and basis
1217
+ function approximations for the analysis of repeated measurements,” Biometrika, 89,
1218
+ 111–128.
1219
+ — (2004), “Polynomial spline estimation and inference for varying coefficient models with
1220
+ longitudinal data,” Statistica Sinica, 14, 763–788.
1221
+ Irpino, A. and Verde, R. (2013), “A metric based approach for the least square regression
1222
+ of multivariate modal symbolic data,” in Statistical Models for Data Analysis, Springer,
1223
+ pp. 161–169.
1224
+ Jiang, C.-R., Wang, J.-L., et al. (2011), “Functional single index models for longitudinal
1225
+ data,” The Annals of Statistics, 39, 362–388.
1226
+ Kostis, J. B., Moreyra, A., Amendo, M., Di Pietro, J., Cosgrove, N., and Kuo, P. (1982),
1227
+ “The effect of age on heart rate in subjects free of heart disease. Studies by ambulatory
1228
+ electrocardiography and maximal exercise stress test.” Circulation, 65, 141–145.
1229
+ Leary, A. C., Struthers, A. D., Donnan, P. T., MacDonald, T. M., and Murphy, M. B.
1230
+ (2002), “The morning surge in blood pressure and heart rate is dependent on levels of
1231
+ physical activity after waking,” Journal of hypertension, 20, 865–870.
1232
+ Lorentz, G. G. (2013), Bernstein polynomials, American Mathematical Soc.
1233
+ Matabuena, M. and Petersen, A. (2021), “Distributional data analysis with accelerometer
1234
+ data in a NHANES database with nonparametric survey regression models,” arXiv.
1235
+ Matabuena, M., Petersen, A., Vidal, J. C., and Gude, F. (2021), “Glucodensities: a
1236
+ new representation of glucose profiles using distributional data analysis,” Statistical
1237
+ Methods in Medical Research, 30, 1445–1464.
1238
+ Meyer, M. J., Coull, B. A., Versace, F., Cinciripini, P., and Morris, J. S. (2015), “Bayesian
1239
+ function-on-function regression for multilevel functional data,” Biometrics, 71, 563–
1240
+ 574.
1241
+ 25
1242
+
1243
+ Parzen, E. (2004), “Quantile probability and statistical data modeling,” Statistical Sci-
1244
+ ence, 19, 652–662.
1245
+ Pegoraro, M. and Beraha, M. (2022), “Projected Statistical Methods for Distributional
1246
+ Data on the Real Line with the Wasserstein Metric.” J. Mach. Learn. Res., 23, 37–1.
1247
+ Petersen, A. and M¨uller, H.-G. (2016), “Functional data analysis for density functions by
1248
+ transformation to a Hilbert space,” The Annals of Statistics, 44, 183–218.
1249
+ Petersen, A., Zhang, C., and Kokoszka, P. (2021), “Modeling Probability Density Func-
1250
+ tions as Data Objects,” Econometrics and Statistics.
1251
+ Powley, B. W. (2013), “Quantile function methods for decision analysis,” Ph.D. thesis,
1252
+ Stanford University.
1253
+ Prabhavathi, K., Selvi, K. T., Poornima, K., and Sarvanan, A. (2014), “Role of biological
1254
+ sex in normal cardiac function and in its disease outcome–a review,” Journal of clinical
1255
+ and diagnostic research: JCDR, 8, BE01.
1256
+ R Core Team (2018), R: A Language and Environment for Statistical Computing, R
1257
+ Foundation for Statistical Computing, Vienna, Austria.
1258
+ Ramsay, J. and Silverman, B. (2005), Functional Data Analysis, New York: Springer-
1259
+ Verlag.
1260
+ Ramsay, J. O. et al. (1988), “Monotone regression splines in action,” Statistical science,
1261
+ 3, 425–441.
1262
+ Sergazinov, R., Leroux, A., Cui, E., Crainiceanu, C., Aurora, R. N., Punjabi, N. M., and
1263
+ Gaynanova, I. (2022), “A case study of glucose levels during sleep using fast function
1264
+ on scalar regression inference,” arXiv preprint arXiv:2205.08439.
1265
+ Talsk´a, R., Hron, K., and Grygar, T. M. (2021), “Compositional Scalar-on-Function
1266
+ Regression with Application to Sediment Particle Size Distributions,” Mathematical
1267
+ Geosciences, 1–29.
1268
+ Tanaka, H., Monahan, K. D., and Seals, D. R. (2001), “Age-predicted maximal heart
1269
+ rate revisited,” Journal of the american college of cardiology, 37, 153–156.
1270
+ 26
1271
+
1272
+ Tang, B., Zhao, Y., Venkataraman, A., Tsapkini, K., Lindquist, M., Pekar, J. J., and
1273
+ Caffo, B. S. (2020), “Differences in functional connectivity distribution after transcra-
1274
+ nial direct-current stimulation: a connectivity density point of view,” bioRxiv.
1275
+ Vanbrabant, L. and Rosseel, Y. (2019), Restricted Statistical Estimation and Inference
1276
+ for LinearModels, 0.2-250.
1277
+ Verde, R. and Irpino, A. (2010), “Ordinary least squares for histogram data based on
1278
+ wasserstein distance,” in Proceedings of COMPSTAT’2010, Springer, pp. 581–588.
1279
+ Wang, J. and Ghosh, S. K. (2012), “Shape restricted nonparametric regression with
1280
+ Bernstein polynomials,” Computational Statistics & Data Analysis, 56, 2729–2741.
1281
+ Yang, H. (2020), “Random distributional response model based on spline method,” Jour-
1282
+ nal of Statistical Planning and Inference, 207, 27–44.
1283
+ Yang, H., Baladandayuthapani, V., Rao, A. U., and Morris, J. S. (2020), “Quantile
1284
+ function on scalar regression analysis for distributional data,” Journal of the American
1285
+ Statistical Association, 115, 90–106.
1286
+ Yao, F., M¨uller, H.-G., and Wang, J.-L. (2005), “Functional linear regression analysis for
1287
+ longitudinal data,” The Annals of Statistics, 2873–2903.
1288
+ 27
1289
+
1290
Supplementary Material for Distributional outcome regression and its application to modelling continuously monitored heart rate and physical activity

Rahul Ghosal1,∗, Sujit Ghosh2, Jennifer A. Schrack3, Vadim Zipunnikov4
1 Department of Epidemiology and Biostatistics, University of South Carolina
2 Department of Statistics, North Carolina State University
3 Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health
4 Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health

January 30, 2023

arXiv:2301.11399v1 [stat.ME] 26 Jan 2023
1305
1 Appendix A: Proof of Theorem 1

The predicted outcome quantile function is the conditional expectation of the outcome quantile function based on the distribution-on-scalar and distribution regression (DOSDR) model (2) and is given by

E(QY(p) | z1, z2, ..., zq, qx(p)) = β0(p) + Σ_{j=1}^q zj βj(p) + h(qx(p)). (1)

We will show that conditions (1)-(3) are sufficient to ensure that E(QY(p) | z1, z2, ..., zq, qx(p)) is non-decreasing in p. Without loss of generality, let us assume 0 ≤ zj ≤ 1 for all j = 1, 2, ..., q. It is enough to show that T1(p) = β0(p) + Σ_{j=1}^q zj βj(p) and T2(p) = h(qx(p)) are both non-decreasing. The second part is immediate, as both qx(·) and h(·) (by condition (3)) are non-decreasing. To complete the proof we only need to show that T1(p) is non-decreasing. We have T1′(p) = β0′(p) + Σ_{j=1}^q zj βj′(p), so it is enough to show that T1′(p) ≥ 0 for all (z1, z2, ..., zq) ∈ [0, 1]^q. Note that T1′(p) is a linear function of (z1, z2, ..., zq) ∈ [0, 1]^q. By the well-known Bauer principle, its minimum is attained at the boundary points B = {(z1, z2, ..., zq) : zj ∈ {0, 1}}. Hence, the sufficient conditions are β0′(p) ≥ 0 and β0′(p) + Σ_{k=1}^r βjk′(p) ≥ 0 for any subset {j1, j2, ..., jr} ⊂ {1, 2, ..., q}, which follows from conditions (1) and (2).
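The boundary argument above can be checked numerically: with a non-decreasing intercept and a non-decreasing endpoint sum, the linear combination is monotone for every intermediate z. A toy verification with q = 1 (Python/numpy; the particular β curves are illustrative):

```python
import numpy as np

# Numerical check of the sufficient conditions with q = 1: beta0 is
# non-decreasing and beta0 + beta1 is non-decreasing.  Since T1'(p) is
# linear in z, monotonicity at the endpoints z = 0 and z = 1 extends to
# every z in [0, 1] (Bauer principle).
p = np.linspace(0, 1, 200)
beta0 = p                 # non-decreasing intercept
beta1 = -0.4 * p          # decreasing, yet beta0 + beta1 = 0.6 p is monotone
for z in np.linspace(0, 1, 11):
    assert np.all(np.diff(beta0 + z * beta1) >= 0)
```

Note that β1 itself need not be monotone (here it is decreasing), mirroring the flexibility discussed in Section 4.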
1336
2 Appendix B: Example of DOSDR

Example 2: Two scalar covariates (q = 2) and a distributional predictor

We illustrate the estimation for DOSDR when there are two scalar covariates z1, z2 (q = 2) and a single distributional predictor QX(p). The DOSDR model (2) is given by QiY(p) = β0(p) + zi1β1(p) + zi2β2(p) + h(QiX(p)) + ϵi(p). The sufficient conditions (1)-(3) of Theorem 1 in this case reduce to: A) the distributional intercept β0(p) is non-decreasing; B) β0(p) + β1(p), β0(p) + β2(p), and β0(p) + β1(p) + β2(p) are non-decreasing; C) h(·) is non-decreasing. Note that condition B) illustrates that as the number of scalar covariates increases, more and more combinatorial combinations of the coefficient functions are restricted to be non-decreasing. Similar to Example 1, conditions (A)-(C) again become linear restrictions on the basis coefficients of the form Dψ ≥ 0, where the constraint matrix is the block matrix

D = [ AN   0    0    0
      AN   AN   0    0
      AN   0    AN   0
      AN   AN   AN   0
      0    0    0    AN−1 ].

As the number of restrictions increases, the parameter space becomes smaller and smaller, which can result in faster convergence of the optimization algorithm.
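The block matrix above can be assembled from first-difference matrices acting on the Bernstein coefficients. A sketch of this construction (Python/numpy; the size of the h(·) block is an assumption of this sketch, since the dimension of the h coefficients after imposing h(0) = 0 is set elsewhere in the paper):

```python
import numpy as np

def diff_matrix(n):
    """First-difference matrix A_n of size n x (n+1): A_n @ psi >= 0
    forces the Bernstein coefficients psi to be non-decreasing, which is
    sufficient for the fitted curve to be non-decreasing."""
    return np.diff(np.eye(n + 1), axis=0)

def dosdr_constraints(N):
    """Block constraint matrix D of Example 2 (q = 2): A_N acts on beta0
    and on beta0 plus every subset-sum of (beta1, beta2); A_{N-1} acts on
    the h(.) coefficients (their number, N, is an assumption here)."""
    A, Am1 = diff_matrix(N), diff_matrix(N - 1)
    Z = np.zeros_like(A)
    Zh = np.zeros((N, N))                 # zero block for the h-coefficients
    top = [np.hstack([A, Z, Z, Zh]),      # beta0
           np.hstack([A, A, Z, Zh]),      # beta0 + beta1
           np.hstack([A, Z, A, Zh]),      # beta0 + beta2
           np.hstack([A, A, A, Zh])]      # beta0 + beta1 + beta2
    bottom = np.hstack([np.zeros((N - 1, 3 * (N + 1))), Am1])
    return np.vstack(top + [bottom])

D = dosdr_constraints(N=5)
assert D.shape == (4 * 5 + 4, 3 * 6 + 5)   # 24 stacked rows, 23 coefficients
```

A quadratic-programming solver (e.g. the Goldfarb-Idnani dual method cited in the references) can then take D directly as the inequality-constraint matrix.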
1396
3 Appendix C: Estimation of Asymptotic Covariance Matrix

The DOSDR model (8) in the paper was reformulated as

QiY = Ti ψ + ϵi,

where Ti = [B0 Wi1 Wi2 ... Wiq Si]. Under suitable regularity conditions (Huang et al., 2004), √n(ψ̂ur − ψ0) can be shown to be asymptotically distributed as N(0, ∆) (this also holds for finite sample sizes if ϵ(p) is Gaussian). In practice, ∆ is unknown and we estimate it by an estimator ∆̂. We derive a sandwich covariance estimator ∆̂ corresponding to the above model. Based on the ordinary least squares optimization criterion for model (11) (of the paper), the unrestricted estimator is given by ψ̂ur = (TᵀT)⁻¹TᵀQY, where QYᵀ = (Q1Y, Q2Y, ..., QnY)ᵀ and T = [T1ᵀ, T2ᵀ, ..., Tnᵀ]ᵀ. Hence, Var(ψ̂ur) = (TᵀT)⁻¹TᵀΣT(TᵀT)⁻¹. Here Σ = Var(ϵ), which is typically unknown. We apply an FPCA-based estimation approach (Ghosal and Maity, 2022) to estimate Σ.

Let us assume (Huang et al., 2004) that the error process ϵ(p) can be decomposed as ϵ(p) = V(p) + wp, where V(p) is a smooth mean-zero stochastic process with covariance kernel G(p1, p2) and wp is white noise with variance σ². The covariance function of the error process is then given by Σ(p1, p2) = cov{ϵ(p1), ϵ(p2)} = G(p1, p2) + σ²I(p1 = p2). For data observed on a dense and regular grid P, the covariance matrix of the residual vector ϵi is Σm×m, the covariance kernel Σ(p1, p2) evaluated on the grid P = {p1, p2, ..., pm}. We could estimate Σ(·,·) nonparametrically using functional principal component analysis (FPCA) if the original residuals ϵij were available. Given the ϵi(pj), FPCA (Yao et al., 2005) can be used to obtain φ̂k(·), λ̂k and σ̂² to form an estimator of Σ(p1, p2) as

Σ̂(p1, p2) = Σ_{k=1}^K λ̂k φ̂k(p1) φ̂k(p2) + σ̂²I(p1 = p2),

where K is chosen large enough that the percentage of variance explained (PVE) by the selected eigencomponents exceeds some pre-specified value such as 99%.

In practice, we do not have the original residuals ϵij. Hence we fit the unconstrained DOSDR model (11) and obtain the residuals eij = QiY(pj) − Q̂iY(pj). Then, treating the eij as our original residuals, we can obtain Σ̂(p1, p2) and Σ̂m×m using the FPCA approach outlined above. Then Vâr(ϵ) = Σ̂ = diag{Σ̂m×m, Σ̂m×m, ..., Σ̂m×m}. Ghosal and Maity (2022) discuss consistency of Σ̂ under standard regularity conditions. Hence a consistent estimator of the covariance matrix is given by Vâr(ψ̂ur) = (TᵀT)⁻¹TᵀΣ̂T(TᵀT)⁻¹. In particular, ∆̂n = ∆̂/n = côv(ψ̂ur) = (TᵀT)⁻¹TᵀΣ̂T(TᵀT)⁻¹.
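The sandwich form above is a single matrix expression once T and Σ̂ are assembled. A minimal sketch with a sanity check (Python/numpy; the FPCA step that produces Σ̂ is not shown):

```python
import numpy as np

def sandwich_cov(T, Sigma_hat):
    """Sandwich estimator (T'T)^{-1} T' Sigma_hat T (T'T)^{-1} of the
    covariance of the stacked unconstrained least-squares estimator."""
    TtT_inv = np.linalg.inv(T.T @ T)
    return TtT_inv @ T.T @ Sigma_hat @ T @ TtT_inv

# sanity check: with Sigma = sigma^2 I the sandwich collapses to the
# classical OLS covariance sigma^2 (T'T)^{-1}
rng = np.random.default_rng(1)
T = rng.normal(size=(50, 4))
assert np.allclose(sandwich_cov(T, 2.0 * np.eye(50)),
                   2.0 * np.linalg.inv(T.T @ T))
```

In the paper's setting, Sigma_hat is block-diagonal with one Σ̂m×m block per subject, reflecting independence across subjects but correlation across quantile levels within a subject.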
1447
4 Appendix D: Algorithm 1 for Joint Confidence Band

Algorithm 1: Joint confidence band of β_1^0(p)

1. Fit the unconstrained model and obtain the unconstrained estimator ψ̂ur = argmin_{ψ ∈ R^{Kn}} Σ_{i=1}^n ||QiY − Tiψ||²₂.

2. Fit the constrained model and obtain the constrained estimator ψ̂r = argmin_{ψ ∈ ΘR} Σ_{i=1}^n ||QiY − Tiψ||²₂. Obtain the constrained estimator of β_1^0(p) as β̂1r(p) = ρKn(p)′ β̂1r.

3. Let ∆̂n be an estimate of the asymptotic covariance matrix of the unconstrained estimator, given by ∆̂n = ∆̂/n = côv(ψ̂ur).

4. For b = 1 to B:
   - generate Zb ~ N_{Kn}(ψ̂ur, ∆̂n);
   - compute the projection of Zb as ψ̂r,b = argmin_{ψ ∈ ΘR} ||ψ − Zb||²_Ω̂.
   End for.

5. For each generated sample ψ̂r,b, calculate the estimate of β_1^0(p) as β̂1r,b(p) = ρKn(p)′ β̂1r,b (b = 1, ..., B). Compute Var(β̂1r(p)) based on these samples.

6. For b = 1 to B:
   - calculate ub = max_{p ∈ P} |β̂1r,b(p) − β̂1r(p)| / √Var(β̂1r(p)).
   End for.

7. Calculate q_{1−α}, the (1 − α) empirical quantile of {ub}_{b=1}^B.

8. The 100(1−α)% joint confidence band for β_1^0(p) is given by β̂1r(p) ± q_{1−α} √Var(β̂1r(p)).
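Steps 5-8 of Algorithm 1 are a standard sup-t band construction once the simulated coefficient curves are in hand. A minimal sketch (Python/numpy; the constrained-projection step of step 4 is omitted, and the simulated curves here are synthetic):

```python
import numpy as np

def joint_band(draws, beta_hat, alpha=0.05):
    """Sup-t joint confidence band (steps 5-8 of Algorithm 1) from
    simulated coefficient curves; `draws` holds one curve per row.
    The constrained-projection step of the algorithm is omitted here."""
    sd = draws.std(axis=0, ddof=1)                    # pointwise Var^{1/2}
    u = (np.abs(draws - beta_hat) / sd).max(axis=1)   # u_b statistics
    q = np.quantile(u, 1 - alpha)                     # q_{1-alpha}
    return beta_hat - q * sd, beta_hat + q * sd

rng = np.random.default_rng(2)
beta_hat = np.sin(np.linspace(0, np.pi, 20))
draws = beta_hat + rng.normal(scale=0.1, size=(500, 20))
lo, hi = joint_band(draws, beta_hat)
assert np.all(lo <= beta_hat) and np.all(beta_hat <= hi)
```

Because a single quantile q of the max statistic scales the whole band, the coverage statement holds simultaneously over all p in the grid, not just pointwise.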
1506
5 Appendix E: Bootstrap Test for Global Distributional Effects

A practical question of interest in the DOSDR model is to directly test for the global distributional effect of the scalar covariates Zj, or to test for the distributional effect of the distributional predictor QX(p). In this section, we illustrate a nonparametric bootstrap test based on our proposed estimation method, which also easily lends itself to the required shape constraints of the regression problem. In particular, we obtain the residual sums of squares of the null and the full model and form the F-type test statistic

TD = (RSSN − RSSF) / RSSF. (2)

Here RSSN and RSSF are the residual sums of squares under the null and the full model, respectively. For example, let us consider testing

H0: βr(p) = 0 for all p ∈ [0, 1]  versus  H1: βr(p) ≠ 0 for some p ∈ [0, 1].

Let r = q without loss of generality. The residual sum of squares for the full model is given by RSSF = Σ_{i=1}^n ||QiY − B0β̂0 − Σ_{j=1}^q Wijβ̂j − Siθ̂||²₂, where the estimates are obtained from the optimization criterion (9) in the paper, with the constraint DFψ ≥ 0 (denoting the constraint matrix for the full model by DF). Similarly, RSSN = Σ_{i=1}^n ||QiY − B0β̂0 − Σ_{j=1}^{q−1} Wijβ̂j − Siθ̂||²₂, where the estimates are again obtained from (9) with the constraint DNψ ≥ 0. Note that in this case the constraint matrix DN is essentially a submatrix of DF, as the conditions for monotonicity in (1)-(3) (Theorem 1) for the reduced model are a subset of the original constraints for the full model. The null distribution of the test statistic TD is nonstandard; hence we use a residual bootstrap to approximate it. The complete bootstrap procedure for testing the distributional effect of a scalar predictor is presented in Algorithm 2 below. A similar strategy could be employed for testing the distributional effect of a distributional predictor or of multiple scalar predictors.
1548
Algorithm 2: Bootstrap algorithm for testing the distributional effect of a scalar predictor

1. Fit the full DOSDR model in the paper using the optimization criterion
   ψ̂F = argmin_ψ Σ_{i=1}^n ||QiY − B0β0 − Σ_{j=1}^q Wijβj − Siθ||²₂  s.t.  DFψ ≥ 0,
   and calculate the residuals ei(pl) = QiY(pl) − Q̂iY(pl), for i = 1, 2, ..., n and l = 1, 2, ..., m.

2. Fit the reduced model corresponding to H0 (the null) and estimate the parameters using the minimization criterion
   ψ̂N = argmin_ψ Σ_{i=1}^n ||QiY − B0β0 − Σ_{j=1}^{q−1} Wijβj − Siθ||²₂  s.t.  DNψ ≥ 0.
   Denote the estimates of the distributional effects by β̂_j^N(p) for j = 0, 1, ..., q − 1 and ĥ^N(x).

3. Compute the test statistic TD in (2) based on these null and full model fits; denote this by Tobs.

4. Resample B sets of bootstrap residuals {e*_{b,i}(p)}_{i=1}^n from the residuals {ei(p)}_{i=1}^n obtained in step 1.

5. For b = 1 to B:

6. Generate the distributional response under the reduced DOSDR model as
   Q*_{b,iY}(p) = β̂_0^N(p) + Σ_{j=1}^{q−1} zij β̂_j^N(p) + ĥ^N(QiX(p)) + e*_{b,i}(p).

7. Given the bootstrap data set {QiX(p), Q*_{b,iY}(p), z1, z2, ..., zq}_{i=1}^n, fit the null and the full model to compute the test statistic T*_b.

8. End for.

9. Calculate the p-value of the test as p̂ = Σ_{b=1}^B I(T*_b ≥ Tobs) / B.
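The statistic and p-value computations in steps 3 and 9 are simple once the constrained fits supply the residual sums of squares. A minimal sketch with toy numbers (Python/numpy; the RSS values are illustrative, and the constrained fitting itself is not shown):

```python
import numpy as np

def f_type_stat(rss_null, rss_full):
    """F-type statistic T_D = (RSS_N - RSS_F) / RSS_F of equation (2)."""
    return (rss_null - rss_full) / rss_full

def bootstrap_pvalue(t_obs, t_boot):
    """Step 9 of Algorithm 2: fraction of bootstrap statistics that are
    at least as large as the observed statistic."""
    return float(np.mean(np.asarray(t_boot, float) >= t_obs))

t_obs = f_type_stat(12.0, 10.0)            # toy RSS values
assert np.isclose(t_obs, 0.2)
assert bootstrap_pvalue(t_obs, [0.1, 0.25, 0.05, 0.3]) == 0.5
```

Since the bootstrap responses are generated under the reduced model, the empirical distribution of T*_b approximates the null distribution of TD, and the one-sided comparison in step 9 gives the p-value.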
1613
6 Supplementary Tables

Table S1: Average Wasserstein distance (standard error) between true and predicted quantile functions in the test set over 100 Monte-Carlo replications, Scenario A1.

Sample size   L=200             L=400
n = 200       0.2587 (0.0154)   0.1882 (0.0138)
n = 300       0.2568 (0.0132)   0.1858 (0.0105)
n = 400       0.2554 (0.0141)   0.1865 (0.0120)
1631
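The evaluation metric in Table S1 can be illustrated with a short sketch. For one-dimensional distributions the 1-Wasserstein distance equals the L1 distance between quantile functions, so it can be approximated by averaging |Q_true(p) − Q_pred(p)| over an equally spaced grid of quantile levels; the grid size and the two example distributions below are illustrative, not taken from the paper.

```python
# Sketch: 1-Wasserstein distance between two distributions computed from
# their quantile functions on a uniform grid of quantile levels.
import numpy as np

def wasserstein_from_quantiles(Q1, Q2):
    """Approximate int_0^1 |Q1(p) - Q2(p)| dp on a uniform grid."""
    return float(np.mean(np.abs(Q1 - Q2)))

p = np.linspace(0.005, 0.995, 200)           # grid of quantile levels
rng = np.random.default_rng(1)
Q_true = np.quantile(rng.normal(0.0, 1.0, 5000), p)
Q_pred = np.quantile(rng.normal(0.1, 1.0, 5000), p)
d = wasserstein_from_quantiles(Q_true, Q_pred)
print(round(d, 3))  # roughly the 0.1 mean shift between the two samples
```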
Table S2: Coverage of the projection-based 95% joint confidence interval for β1(p), for various choices of the order of the Bernstein polynomial (BP) basis, scenario A1, based on 100 M.C replications with L = 200. The average width of the joint confidence interval is given in parentheses. The average choices of N from cross-validation for this scenario are highlighted in bold.

BP order (N)    Sample size (n=200)    Sample size (n=300)    Sample size (n=400)
2               0.92 (0.29)            0.90 (0.24)            0.90 (0.20)
3               0.92 (0.31)            0.94 (0.25)            0.96 (0.22)
4               0.93 (0.33)            0.93 (0.26)            0.93 (0.23)
Table S3: Descriptive statistics of age and BMI for the complete, male and female samples in the BLSA analysis.

Characteristic    Complete (n=890)     Male (n=432)        Female (n=458)      P value
                  Mean      SD         Mean      SD        Mean      SD
Age               66.66     13.35      68.03     13.41     65.37     13.17     0.003
BMI (kg/m2)       27.40     4.96       27.52     4.23      27.28     5.57      0.45
Table S4: Results from a multiple linear regression model of mean heart rate on age, sex (Male), BMI and mean activity count. Reported are the estimated fixed effects along with their standard errors and P-values.

Dependent variable: Mean heart rate
                  Value     Std. Error    P-value
Intercept         82.47     3.458         < 2 × 10−16 ***
Age               −0.18     0.026         < 1.2 × 10−11 ***
Sex               −4.19     0.659         < 3.2 × 10−10 ***
BMI               0.18      0.067         0.0091 **
Mean activity     2.44      0.697         0.0005 ***
Observations      890
Adjusted R2       0.142
Note: * p<0.05; ** p<0.01; *** p<0.001
7 Supplementary Figures
[Figure S1: Displayed are the estimated power curves for simulation scenario A2. The parameter d controls the departure from the null, and the power curves for n ∈ {200, 300, 400} are shown by solid, dashed and dotted lines. The dashed horizontal line at the bottom corresponds to the nominal level of α = 0.05.]
[Figure S2: Displayed are estimates of the additive effect γ(p) = β0(p) + h(qx(p)) (solid) at qx(p) = (1/n) Σ_{i=1}^{n} QiX(p) and its estimate γ̂(p) averaged over 100 M.C replications (dashed), along with point-wise 95% confidence intervals (dotted) for scenario B, n = 400. Left: estimates from the proposed DOSDR method. Right: isotonic regression method with PAVA.]
[Figure S3: Subject-specific quantile functions of heart rate and log-transformed activity counts during the 8 a.m.–8 p.m. period. Color profiles show four randomly chosen participants.]
[Figure S4: Top: LOOCV predictions of quantile functions of heart rate from the DOSDR method based on age, sex, BMI and PA distribution. Bottom: LOOCV predictions of quantile functions of heart rate from the PAVA method (Ghodrati and Panaretos, 2021) based on PA distribution.]
References

Ghodrati, L. and V. M. Panaretos (2021). Distribution-on-distribution regression via optimal transport maps. arXiv preprint arXiv:2104.09418.

Ghosal, R. and A. Maity (2022). A score based test for functional linear concurrent regression. Econometrics and Statistics 21, 114–130.

Huang, J. Z., C. O. Wu, and L. Zhou (2004). Polynomial spline estimation and inference for varying coefficient models with longitudinal data. Statistica Sinica 14, 763–788.

Yao, F., H.-G. Müller, and J.-L. Wang (2005). Functional linear regression analysis for longitudinal data. The Annals of Statistics, 2873–2903.
5dFIT4oBgHgl3EQf7yth/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
8NE5T4oBgHgl3EQfQg5R/content/tmp_files/2301.05513v1.pdf.txt ADDED
@@ -0,0 +1,1532 @@
Exploring the substrate-driven morphological changes in Nd0.6Sr0.4MnO3 thin films

R S Mrinaleni 1,2, E P Amaladass 1,2,*, S Amirthapandian 1,2, A. T. Sathyanarayana 1,2, Jegadeesan P 1,2, Ganesan K 1,2, R M Sarguna 1,2, P. N. Rao 3, Pooja Gupta 3,4, T Geetha Kumary 1,2, and S. K. Rai 3,4, Awadhesh Mani 1,2

1 Material Science Group, Indira Gandhi Centre for Atomic Research, Kalpakkam, 603102, India
2 Homi Bhabha National Institute, Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, India
3 Synchrotrons Utilisation Section, Raja Ramanna Centre for Advanced Technology, PO RRCAT, Indore, Madhya Pradesh 452013, India
4 Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai, Maharashtra 400094, India

*Corresponding author: [email protected]
ABSTRACT

Manganite thin films are promising candidates for studying strongly correlated electron systems. Understanding the growth- and morphology-driven changes in the physical properties of manganite thin films is vital for their applications in oxitronics. This work reports the morphological, structural, and electrical transport properties of nanostructured Nd0.6Sr0.4MnO3 (NSMO) thin films fabricated using the pulsed laser deposition technique. Scanning electron microscopy (SEM) imaging of the thin films revealed two prominent surface morphologies: a granular and a unique crossed-nano-rod-type morphology. From X-ray diffraction (XRD) and atomic force microscopy (AFM) analysis, we found that the observed nanostructures resulted from altered growth modes occurring on the terraced substrate surface. Furthermore, investigations of the electrical transport properties of the thin films revealed that the films with crossed-nano-rod-type morphology showed a sharp resistive transition near the metal-to-insulator transition (MIT). An enhancement of the temperature coefficient of resistance (TCR) by up to one order of magnitude was also observed compared to the films with granular morphology. Such enhancement in TCR % by tuning the morphology makes these thin films promising candidates for developing oxide-based temperature sensors and detectors.
INTRODUCTION

Nd0.6Sr0.4MnO3 (NSMO) belongs to the class of magnetic oxides RE1-xAxMnO3 (where RE = La3+, Nd3+, Pr3+, Sm3+, and A = Ca2+, Sr2+, Ba2+, etc.) with perovskite (ABO3) structure, which exhibits a variety of magnetic phases on tuning the dopant concentration x (x = 0 to 0.9)1–3. Manganites are known for exotic properties such as the colossal magnetoresistance (CMR) phenomenon4, the metal–insulator transition (MIT) accompanied by a magnetic transition from the paramagnetic (PM) to the ferromagnetic (FM) state5, half-metallicity6, and tuneable in-plane and out-of-plane magnetic anisotropy7. These properties are exploited for potential spintronics applications such as spin injection devices8, magnetic tunnel junctions9–11, and magnetic storage devices (MRAMs)12. In recent times, perovskite-manganite systems have become ideal oxide candidates for developing superlattices, self-assembled nano-arrays13, nano-ribbons14, nano-wires, vertically aligned nanocomposite (VAN) thin films15–19, etc., which offer enhanced low-field magnetoresistance (LFMR) and switchable magnetic anisotropy, and allow the study of other interesting interface effects such as magnetic exchange bias20. A focus on growth dynamics is required to tune exclusive nano-architectures in the thin film, as it offers additional handles to tailor physical properties such as a high CMR %, high Curie and MIT temperatures, a high temperature coefficient of resistance (TCR %), and an enhanced magnetoresistive (MR) response. The manganite system is highly sensitive to external perturbations due to the strong coupling between the spin, charge, and lattice degrees of freedom21,22. This poses a major challenge in obtaining epitaxial/patterned thin films for useful applications.

The pulsed laser deposition (PLD) technique has been extensively used to fabricate oxide-based manganite thin films, because it offers good stoichiometric transfer of the target material onto the substrate in addition to deposition in an oxygen background. Various studies have been carried out to obtain epitaxial thin films by tuning deposition parameters such as the oxygen partial pressure, substrate temperature, laser energy density, and repetition rate, which affect growth and physical properties23,24. Additionally, the growth of the thin film is influenced by the substrate: the strain imposed by the substrate affects the surface morphology and microstructure of the manganite thin film. Different methodologies, such as (i) varying the substrates for different lattice matching25–27, (ii) choosing substrates with different crystallographic orientations and corresponding chemical terminations14, (iii) varying the thickness of the thin films28, and (iv) high-temperature annealing17, are adopted to tune the strain and morphology of the thin films. Therefore, thin films with unique morphology and long-range ordered nanostructures can be obtained by fine-tuning the growth parameters. Compared to previous works on VAN and other nanostructures of the popular manganite system La-Sr-Mn-O, we have observed a granular nanostructure and another distinct nanostructure with crossed nano-rods in our thin films. We have synthesized NSMO thin films using the PLD technique on single-crystal SrTiO3 (100)-oriented substrates (STO). The effects of PLD parameters and annealing conditions on the surface morphology were investigated. Using SEM, AFM, and XRD techniques, the growth mechanism leading to a specific type of nano-structuring in the NSMO thin films is studied. Additionally, the morphology-driven changes in the temperature dependence of resistivity are investigated, and we observed a signature trend in the MIT corresponding to the particular morphology.
EXPERIMENTAL METHODS

The NSMO thin films were fabricated by the PLD technique using a commercial NSMO pellet as the target. Before deposition, the SrTiO3 (STO) (1 0 0) single-crystal substrates were cleaned by boiling in de-ionized (DI) water for 3 minutes, followed by ultra-sonication in DI water, acetone, and iso-propyl alcohol, and rinsing in DI water. With this water-leaching procedure, the SrO terminations present on the substrate surface can be effectively dissolved and removed with DI water at elevated temperatures > 60 °C followed by ultra-sonication. A KrF excimer laser source (λ = 248 nm), operated at a laser energy density of 1.75 J/cm2 and a repetition rate of 3 Hz, was used to ablate the target. The films were deposited in an oxygen partial pressure of 0.36 mbar with the substrate temperature fixed at 750 °C. After deposition, the films were annealed in situ at 750 °C for 2 h, with the PLD chamber maintained at an O2 background pressure of 0 to 1 bar. Further, the films were annealed ex situ in a tube furnace at 950 °C in an oxygen atmosphere with a flow rate of ~ 20 sccm for 2 h.

The surface morphology of the thin films was examined using a scanning electron microscope (SEM, Carl Zeiss Crossbeam 340), and the images were collected in InLens-duo mode at 3–5 kV. Atomic force microscopy (AFM) was used for 2D and 3D visualization of the surfaces of the substrates and the films. XRD studies were carried out at the Engineering Applications Beamline, BL-02, Indus-2 synchrotron source, India, using a beam energy of 15 keV for the structural characterization of the films29. Grazing incidence (GI) and ω-2θ scans were performed, and data were collected using a Dectris detector (MYTHEN2 X 1K) in reflection geometry. In the GI scan, the incident angle is kept fixed at ω = 0.5°, and the detector moves over the given 2θ range. The monochromatic high-resolution mode of the beamline was used, keeping the beam energy at 15 keV (λ = 0.826 Å). The peaks were indexed with reference to the ICDD data30 (ICDD number 01-085-6743).
RESULTS AND DISCUSSION

1. Morphology studies of the nanostructured thin films:

The NSMO thin films prepared under the above conditions possessed two prominent surface morphologies – granular and rod-type. Two representative films with granular nanostructure and crossed-rod nanostructure were chosen to study the physical properties. These two systems will be referred to as NS-G and NS-R, where NS stands for NSMO thin film, and ‘G’/‘R’ stands for the type of morphology. The thickness of the NS-G and NS-R thin films is determined to be ~ 100 nm by cross-sectional SEM.

Figure 1(a) shows the SEM image of the NS-G thin film with granular morphology. The film is uniformly covered with multifaceted grains. Figure 1(c)(i) shows the average grain size, estimated to be 38.9 nm. Figure 1(b) shows the SEM image of the NS-R thin film with its unique surface morphology: the thin film surface is uniformly covered with nano-rods crossed at right angles, embedded in a matrix of NSMO containing square/rectangular pits. In NS-R, the average rod length is estimated to be 188 nm with an average width of 39.6 nm, as shown in Figure 1(c), (ii) and (iii). Further, AFM measurements have been carried out on the NS-G and NS-R thin films. The 2D and 3D AFM scans in Figure 2 show columnar/island-type features in the NS-G thin film and crossed-rod features in the NS-R thin film.

[Figure 1: Scanning electron microscopy images of NSMO thin films on STO. a) NS-G – granular morphology. b) NS-R – self-aligned crossed-nano-rod morphology. c) Histograms illustrating the grain size calculation for the NS-G and NS-R thin films: (i) average grain size estimated for NS-G (38.9 ± 0.1 nm, lognormal fit); (ii) average rod length estimated for NS-R (188.7 ± 1.7 nm); (iii) average rod width estimated for NS-R (39.6 ± 0.2 nm).]
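The grain-size statistics in Figure 1(c) come from lognormal fits to the measured size histograms. A minimal sketch of that analysis step is below; the grain sizes are synthetic stand-ins drawn around the ~39 nm average quoted for NS-G, since the raw SEM measurements are not published.

```python
# Sketch: maximum-likelihood lognormal fit to grain sizes and the implied
# mean grain size. The 'sizes' array is synthetic, not measured data.
import numpy as np

rng = np.random.default_rng(42)
sizes = rng.lognormal(mean=np.log(38.9), sigma=0.25, size=400)  # nm

# ML lognormal fit: mu and s are the moments of log(size)
log_sizes = np.log(sizes)
mu, s = log_sizes.mean(), log_sizes.std()
mean_size = float(np.exp(mu + s**2 / 2))    # mean of the fitted lognormal
print(f"fitted mean grain size ~ {mean_size:.1f} nm")
```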
2. Structural analysis of the thin film:

The bulk NSMO compound has an orthorhombic crystal structure belonging to the Pbnm space group. In the pseudo-cubic (pc) representation, the unit cell parameter is given by apc ≈ c/2 ≈ 3.849 Å. The substrate STO has a cubic crystal structure with a lattice constant aSTO = 3.905 Å. NSMO grown on the STO substrate therefore experiences a tensile strain of 1.4 % due to the lattice mismatch.

[Figure 2: a), b) 2D and c), d) 3D AFM scans of the grain-type NSMO thin film (left) and the rod-type NSMO thin film (right) on the STO substrate.]

The GI-XRD and high-resolution XRD (HR-XRD) reflections of the films are shown in Figure 3(a) and (b). The presence of multiple reflections in the GI-XRD scan of NS-G in Figure 3(a) reveals that the granular thin film is polycrystalline. In NS-R, the reflections of NSMO are absent, as seen in Figure 3(b). This may be due to its out-of-plane orientation with respect to the substrate. At the high 2θ angle ≈ 39.1°, the (3 1 0) STO plane gets aligned, resulting in a strong STO (3 1 0) reflection along with the NSMO (2 4 0) peak. This shows that the films are well oriented, mirroring the substrate. Though NS-G is oriented, the crystallographic difference between NS-G and NS-R is attributed to the type of nano-structuring in the films.

[Figure 3: GI-XRD scans of NSMO thin films a) NS-G, b) NS-R, indexed using ICDD data (* – STO peaks).]

3. Effect of ex-situ annealing on morphology:

To gain insight into the type of growth across these films, we compare the morphological changes in the in-situ annealed and ex-situ annealed samples in Figure 4. In the granular thin films, no significant changes have been observed between in-situ and ex-situ annealing, apart from a minor increase in grain size, as seen in Figure 4(a). In contrast, the sample with rod-type morphology obtained after ex-situ annealing, Figure 4(d), exhibits facetted droplets embedded in a matrix with rectangular holes, while rod features appear already in the in-situ annealed case, Figure 4(c). It is evident that once the initial growth mode is set, the ex-situ annealing aids in increasing grain size and relieving the strain in the thin films, in addition to decreasing oxygen defects in NSMO thin films23. We inspect the HR-XRD scans of the NS-G and NS-R thin films in the in-situ and ex-situ annealed cases to verify this claim.

Figure 5(a) and (b) show the HR-XRD scans performed over a 2θ range of 10°–40° for the films NS-G and NS-R after in-situ and ex-situ annealing. It is observed that the (0 0 4) NSMO peak is absent in the in-situ annealed NS-G thin film, whereas upon ex-situ annealing, NS-G shows improved texturing with the (0 0 4) NSMO peak close to the (0 0 2) substrate peak. In the case of NS-R, along with the substrate’s (0 0 2) reflection, the corresponding (0 0 l) pseudo-cubic reflections from NSMO are present with significant intensity even in the in-situ annealed condition. Further, comparing the HR-XRD scans of NS-G and NS-R after ex-situ annealing, the NS-R thin film has increased relative intensity compared with NS-G.

[Figure 4: Illustration of the effect of ex-situ annealing on NSMO thin films. a) SEM image of the in-situ annealed granular thin film. b) SEM image of the granular thin film after ex-situ annealing. c) SEM image of the thin film after in-situ annealing showing rods and squared blocks in the encircled regions. d) SEM image of the same thin film after ex-situ annealing showing crossed-rod-type morphology.]

[Figure 5: High-resolution XRD scans of NSMO thin films around the STO (2 0 0) reflection. Inset: fine scan of NSMO (0 0 4) of the NS-R sample showing double peaks P1 and P2 (* – STO peaks).]

Therefore, NS-R is highly oriented and more crystalline, which can be attributed to its epitaxial nature of growth. Additionally, the HR-XRD scan of the NS-R thin film after ex-situ annealing shows a doublet feature at its (0 0 4) reflection. A high-resolution fine scan was performed on the NS-R thin film to confirm the double peaks. Referring to the literature, we found that a similar doublet feature has been reported due to strain relaxation in PSMO thin films on STO substrates31. By fitting the peaks using the pseudo-Voigt function, as shown in figure S1 of the supplementary information, the peaks were de-convoluted to evaluate the out-of-plane lattice parameter (tabulated in table T1 of the supplementary information). The first peak was at 2θ = 24.88° with a c-lattice constant of 7.66 Å, and the second peak was at 2θ = 24.97° with a c-lattice constant of 7.63 Å. The reduction in the c-lattice constant of the second peak shows that there is compression of the lattice along the c-axis because of the tensile strain experienced by the thin film due to the substrate. Such a splitting of the peak was absent in films of thickness < 80 nm, indicating that this double peak is due to partial strain relaxation in the thicker film, initiated by ex-situ annealing.

Thus, from the detailed XRD studies and the discussion in the previous section, it is inferred that the difference in initial growth mode and the subsequent ex-situ annealing have prominently tuned the resulting surface morphology of the NSMO thin films. The granular thin film NS-G has multiple orientations similar to a polycrystalline system, whereas NS-R shows improved crystallinity and orientation mirroring the substrate. The parameters affecting the initial growth are discussed in the upcoming section.
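The structural numbers above can be checked with a back-of-the-envelope calculation: the tensile misfit between pseudo-cubic NSMO (apc = 3.849 Å) and STO (aSTO = 3.905 Å), and the out-of-plane c-lattice constants recovered from the (0 0 4) doublet positions via Bragg's law with λ = 0.826 Å. The computed values reproduce the quoted 1.4 % misfit and the ~7.66/7.63 Å doublet to within rounding.

```python
# Misfit and Bragg's-law check of the quoted structural parameters.
import math

a_pc, a_sto = 3.849, 3.905
misfit = (a_sto - a_pc) / a_pc * 100         # tensile misfit in percent
print(f"misfit = {misfit:.2f} %")            # ~1.45 %, quoted as 1.4 %

def c_from_2theta(two_theta_deg, lam=0.826):
    """c for the (0 0 4) reflection: c = 4 * d_004, with d from Bragg's law."""
    d_004 = lam / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))
    return 4.0 * d_004

c1 = c_from_2theta(24.88)                    # doublet peak P1
c2 = c_from_2theta(24.97)                    # doublet peak P2
print(f"c(P1) = {c1:.2f} A, c(P2) = {c2:.2f} A")  # ~7.67 A and ~7.64 A
```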
4. Effect of PLD parameters in tuning the morphology:

PLD parameters like laser energy density, oxygen partial pressure, and substrate temperature highly influence the type of growth, since changes in these parameters lead to variations in the energy of the ad-atoms deposited on the substrate. To understand the role of O2 partial pressure and laser energy density during the deposition, we have prepared NSMO thin films by varying these parameters. Post deposition, the films were in-situ annealed at 750 °C for 2 h in an oxygen background pressure of 1 bar. Ex-situ annealing was carried out subsequently.

Figure 6 presents the morphology of films deposited at laser energy densities varied from 1 to 1.75 J/cm2; during deposition, the oxygen partial pressure and substrate temperature were maintained at 0.36 mbar and 750 °C. Figure 7 presents the morphology of films obtained at oxygen partial pressures of 0.3 mbar, 0.4 mbar, and 0.5 mbar, while the laser energy density and substrate temperature were maintained at 1 J/cm2 and 750 °C during deposition, respectively.

We found that changes in oxygen partial pressure and laser energy density did not influence the surface morphology, as both types of morphology have been observed in different deposition runs with the same parameters. Further, as we have obtained granular and rod-type films at the same substrate temperature of 750 °C, the role of substrate temperature is also ruled out. Thus, irrespective of changes in the parameters mentioned above, thin films of either granular or crossed-rod nanostructure were obtained. Therefore, we suspect that the substrate, and the strain it offers, plays a vital role in altering the growth mode of the thin film.

[Figure 6: SEM images of NSMO thin films with granular morphology (a), (b), and (c) and rod morphology (d), (e), and (f), obtained at corresponding laser energy densities of 1 J/cm2, 1.5 J/cm2, and 1.75 J/cm2.]
+ 5. Effect of miscut angle in tuning the morphology:
567
+ The commercial STO substrates used here are one-sided polished, and their surface was
568
+ found to have a miscut. In commercially purchased wafers, the occurrence of a miscut in the
569
+ range of 0.05o-0.3o is well known and unavoidable due to mechanical cutting and polishing of
570
+ single crystal STO wafers14,32. In Figure 7(a), the as-received STO substrate, after cleaning,
571
+ shows clear terrace features in the AFM scan, confirming the presence of miscut on the
572
+ substrate surface. In a given wafer, the miscut can be in-plane or out-of-plane or both (some
573
+ works refer to this as miscut directions φ and θ instead of in-plane and out-of-plane,
574
+ respectively). The miscut angle and direction can alter the growth mode as the lattice strain is
575
+ anisotropic along the substrate surface and step edges6, thus resulting in different surface
576
+ morphology by forming anisotropic structural domains33. Several works are available in
577
+ literature 33–35 on the growth of manganite thin film on STO substrate with miscut. These
578
+ reports claim that the value of the miscut angle and appropriate adjustments in growth
579
+ conditions can control the number of structural domains in the thin film. As we have already
580
+ ruled out the possibility of growth conditions influencing the resulting morphology, we tried
581
+ to evaluate the value of miscut present in our STO substrates to see if it has affected the
582
+ resulting morphology.
583
Figure 7: SEM images of NSMO thin films prepared at oxygen partial pressures of 0.3 mbar, 0.4 mbar, and 0.5 mbar: (a)-(c) granular NSMO thin films and (d)-(f) films with rod morphology at the corresponding oxygen partial pressures (scale bars: 200 nm).
601
+
602
To determine the miscut of the substrates, we followed an XRD protocol from the literature36, carried out in a Bruker D8 laboratory XRD setup. According to the protocol, a low-incident-angle (~0.2°) rocking scan was first performed to ensure that the sample was aligned with the X-ray beam. This optimized the tilt of the sample holder, and the offset in the 2θ value (~0.4°) was noted as ζ. Next, a rocking scan was performed around the (200) peak of STO (46.483°), and phi and chi scans were carried out to orient the wafer. The rocking scan around the (200) STO peak was then repeated with the X-ray tube position fixed. Finally, a detector scan was performed around the (200) STO peak, and this time the offset in 2θ was noted as ζ′. The difference δζ between ζ and ζ′ gives an estimate of the miscut. Next, the sample was rotated by 90°, and the scans mentioned above were repeated in the same order; the difference between the offsets obtained this time was denoted δξ. Finally, the out-of-plane miscut angle was evaluated using equation (1). After determining the miscut on various STO wafers, we found that the out-of-plane miscut angle varies from 0.13° up to 0.48°.

θ_out-of-plane = arctan[√(tan²(δζ) + tan²(δξ))]   (1)
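Equation (1) is straightforward to evaluate numerically; the sketch below (the function name is ours, not from the paper) converts the two offset differences, measured 90° apart, into the out-of-plane miscut angle:

```python
import math

def out_of_plane_miscut(delta_zeta_deg, delta_xi_deg):
    """Out-of-plane miscut angle from equation (1):
    theta = arctan(sqrt(tan^2(delta_zeta) + tan^2(delta_xi))).
    All angles are in degrees."""
    dz = math.radians(delta_zeta_deg)
    dx = math.radians(delta_xi_deg)
    # hypot(tan dz, tan dx) == sqrt(tan^2 dz + tan^2 dx)
    return math.degrees(math.atan(math.hypot(math.tan(dz), math.tan(dx))))
```

For the small angles involved here the tangents are nearly linear, so the result is close to the quadrature sum of the two offset differences, consistent with the 0.13°-0.48° range quoted above.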
617
Table T1: Morphology of NSMO thin films obtained on STO substrates with different values of miscut (the morphology columns of the original table contain SEM images; scale bars: 100 nm).

Sample | Miscut angle | Morphology
NS-G   | 0.48°        | Granular
G1     | 0.31°        | Granular
G2     | 0.30°        | Granular
NS-R   | 0.31°        | Rod type
R1     | 0.19°        | Rod type
R2     | 0.25°        | Rod type
654
We see from Table T1 that both granular and rod-type morphologies were observed on substrates with miscut angles varying from 0.13° up to 0.48°. Sample G1, with granular morphology, and NS-R, with rod-type morphology, possess the same miscut angle of ~0.3°. This is very interesting, as the value of the miscut angle has evidently not determined the different growth modes present in our samples. Therefore, to comprehend the resulting morphology, we further investigated the type of growth occurring on the terraced surface.
660
6. Thin film growth on the terraced surface:

A miscut on the substrate is useful for epitaxial thin films37, as the steps and terrace edges act as nucleation centres and result in a step-flow growth mode38. However, the actual processes governing step-flow growth are more complex. The basic parameters driving this type of growth are the diffusion coefficient and the height of the Ehrlich-Schwoebel (ES) barrier39. The diffusion of adatoms on the surface and their incorporation into the crystal structure govern the formation of the different surface morphologies. Additionally, the ES barrier at the terrace/step edges introduces an asymmetry in the potential energy at the edge: an adatom reaching a terrace either nucleates there or descends the step, depending on the ES barrier height. Similarly, an adatom arriving below a step experiences an inverse step barrier, which prevents particles from attaching to the step from below.

If the barrier height is appropriate, adatoms can attach themselves to the step edges, resulting in step-flow growth. However, the existence of the barrier makes growth on stepped surfaces highly unstable, producing modified surface features such as step meandering, nano-column/wire formation, spiral/mound formation, and faceted pits. In recent work by Załuska-Kotur et al.40, simulations using a (2+1)D Cellular Automaton model produced different patterns of surface morphology on vicinal surfaces. In the simulation, different processes occur depending on the values assigned to the barrier height at the step edges: an adatom can either attach to the step to build the crystal, by jumping or descending at the step edge, or scatter away from the barrier, leading to the formation of islands. For a fixed adatom flux, adatoms diffuse on the vicinal surface, and a probability is assigned to each of the processes mentioned above. Depending on the probability values, various surface patterns were simulated for three cases. In case (i), with a high ES barrier, three-dimensional surface formation resulted in square/rectangular islands, following the cubic lattice symmetry, at the middle of the terraces. In case (ii), with a reduced ES barrier height, more atoms were trapped at the top of the step, and a new pattern of nanocolumns emerged, consisting of cubic formations with deep, narrow cubic pits. Finally, in case (iii), when the barrier height was adjusted such that the probability of adatoms descending the step was equal to, or of the same order as, the probability of adatoms jumping up the step from below, the result was nanowire or columnar growth. Further, the presence of additional local sinks that alter the potential barrier also produced nano-columns/islands at random positions.
695
Thus, the granular morphology we obtain on the miscut STO substrate closely matches the surface morphology of case (iii). In our STO substrates, the presence of disoriented terraces and incompletely removed SrO terminations may have altered the ES barrier, creating local sinks at the substrate surface and thus resulting in island/columnar growth. Finally, the surface morphology of the NS-R thin film resembles the
700
+
701
Figure 8: AFM scans of the STO substrate: (a) the as-received commercial substrate after cleaning, and (b) the same substrate after TiO2 termination obtained by the heat-treatment method, with a step height of ~0.4 nm (one unit-cell height of STO). (c), (d) NSMO thin films grown on the corresponding substrates (scale bars: 100 nm).
756
+
757
morphology obtained in case (ii) of the simulation. This can be verified by close inspection of the NS-R surface at high magnification in Figure 9(a): the surface morphology clearly shows layer-by-layer growth with squared pits.

Further, in an attempt to reduce the local sinks, NSMO thin films were synthesized on purely TiO2-terminated substrates. The substrates were treated with DI water and then annealed at high temperature, following the protocol for TiO2 termination41. The treatment produced clear step-and-terrace features on the substrate, as observed in the AFM scan shown in Figure 8(b). NSMO thin films were deposited on these substrates and subsequently annealed ex situ. SEM imaging revealed a similar rod-type morphology, in which the rods are self-aligned and crossed at right angles, embedded in a matrix of NSMO with rectangular features, as shown in Figure 8(d). This procedure was repeated on several TiO2-terminated STO substrates, and the same morphology was reproduced. This is because the complete removal of SrO ensures the absence of local sinks and suppresses the island/columnar growth. However, the rods in the thin film are believed to arise from droplets deposited due to the high laser energy density (1.75 J/cm2). This is verified in the SEM images of the in-situ annealed NSMO thin film shown in Figure 4(b), where the droplets are elongated into rods upon ex-situ annealing.
774
Figure 9: SEM images of NSMO thin films at high magnification: (a) NSMO thin film grown on an as-received, cleaned STO substrate; (b) NSMO thin film grown at reduced laser fluence on a TiO2-terminated STO substrate (scale bars: 100 nm).

Lastly, to obtain smoother films, we synthesized NSMO thin films on fully TiO2-terminated STO substrates at a low laser energy density (1 J/cm2), reducing the droplet density. As expected, we obtained thin films with the same type of morphology but a reduced density of rods. The SEM image of such a film, free of nano-rods, is shown in Figure 9(b). The films have rectangular faceted pits, and layer-by-layer growth is evident through the holes.
787
+
788
Therefore, the ES barrier plays a significant role on vicinal surfaces and can drive the spontaneous ordering of adatoms, resulting in unique surface nanostructures. We thus emphasize that when films are grown on a commercial substrate, the resulting morphology can be either granular or rod-type, depending on the potential-energy landscape, which in turn depends on a wide range of parameters, including the size and shape of the terraces and the type of terminations present at the substrate surface.
794
7. Electrical-transport measurements:

The nanostructure plays a vital role in the transport behaviour of a manganite thin-film system30. To understand the transport behaviour of the nanostructured NSMO thin films, resistivity measurements were carried out using the standard four-probe geometry42 and are plotted as a function of temperature in Figure 10. The granular film NS-G has a higher resistivity than NS-R. Both thin films exhibit an insulator-to-metal transition (MIT), with a transition temperature TMIT of 147 K for sample NS-G and 135 K for NS-R. The transition into the metallic regime is sharper for the NS-R than for the NS-G thin film. The electrical-transport behaviour has been analysed using different theoretical models fitted in the corresponding temperature regimes; the best fit in each region was chosen based on the reduced χ² value.
805
ρ(T) = ρ_R T exp(E_a / k_B T)   …(2)

ρ(T) = ρ_0 exp[(T_0 / T)^(1/4)]   …(3)

E_hopping = (1/4) k_B T_0^(1/4) T^(3/4)   …(4)
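As an illustration of how equations (3) and (4) are applied, the sketch below evaluates the VRH resistivity and the hopping energy. The characteristic temperature T_0 used in the comment is a hypothetical value of our own, chosen only to reproduce the ~100 meV scale discussed in the text; the paper does not quote its fitted T_0 values.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def vrh_resistivity(T, rho_0, T_0):
    """Mott variable-range-hopping resistivity, equation (3)."""
    return rho_0 * math.exp((T_0 / T) ** 0.25)

def hopping_energy_meV(T, T_0):
    """Hopping energy from equation (4): E = (1/4) k_B T_0^(1/4) T^(3/4)."""
    return 0.25 * K_B * T_0 ** 0.25 * T ** 0.75 * 1e3

# With a hypothetical T_0 ~ 5e7 K, the hopping energy at 300 K comes out
# on the order of 100 meV, the scale reported for manganite thin films.
```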
821
The high-temperature insulating phase is studied using the small-polaron hopping (SPH) model and the variable-range hopping (VRH) mechanism, given by equations (2) and (3) respectively, with the hopping energy calculated from equation (4)42,43. The VRH model fits the high-temperature region (≈195 K to 300 K) better for both films. The hopping energy is 128 meV for NS-G and 125 meV for NS-R, in agreement with the order of magnitude reported for manganite thin films (~100 meV)43,44. The resistivity in the metallic region below TMIT is generally fitted with the empirical equation (5). At low temperatures, in addition to the temperature-independent scattering from defects, grain boundaries (GBs), etc. (ρ_0), scattering due to electron-electron (ρ_2), electron-magnon (ρ_4.5), and electron-phonon (ρ_P) interactions dominates, along with strong correlation effects (ρ_0.5)45.

A low-temperature resistive upturn is observed below 50 K in both films. In Figure S3 of the supplementary information, the resistive upturn in the low-temperature region from 4 K up to 60 K is fitted using equation (6), which accounts for all the scattering mechanisms mentioned above. The upturn is more pronounced in NS-G, owing to enhanced GB scattering together with the contributions from the other low-temperature scattering mechanisms. The contributions from the different scattering mechanisms are analysed, and the values are tabulated in supplementary information Table T2.

The intermediate temperature regime, from 90 K to 134 K in the ferromagnetic-metallic state, is fitted using equation (7). Adding the polaronic term to the resistivity gives a better fit in this region, as theoretical models predict polaron formation near the MIT46.
845
ρ(T) = ρ_0 + ρ_m T^m   (5)
846
+
847
Figure 10: (a), (b) Resistivity vs. temperature curves of the NSMO thin films NS-G (TMIT ≈ 147 K) and NS-R (TMIT ≈ 135 K), showing the insulator-to-metal transition with decreasing temperature, fitted with the theoretical models (VRH fit, FM-metallic fit, low-temperature upturn) in the different temperature regimes. (c) Normalized resistivity plot of the NS-G and NS-R thin films, with linear fits from 110 K to 125 K. Inset: variation of the TCR with temperature.
922
+
923
An interesting feature, apart from the low-temperature resistive upturn, is observed in the resistivity plots of the NS-G and NS-R thin films. In Figure 10(c), the resistivity of each thin film has been normalized to its value at 300 K, and a linear fit in the metallic region below TMIT (110 K to 125 K) was carried out to determine the slope. The resistivity slope of the samples with rod morphology differs from that of the samples with granular morphology by up to an order of magnitude. The increase in slope below the transition temperature reflects the sharpness of the resistive transition for the samples with rod morphology; this characteristic order-of-magnitude increase in slope is evident in all our thin films with rod-type morphology (see supplementary Figure S2). To characterize the sensitivity of the resistance to changes in temperature, the temperature coefficient of resistance (TCR) was evaluated using equation (8). NS-R was found to have a higher TCR %, ~12 %, compared to NS-G with ~7 %. Additionally, the samples with rod morphology were found to have an enhanced TCR % (supplementary information Figure S2). To comprehend this result, we discuss the effect of GBs on the conduction mechanism.

The manganite system undergoes a disorder-induced phase transition from the PM to the FM state with decreasing temperature21. Owing to phase coexistence during the transition, the conduction channel is presumed to consist of filamentary FM paths in the PM matrix47. Conduction takes place through the percolation of current across the well-connected FM regions. In addition to the FM filamentary paths, the GBs also play a significant role in the conduction mechanism. We refer to the work of Vertruyen et al.48, which explores the effect of a single GB in the La-Ca-Mn-O (LCMO) system. They showed that the resistivity falls sharply at the transition temperature when measured on a single grain of LCMO (free of GBs); however, when measured across a single GB, the resistivity initially decreased and then showed a broad resistive feature near the transition temperature. Thus, in a granular system, although conduction takes place through percolation paths of well-connected FM regions, the GBs increase the resistivity through enhanced spin-dependent scattering across the GB47. This explanation is consistent with our results, where the thin film with granular morphology (NS-G) shows a broad resistive transition below the transition temperature with a reduced TCR
951
ρ(T) = ρ_0 + ρ_2 T^2 + ρ_4.5 T^4.5 + ρ_P T^5 + ρ_0.5 T^0.5   (6)

ρ(T) = ρ_0 + ρ_2 T^2 + ρ_4.5 T^4.5 + ρ_P T^5 + ρ_0.5 T^0.5 + ρ_7.5 T^7.5   (7)

TCR % = (1/ρ)(dρ/dT) × 100   (8)
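Equation (8) can be applied directly to measured ρ(T) data using a numerical derivative. A minimal sketch (our own helper, not the authors' analysis code):

```python
import numpy as np

def tcr_percent(T, rho):
    """TCR % = (1/rho) * (d rho / d T) * 100, equation (8).

    T and rho are 1-D arrays of temperatures (K) and resistivities;
    np.gradient uses central differences at interior points."""
    T = np.asarray(T, dtype=float)
    rho = np.asarray(rho, dtype=float)
    return np.gradient(rho, T) / rho * 100.0
```

A sharper resistive transition (a larger |dρ/dT| at a given ρ) translates directly into a larger peak |TCR %|, which is why the rod-type films show the higher TCR.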
958
+
959
+ 18
960
+
961
+ %. If the connectivity is enhanced between the grains, a sharper decrease in the resistivity can
962
+ occur in the metallic regime. Remarkably, we observe that all of our thin films with rod-
963
+ morphology show sharp resistive transition near MIT irrespective of the thickness of the film.
964
+ Thus, this nanostructure aids improved conduction in the FM metallic phase, leading to the
965
+ sharp resistive transition with enhanced TCR % comparable to that of a highly-crystalline
966
+ system. Attempts to enhance the TCR % have been carried out by doping with elements such
967
+ as Ag, as high TCR % is required for applications in sensors and infrared detectors49,50. These
968
+ elements precipitate as nanocomposite in the manganite system and improve the conductivity,
969
+ leading to a sharper resistive transition. However, in our study, we have substantiated that the
970
+ enhancement of TCR % is possible with proper tuning of the nanostructured morphology of
971
+ thin films.
972
+ CONCLUSION:
973
+ In conclusion, the PLD-grown NSMO thin films were observed to have two prominent
974
+ surface morphologies – granular and crossed-nano rods. The metal-to-insulator transition
975
+ (MIT) temperature, TMIT, was found to be 147 K for a granular NSMO (NS-G) thin film and
976
+ 135 K for a thin film with crossed-rod morphology (NS-R). The nature of the resistive
977
+ transition is broad in the former, whereas the latter exhibits a sharp MIT feature. The
978
+ temperature coefficient of resistance (TCR) was evaluated, and NS-R thin film has a higher
979
+ value of TCR %, ~ 12 %, compared to NS-G with TCR % ~ 7 %. Additionally, we have
980
+ observed that all the films with rod-type morphology exhibit a significant enhancement in
981
+ TCR% up to one order of magnitude compared to the granular thin film. Thus, we have
982
+ demonstrated that TCR % can be enhanced with proper tuning of the nanostructures in thin
983
+ films, which is relevant for technological applications. The reason for such nano-structuring is
984
+ explored in great detail. It was found that parameters like laser energy density, O2 partial
985
+ pressure, and the substrate miscut angle had minimal effect. At the same time, the difference
986
+ in the potential landscape of the Ehrlich-Schwoebel (ES) barrier is believed to play a vital role
987
+ in the growth dynamics of the films. Films grown with reduced laser energy density (1 J/cm2)
988
+ on the TiO2 terminated substrates exhibited highly reproducible layer-by-layer growth. This
989
+ substantiates the presence of reduced local sinks and ES barrier height, resulting in epitaxial
990
+ growth of NSMO thin films. Therefore, a fine-tuning of a wide range of parameters, including
991
+ strain and surface terminations, is required to obtain a fine control of the ES barrier that
992
+ influences the growth process of thin films. This paves the way for investigation into the role
993
+ of the ES barrier in manganite thin film growth. Using RHEED and in-situ STM techniques, a
994
+
995
+ 19
996
+
997
+ few groups have already attempted to experimentally determine the value of the ES barrier on
998
+ SrTiO3 substrates for the growth of La-Ca-Mn-O manganite system51. It would be interesting
999
+ to explore the relationship between the value of the ES-barrier and the type of morphology
1000
+ experimentally in the future.
1001
+
1002
Author contributions

The division of work is as follows: NSMO thin-film samples were prepared by R.S.M. SEM imaging was carried out by S.A. and J.P. AFM measurements were carried out by K.G. XRD measurements were carried out by R.S.M., P.N.R., P.G., and S.K.R. Magneto-transport measurements were carried out by R.S.M. and E.P.A. Analysis was done by R.S.M., E.P.A., and S.A. Writing was carried out by R.S.M., and all authors discussed the results and commented on the manuscript. E.P.A., T.G.K., and A.M. supervised this research work.

Conflict of interest:

The authors declare no conflict of interest.

Acknowledgments

One of the authors (R. S. Mrinaleni) would like to acknowledge the Department of Atomic Energy, India, for the provision of experimental facilities. We thank UGC-DAE CSR, Kalpakkam node, for providing access to the magnetic and magnetotransport measurement systems. The authors are grateful to RRCAT, Indore, for the beamline facilities.

Funding statement:

One of the authors (R. S. Mrinaleni) would like to acknowledge the funding support from the Department of Atomic Energy, India.
1020
+
1021
References:

1. Dagotto, E. Nanoscale Phase Separation and Colossal Magnetoresistance (Springer, 2003).
2. Tokura, Y. Critical features of colossal magnetoresistive manganites. Rep. Prog. Phys. 69, 797–851 (2006).
3. Ebata, K. et al. Chemical potential shift induced by double-exchange and polaronic effects in Nd1-xSrxMnO3. Phys. Rev. B 77, (2008).
4. Haghiri-Gosnet, A. M. & Renard, J. P. CMR manganites: physics, thin films and devices. J. Phys. D: Appl. Phys. 36, (2003).
5. Tokura, Y. & Tomioka, Y. Colossal magnetoresistive manganites. J. Magn. Magn. Mater. 200, 1–23 (1999).
6. Perna, P. et al. Tailoring magnetic anisotropy in epitaxial half-metallic La0.7Sr0.3MnO3 thin films. J. Appl. Phys. 110, 013919 (2011).
7. Song, C. et al. Emergent perpendicular magnetic anisotropy at the interface of an oxide heterostructure. Phys. Rev. B 104, (2021).
8. Li, X., Lindfors-Vrejoiu, I., Ziese, M., Gloter, A. & van Aken, P. A. Impact of interfacial coupling of oxygen octahedra on ferromagnetic order in La0.7Sr0.3MnO3/SrTiO3 heterostructures. Sci. Rep. 7, 40068 (2017).
9. Liu, Q. et al. Perpendicular Manganite Magnetic Tunnel Junctions Induced by Interfacial Coupling. ACS Appl. Mater. Interfaces 14, 13883–13890 (2022).
10. Liu, Q. et al. Perpendicular Manganite Magnetic Tunnel Junctions Induced by Interfacial Coupling. ACS Appl. Mater. Interfaces 14, 13883–13890 (2022).
11. Chi, X. et al. Enhanced Tunneling Magnetoresistance Effect via Ferroelectric Control of Interface Electronic/Magnetic Reconstructions. ACS Appl. Mater. Interfaces 13, 56638–56644 (2021).
12. Gajek, M. et al. Tunnel junctions with multiferroic barriers. Nat. Mater. 6, 296–302 (2007).
13. Kim, D. H., Ning, S. & Ross, C. A. Self-assembled multiferroic perovskite–spinel nanocomposite thin films: epitaxial growth, templating and integration on silicon. J. Mater. Chem. C 7, 9128–9148 (2019).
14. Sánchez, F., Ocal, C. & Fontcuberta, J. Tailored surfaces of perovskite oxide substrates for conducted growth of thin films. Chem. Soc. Rev. 43, 2272–2285 (2014).
15. Ning, X., Wang, Z. & Zhang, Z. Large, Temperature-Tunable Low-Field Magnetoresistance in La0.7Sr0.3MnO3:NiO Nanocomposite Films Modulated by Microstructures. Adv. Funct. Mater. 24, 5393–5401 (2014).
16. Zhang, W., Ramesh, R., MacManus-Driscoll, J. L. & Wang, H. Multifunctional, self-assembled oxide nanocomposite thin films and devices. MRS Bull. 40, 736–745 (2015).
17. Chen, A., Bi, Z., Jia, Q., MacManus-Driscoll, J. L. & Wang, H. Microstructure, vertical strain control and tunable functionalities in self-assembled, vertically aligned nanocomposite thin films. Acta Mater. 61, 2783–2792 (2013).
18. Zhang, C. et al. Large Low-Field Magnetoresistance (LFMR) Effect in Free-Standing La0.7Sr0.3MnO3 Films. ACS Appl. Mater. Interfaces 13, 28442–28450 (2021).
19. Huang, J. et al. Exchange Bias in a La0.67Sr0.33MnO3/NiO Heterointerface Integrated on a Flexible Mica Substrate. ACS Appl. Mater. Interfaces 12, 39920–39925 (2020).
20. Qin, Q. et al. Interfacial antiferromagnetic coupling between SrRuO3 and La0.7Sr0.3MnO3 with orthogonal easy axes. Phys. Rev. Mater. 2, 104405 (2018).
21. Dagotto, E., Hotta, T. & Moreo, A. Colossal magnetoresistant materials: the key role of phase separation. Phys. Rep. 344, 1–153 (2001).
22. Krivoruchko, V. N. The Griffiths phase and the metal–insulator transition in substituted manganites (Review Article). Low Temp. Phys. 40, 586–599 (2014).
23. Bhat, S. G. & Kumar, P. S. A. Tuning the Curie temperature of epitaxial Nd0.6Sr0.4MnO3 thin films. J. Magn. Magn. Mater. 448, 378–386 (2018).
24. Kumari, S. et al. Effects of Oxygen Modification on the Structural and Magnetic Properties of Highly Epitaxial La0.7Sr0.3MnO3 (LSMO) Thin Films. Sci. Rep. 10, (2020).
25. Wang, H. S., Li, Q., Liu, K. & Chien, C. L. Low-field magnetoresistance anisotropy in ultrathin Pr0.67Sr0.33MnO3 films grown on different substrates. Appl. Phys. Lett. 74, 2212–2214 (1999).
26. Huang, J., Wang, H., Sun, X., Zhang, X. & Wang, H. Multifunctional La0.67Sr0.33MnO3 (LSMO) Thin Films Integrated on Mica Substrates toward Flexible Spintronics and Electronics. ACS Appl. Mater. Interfaces 10, 42698–42705 (2018).
27. Boileau, A. et al. Textured Manganite Films Anywhere. ACS Appl. Mater. Interfaces 11, 37302–37312 (2019).
28. Greculeasa, S. G. et al. Influence of Thickness on the Magnetic and Magnetotransport Properties of Epitaxial La0.7Sr0.3MnO3 Films Deposited on STO (001). Nanomaterials 11, (2021).
29. Gupta, P. et al. BL-02: a versatile X-ray scattering and diffraction beamline for engineering applications at Indus-2 synchrotron source. J. Synchrotron Radiat. 28, 1193–1201 (2021).
30. Arun, B., Suneesh, M. V. & Vasundhara, M. Comparative Study of Magnetic Ordering and Electrical Transport in Bulk and Nano-Grained Nd0.67Sr0.33MnO3 Manganites. J. Magn. Magn. Mater. 418, 265–272 (2016).
31. Zhang, B. et al. Effects of strain relaxation in Pr0.67Sr0.33MnO3 films probed by polarization dependent X-ray absorption near edge structure. Sci. Rep. 6, 19886 (2016).
32. Pai, Y. Y., Tylan-Tyler, A., Irvin, P. & Levy, J. Physics of SrTiO3-based heterostructures and nanostructures: a review. Rep. Prog. Phys. 81, 036503 (2018).
33. Paudel, B. et al. Anisotropic domains and antiferrodistortive-transition controlled magnetization in epitaxial manganite films on vicinal SrTiO3 substrates. Appl. Phys. Lett. 117, (2020).
34. Boschker, J. E. et al. In-plane structural order of domain engineered La0.7Sr0.3MnO3 thin films. Philos. Mag. 93, 1549–1562 (2013).
35. Konstantinović, Z., Sandiumenge, F., Santiso, J., Balcells, L. & Martínez, B. Self-assembled pit arrays as templates for the integration of Au nanocrystals in oxide surfaces. Nanoscale 5, 1001–1008 (2013).
36. Wang, J. et al. Quick determination of included angles distribution for miscut substrate. Measurement 89, 300–304 (2016).
37. Scheel, H. J. Control of Epitaxial Growth Modes for High-Performance Devices. in Crystal Growth Technology 621–644 (2003).
38. Chae, R. H., Rao, R. A., Gan, Q. & Eom, C. B. Initial Stage Nucleation and Growth of Epitaxial SrRuO3 Thin Films on (001) SrTiO3 Substrates. J. Electroceram. 4, 345–349 (2000).
39. Schwoebel, R. L. & Shipsey, E. J. Step Motion on Crystal Surfaces. J. Appl. Phys. 37, 3682–3686 (1966).
40. Załuska-Kotur, M., Popova, H. & Tonchev, V. Step Bunches, Nanowires and Other Vicinal "Creatures"—Ehrlich–Schwoebel Effect by Cellular Automata. Crystals 11, 1135 (2021).
41. Connell, J. G., Isaac, B. J., Ekanayake, G. B., Strachan, D. R. & Seo, S. S. A. Preparation of atomically flat SrTiO3 surfaces using a deionized-water leaching and thermal annealing procedure. Appl. Phys. Lett. 101, (2012).
42. Miccoli, I., Edler, F., Pfnür, H. & Tegenkamp, C. The 100th anniversary of the four-point probe technique: the role of probe geometries in isotropic and anisotropic systems. J. Phys.: Condens. Matter 27, 223201 (2015).
43. Gopalarao, T. R., Ravi, S. & Pamu, D. Electrical transport and magnetic properties of epitaxial Nd0.7Sr0.3MnO3 thin films on (001)-oriented LaAlO3 substrate. J. Magn. Magn. Mater. 409, 148–154 (2016).
44. Gopalarao, T. R. & Ravi, S. Study of Electrical Transport and Magnetic Properties of Nd0.7Sr0.3MnO3/Nd0.8Na0.2MnO3 Bilayer Thin Films. J. Supercond. Nov. Magn. 31, 1149–1154 (2018).
45. Arun, B., Suneesh, M. V. & Vasundhara, M. Comparative Study of Magnetic Ordering and Electrical Transport in Bulk and Nano-Grained Nd0.67Sr0.33MnO3 Manganites. J. Magn. Magn. Mater. 418, 265–272 (2016).
46. Sudakshina, B., Supin, K. K. & Vasundhara, M. Effects of Nd-deficiency in Nd0.67Ba0.33MnO3 manganites on structural, magnetic and electrical transport properties. J. Magn. Magn. Mater. 542, 168595 (2022).
47. de Andrés, A., García-Hernández, M. & Martínez, J. L. Conduction channels and magnetoresistance in polycrystalline manganites. Phys. Rev. B 60, 7328–7334 (1999).
48. Vertruyen, B. et al. Magnetotransport properties of a single grain boundary in a bulk La-Ca-Mn-O material. J. Appl. Phys. 90, 5692–5697 (2001).
49. Li, J. et al. Improvement of electrical and magnetic properties in La0.67Ca0.33Mn0.97Co0.03O3 ceramic by Ag doping. Ceram. Int. (2022) doi:10.1016/j.ceramint.2022.08.255.
50. Jin, F. et al. La0.7Ca0.3MnO3-δ:Ag nanocomposite thin films with large temperature coefficient of resistance (TCR). J. Materiomics (2022) doi:10.1016/j.jmat.2022.01.010.
51. Gianfrancesco, A. G., Tselev, A., Baddorf, A. P., Kalinin, S. V. & Vasudevan, R. K. The Ehrlich–Schwoebel barrier on an oxide surface: a combined Monte-Carlo and in situ scanning tunneling microscopy approach. Nanotechnology 26, 455705 (2015).
1217
+ 
+ SUPPLEMENTARY INFORMATION
+ 
+ I- Deconvolution of NSMO (004) reflection
+ 
+ Figure S1: The double peak in the HR-XRD scan of the NS-R thin film is confirmed by a high-resolution fine scan of the (004) reflection (2θ from 24.4° to 25.8°; fit components: Peak 1, Peak 2, and the cumulative fit). The individual peak positions are taken as the centres of the fitted peaks P1 and P2.
+ 
+ Table ST1: Values of the c-lattice parameter evaluated from the (004) NSMO reflection.
+ 
+ Sample   2θ for (004) reflection (deg)   Calculated c-lattice parameter (Å)
+ NS-G     25.05                           7.61
+ NS-R     24.88 (P1), 24.97 (P2)          7.66 (P1), 7.63 (P2)
+ 
+ II- Transport studies on NSMO thin films
+ 
+ Three samples with granular morphology (G-A, G-B, G-C) and three with rod morphology (R-A, R-B, R-C) were selected, and their resistivity was measured using the four-probe technique. The normalized-resistivity plots for the selected NSMO thin films are shown in Figure S2. The value of resistivity differs across the NSMO thin films, since they were deposited under slightly different PLD conditions, but all of them exhibit a metal-insulator transition (MIT). Regarding the nature of the MIT in these samples, G-A, G-B, and G-C (granular morphology) show a broad resistive transition below their MIT temperature, whereas R-A, R-B, and R-C (rod morphology) show a sharp resistive transition in the FM-metallic state below their MIT temperature. The slope is evaluated from a linear fit in the metallic region; the samples with rod-type morphology have slopes up to one order of magnitude larger than those of the granular samples. The temperature coefficient of resistance (TCR) is evaluated for these films: samples G-A, G-B, and G-C have peak TCR values of 5 %, 4 %, and 8 % at 105 K, 77 K, and 121 K, respectively. An enhanced peak TCR is obtained for the rod-morphology samples: R-A, R-B, and R-C have peak TCR values of 21 %, 14.5 %, and 18 % at 98 K, 80 K, and 100 K, respectively.
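For reference, the peak TCR values quoted above follow the standard definition TCR = (1/ρ)(dρ/dT). A minimal sketch of extracting TCR% from a measured ρ(T) curve (synthetic data; the numbers below are illustrative, not the measured ones):

```python
import numpy as np

def tcr_percent(T, rho):
    """TCR% = 100 * (1/rho) * d(rho)/dT, evaluated pointwise."""
    return 100.0 * np.gradient(rho, T) / rho

# Synthetic metallic-side resistivity: rho = rho0 * exp(alpha * T),
# for which TCR% is exactly 100 * alpha at every temperature.
T = np.linspace(50, 150, 201)
alpha = 0.05                      # 1/K, i.e. a TCR of 5 %/K
rho = 1e-3 * np.exp(alpha * T)

tcr = tcr_percent(T, rho)
print(tcr[100])                   # ~5 %/K at the midpoint
```

On real data, the peak TCR and its temperature are simply the maximum of this curve and the temperature at which it occurs.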
+ 
+ Figure S2: Plots of normalized resistivity vs. temperature of NSMO films with granular and rod-type morphology. (a), (c), (e): samples with granular morphology G-A, G-B, G-C. (b), (d), (f): samples with rod morphology R-A, R-B, R-C. A linear fit in the FM-metallic region gives the rate of change of resistivity with respect to temperature.
+ 
+ III- Low-temperature studies on NSMO thin films – NS-G and NS-R
+ 
+ To study the low-temperature transport across the thin films with different morphology, the low-temperature resistivity of the granular thin film NS-G and of the rod-type thin film NS-R is plotted in Figure S3. An enhanced low-temperature resistive upturn is observed in NS-G. The resistivity data are fit using the low-temperature transport equation, and the fitting parameters are summarized in Table ST2. The first term, ρ0, which represents the contribution from grain-boundary (GB) scattering, is
+ found to be higher by more than one order of magnitude in NS-G as compared to NS-R. This is expected, as NS-G has a granular morphology and the increased contribution from GB scattering affects the transport mechanism even at low temperatures. Additionally, the value of ρ0 is higher by orders of magnitude than the other coefficients, which shows that GB scattering dominates the transport mechanism compared to the other contributions to the electronic transport.
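The low-temperature transport equation itself is not reproduced in this excerpt; judging from the coefficient names in Table ST2, it has the common form ρ(T) = ρ0 + ρ0.5·T^0.5 + ρ2·T² + ρ4.5·T^4.5 + ρP·T^p (the exponent of the ρP term is assumed here to be the T^5 phonon contribution). Since such a model is linear in the coefficients, the fit can be sketched as an ordinary least-squares problem (synthetic data at roughly the NS-R scale, not the measured curve):

```python
import numpy as np

# Assumed model, inferred from the coefficient names in Table ST2
# (the exponent of the rho_P term is an assumption, taken as T^5):
#   rho(T) = rho_0 + rho_0.5*T^0.5 + rho_2*T^2 + rho_4.5*T^4.5 + rho_P*T^5
T = np.linspace(4, 60, 100)
true = np.array([0.003, -8e-5, 3.5e-7, -3e-11, 5e-12])

# Design matrix: one column per term; the model is linear in the
# coefficients, so np.linalg.lstsq solves the fit directly.
A = np.column_stack([np.ones_like(T), T**0.5, T**2, T**4.5, T**5])
rho = A @ true

coef, *_ = np.linalg.lstsq(A, rho, rcond=None)
print(coef[0])   # recovered rho_0, the grain-boundary term
```

With noisy data the same matrix can be passed to a weighted solver; the goodness-of-fit R² quoted in Table ST2 then follows from the residuals.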
+ 
+ Figure S3: Low-temperature resistive upturn observed in the NSMO thin films NS-G and NS-R. The temperature regime from 4 K up to 60 K is fit using the low-temperature transport equation.
+ 
+ (Figure S3 panels: resistivity (Ω cm) vs. temperature (K) for NS-G, with TIMT ≈ 147 K, and NS-R, with TIMT ≈ 135 K, together with the low-temperature upturn fits.)
+ 
+ Sample   ρ0        ρ2        ρ4.5        ρP         ρ0.5       R² (%)
+ NS-G     0.09272   1.16E-5   -1.00E-9    1.33E-10   -0.0038    99.99
+ NS-R     0.00305   3.53E-7   -2.90E-11   4.90E-12   -8.38E-5   99.99
+ 
+ Table ST2: Values of the coefficients of the low-temperature transport equation obtained from fitting.
+
9tE0T4oBgHgl3EQffwDm/content/tmp_files/2301.02410v1.pdf.txt ADDED
@@ -0,0 +1,1798 @@
+ Codepod: A Namespace-Aware, Hierarchical Jupyter for Interactive Development at Scale
+ Hebi Li, Forrest Sheng Bao, Qi Xiao, Jin Tian
+ Dept. of Computer Science, Iowa State University, Ames, Iowa, USA
+ {hebi,qxiao,jtian}@iastate.edu, [email protected]
+ ABSTRACT
+ Jupyter is a browser-based interactive development environment that has been popular recently. Jupyter models programs in code blocks, and makes it easy to develop code blocks interactively by running the code blocks and attaching rich media output. However, Jupyter provides no support for module systems and namespaces. Code blocks are linear and live in the global namespace; therefore, it is hard to develop large projects that require modularization in Jupyter. As a result, large-code projects are still developed in traditional text files, and Jupyter is only used as a surface presentation. We present Codepod, a namespace-aware Jupyter that is suitable for interactive development at scale. Instead of linear code blocks, Codepod models code blocks as hierarchical code pods, and provides a simple yet powerful module system for namespace-aware incremental evaluation. Codepod is open source at https://github.com/codepod-io/codepod.
+ ACM Reference Format:
+ Hebi Li, Forrest Sheng Bao, Qi Xiao, Jin Tian. 2023. Codepod: A Namespace-Aware, Hierarchical Jupyter for Interactive Development at Scale. In Proceedings of (Conference’23). ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
+ 1 INTRODUCTION
+ Traditional software development is typically closely tied with file systems. Developers write code into a set of files in the file-system hierarchy. For example, developers write functions in files using a text editor and invoke a compiler or an interpreter to run or evaluate the code in the files. Modern Integrated Development Environments (IDEs) provide a file system browser and integrate debuggers to help run and debug over the files.
+ Jupyter notebook [6] is a browser-based interactive development environment that has been widely adopted by many different communities, both in science and industry. Jupyter notebooks support literate programming that combines code, text, and execution results with rich media visualizations. Jupyter models the code as a sequence of "code cells". This provides a clean separation between code blocks, whereas text editors do not partition code in the same text file but instead rely on developers and editor plugins to do so. Code cells can be interactively (re-)run and display results in rich media such as data visualizations right beside the cell, providing developers an interactive Read-Eval-Print-Loop (REPL) development experience. Jupyter has been popular recently in software development [10–12, 15], proving such an interactive cycle is beneficial to software development.
+ Conference’23, Jan, 2023, Ames, IA, USA
+ 2023. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM...$15.00
+ https://doi.org/10.1145/nnnnnnn.nnnnnnn
+ However, Jupyter falls short for module systems and namespaces. Code blocks in Jupyter notebooks are linear and live in the global namespace, making it non-scalable for large software projects of hundreds of function definitions with potential naming conflicts. As a result, large code projects are still developed in traditional text files, and Jupyter is primarily used as a surface presentation of the projects, consisting of only a fraction of the entire codebase. Our case study in Section 4.3 found that Jupyter notebooks share less than 5% of the code of real-world open-source projects. All functions defined in a Jupyter notebook are only accessed in the same notebook. There are calls from Jupyter to the code in the text files, but no calls from text files to Jupyter code.
+ The Jupyter-file hybrid development model has several disadvantages. Changes in files are not in sync with the Jupyter runtime. This effectively breaks the REPL interactive development functionality. The hybrid model still relies on text editors, external debuggers, and IDEs, and thus still suffers from the drawbacks of file-based software development, which we will detail below.
+ Although computers store information into files, organizing code into text files where information is linearly presented is counterproductive. Complex software requires proper abstraction and segmentation of code, typically by defining functions and hierarchical modules. For simplicity, in this paper, we assume functions are the building blocks of software projects and refer to the functions when we talk about “code blocks”. File-based approaches force developers to maintain the correspondence between code and files, which differ significantly in granularity: code blocks are small in size, but large in amount, while files are typically long but few. The unbalance in granularity poses dilemmas to developers: including too many code blocks in one file makes the hierarchy hard to maintain, while including few code blocks in one file creates many small files and deep directories that are also hard to work with. Besides, programming languages typically design module systems around file systems, e.g., a file is a module. It becomes tedious to reference and import from different modules scattered over multiple files and levels of directories. This is the case in the real world. Among highly regarded open source projects, each project contains tens to hundreds of files, possibly with levels of different directories. For a file containing tens of functions, about half of the functions are internal to the file and are not called in other files.
+ To overcome the above disadvantages of both Jupyter and text-file-based development, we propose Codepod, a namespace-aware Jupyter for interactive software development at scale. Codepod models a program as hierarchical code blocks and represents it accordingly. Developers write each function as a code pod and place it at an appropriate hierarchy. In Codepod, the code blocks are organized into a tree of code pods and decks. A code pod resembles a cell in Jupyter. The partition of pods is maintained by grouping them into decks. A deck can also contain child decks. All code of the entire project can be developed without needing files.
+ arXiv:2301.02410v1 [cs.SE] 6 Jan 2023
+ In addition, Codepod features a simple yet powerful module system that abstracts over the native module system of programming languages to provide a consistent and straightforward evaluation model for different languages. Codepod’s module system consists of five namespace rules, inspired by the hierarchical nature of code blocks and the access pattern among them. (1) Namespace separation by default: in Codepod, each deck is a namespace, and the root deck is the global namespace. Pods in different decks are defined in separate namespaces and cannot see each other. (2) Public pods: a pod can be marked as “public” and is made available to its parent deck. (3) Utility pods: a pod or deck in Codepod can be marked as a “utility pod/deck”. Such a pod is visible in the parent deck node’s sub-tree. (4) Testing pods: a testing pod or deck can access its parent deck’s namespace. (5) Explicit path: a pod is always accessible by specifying the full path within the tree, providing compatibility for arbitrary imports. The detailed rationale of the rules is discussed in Section 2.
+ Last but not least, Codepod provides a namespace-aware incremental evaluation model. In Codepod, every pod can be executed, and the evaluation happens in the appropriate namespace. Similar to Jupyter notebooks, the results are displayed right beside the code pod for easy debugging and intuitive interactive development. When a pod is changed, the pod can be re-evaluated, and the updated definition is applied incrementally in that scope in the active runtime; the new definition is visible to all other pods using it in the entire codebase without restarting the current runtime.
+ We have implemented a fully working Codepod as a Web application and currently have implemented full namespace-aware runtime support for four language kernels: Python, JavaScript, Julia, and Scheme/Racket. New kernels can be easily developed based on existing Jupyter notebook kernels. Codepod is open-sourced at https://github.com/codepod-io/codepod.
+ In summary, we make the following contributions in this work:
+ • we propose Codepod, a novel namespace-aware interactive development environment
+ • we propose a simple yet powerful module system abstraction for Codepod
+ • we provide a fully working Codepod implementation with namespace-awareness and incremental runtime support for four programming languages, and make it open source
+ • we conduct case studies of real-world open-source projects to statistically show that our Codepod model will be useful for real-world development.
+ 2 HIERARCHICAL PODS
+ In this section, we introduce the Codepod model and its namespace rules. In the next section, we describe the incremental evaluation runtime and algorithms.
+ 2.1 Codepod Interface
+ In Codepod, code blocks are organized into a tree. In the tree, non-leaf nodes are decks, and leaf nodes are pods. Thus a deck may contain a list of pods and a list of sub-decks. A pod is a text editor containing the real code, and a deck is the container of the pods. We will use “node” to refer to a node in the tree, which can be either a deck or a pod.
+ An overview demo of the Codepod interface is shown in Figure 1, implementing a simplified Python regular expression compiler. The code is organized into a tree, which starts from the leftmost ROOT node and grows to the right. The background level of grey of a deck indicates the level of the deck in the tree.
+ In order to define the interactions between code pods in the tree, Codepod provides simple yet powerful namespace rules abstracting over different languages’ native module systems and providing a consistent module system for all languages. In the following sections, we introduce the rules in detail. We will revisit this overview example in Section 2.7 for the meaning of different kinds of pods after introducing the namespace rules.
+ A typical workflow using Codepod starts from an empty tree with a single ROOT deck. Developers can create pods as children of the deck and start to develop in the global namespace. To develop hierarchical modules, developers can create a deck under the ROOT deck and create pods under the new deck. Pods and decks can be moved from one node to another at different levels to group the pods and re-order the code hierarchy. Pods and decks can be folded so that only the pods of interest are displayed during development. A pod can be evaluated, and the possibly rich media result will be displayed under the pod.
+ 2.2 NS Rule 1: Namespace Separation
+ In Codepod, the code blocks are organized into a tree of decks and pods. Each deck can contain multiple pods and child decks. A pod contains the actual code, and a deck declares a namespace. A pod belongs to the namespace of its parent deck. The first rule is the basic namespace separation: pods in the same namespace are visible to each other, but pods in different namespaces are not. For example, in Fig. 2, there are 5 decks, and thus 5 namespaces. In Deck-2, there are two pods defining functions a and b. Functions a and b can call each other without a problem because they are in the same deck and thus the same namespace. In all four other decks, a reference to either a or b will throw an error because they belong to different namespaces.
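The separation rule maps naturally onto per-deck namespaces. A minimal sketch of the idea in Python, with plain dicts standing in for deck namespaces (an illustration only, not Codepod's actual runtime):

```python
# Sketch: each deck gets its own namespace, here a dict used as the
# globals of exec()/eval().
deck2 = {}
exec("def a():\n    return b()\n\ndef b():\n    return 'ok'", deck2)

deck4 = {}
try:
    exec("a()", deck4)        # a is not defined in Deck-4's namespace
except NameError as err:
    print("Deck-4:", err)

print("Deck-2:", eval("a()", deck2))   # a and b resolve within Deck-2
```

Calling a from Deck-2 succeeds because a and b share the same globals; the same call in Deck-4 raises NameError, mirroring Fig. 2.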
+ Figure 1: Codepod overview example for Regular Expression code.
+ Figure 2: NS Rule 1: separate namespace by default.
+ Figure 3: NS Rule 2: export to parent namespace. Yellow highlights indicate pods to be exported/exposed to parent decks.
+ 2.3 NS Rule 2: Public Interface to Parent
+ In order to build up the software, we have to establish connections between the definitions of code pods in different namespaces. Software programs are often highly hierarchical: lower-level functions are composed together to build higher-level functions. This is a natural fit to the Codepod model, where code blocks are ordered hierarchically. Thus, in this rule, we allow public interfaces to be exposed from child decks to parent decks. More specifically, each pod can be marked as “public”. Such public pods are visible in the parent deck of the current deck. For example, in Fig. 3, there are 3 decks, thus 3 namespaces. The three namespaces are composed hierarchically; Deck-A is the parent of Deck-B, which is the parent of Deck-C. In Deck-C, there are four pods, defining four functions c1, c2, c3, and c4. Those functions can see each other because they are in the same namespace. The pods for c1, c2, and c4 are marked public (indicated by highlight), while c3 is not. In its parent deck (Deck-B), the call of c1, c2, and c4 is allowed, meaning that they are available in this parent namespace. However, the usage of c3 will raise an error because it is not exposed.
+ The public functions are exported only to the parent deck but not to the child decks. For example, function b1 is defined in Deck-B, and the pod is marked public. This function b1 is visible to its parent deck, Deck-A, but not to its child deck, Deck-C.
+ Lastly, the public interface is only exposed one level up the hierarchy. If the names are desired to be visible further up, they can be further exposed up to the root deck. For example, although c4 is marked as public, it is only visible to its immediate parent deck, Deck-B. Calling c4 in the pod for a2 in Deck-A will raise an error, as c4 is not visible in Deck-A. In the middle deck, Deck-B, the functions c1 and c2 are re-exported to the parent deck, and thus c1 and c2 are available in the top deck, Deck-A.
+ In summary, this “up-rule” allows users to mark a pod public and expose it one deck level up, and the name can be re-exported to upper levels explicitly until the root pod. This namespace rule closely resembles the hierarchical nature of software, and it is natural to use it to build up complex functionality from the ground up.
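Rule 2 behaves much like a package `__init__` re-exporting selected names. A hypothetical sketch of the Fig. 3 scenario, again with dicts standing in for deck namespaces (not Codepod's actual mechanism):

```python
# Sketch: marking pods "public" copies them one level up; a parent
# can re-export them further (dicts stand in for deck namespaces).
deck_c = {}
exec("def c1(): return 1\ndef c3(): return 3", deck_c)
public_c = {"c1"}                     # c3 is not marked public

deck_b = {}
deck_b.update({n: deck_c[n] for n in public_c})   # visible in Deck-B

print(eval("c1()", deck_b))           # c1 was exported to the parent
print("c3" in deck_b)                 # c3 stays private to Deck-C

deck_a = {}
deck_a.update({n: deck_b[n] for n in {"c1"}})     # explicit re-export
print(eval("c1()", deck_a))           # visible in Deck-A only after re-export
```

Without the final `update`, c1 would be invisible in Deck-A, which is exactly the "one level up unless re-exported" behavior described above.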
+ 2.4 NS Rule 3: Utility Pods
+ Although exposing pods from child decks to parent decks is natural for building software, it cannot cover all use cases. One particular access pattern is utility functions that are supposed to be called from many other pods at different levels. This is common in real-world software. For example, many software projects have a utils folder that implements utility functions such as string manipulation, general parsing, and logging. Such utility functions are used by other functions at different hierarchy levels. In the Codepod hierarchy, pods for such functions would need to be children of all other pods calling the utility functions; thus, the model would no longer be a tree but a graph. However, modeling code blocks as graphs is not as scalable as trees, and too many utility pods would
+ make it impossible to lay out the pod hierarchy cleanly in a 2D space without many intersections.
+ Figure 4: NS Rule 3: Utility pods/decks (indicated by the green Utility icon).
+ Thus, we design a “utility rule”: a deck/pod can be marked as a utility deck/pod. Such a utility pod is meant to provide utility functions to the parent deck’s sub-tree, and thus all the public functions in the utility deck are visible in the parent deck’s whole sub-tree. The utility pods are also namespace-aware: they are only visible within the parent deck’s sub-tree, not in the grandparent deck and above. Thus the utility decks can also be hierarchically ordered to build utility functions at different abstraction levels. As a special case, a utility deck under the root deck defines global utility functions that can be accessed throughout the entire codebase.
+ For example, in Fig. 4, there are three regular decks and two utility decks. The public functions utils_b1 and utils_b2 defined in the utility deck of B are visible in its parent deck B’s sub-tree, including decks B and C. Another utility deck, under A, is defined at an upper level and has a greater visible scope.
+ 2.5
499
+ NS Rule 4: Testing Pods
500
+ Figure 5: NS Rule 4: Testing pods/decks (indicated in green
501
+ icon Test)
502
+ Another essential pattern in interactive software development
503
+ is to test whether the functions work as expected by writing some
504
+ testing code and observing results. Such testing code must access
505
+ the functions being tested, thus having to be in the same namespace
506
+ or the parent namespace. However, either option has problems.
507
+ On the one hand, testing code might create variables, introduce
508
+ helper functions, and produce side effects. Thus they should be
509
+ in a separate namespace to avoid polluting the namespace of the
510
+ functions under testing. On the other hand, placing the testing deck
511
+ as the parent deck of the function under testing is not logically
512
+ natural because it does not provide upper-level functions.
513
+ Therefore, we allow a deck/pod to be marked as a testing deck/-
514
+ pod. A testing deck is placed as a child deck in the same namespace
515
+ of the functions being tested. Although the testing deck/pod is
516
+ a child-namespace, it can access the definitions visible within its
517
+ parent deck, thus is able to call and test the function of interest.
518
+ The testing pods are also namespace-aware: it can only access the
519
+ function definitions in its parent deck, but not the grand-parent
520
+ deck or siblings. Thus the testing decks can also be hierarchically
521
+ ordered to build testing functions at different abstraction levels.
522
+ For example, in Fig. 5, there are three regular decks and two
523
+ testing decks, and one testing pod inside the regular deck A. The
524
+ code pods in the testing deck are visible within the same testing
525
+ deck, allowing for a testing setup like defining variables x and y
526
+ and using them in other pods in the same testing deck. The pods in
527
+ the testing deck run in a separate namespace, thus will not pollute
528
+ other namespaces. A testing deck can access functions defined in its
529
+ parent deck and can thus call and test whether the function yields
530
+ expected results. A testing pod is similar to a testing deck, running
531
+ in a separate namespace, and has access to the function definitions
532
+ in the deck it belongs to.
533
+ 2.6
534
+ NS Rule 5: Explicit Access via Full Path
535
+ Figure 6: NS Rule 5: explicit imports by full path
536
+ Finally, the 5th rule is the “brute-force rule”: a pod can always
537
+ be accessible by specifying the full path within the tree. In other
538
+ words, all pods are accessible via an explicit full path. This provides
539
+ compatibility for arbitrary imports. This is considered the last resort
540
+ and is ideally not needed but can be helpful in some cases. The path
541
+ can be either a relative path connecting two pods or the absolute
542
+ path from the root deck. In order to specify the path, the decks have
543
+ to be named. In Codepod, an unnamed deck receives a UUID as the
544
+
[Figure content omitted: example decks for the utility, testing, and full-path import rules (Figs. 4–6).]
Codepod: A Namespace-Aware, Hierarchical Jupyter for Interactive Development at Scale
Conference’23, Jan, 2023, Ames, IA, USA
name. Most of the time, developers do not need to give names to the decks, as the first four rules make the module system usable without names. Named decks are also helpful as documentation, naming the important module hierarchies.

For example, in Fig. 6, there are 5 decks in the codebase. In Deck-D, a function d is defined. The function d is not accessible in Deck-C; however, it can still be imported via the relative path ../D. As another example, in the bottom deck, the full absolute path /A/B/C is used to access the function c defined in Deck-C.
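As an illustration, the path resolution behind this rule can be sketched in a few lines of Python. The function name resolve_deck_path and the POSIX-style namespace strings are our own assumptions for this sketch, not Codepod's actual API:

```python
import posixpath

def resolve_deck_path(current_ns, path):
    """Resolve a pod import path against the current deck's namespace.

    Absolute paths (starting with "/") are taken from the root deck;
    relative paths (e.g. "../D") are resolved against the current deck.
    """
    if path.startswith("/"):
        return posixpath.normpath(path)
    return posixpath.normpath(posixpath.join(current_ns, path))

# A deck at /A/B/C imports d from a sibling deck via "../D" ...
print(resolve_deck_path("/A/B/C", "../D"))   # /A/B/D
# ... and another deck imports c via the absolute path /A/B/C.
print(resolve_deck_path("/X", "/A/B/C"))     # /A/B/C
```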
2.7 Discussion

In summary, based on the hierarchical pod model, Codepod provides a simple yet powerful module system consisting of five rules: namespace separation, public pods, utility pods, testing pods, and full-path explicit access. These rules are highly hierarchical and are therefore well suited for building hierarchical software projects from the ground up. This module system abstracts over the native module systems of different programming languages and provides a consistent module system across languages. The following section discusses the runtime system and algorithms that support the Codepod module system.

Let us revisit the Codepod example in Fig. 1 and see how these namespace rules are useful in real-world applications. The example implements a simplified Python regular expression compiler. The functions are organized into the decks re, sre_compile, and sre_parse. sre_parse is the basic building block. It contains a child deck that defines three internal classes, Pattern, SubPattern, and Tokenizer. These classes are used only in the sre_parse module and are not exposed to the upper level. The sre_parse module defines several helper functions, including _parse_sub, _parse, and _escape, which are used to build a parse function exposed to the parent module sre_compile. Similarly, the module sre_compile defines internal helper functions that are composed to provide compile to the parent re module, which builds the top-level API compile, match, and search. The general functions isstring and isnumber are defined in a utility deck and are accessed throughout the sre_compile and sre_parse modules. Finally, the testing pods at different levels of the hierarchy make it easy to test and debug the functions at each level.

Ordering code blocks in Codepod is natural for building different levels of abstraction: the hierarchy of the code is close to the call graph and is more cleanly maintained than in file editors. The Codepod implementation maintains a clear code hierarchy and makes it easy to develop the project interactively. In comparison, a file-based implementation using VSCode would spread the functions across files, and within a file the hierarchy of the functions is not clearly maintained. Writing all the functions into a single Jupyter notebook is challenging due to the lack of namespace support, and the code hierarchy cannot be maintained within a single global namespace.
2.8 Version Control

Implementing a version control system requires tremendous effort, and mature file-based version control systems such as git and svn already exist. We therefore re-use git to apply version control to Codepod. Specifically, the code pods are first exported to files, and git version control is applied to the generated files. For front-end rendering, we query git for the diff between two versions, e.g., the current changes or the changes of a specific commit. The diff results are then parsed to show pod-level diffs.

[Figure 7 shows the front-end, runtime, and kernel: RunCode and completion requests use the Jupyter protocol, while EvalInNS(code, ns), AddImport(from, to, name), and DeleteImport(from, name) are the Codepod-added protocol.]
Figure 7: Kernel Communication Protocols
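As a rough sketch of the export step described above (the dict-based pod tree and the per-pod file layout are illustrative assumptions, not Codepod's actual on-disk format):

```python
import os
import tempfile

def export_pods(deck, out_dir):
    """Write each deck to a directory and each pod to a numbered file,
    so that an ordinary file-based tool such as git can diff them."""
    os.makedirs(out_dir, exist_ok=True)
    for i, pod in enumerate(deck.get("pods", [])):
        with open(os.path.join(out_dir, f"pod_{i}.py"), "w") as f:
            f.write(pod)
    for name, child in deck.get("decks", {}).items():
        export_pods(child, os.path.join(out_dir, name))

tree = {"pods": ["def top(): pass"],
        "decks": {"utils": {"pods": ["def helper(): pass"]}}}
out = tempfile.mkdtemp()
export_pods(tree, out)
# The exported files can now be versioned with plain git, e.g.:
#   git init && git add . && git commit -m "snapshot"
#   git diff <rev1> <rev2>   # parsed by the front-end into pod-level diffs
print(sorted(os.listdir(out)))  # ['pod_0.py', 'utils']
```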
3 INCREMENTAL RUNTIME

Codepod develops an intuitive, effective, and consistent cross-language, scope-aware incremental runtime abstraction. The runtime loads the code pods in the hierarchy. When a pod in some scope is changed, the updated definition can be applied incrementally in that scope in the active runtime, without restarting the runtime, and the new definition is visible to other pods that depend on it.
3.1 Runtime Kernel Communication

We build the Codepod kernel communication on top of the Jupyter message queue protocol. The Jupyter kernel protocols and our added protocols are shown in Fig. 7. In its simplest form, Jupyter kernel messaging supports eval and complete. We add the following messages to the protocol: EvalInNS, AddImport, and DeleteImport. EvalInNS instructs the language kernel to evaluate code in a specific namespace. AddImport makes a function “name” defined in the “from” namespace available in the “to” namespace, and DeleteImport undoes the change. The algorithm for the language kernel is given in the next section.
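To make the added messages concrete, a minimal sketch of their payloads and a kernel-side dispatch loop is shown below. The dict-based wire format is a simplification of our own; the real Jupyter envelope carries additional fields such as headers and message ids:

```python
# Hypothetical payloads for the three Codepod-added protocol messages.
messages = [
    {"type": "EvalInNS", "ns": "/A/B", "code": "x = 1 + 1"},
    {"type": "AddImport", "from": "/A/B", "to": "/A", "name": "f"},
    {"type": "DeleteImport", "from": "/A", "name": "f"},
]

def dispatch(msg, handlers):
    # Route a protocol message to the kernel-side handler for its type.
    return handlers[msg["type"]](msg)

log = []
handlers = {
    "EvalInNS": lambda m: log.append(("eval", m["ns"])),
    "AddImport": lambda m: log.append(("add", m["from"], m["to"], m["name"])),
    "DeleteImport": lambda m: log.append(("del", m["from"], m["name"])),
}
for m in messages:
    dispatch(m, handlers)
print(log)
```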
3.2 Language Kernel Algorithm

The language kernel needs to support four functions to work with Codepod: GetModule, EvalInNS, AddImport, and DeleteImport. Most languages do not natively support all of these operations, so we implement a thin wrapper, shown in Algorithm 1. GetModule returns the module instance for a specific namespace. Since we need to refer to the same module given the same name, we maintain a mapping from namespace names to module instances. In line 1, the nsmap object is created as a global variable, initialized as an empty dictionary. GetModule queries this map and, if the module already exists, returns the module instance (lines 3-4). If the module is not found, it creates a new module by calling the language's createMod API, records the module in nsmap, and returns it (lines 6-8).
Algorithm 1 Namespace-aware Runtime: Language Kernel
1:  nsmap ← EmptyDict()
2:  function GetModule(ns)
3:      if ns ∈ nsmap then
4:          return nsmap[ns]
5:      else
6:          mod ← createMod(ns)
7:          nsmap[ns] ← mod
8:          return mod
9:      end if
10: end function
11: function EvalInNS(ns, code)
12:     mod ← GetModule(ns)
13:     isExpr ← IsExpr(code)
14:     if isExpr then
15:         return eval(code, mod)
16:     else
17:         exec(code, mod)
18:         return NULL
19:     end if
20: end function
21: function AddImport(from, to, name)
22:     s ← "$name=eval($name,from)"
23:     EvalInNS(to, s)
24: end function
25: function DeleteImport(from, name)
26:     s ← "del $name"
27:     EvalInNS(from, s)
28: end function
The EvalInNS function is responsible for loading the module specified by the namespace and evaluating code with that module instance active. It first loads or creates the module instance via the GetModule function. Many languages treat expressions and statements separately: expressions return values, while statements do not. Thus the algorithm first checks whether the code is an expression or a statement. If it is an expression, the language's EVAL API is called with the code and module instance, and the expression result is returned for display in the front-end. Otherwise, the language's EXEC API is called to evaluate the code in the module instance for its side effects only, and NULL is returned to the user.

Finally, AddImport and DeleteImport work by meta-programming: a new program string is constructed to add or delete names in the target namespace. The function AddImport receives the name to import and the "from" and "to" namespaces. The goal is to import the function "name" from the "from" namespace and make it available in the "to" namespace. A string is constructed that assigns a variable named "name" to the evaluation of the name in the "from" namespace (lines 22-23). DeleteImport works by constructing a "del $name" code string and evaluating it in the target namespace (lines 26-27).

We note that the EVAL and EXEC APIs with module awareness differ across languages. Some languages provide native support, while others need reflection-level operations, such as manually passing in Python symbol maps. The code strings for AddImport and DeleteImport generally differ across languages too. We supply the core code of the four kernels we have implemented in Figures 8, 9, 10, and 11.
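The statement/expression split assumed above (the code2parts helper used in the Python kernel of Figure 8) can be sketched with Python's ast module; this is our own minimal reconstruction, not the exact Codepod implementation:

```python
import ast

def code2parts(code):
    """Split a code block into (statements, trailing_expression).

    Sketch of the helper assumed by the kernel wrapper: if the last
    top-level node is an expression, it is evaluated separately so its
    value can be returned for display; everything before it is exec'd.
    """
    tree = ast.parse(code)
    if tree.body and isinstance(tree.body[-1], ast.Expr):
        expr = ast.unparse(tree.body[-1])
        stmt = ast.unparse(ast.Module(tree.body[:-1], type_ignores=[]))
        return stmt, expr
    return code, None

scope = {}
stmt, expr = code2parts("x = 40\nx + 2")
if stmt:
    exec(stmt, scope)          # run the statements for side effects
print(eval(expr, scope) if expr else None)  # 42
```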
3.3 Pod Hierarchy Algorithm

This section formally describes the key algorithms (Algorithm 2) that implement Codepod's module system and namespace rules, and how the evaluation of pods/decks is handled in the pod hierarchy. A pod is evaluated with the RunPod function, which calls EvalInNS with the pod's code content and namespace string. A deck is evaluated with the RunTree function, which evaluates the subtree of the deck: it first evaluates all the child decks in a DFS (depth-first search) manner, then evaluates all the pods in the deck itself.
import types

d = {}

def getmod(ns):
    if ns not in d:
        d[ns] = types.ModuleType(ns)
        d[ns].__dict__["CODEPOD_GETMOD"] = getmod
    return d[ns]

def add_import(src, dst, name):
    return eval_func("""
    {name}=getmod({src}).__dict__["{name}"]
    """, dst)

def delete_import(ns, name):
    eval_func("""del {name}""", ns)

def eval_func(code, ns):
    mod = getmod(ns)
    [stmt, expr] = code2parts(code)
    if stmt:
        exec(stmt, mod.__dict__)
    if expr:
        return eval(expr, mod.__dict__)

Figure 8: Python Kernel Namespace Implementation
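A self-contained variant of this approach can be run directly. The snippet below, with simplified names of our own and without the code2parts expression handling, demonstrates how Python module objects act as isolated namespaces:

```python
import types

nsmap = {}

def getmod(ns):
    # Return (or lazily create) the module object backing a namespace.
    if ns not in nsmap:
        nsmap[ns] = types.ModuleType(ns)
    return nsmap[ns]

def eval_in_ns(ns, code):
    # Execute code with the namespace's module dict as its globals.
    exec(code, getmod(ns).__dict__)

def add_import(src, dst, name):
    # Make a definition from one namespace visible in another.
    getmod(dst).__dict__[name] = getmod(src).__dict__[name]

# Two sibling namespaces stay isolated ...
eval_in_ns("A", "def f(): return 42")
eval_in_ns("B", "g = lambda: 'hello'")
assert "f" not in getmod("B").__dict__
# ... until an explicit import makes f visible in B.
add_import("A", "B", "f")
eval_in_ns("B", "result = f()")
print(getmod("B").result)  # 42
```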
var NSMAP = NSMAP || {};
function eval(code, ns, names) {
  if (!NSMAP[ns]) {
    NSMAP[ns] = {};
  }
  for (let k of keys(NSMAP[ns])) {
    eval(`var ${k}=NSMAP["${ns}"].${k}`);
  }
  let res = eval(code);
  for (let name of names) {
    eval(`NSMAP["${ns}"].${name}=${name}`);
  }
  return res;
}

function addImport(from, to, name) {
  if (!NSMAP[from]) {
    NSMAP[from] = {};
  }
  if (!NSMAP[to]) {
    NSMAP[to] = {};
  }
  eval(`NSMAP["${to}"].${name}=NSMAP["${from}"].${name}`);
}

function deleteImport(ns, name) {
  if (!NSMAP[ns]) {
    NSMAP[ns] = {};
  }
  eval(`delete NSMAP["${ns}"].${name}`);
}

Figure 9: Javascript Kernel Namespace Implementation
function isModuleDefined(names)
    mod = :(Main)
    for name in names
        name = Symbol(name)
        if !isdefined(eval(mod), name)
            return false
        end
        mod = :($mod.$name)
    end
    return true
end

function ensureModule(namespace)
    names = split(namespace, "/", keepempty=false)
    mod = :(Main)
    for name in names
        name = Symbol(name)
        if !isdefined(eval(mod), name)
            include_string(eval(mod), "module $name end")
        end
        mod = :($mod.$name)
    end
    return mod, eval(mod)
end

function eval(code, ns)
    _, mod = ensureModule(ns)
    include_string(mod, code)
end

function addImport(from, to, name)
    from_name, _ = ensureModule(from)
    _, to_mod = ensureModule(to)
    code = """
    using $from_name: $name as CP$name
    $name=CP$name
    $name
    """
    include_string(to_mod, code)
end

function deleteImport(ns, name)
    _, mod = ensureModule(ns)
    include_string(mod, "$name=nothing")
end

Figure 10: Julia Kernel Namespace Implementation
The reason for evaluating child decks before child pods is that the child decks might define public functions exposed to this deck; those definitions must be in place before the pods execute. The reason for using DFS is to ensure the lowest-level pods are evaluated first before moving up the hierarchy. Child pods are evaluated sequentially. When running the child pods, the algorithm examines
(compile-enforce-module-constants #f)

(define (ns->submod ns)
  (let ([names (string-split ns "/")])
    (when (not (empty? names))
      (let ([one (string->symbol (first names))]
            [two (map string->symbol (rest names))])
        `(submod ',one ,@two)))))

(define (ns->enter ns)
  (let ([mod (ns->submod ns)])
    (if (void? mod)
        '(void)
        `(dynamic-enter! ',mod))))

(define (ns->ensure-module ns)
  (let loop ([names (string-split ns "/")])
    (if (empty? names)
        '(void)
        `(module ,(string->symbol (first names))
           racket/base
           ,(loop (rest names))))))

(define (add-import from to name)
  (let ([name (string->symbol name)])
    (eval (ns->enter to))
    (eval
     `(define ,name
        (dynamic-require/expose
         ',(ns->submod from)
         ',name)))))

(define (delete-import ns name)
  (eval (ns->enter ns))
  (namespace-undefine-variable!
   (string->symbol name)))

(define (string->sexp s)
  (call-with-input-string
   s
   (lambda (in) (read in))))

(define (codepod-eval code ns)
  (eval (ns->ensure-module ns))
  (eval (ns->enter ns))
  (begin0
    (eval (string->sexp (~a "(begin " code ")")))
    (enter! #f)))

Figure 11: Racket Kernel Namespace Implementation
Algorithm 2 Namespace-aware Runtime: Pod Hierarchy
1:  function RunPod(pod)
2:      EvalInNS(pod.ns, pod.code)
3:  end function
4:  function RunTest(pod)
5:      for name ← pod.parent.names do
6:          AddImport(pod.parent.ns, pod.ns, name)
7:      end for
8:      EvalInNS(pod.ns, pod.code)
9:  end function
10: function RunUtility(pod)
11:     EvalInNS(pod.ns, pod.code)
12:     function dfs(parent)
13:         for child ← parent.child_decks do
14:             for name ← pod.names do
15:                 AddImport(pod.ns, child.ns, name)
16:             end for
17:             dfs(child)
18:         end for
19:     end function
20:     dfs(pod.parent)
21: end function
22: function RunTree(root)
23:     function dfs(parent)
24:         for child ← parent.child_decks do
25:             RunTree(child)
26:         end for
27:     end function
28:     dfs(root)
29:     for pod ← root.child_pods do
30:         if pod.type is "Pod" then
31:             RunPod(pod)
32:         else if pod.type is "Test" then
33:             RunTest(pod)
34:         else if pod.type is "Utility" then
35:             RunUtility(pod)
36:         end if
37:     end for
38: end function
the type of each pod and calls the corresponding function RunPod, RunUtility, or RunTest accordingly.

        Hierarchical Layout | Communication Protocol | Kernel Runtime | CodeServer API | Total
LOC     4.1k                | 1.8k                   | 1k             | 1k             | 7.9k
Table 1: Codepod Implementation Statistics
The RunUtility function first evaluates the pods in their namespace. Then the public names are exported to the parent's subtree by traversing the subtree in a DFS manner and calling AddImport for each namespace encountered during traversal. In this way, the names of the utility functions are available in all the decks of the parent's sub-tree. The RunTest function loops through the parent deck's public names and runs AddImport for each of them from the parent's namespace into the testing pod's namespace, and then evaluates the test pods in a namespace where the parent's function definitions are available.
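The resulting evaluation order can be illustrated with a small sketch. The dict-based deck structure is our own illustrative assumption, with deck names borrowed from the Fig. 1 example:

```python
# Minimal sketch of the evaluation order: child decks are visited
# depth-first before the pods of a deck, so lower-level definitions are
# in place before higher-level pods run.
def run_tree(deck, order):
    for child in deck.get("decks", []):
        run_tree(child, order)          # DFS: lowest-level decks first
    for pod in deck.get("pods", []):
        order.append(pod)               # then this deck's own pods

root = {"pods": ["re"],
        "decks": [{"pods": ["sre_compile"],
                   "decks": [{"pods": ["sre_parse"]}]}]}
order = []
run_tree(root, order)
print(order)  # ['sre_parse', 'sre_compile', 're']
```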
3.4 Fallback Execution

Codepod requires the programming language to support interpretation in order to run and evaluate code interactively. With interactive development becoming popular, interpreters have been implemented even for compiled languages; for example, C++ has a highly regarded interpreter, Cling. Another requirement is that the language supports namespaces and provides a way to evaluate code blocks within a namespace.

If namespace-aware interactive development is not fully supported due to limitations of the language interpreter, Codepod provides a fallback option: exporting the code to files and invoking the language interpreter/compiler to run the program as a whole. The downside of this approach is that variables are not persisted in memory across runs, because each invocation starts a new process.
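A minimal sketch of such a fallback runner is shown below. The helper run_fallback is hypothetical, and for brevity it runs a single code string rather than an exported pod tree:

```python
import os
import subprocess
import sys
import tempfile

def run_fallback(code, args=()):
    """Fallback execution: write the code to a file and run it as a
    whole in a fresh interpreter process. State does not persist across
    invocations, since each run starts a new process."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        out = subprocess.run([sys.executable, path, *args],
                             capture_output=True, text=True, check=True)
        return out.stdout
    finally:
        os.unlink(path)

print(run_fallback("print(6 * 7)"))
```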
4 CASE STUDIES

4.1 Implementation

We have implemented a fully working Codepod as a Web application (front-end in React and back-end server in Node.js) and currently provide full namespace-aware runtime support for four language kernels: Python, JavaScript, Julia, and Scheme/Racket. New kernels can be easily developed on top of existing Jupyter notebook kernels. Codepod is open-sourced at https://example.com.

The Codepod implementation contains 4 major parts; the LOC statistics are shown in Table 1. The hierarchical layout implements the front-end hierarchical pods and tools. The communication protocol implements the RunTree/RunPod logic and how the front-end communicates with the backend API server and the language runtime kernels. The kernel runtime implements the kernels and the WebSocket protocol message handling. Finally, the CodeServer API implements the API for retrieving and updating the pod hierarchy from the front-end and talking to the database.
4.2 Python and Julia Project Statistics

In this section, we study several highly regarded open-source projects and visually show how the Codepod representation can potentially help in developing code projects. This case study examines how functions are distributed across files and directories and what the calling relations among the functions are. This helps us evaluate how the Codepod model can help with real-world open-source software projects.

The projects are obtained from GitHub's top-rated Python and Julia projects; information about the projects is shown in Table 2. We analyze the projects with the help of the tree-sitter [1] parser framework. We count only the source directory of each project, ignoring testing code and documentation. Our study uses functions as the code-block granularity. Python projects might contain classes, and we treat each method as a function.

In Table 2, we can see that software projects often contain a large number of functions, and those functions are distributed over a large number of files, possibly in a deep hierarchy of tens of directories. For example, the you-get project contains 444 functions distributed over 133 files in 8 directories. The LightGraphs.jl project contains 242 functions, distributed over 110 files within 27 directories. It can be quite challenging to grasp the hierarchy of the functions by browsing through the files and directories.

We also count the number of internal and external functions of each file. A file's internal functions are those called only by other functions in the same file. These internal functions are helper functions used to implement the external functions, which act as the public interface of the file and are called from other files. In Table 2, we see that approximately half of the functions are internal and serve as building blocks of other functions. However, this dependency information is not visible in a file, because functions are arranged linearly within a file and no clear hierarchy can be effectively maintained. Thus, potentially, Codepod can help apply hierarchical relations to the functions inside each file.
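The function-counting step can be sketched as follows. The study itself uses tree-sitter across languages; this illustration of ours uses Python's built-in ast module for the Python case only:

```python
import ast

def count_functions(source):
    """Count function and method definitions in a source file, the
    granularity used in the project statistics (each class method is
    treated as a function)."""
    tree = ast.parse(source)
    return sum(isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
               for node in ast.walk(tree))

src = """
def parse(s): ...
class Pattern:
    def match(self, s): ...
    def search(self, s): ...
"""
print(count_functions(src))  # 3
```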
1360
+ Figure 12: Statistics for top open source Python Projects
1361
+ To understand the distributions of functions in each file and call-
1362
+ ing relationships, we plot the in-degree,out-degree,LOC,#functions-
1363
+ per-file of the Python and Julia Projects in Fig. 12 and Fig. 13, re-
1364
+ spectively. The in/out-degrees of a function is defined by how many
1365
+ calls to the functions and how many calls this function makes to
1366
+ other functions. Both in/out-degrees are computed based on the call
1367
+ Figure 13: Statistics for top open source Julia Projects
1368
+ graphs over the functions defined in the project; thus, the statistics
1369
+ do not include language or library API calls.
1370
+ From the plot Fig. 12.(a) and Fig. 13.(a), we see that most func-
1371
+ tions are being called no more than two times. This shows that
1372
+ most functions are “local”, and only used to implement higher-level
1373
+ functions. Also, in Fig. 12.(b) and Fig. 13.(b), we observe that the
1374
+ out-degree is more than in-degree, and most functions have less
1375
+ than five function calls to other functions. This is also consistent
1376
+ with Codepod’s tree-based model: a deck in a tree node might have
1377
+ multiple sub-decks. A pod defined in a deck might use the functions
1378
+ defined in the sub-decks to implement higher-level functionality.
1379
+ From the function per file plot Fig. 12.(d) and Fig. 13.(d), we can
1380
+ see that the distribution of functions into files are not even: a large
1381
+ number of files contain only 1 or 2 functions, while there can also be
1382
+ a few very large file containing tens of functions. This data shows
1383
+ that in order to maintain a cleaner hierarchy, developers might use
1384
+ a separate file for each function, resulting in too many small files
1385
+ and deep directories, which are relatively hard to maintain, edit
1386
+ and reference using file browsers and file editors. Also, within the
1387
+ large files with tens of functions, the hierarchy of those functions
1388
+ cannot be cleanly maintained within a file, and Codepod could help
1389
+ build a finer-granular hierarchy for the functions.
1390
+ 4.3
1391
+ Jupyter Project Statistics
1392
+ In this study, we investigate how Jupyter notebooks are used in real-
1393
+ world open projects. This study aims to see how code is distributed
1394
+ among Jupyter notebooks and text files and how the Jupyter note-
1395
+ books interact with the functions defined in text files regarding
1396
+ calling relationships. We query GitHub APIs to find top Python
1397
+ projects whose primary language is Python and whose secondary
1398
+ language is Jupyter Notebook. The projects and statistics are shown
1399
+ in Table 3.
1400
+ In Table 3, we can see that there are typically more text files than
+ Jupyter notebooks. The same holds for the total LOC and the number of
+ functions in Jupyter notebooks vs. text files. In fact, the LOC in
+ Jupyter notebooks constitutes only 3.7% of the codebase for these
+ projects, which means that the majority of the code is implemented in
+ the text files.
+ To understand the calling relationships between the Jupyter notebooks
+ and text files, we calculate the percentage of internal functions
1408
+ [Figure 12: statistics for the Python projects (you-get, cookiecutter, locust, requests):
+ (a) function indegree, (b) function outdegree, (c) avg LOC per function, (d) functions per file.]
+ [Figure 13: statistics for the Julia projects (JuliaDB.jl, HTTP.jl, Flux.jl, LightGraphs.jl):
+ (a) function indegree, (b) function outdegree, (c) avg LOC per function, (d) functions per file.]
+ Codepod: A Namespace-Aware, Hierarchical Jupyter for Interactive Development at Scale
+ Conference’23, Jan, 2023, Ames, IA, USA
1485
+ | project                    | #stars | #dirs | #files | #loc  | #funcs | #internal funcs | description                                         |
+ |----------------------------|--------|-------|--------|-------|--------|-----------------|-----------------------------------------------------|
+ | soimort/you-get            | 41.6k  | 8     | 133    | 14707 | 444    | 273             | CMD-line utility to download media from Web         |
+ | cookiecutter/cookiecutter  | 15.2k  | 0     | 18     | 2139  | 51     | 30              | CMD-line utility to create projects from templates  |
+ | locustio/locust            | 17k    | 10    | 59     | 18671 | 529    | 144             | performance testing tool                            |
+ | psf/requests               | 45.9k  | 0     | 18     | 5183  | 135    | 73              | HTTP library                                        |
+ | JuliaData/JuliaDB.jl       | 706    | 0     | 20     | 3113  | 106    | 82              | Parallel analytical database                        |
+ | JuliaWeb/HTTP.jl           | 439    | 0     | 38     | 7513  | 65     | 36              | HTTP client and server                              |
+ | FluxML/Flux.jl             | 3.2k   | 5     | 33     | 6408  | 84     | 41              | Machine Learning Framework                          |
+ | JuliaGraphs/LightGraphs.jl | 675    | 27    | 110    | 16963 | 242    | 101             | Network and graph analysis                          |
+ Table 2: Function Statistics in Open Source Projects (Python and Julia)
1557
+ | project              | #stars | #file (ipynb/files) | #loc (ipynb/files) | #call fs-to-nb | #call nb-to-fs | #func (ipynb/files) | %internal (ipynb/files) | description                              |
+ |----------------------|--------|---------------------|--------------------|----------------|----------------|---------------------|-------------------------|------------------------------------------|
+ | blei-lab/edward      | 4.6k   | 14/42               | 1340/5449          | 0              | 11             | 10/102              | 100%/93%                | A probabilistic programming language     |
+ | tqdm/tqdm            | 19.3k  | 2/30                | 284/2257           | 0              | 9              | 0/83                | NA/80%                  | A Fast, Extensible Progress Bar          |
+ | google/jax           | 14.1k  | 3/196               | 104/42879          | 0              | 8              | 0/2418              | NA/38%                  | Autograd and Optimizing Compiler for ML  |
+ | google-research/bert | 29k    | 1/13                | 322/4547           | 0              | 8              | 7/92                | 100%/88%                | State-of-the-art NLP language model      |
+ | quantopian/zipline   | 14.4k  | 2/183               | 115/30087          | 0              | 3              | 3/851               | 100%/61%                | Algorithmic Trading Library              |
+ Table 3: Jupyter Notebook statistics in Open Source Projects
1617
+ for files and Jupyter notebooks. An internal function is defined as
+ above, i.e., a function that is only called from within the same file
+ or Jupyter notebook. From the results, two projects contain no
+ function definitions in their Jupyter notebooks, and the other three
+ projects’ notebook functions are 100% internal. In contrast, the text
+ files contain 38% to 93% internal functions. We also count the calls
+ from Jupyter notebooks to text files and vice versa, and find that
+ there are no calls from text files to notebooks, only calls from
+ notebooks to text files. This means that Jupyter notebooks are not
+ used to develop functions: the projects are implemented in text
+ files, and the notebooks only call those files’ APIs, most likely for
+ presentation and tutorial purposes.
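The "%internal" columns can be computed once the per-file definition and call sets are extracted. The sketch below is our own simplification (the data structures are illustrative, not the paper's), but the definition of an internal function matches the text: a function never called from a file other than the one defining it.

```python
def internal_percentage(defs, calls):
    """defs:  {filename: set of function names defined in that file}
    calls: {filename: set of function names called in that file}
    Returns the fraction of functions never called from a file other
    than the one defining them."""
    internal = total = 0
    for fname, funcs in defs.items():
        for func in funcs:
            total += 1
            called_elsewhere = any(func in called
                                   for other, called in calls.items()
                                   if other != fname)
            if not called_elsewhere:
                internal += 1
    return internal / total if total else float("nan")

# Toy example: 'load' is defined in util.py but called from nb.ipynb,
# so only 'save' and 'plot' are internal (2 of 3).
defs = {"util.py": {"load", "save"}, "nb.ipynb": {"plot"}}
calls = {"util.py": {"load"}, "nb.ipynb": {"load", "plot"}}
pct = internal_percentage(defs, calls)
```

Name collisions across files would need qualified names in a real analysis; this sketch assumes function names are unique.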
+ 5 RELATED WORK
+ Running and debugging code is a major activity in software development.
+ There have been many tools to support this process. In the simplest
+ form, developers write code in files using editors such as Vim and
+ Emacs [13], and compile and run the files on the command line. The
+ problem with this approach is that it is non-interactive, and the
+ program always needs to restart from the beginning. An interactive
+ alternative is to launch a Read-Eval-Print Loop (REPL) [7, 14] and
+ type, load, and evaluate code expressions in it. Although a REPL is
+ interactive, the code being evaluated is not editable, and users have
+ to type code into the REPL. It is also common to open a file editor
+ and send a code region to the REPL for evaluation. Integrated
+ Development Environments (IDEs) such as VSCode integrate file editors
+ with a file browser, command line, plugins, and debuggers. There also
+ exist editor plugins to navigate between the functions of a project.
+ However, those plugins still do not give a within-file hierarchy to
+ the code, and they depend on the programming language to support a
+ within-file module system, which only a handful of languages support
+ to various degrees. File-based approaches force developers to maintain
+ the correspondence between code and files, which is tricky due to the
+ significantly different granularity of code blocks and files. This
+ imbalance in granularity poses a dilemma: putting too many code blocks
+ into one file makes the hierarchy hard to maintain, while putting too
+ few into each file creates many small files and deep directories that
+ are also hard to work with.
1658
+ Figure 14: Interactive Development with Jupyter. Image from [11].
1659
+ A more recent paradigm is the web-based interactive notebook, e.g.,
+ Jupyter Notebook [6]. The interface of Jupyter is shown in Figure 14.
+ A notebook consists of code cells. Each cell can be run in arbitrary
+ order, and the results are displayed under the cell. The code output
+ can be visualized, e.g., as a plotted figure. Thus Jupyter notebooks
+ support literate programming that combines code, text,
1665
+ [Figure 14 shows a Jupyter notebook that defines fib(x), evaluates fib(10) = 55, and
+ plots the first 15 Fibonacci numbers; markdown cells, code cells, execution counters,
+ and outputs are annotated in the screenshot.]
+ Hebi Li, Forrest Sheng Bao, Qi Xiao, Jin Tian
+ and execution results with rich media visualizations. However, Jupyter
+ notebook cells are linear, and all code blocks live in the global
+ namespace. The Jupyter notebook lacks a module system, which is
+ crucial for complex software; this makes it hard to develop
+ large-scale software systems. Thus the Jupyter notebook is typically
+ used only for demos and visualization, while the real code of a
+ project is still developed in text files with text editors and an
+ external runtime. In order to define hierarchical code, users have to
+ write code into text files and import the modules from those files
+ into the notebook. Such a Jupyter-file hybrid model has several
+ disadvantages. Changes in files are not in sync with the Jupyter
+ runtime: when a function definition in a file is changed, the Jupyter
+ runtime must be restarted, and the file must be reloaded for the
+ change to take effect.
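The reload problem can be worked around, though not fully solved, with `importlib`. A minimal sketch (our own illustration; `mymod` is a hypothetical module name, not part of Jupyter or Codepod):

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True       # always recompile from source

# Write a throwaway module to disk so the example is self-contained.
moddir = tempfile.mkdtemp()
sys.path.insert(0, moddir)
modpath = os.path.join(moddir, "mymod.py")

with open(modpath, "w") as f:
    f.write("def greet():\n    return 'v1'\n")

importlib.invalidate_caches()
import mymod                          # the running session now holds v1

# Simulate editing the file in an external editor while the kernel runs:
with open(modpath, "w") as f:
    f.write("def greet():\n    return 'v2'\n")

# Until the module is explicitly re-executed, mymod.greet() still returns
# the stale 'v1'; importlib.reload re-runs the module object in place.
importlib.reload(mymod)
```

Even so, names imported with `from mymod import greet` keep pointing at the old function object after a reload, which is one reason notebook users often end up restarting the kernel anyway.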
+ Another programming paradigm is visual programming, e.g., LabVIEW [2],
+ Google’s Blockly [9], and Microsoft’s MakeCode [5]. The visual
+ programming paradigm has distinct advantages: it is visually clean,
+ easy to learn, and makes syntax errors impossible. However, these
+ visual programming systems use block-style elements down to the level
+ of individual expressions (e.g., a+b), which can be verbose. Also,
+ visual programming blocks live in a global namespace, and no module
+ system is available for developing large-scale software.
+ There also exist code analyzers that generate visual presentations of
+ code, such as the Unified Modeling Language (UML) [3, 8]. Tools such
+ as call graph visualizers [4] have been developed to help understand
+ a codebase. However, those visual presentations are not editable,
+ making them less useful during development.
1731
+ 6 CONCLUSION
+ In this paper, we propose Codepod, a namespace-aware, hierarchical
+ interactive development environment. Codepod uses a novel hierarchical
+ code-block model to represent code and abstracts away files. We
+ propose namespace rules that make it easy to organize the pods and
+ provide a consistent module system across languages. Codepod provides
+ an incremental evaluation runtime that helps interactively develop
+ large-scale software projects.
+ We hope Codepod can provide a novel way to drive the software
+ development process and inspire further research. In the future, we
+ will push Codepod forward with contributions from the community and
+ perform user studies comparing Codepod with VSCode and Jupyter. It
+ would be interesting to integrate program analysis tools into the
+ Codepod model. We are also interested in integrating other programming
+ paradigms into the Codepod framework; for example, it would be helpful
+ to integrate graphical programming as a “graphical pod” and mix
+ graphical programs with plain-text code.
1749
+ REFERENCES
+ [1] Tree-sitter: An incremental parsing system for programming tools. https://tree-sitter.github.io/tree-sitter/. Accessed: 2021-07-30.
+ [2] Rick Bitter, Taqi Mohiuddin, and Matt Nawrocki. LabVIEW: Advanced programming techniques. CRC Press, 2006.
+ [3] Grady Booch, Ivar Jacobson, James Rumbaugh, et al. The unified modeling language. Unix Review, 14(13):5, 1996.
+ [4] David Callahan, Alan Carle, Mary W. Hall, and Ken Kennedy. Constructing the procedure call multigraph. IEEE Transactions on Software Engineering, 16(4):483–487, 1990.
+ [5] James Devine, Joe Finney, Peli de Halleux, Michał Moskal, Thomas Ball, and Steve Hodges. MakeCode and CODAL: intuitive and efficient embedded systems programming for education. ACM SIGPLAN Notices, 53(6):19–30, 2018.
+ [6] Thomas Kluyver, Benjamin Ragan-Kelley, Fernando Pérez, Brian Granger, Matthias Bussonnier, Jonathan Frederic, Kyle Kelley, Jessica Hamrick, Jason Grout, Sylvain Corlay, Paul Ivanov, Damián Avila, Safia Abdalla, and Carol Willing. Jupyter notebooks – a publishing format for reproducible computational workflows. In F. Loizides and B. Schmidt, editors, Positioning and Power in Academic Publishing: Players, Agents and Agendas, pages 87–90. IOS Press, 2016.
+ [7] John McCarthy, Michael I Levin, Paul W Abrahams, Daniel J Edwards, and Timothy P Hart. LISP 1.5 programmer’s manual. MIT Press, 1965.
+ [8] Nenad Medvidovic, David S Rosenblum, David F Redmiles, and Jason E Robbins. Modeling software architectures in the unified modeling language. ACM Transactions on Software Engineering and Methodology (TOSEM), 11(1):2–57, 2002.
+ [9] Erik Pasternak, Rachel Fenichel, and Andrew N Marshall. Tips for creating a block language with Blockly. In 2017 IEEE Blocks and Beyond Workshop (B&B), pages 21–24. IEEE, 2017.
+ [10] Jeffrey M Perkel. Why Jupyter is data scientists’ computational notebook of choice. Nature, 563(7732):145–147, 2018.
+ [11] João Felipe Pimentel, Leonardo Murta, Vanessa Braganholo, and Juliana Freire. A large-scale study about quality and reproducibility of Jupyter notebooks. In 2019 IEEE/ACM 16th International Conference on Mining Software Repositories (MSR), pages 507–517. IEEE, 2019.
+ [12] Bernadette M Randles, Irene V Pasquetto, Milena S Golshan, and Christine L Borgman. Using the Jupyter notebook as a tool for open science: An empirical study. In 2017 ACM/IEEE Joint Conference on Digital Libraries (JCDL), pages 1–2. IEEE, 2017.
+ [13] Richard M Stallman. EMACS: the extensible, customizable self-documenting display editor. In Proceedings of the ACM SIGPLAN SIGOA Symposium on Text Manipulation, pages 147–156, 1981.
+ [14] L Thomas van Binsbergen, Mauricio Verano Merino, Pierre Jeanjean, Tijs van der Storm, Benoit Combemale, and Olivier Barais. A principled approach to REPL interpreters. In Proceedings of the 2020 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software, pages 84–100, 2020.
+ [15] Jiawei Wang, Li Li, and Andreas Zeller. Better code, better sharing: on the need of analyzing Jupyter notebooks. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering: New Ideas and Emerging Results, pages 53–56, 2020.
+
9tE0T4oBgHgl3EQffwDm/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
A9AyT4oBgHgl3EQf3_rL/content/2301.00780v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7ec27fb84d17c0584e2d126260964d04cdfaca1cde3eb197900d8ce83b5d4ba4
3
+ size 1591849
A9AyT4oBgHgl3EQf3_rL/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a880306d2e50668b9b37347f0b3ad6a6c952af60401f32dc2b878222a6bda0d3
3
+ size 318267
BdAzT4oBgHgl3EQfh_0J/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3070f9c974d729f946c33467ab3b6c18a0c64c8c931ef22d6224bbd4e53e8029
3
+ size 6029357
CNAzT4oBgHgl3EQfTvwq/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2ccc13d39768ab1bc38400b44c1a9f44a9bc5660fa56e65816a8b8b346fa008b
3
+ size 3801133
CdE0T4oBgHgl3EQfyQI7/content/tmp_files/2301.02656v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
CdE0T4oBgHgl3EQfyQI7/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
ENAyT4oBgHgl3EQfSPf9/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6b1ef832b36f68231a7efd8d022f02a037a504ef60840dec7dc1370ccd7dbb27
3
+ size 1441837
FdE1T4oBgHgl3EQfqwVD/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5152d62da04a6f9abc7c5de01493f9a07078ce7683db3a07750436c9ebc0bc45
3
+ size 200935
FtE1T4oBgHgl3EQfEwPV/content/tmp_files/2301.02895v1.pdf.txt ADDED
@@ -0,0 +1,2479 @@




1
+ arXiv:2301.02895v1 [cond-mat.stat-mech] 7 Jan 2023
2
+ RENEWAL EQUATIONS FOR SINGLE-PARTICLE DIFFUSION IN
3
+ MULTI-LAYERED MEDIA
4
+ PAUL C. BRESSLOFF∗
5
+ Abstract. Diffusion in heterogeneous media partitioned by semi-permeable interfaces has a wide
6
+ range of applications in the physical and life sciences, ranging from thermal conduction in composite
7
+ media, gas permeation in soils, diffusion magnetic resonance imaging (dMRI), drug delivery, and
8
+ intercellular gap junctions. Many of these systems involve three-dimensional (3D) diffusion in an
9
+ array of parallel planes with homogeneity in the lateral directions, so that they can be reduced to
10
+ effective one-dimensional (1D) models.
11
+ In this paper we develop a probabilistic model of single-
12
+ particle diffusion in 1D multi-layered media by constructing a multi-layered version of so-called
13
+ snapping out Brownian motion (BM). The latter sews together successive rounds of reflected BM,
14
+ each of which is restricted to a single layer. Each round of reflected BM is killed when the local time
15
+ at one end of the layer exceeds an independent, exponentially distributed random variable. (The
16
+ local time specifies the amount of time a reflected Brownian particle spends in a neighborhood of
17
+ a boundary.) The particle then immediately resumes reflected BM in the same layer or the layer
18
+ on the other side of the boundary with equal probability, and the process is iterated. We proceed
19
+ by constructing a last renewal equation for multi-layered snapping out BM that relates the full
20
+ probability density to the probability densities of partially reflected BM in each layer. We then show
21
+ how transfer matrices can be used to solve the Laplace transformed renewal equation, and prove that
22
+ the renewal equation and corresponding multi-layer diffusion equation are equivalent. We illustrate
23
+ the theory by analyzing the first passage time (FPT) problem for escape at the exterior boundaries
24
+ of the domain. Finally, we use the renewal approach to incorporate a generalization of snapping out
25
+ BM based on the encounter-based method for surface absorption; each round of reflected BM is now
26
+ killed according to a non-exponential distribution for each local time threshold. This is achieved
27
+ by considering a corresponding first renewal equation that relates the full probability density to
28
+ the FPT densities for killing each round of reflected BM. We show that for certain configurations,
29
+ non-exponential killing leads to an effective time-dependent permeability that is normalizable but
30
+ heavy-tailed.
31
+ 1. Introduction. Diffusion in heterogeneous media partitioned by
+ semi-permeable barriers has a wide range of applications in natural and artificial systems.
33
+ Examples include multilayer electrodes and semi-conductors [27, 18, 24, 34], thermal
34
+ conduction in composite media [3, 33, 17, 44], waste disposal and gas permeation
35
+ in soils [55, 43, 52], diffusion magnetic resonance imaging (dMRI) [53, 13, 16], drug
36
+ delivery [49, 54], and intercellular gap junctions [20, 15, 26]. Many of these systems
37
+ involve three-dimensional (3D) diffusion in an array of parallel planes with homo-
38
+ geneity in the lateral directions, which means that they can be reduced to effective
39
+ one-dimensional (1D) models. Consequently, there have been a variety of analytical
40
+ and numerical studies of 1D multilayer diffusion that incorporate methods such as
41
+ spectral decompositions, Green’s functions, and Laplace transforms [11, 51, 50, 19, 36,
42
+ 37, 29, 35, 30, 14, 6, 46].
43
+ Almost all studies of multilayer diffusion have focused on macroscopic models
44
+ in which the relevant field is the concentration of diffusing particles. Many of the
45
+ analytical challenges concern the derivation of time-dependent solutions that charac-
46
+ terize short-time transients or threshold crossing events. This requires either carrying
47
+ out a non-trivial spectral decomposition of the solution and/or inverting a highly
48
+ complicated Laplace transform. In general, it is necessary to develop some form of
49
+ approximation scheme or to supplement a semi-analytical solution with numerical
50
+ computations. As far as we are aware, single-particle diffusion or Brownian motion
51
+ ∗Department of Mathematics, University of Utah, Salt Lake City, UT 84112 USA
+
66
+ (BM) in multilayer media has not been investigated to anything like the same ex-
67
+ tent, with the possible exception of spatially discrete random walks [40, 47, 39, 2].
68
+ On the other hand, a rigorous probabilistic formulation of 1D diffusion through a
69
+ single semi-permeable barrier has recently been introduced by Lejay [41] in terms of
70
+ so-called snapping out BM, see also Refs. [1, 42, 12]. Snapping out BM sews together
71
+ successive rounds of reflected BM that are restricted to either x < 0 or x > 0 with a
72
+ semi-permeable barrier at x = 0. Each round of reflected BM is killed when its local
73
+ time at x = 0± exceeds an exponentially distributed random variable with constant
74
+ rate κ0. (Roughly speaking, the local time at x = 0+ (x = 0−) specifies the amount
75
+ of time a positively (negatively) reflected Brownian particle spends in a neighborhood
76
+ of the right-hand (left-hand) side of the barrier [38].) It then immediately resumes
77
+ either negatively or positively reflected BM with equal probability, and so on.
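This alternation of reflected rounds separated by barrier crossings is easy to caricature numerically. The sketch below is our own illustration, not the paper's exact local-time construction: it replaces the exponential local-time threshold by the standard lattice rule in which a step that would cross the barrier succeeds with a fixed probability (of order κ0Δx/D in a diffusion limit) and is reflected otherwise.

```python
import random

def snapping_walk(steps, p_cross, rng):
    """Crude lattice caricature of snapping out BM: an unbiased walk
    with a barrier between sites -1 and 0.  A step that would cross the
    barrier succeeds with probability p_cross and is reflected
    otherwise.  Returns the number of barrier crossings."""
    x, crossings = 5, 0
    for _ in range(steps):
        move = rng.choice((-1, 1))
        if (x == 0 and move == -1) or (x == -1 and move == 1):
            if rng.random() < p_cross:
                x += move
                crossings += 1
            # otherwise the step is reflected and the particle stays put
        else:
            x += move
    return crossings

blocked = snapping_walk(20000, 0.0, random.Random(1))  # impermeable barrier
leaky = snapping_walk(20000, 0.5, random.Random(1))    # semi-permeable barrier
```

With `p_cross = 0` the barrier is impermeable and no crossings ever occur; intermediate values interpolate between reflecting and transparent behavior.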
78
+ We recently reformulated 1D snapping out BM in terms of a renewal equation
79
+ that relates the full probability density to the probability densities of partially re-
80
+ flected BMs on either side of the barrier [9]. (The theory of semigroups and resolvent
81
+ operators was used in Ref. [41] to derive a corresponding backward equation.) We
82
+ established the equivalence of the renewal equation with the corresponding single-
83
+ particle diffusion equation, and showed how to solve the former using a combination
84
+ of Laplace transforms and Green’s function methods. We subsequently extended the
85
+ theory to bounded domains and higher spatial dimensions [10]. Formulating interfa-
86
+ cial diffusion in terms of snapping out BM has at least two useful features. First, it
87
+ provides a more general probabilistic framework for modeling semi-permeable mem-
88
+ branes. For example, each round of partially reflected BM on either side of an in-
89
+ terface could be killed according to a non-Markovian process, along analogous lines
90
+ to encounter-based models of surface absorption [31, 32, 7, 8].
91
+ That is, partially
92
+ reflected BM is terminated when its local time at the interface exceeds a random
93
+ threshold that is not exponentially distributed. As we have shown elsewhere, this
94
+ leads to a time-dependent permeability that tends to be heavy-tailed [9, 10]. Sec-
95
+ ond, numerical simulations of snapping out BM generate sample paths that can be
96
+ used to obtain approximate solutions of boundary value problems in the presence of
97
+ semi-permeable interfaces [41].1
98
+ In this paper we develop a multi-layered version of snapping out BM and its as-
99
+ sociated renewal equations for both exponential and non-Markovian killing processes.
100
+ In particular, we consider a single particle diffusing in a finite interval [0, L] that is
101
+ partitioned into m subintervals (or layers) (aj, aj+1), j = 0, . . . , m − 1, with a0 = 0,
102
+ am = L, see Fig. 1.1. The interior interfaces at x = a1, . . . , am−1 are taken to be
103
+ semi-permeable barriers with constant permeabilities κj, j = 1, . . . , m − 1, whereas
104
+ partially reflecting or Robin boundary conditions are imposed at the ends x = 0, L
105
+ with absorption rates 2κ0 and 2κm, respectively.
106
+ (The factors of 2 are convenient
107
+ when formulating snapping out BM.) The diffusion coefficient is also heterogeneous
108
+ with D(x) = Dj for all x ∈ (aj−1, aj). We begin in section 2 by writing down the
109
+ multi-layered diffusion equation, which we formally solve using Laplace transforms
110
+ and an iterative method based on transfer matrices, following along analogous lines
111
+ to Refs. [51, 46]. In section 3, we construct the multi-layered version of snapping out
112
+ BM and write down the corresponding last renewal equation, which relates the full
113
+ 1An efficient computational scheme for finding solutions to the single-particle diffusion equation
114
+ in the presence of one or more semi-permeable interfaces has also been developed in terms of under-
115
+ damped Langevin equations [21, 22]. However, this is distinct from snapping out BM, which is an
116
+ exact single-particle realization of diffusion through an interface in the overdamped limit.
117
+ Fig. 1.1. A 1D layered medium consisting of m layers x ∈ (aj, aj+1), j = 0, 1, . . . m − 1, with
130
+ a0 = 0 and am = L. The interior interfaces at x = aj, j = 1, . . . m − 1 act as semi-permeable
131
+ membranes, whereas partially absorbing boundary conditions are imposed on the exterior boundaries
132
+ at x = 0, L.
133
+ probability density to the probability densities of partially reflected BM in each layer.
134
+ We then show how transfer matrices can be used to solve the Laplace transformed
135
+ renewal equation, although the details differ significantly from the iterative solution
136
+ of the diffusion equation. We also prove that the renewal equation and diffusion equa-
137
+ tion are equivalent. This exploits a subtle feature of partially reflected BM, namely,
138
+ the Robin boundary condition is modified when the initial position of the particle is
139
+ on the boundary itself. In section 4 we illustrate the theory by analyzing the first
140
+ passage time (FPT) problem for the particle to escape from one of the ends of the
141
+ domain. The FPT statistics can be analyzed in terms of the small-s behavior of the
142
+ Laplace transformed probability fluxes at the ends x = 0, L, where s is the Laplace
143
+ variable. This means that it is sufficient to solve the multi-layer renewal equation in
144
+ Laplace space, without having to invert the Laplace transformed solution using some
145
+ form of spectral decomposition, for example. Finally, in section 5, we use the renewal
146
+ approach to incorporate a generalization of snapping out BM based on the encounter-
147
+ based method for surface absorption. This is achieved by considering a corresponding
148
+ first renewal equation that relates the full probability density to the FPT densities
149
+ for killing each round of reflected BM.
150
2. Single-particle diffusion equation in a 1D layered medium. Before developing the more general renewal approach for single-particle diffusion in the multi-layer domain of Fig. 1.1, it is useful to briefly consider the classical formulation in terms of the diffusion equation with constant permeabilities. Let ρ_j(x, t) denote the probability density of the particle position in the j-th layer. For concreteness, we assume that the particle starts in the first layer, that is, x_0 ∈ [0, a_1], although it is straightforward to adapt the analysis to include more general initial conditions, see section 3. (For notational convenience, we drop the explicit dependence of ρ_j on x_0.) Single-particle diffusion can be represented by the following piecewise system of partial differential equations (PDEs):
\[
\frac{\partial \rho_j}{\partial t} = D_j\frac{\partial^2\rho_j}{\partial x^2},\quad x\in(a_{j-1},a_j),\ j = 1,\ldots,m, \tag{2.1a}
\]
\[
D_j\left.\frac{\partial\rho_j(x,t)}{\partial x}\right|_{x=a_j^-} = D_{j+1}\left.\frac{\partial\rho_{j+1}(x,t)}{\partial x}\right|_{x=a_j^+} = \kappa_j[\rho_{j+1}(a_j^+,t)-\rho_j(a_j^-,t)],\quad j = 1,\ldots,m-1, \tag{2.1b}
\]
\[
D_1\left.\frac{\partial\rho_1(x,t)}{\partial x}\right|_{x=0} = 2\kappa_0\rho_1(0,t),\qquad D_m\left.\frac{\partial\rho_m(x,t)}{\partial x}\right|_{x=L} = -2\kappa_m\rho_m(L,t), \tag{2.1c}
\]
together with the initial condition ρ_j(x, 0) = δ(x − x_0)δ_{j,1}. Finally, we denote the composite solution on the domain G = ∪_{j=1}^m [a_{j-1}^+, a_j^-] by ρ(x, t). Laplace transforming equations (2.1a)–(2.1c) gives
\[
D_j\frac{\partial^2\widehat{\rho}_j}{\partial x^2} - s\widehat{\rho}_j = -\delta(x-x_0)\delta_{j,1},\quad x\in(a_{j-1},a_j),\ j = 1,\ldots,m, \tag{2.2a}
\]
\[
D_j\left.\frac{\partial\widehat{\rho}_j(x,s)}{\partial x}\right|_{x=a_j^-} = D_{j+1}\left.\frac{\partial\widehat{\rho}_{j+1}(x,s)}{\partial x}\right|_{x=a_j^+} = \kappa_j[\widehat{\rho}_{j+1}(a_j^+,s)-\widehat{\rho}_j(a_j^-,s)],\quad j = 1,\ldots,m-1, \tag{2.2b}
\]
\[
D_1\left.\frac{\partial\widehat{\rho}_1(x,s)}{\partial x}\right|_{x=0} = 2\kappa_0\widehat{\rho}_1(0,s),\qquad D_m\left.\frac{\partial\widehat{\rho}_m(x,s)}{\partial x}\right|_{x=L} = -2\kappa_m\widehat{\rho}_m(L,s). \tag{2.2c}
\]
Equations (2.2a)–(2.2b) can be solved using transfer matrices along similar lines to Refs. [51, 46]. We sketch the steps here.
First, note that for all 1 ≤ j ≤ m, equation (2.2a) has the general solution
\[
\widehat{\rho}_j(x,s) = A^l_j(s)\cosh(\sqrt{s/D_j}\,[x-a_{j-1}]) + B^l_j(s)\sinh(\sqrt{s/D_j}\,[x-a_{j-1}]) \tag{2.3}
\]
or, equivalently,
\[
\widehat{\rho}_j(x,s) = A^r_j(s)\cosh(\sqrt{s/D_j}\,[x-a_j]) + B^r_j(s)\sinh(\sqrt{s/D_j}\,[x-a_j]). \tag{2.4}
\]
For 1 < j ≤ m, the coefficients A^l_j, B^l_j are related to A^r_j, B^r_j according to
\[
\begin{pmatrix} A^r_j \\ B^r_j \end{pmatrix} = U_j(s)\begin{pmatrix} A^l_j \\ B^l_j \end{pmatrix},\qquad U_j(s) = \begin{pmatrix} \cosh(\sqrt{s/D_j}\,L_j) & \sinh(\sqrt{s/D_j}\,L_j) \\ \sinh(\sqrt{s/D_j}\,L_j) & \cosh(\sqrt{s/D_j}\,L_j)\end{pmatrix}, \tag{2.5}
\]
where L_j = a_j − a_{j−1} is the length of the j-th layer. The presence of the Dirac delta function for j = 1 means that the relationship between the coefficients (A^r_1(s), B^r_1(s)) and (A^l_1(s), B^l_1(s)) is determined by imposing the continuity condition \(\widehat{\rho}_1(x_0^+,s) = \widehat{\rho}_1(x_0^-,s)\) and the flux discontinuity condition \(\partial_x\widehat{\rho}_1(x_0^+,s) - \partial_x\widehat{\rho}_1(x_0^-,s) = -1/D_1\). This yields the result
\[
\begin{pmatrix} A^r_1 \\ B^r_1\end{pmatrix} = U_1(s)\begin{pmatrix} A^l_1 \\ B^l_1\end{pmatrix} + \frac{1}{\sqrt{sD_1}}\begin{pmatrix} \sinh(\sqrt{s/D_1}\,[x_0-a_1]) \\ -\cosh(\sqrt{s/D_1}\,[x_0-a_1])\end{pmatrix}. \tag{2.6}
\]
Given the relationships \(\widehat{\rho}_j(a_j,s) = A^r_j(s)\), \(\widehat{\rho}_j(a_{j-1},s) = A^l_j(s)\), \(D_j\partial_x\widehat{\rho}_j(a_j,s) = \sqrt{sD_j}\,B^r_j(s)\) and \(D_j\partial_x\widehat{\rho}_j(a_{j-1},s) = \sqrt{sD_j}\,B^l_j(s)\), the boundary conditions (2.2b) can be written in the form
\[
\sqrt{sD_j}\,B^r_j(s) = \sqrt{sD_{j+1}}\,B^l_{j+1}(s) = \kappa_j[A^l_{j+1}(s) - A^r_j(s)]. \tag{2.7}
\]
That is, for 1 ≤ j < m,
\[
\begin{pmatrix} A^l_{j+1} \\ B^l_{j+1}\end{pmatrix} = V_j(s)\begin{pmatrix} A^r_j \\ B^r_j\end{pmatrix},\qquad V_j(s) = \begin{pmatrix} 1 & \sqrt{sD_j}/\kappa_j \\ 0 & \sqrt{D_j/D_{j+1}}\end{pmatrix}. \tag{2.8}
\]
Iterating equations (2.5) and (2.8) for m ≥ 2, we have
\[
\begin{pmatrix} A^r_m \\ B^r_m\end{pmatrix} = M_m(s)\begin{pmatrix} A^r_1 \\ B^r_1\end{pmatrix}, \tag{2.9}
\]
with
\[
M_2(s) = U_2(s)V_1(s),\qquad M_m(s) = U_m(s)\left[\prod_{j=2}^{m-1} V_j(s)U_j(s)\right]V_1(s)\ \text{for } m\geq 3. \tag{2.10}
\]
Hence, we have shown how the solution in any layer can be expressed in terms of the two unknown coefficients A^l_1(s) and B^l_1(s). The latter are then determined by imposing the Robin boundary conditions at x = 0, L:
\[
\sqrt{sD_1}\,B^l_1(s) = 2\kappa_0 A^l_1(s),\qquad \sqrt{sD_m}\,B^r_m(s) = -2\kappa_m A^r_m(s). \tag{2.11}
\]
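The iteration (2.5)–(2.10) is straightforward to implement numerically. The following Python sketch (illustrative only; the three-layer parameter values are made up) assembles the composite transfer matrix; since det U_j = cosh² − sinh² = 1, the determinant of M_m reduces to the product of the det V_j, which the assertions check.

```python
import numpy as np

def U(s, Dj, Lj):
    """Intra-layer propagation matrix, cf. equation (2.5)."""
    q = np.sqrt(s / Dj) * Lj
    return np.array([[np.cosh(q), np.sinh(q)],
                     [np.sinh(q), np.cosh(q)]])

def V(s, Dj, Djp1, kj):
    """Interface matching matrix, cf. equation (2.8)."""
    return np.array([[1.0, np.sqrt(s * Dj) / kj],
                     [0.0, np.sqrt(Dj / Djp1)]])

def M_total(s, D, a, kappa):
    """Composite transfer matrix, cf. equation (2.10):
    M_m = U_m [V_{m-1} U_{m-1} ... V_2 U_2] V_1 (1-based layer indices)."""
    m = len(D)
    L = np.diff(a)                       # layer widths L_j = a_j - a_{j-1}
    M = U(s, D[m - 1], L[m - 1])         # U_m
    for j in range(m - 1, 1, -1):        # j = m-1, ..., 2
        M = M @ V(s, D[j - 1], D[j], kappa[j - 1]) @ U(s, D[j - 1], L[j - 1])
    return M @ V(s, D[0], D[1], kappa[0])

# Hypothetical three-layer medium (all values chosen for illustration)
s, D, a, kappa = 0.7, [1.0, 0.5, 2.0], [0.0, 1.0, 2.5, 4.0], [0.3, 0.8]
M = M_total(s, D, a, kappa)
detV = np.prod([np.linalg.det(V(s, D[j], D[j + 1], kappa[j])) for j in range(2)])
assert abs(np.linalg.det(U(s, 1.0, 2.0)) - 1.0) < 1e-12
assert abs(np.linalg.det(M) - detV) < 1e-9
```

Closing the system with the two Robin conditions (2.11) then fixes the remaining coefficients.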
3. Snapping out BM in a 1D layered medium. We now develop an alternative formulation of multi-layer diffusion, which is based on a generalization of 1D snapping out BM for a single semi-permeable interface [41, 7]. In particular, we construct a renewal equation that relates ρ(x, t) on G to the probability densities of partially reflected BM in each of the layers [a_{j−1}, a_j], j = 1, ..., m.
3.1. Single layer with partially reflecting boundaries. Consider BM in the interval [a_{j−1}, a_j] with both ends totally reflecting. Let X(t) ∈ [a_{j−1}, a_j] denote the position of the Brownian particle at time t and introduce the pair of Brownian local times
\[
\ell^+_{j-1}(t) = \lim_{h\to 0}\frac{D_j}{h}\int_0^t H(a_{j-1}+h-X(\tau))\,d\tau, \tag{3.1a}
\]
\[
\ell^-_j(t) = \lim_{h\to 0}\frac{D_j}{h}\int_0^t H(X(\tau)-a_j+h)\,d\tau, \tag{3.1b}
\]
where H is the Heaviside function. Note that ℓ^+_{j−1}(t) determines the amount of time that the Brownian particle spends in a neighborhood to the right of x = a_{j−1} over the interval [0, t]. Similarly, ℓ^-_j(t) determines the amount of time spent in a neighborhood to the left of x = a_j. (The inclusion of the factor D_j means that the local times have units of length.) It can be shown that the local times exist and are nondecreasing, continuous functions of t [38]. The corresponding stochastic differential equation (SDE) for X(t) is given by the Skorokhod equation
\[
dX(t) = \sqrt{2D_j}\,dW(t) + d\ell^+_{j-1}(t) - d\ell^-_j(t). \tag{3.2}
\]
Roughly speaking, each time the particle hits one of the ends it is given an impulsive kick back into the bulk domain. It can be proven that the probability density for particle position evolves according to the single-particle diffusion equation with Neumann boundary conditions at both ends.
Partially reflected BM can now be defined by introducing a pair of exponentially distributed independent random local time thresholds \(\widetilde{\ell}^+_{j-1}\) and \(\widetilde{\ell}^-_j\) such that
\[
\mathbb{P}[\widetilde{\ell}^+_{j-1} > \ell] = e^{-2\kappa_{j-1}\ell/D_j},\qquad \mathbb{P}[\widetilde{\ell}^-_j > \ell] = e^{-2\kappa_j\ell/D_j}. \tag{3.3}
\]
The stochastic process is then killed as soon as one of the local times exceeds its corresponding threshold, which occurs at the stopping time \(T_j = \min\{\tau^-_j, \tau^+_j\}\) with
\[
\tau^+_j = \inf\{t>0:\ \ell^+_{j-1}(t) > \widetilde{\ell}^+_{j-1}\},\qquad \tau^-_j = \inf\{t>0:\ \ell^-_j(t) > \widetilde{\ell}^-_j\}. \tag{3.4}
\]
Fig. 3.1. Sketch of a coarse-grained trajectory of a Brownian particle in the interval [a_{j−1}, a_j] with a partially reflecting boundary at x = a_{j−1} and a totally reflecting boundary at x = a_j. The particle is absorbed as soon as the time ℓ_{j−1}(t) spent in a boundary layer around x = a_{j−1} exceeds an exponentially distributed threshold \(\widetilde{\ell}_{j-1}\), which occurs at the stopping time T_j.
In Fig. 3.1 we illustrate the basic construction using a simplified version of partially reflected BM in which x = a_{j−1} is partially reflecting (0 < κ_{j−1} < ∞) but x = a_j is totally reflecting (κ_j = 0).
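To make the killing rule concrete, here is a minimal Euler-type simulation sketch (not part of the original analysis; the step size, boundary-layer width h, and parameter values are arbitrary choices) of reflected BM on [0, 1] with a boundary-layer estimate of the local time at x = 0 and an exponential threshold with rate 2κ/D:

```python
import numpy as np

rng = np.random.default_rng(1)
D, kappa, dt, h = 1.0, 5.0, 1e-4, 0.02
n_max = 200_000                                   # horizon t <= 20
xi = np.sqrt(2 * D * dt) * rng.standard_normal(n_max)
ell_threshold = rng.exponential(D / (2 * kappa))  # P[threshold > l] = exp(-2*kappa*l/D)

x, ell, t, killed = 0.5, 0.0, 0.0, False
for dx in xi:
    x += dx
    if x < 0.0:          # impulsive kick back into the bulk at x = 0
        x = -x
    if x > 1.0:          # totally reflecting right end
        x = 2.0 - x
    if x < h:            # accumulate boundary-layer local time at x = 0
        ell += (D / h) * dt
    t += dt
    if ell > ell_threshold:
        killed = True    # local time exceeded its exponential threshold
        break

assert killed and 0.0 <= x <= 1.0
```

The stopping time recorded here plays the role of T_j in (3.4) for this one-sided example.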
It can be shown that the probability density for particle position prior to absorption at one of the ends (see also section 5),
\[
p_j(x,t|x_0)\,dx = \mathbb{P}[x \leq X(t) < x+dx,\ t < T_j\,|\,X_0 = x_0],\quad x\in[a_{j-1},a_j], \tag{3.5}
\]
satisfies the single-particle diffusion equation (Fokker–Planck equation) with Robin boundary conditions at x = a_{j−1}, a_j [25, 48, 45, 5, 28]:
\[
\frac{\partial p_j(x,t|x_0)}{\partial t} = D_j\frac{\partial^2 p_j(x,t|x_0)}{\partial x^2},\quad a_{j-1} < x_0, x < a_j, \tag{3.6a}
\]
\[
D_j\partial_x p_j(a_{j-1},t|x_0) = 2\kappa_{j-1}p_j(a_{j-1},t|x_0), \tag{3.6b}
\]
\[
D_j\partial_x p_j(a_j,t|x_0) = -2\kappa_j p_j(a_j,t|x_0), \tag{3.6c}
\]
and p_j(x, 0|x_0) = δ(x − x_0).
It is convenient to Laplace transform with respect to t, which gives
\[
D_j\frac{\partial^2 \widehat{p}_j(x,s|x_0)}{\partial x^2} - s\widehat{p}_j(x,s|x_0) = -\delta(x-x_0),\quad a_{j-1} < x_0, x < a_j, \tag{3.7a}
\]
\[
D_j\partial_x\widehat{p}_j(a_{j-1},s|x_0) = 2\kappa_{j-1}\widehat{p}_j(a_{j-1},s|x_0), \tag{3.7b}
\]
\[
D_j\partial_x\widehat{p}_j(a_j,s|x_0) = -2\kappa_j\widehat{p}_j(a_j,s|x_0). \tag{3.7c}
\]
We can identify \(\widehat{p}_j(x,s|x_0)\) as the Green's function of the modified Helmholtz equation with Robin boundary conditions at x = a_{j−1}, a_j:
\[
\widehat{p}_j(x,s|x_0) = \begin{cases} A_j F_j(x,s)\overline{F}_j(x_0,s), & a_{j-1}\leq x \leq x_0, \\ A_j F_j(x_0,s)\overline{F}_j(x,s), & x_0 \leq x \leq a_j, \end{cases} \tag{3.8}
\]
where
\[
F_j(x,s) = \sqrt{sD_j}\cosh(\sqrt{s/D_j}\,[x-a_{j-1}]) + 2\kappa_{j-1}\sinh(\sqrt{s/D_j}\,[x-a_{j-1}]), \tag{3.9a}
\]
\[
\overline{F}_j(x,s) = \sqrt{sD_j}\cosh(\sqrt{s/D_j}\,[a_j-x]) + 2\kappa_j\sinh(\sqrt{s/D_j}\,[a_j-x]), \tag{3.9b}
\]
\[
A_j = \frac{1}{\sqrt{sD_j}}\,\frac{1}{2(\kappa_{j-1}+\kappa_j)\sqrt{sD_j}\cosh(\sqrt{s/D_j}\,L_j) + [sD_j + 4\kappa_{j-1}\kappa_j]\sinh(\sqrt{s/D_j}\,L_j)}, \tag{3.9c}
\]
and L_j = a_j − a_{j−1} is the width of the layer. It can be checked that the Robin boundary conditions are satisfied at x = a_{j−1}, a_j for all a_{j−1} < x_0 < a_j. However, for x_0 = a_{j−1}, a_j, we have
\[
D_j\partial_x\widehat{p}_j(a_{j-1},s|a_{j-1}) = 2\kappa_{j-1}\widehat{p}_j(a_{j-1},s|a_{j-1}) - 1, \tag{3.10a}
\]
\[
D_j\partial_x\widehat{p}_j(a_j,s|a_j) = -2\kappa_j\widehat{p}_j(a_j,s|a_j) + 1. \tag{3.10b}
\]
In other words,
\[
\lim_{\epsilon\to 0}\left[\left.\partial_x\right|_{x=a_j}\widehat{p}_j(x,s|a_j-\epsilon)\right] \neq \left.\partial_x\right|_{x=a_j}\left[\lim_{\epsilon\to 0}\widehat{p}_j(x,s|a_j-\epsilon)\right] \tag{3.11}
\]
etc. The modification of the Robin boundary condition when the particle starts at the barrier plays a significant role in establishing the equivalence of snapping out BM with single-particle diffusion in a multi-layered medium (see section 3.3).
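The normalization A_j in (3.9c) is precisely what produces the extra ∓1 terms in (3.10). This can be checked numerically with the analytical derivatives of F_j and \(\overline{F}_j\) (a sketch with arbitrary parameter values on a single layer [0, L_w]):

```python
import numpy as np

s, Dj, k0, k1, Lw = 0.9, 1.3, 0.4, 0.7, 2.0   # layer [0, Lw]; kappa_{j-1}=k0, kappa_j=k1
al = np.sqrt(s / Dj)
sq = np.sqrt(s * Dj)

F     = lambda x: sq * np.cosh(al * x) + 2 * k0 * np.sinh(al * x)
Fbar  = lambda x: sq * np.cosh(al * (Lw - x)) + 2 * k1 * np.sinh(al * (Lw - x))
dF    = lambda x: al * (sq * np.sinh(al * x) + 2 * k0 * np.cosh(al * x))
dFbar = lambda x: -al * (sq * np.sinh(al * (Lw - x)) + 2 * k1 * np.cosh(al * (Lw - x)))
A = 1.0 / (sq * (2 * (k0 + k1) * sq * np.cosh(al * Lw)
                 + (s * Dj + 4 * k0 * k1) * np.sinh(al * Lw)))

# Modified Robin condition (3.10a): source placed on the left boundary
lhs_a = Dj * A * F(0.0) * dFbar(0.0)
rhs_a = 2 * k0 * A * F(0.0) * Fbar(0.0) - 1.0
# Modified Robin condition (3.10b): source placed on the right boundary
lhs_b = Dj * A * dF(Lw) * Fbar(Lw)
rhs_b = -2 * k1 * A * F(Lw) * Fbar(Lw) + 1.0

assert abs(lhs_a - rhs_a) < 1e-9
assert abs(lhs_b - rhs_b) < 1e-9
```

For a source strictly inside the layer the same computation recovers the unmodified conditions (3.7b)–(3.7c).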
3.2. Last renewal equation. We now construct snapping out BM in the multi-layered domain shown in Fig. 1.1 by sewing together multiple rounds of reflected BM. For the moment, assume that the exterior boundaries are totally reflecting. For each interface we introduce a pair of local times ℓ^±_j and a corresponding pair of independent exponentially distributed thresholds \(\widetilde{\ell}^\pm_j\) with rates 2κ_j, j = 1, ..., m − 1. Suppose that the particle starts at x = x_0 in the first layer. It realizes positively reflected BM until its local time ℓ^-_1(t) at x = a_1 exceeds the random threshold \(\widetilde{\ell}^-_1\) with rate 2κ_1. The process immediately restarts as a new reflected BM with probability 1/2 in either [0, a_1] or [a_1, a_2]. If the particle is in layer 2, then the reflected BM is stopped as soon as one of the local times (ℓ^+_1(t), ℓ^-_2(t)) exceeds its corresponding threshold. Each time the BM is restarted all local times are reset to zero. Finally, taking the exterior boundaries to be partially reflecting, we introduce an additional pair of local times, ℓ_0(t), ℓ_m(t), for the external boundaries at x = 0, L, and a corresponding pair of exponentially distributed random thresholds \(\widetilde{\ell}_0, \widetilde{\ell}_m\) with rates 2κ_0, 2κ_m, respectively. The stochastic process is then permanently terminated at the stopping time
\[
T = \min\{T_0, T_m\},\qquad T_k = \inf\{t>0:\ \ell_k(t) > \widetilde{\ell}_k\},\ k = 0, m. \tag{3.12}
\]
We illustrate the basic construction in Fig. 3.2 in the simplified case of a single semi-permeable interface at x = a_j and totally reflecting boundaries x = a_{j−1} and x = a_{j+1}. The statistics of diffusion across the interface can be captured by sewing together successive rounds of partially reflected BM in the intervals [a_{j−1}, a_j^-] and [a_j^+, a_{j+1}], with each round killed according to an exponentially distributed local time threshold and the new domain selected with probability 1/2.
Fig. 3.2. Decomposition of snapping out BM on the interval [a_{j−1}, a_{j+1}] with reflecting boundary conditions at the ends and a semi-permeable barrier at x = a_j. (a) Diffusion across the interface. (b) Partially reflected BM in [a_j^+, a_{j+1}]. (c) Partially reflected BM in [a_{j−1}, a_j^-].
Consider a general initial probability density φ(x_0) with x_0 ∈ G and set
\[
\rho_j(x,t) = \int_G \rho_j(x,t|x_0)\varphi(x_0)\,dx_0,\qquad p_j(x,t) = \int_G p_j(x,t|x_0)\varphi(x_0)\,dx_0. \tag{3.13}
\]
Following our previous work on snapping out BM for single semi-permeable interfaces [9, 10], the renewal equation for the j-th interior layer, j = 2, ..., m − 1, takes the form
\[
\rho_j(x,t) = p_j(x,t) + \kappa_{j-1}\int_0^t p_j(x,\tau|a_{j-1})[\rho_{j-1}(a_{j-1}^-,t-\tau) + \rho_j(a_{j-1}^+,t-\tau)]\,d\tau
+ \kappa_j\int_0^t p_j(x,\tau|a_j)[\rho_j(a_j^-,t-\tau) + \rho_{j+1}(a_j^+,t-\tau)]\,d\tau \tag{3.14a}
\]
for all x ∈ (a_{j−1}^+, a_j^-), with the probability density p_j(x, τ|y) given by the solution to equations (3.6). The first term p_j(x, t) on the right-hand side of equation (3.14a) represents all trajectories that reach x at time t without ever being absorbed by the interfaces at x = a_{j−1}^+, a_j^-. The first integral on the right-hand side sums over all trajectories that were last absorbed (stopped) at time t − τ by hitting the interface at x = a_{j−1} from either the left-hand or right-hand side and then switching with probability 1/2 to BM in the j-th layer such that it is at position x ∈ (a_{j−1}^+, a_j^-) at time t. Since the particle is not absorbed over the interval (t − τ, t], the probability of reaching x is p_j(x, τ|a_{j−1}). In addition, the probability that the last stopping event occurred in the interval (t − τ, t − τ + dτ), irrespective of previous events, is 2κ_{j−1}dτ. (We see that the inclusion of the factor 2 in the definition of the permeability cancels the probability factor of 1/2.) The second integral has the corresponding interpretation for trajectories that were last stopped by hitting the interface at x = a_j. In the case
of the end layers, we have
\[
\rho_1(x,t) = p_1(x,t) + \kappa_1\int_0^t p_1(x,\tau|a_1)[\rho_1(a_1^-,t-\tau) + \rho_2(a_1^+,t-\tau)]\,d\tau, \tag{3.14b}
\]
\[
\rho_m(x,t) = p_m(x,t) + \kappa_{m-1}\int_0^t p_m(x,\tau|a_{m-1})[\rho_{m-1}(a_{m-1}^-,t-\tau) + \rho_m(a_{m-1}^+,t-\tau)]\,d\tau. \tag{3.14c}
\]
Note that there is only a single integral contribution in the end layers since only one of the boundaries is semi-permeable. One interesting difference between the renewal equation formulation and the PDE analyzed in section 2 is that the exterior boundary conditions are already incorporated into the solutions p_1(x, t|x_0) and p_m(x, t|x_0), so that they do not have to be imposed separately.
Given the fact that the renewal equations (3.14a)–(3.14c) are convolutions in time, it is convenient to Laplace transform them by setting \(\widehat{\rho}_j(x,s) = \int_0^\infty e^{-st}\rho_j(x,t)\,dt\) etc. This gives
\[
\widehat{\rho}_1(x,s) = \widehat{p}_1(x,s) + \kappa_1\widehat{p}_1(x,s|a_1)\Sigma_1(s),\quad x\in[0^+,a_1^-], \tag{3.15a}
\]
\[
\widehat{\rho}_j(x,s) = \widehat{p}_j(x,s) + \kappa_{j-1}\widehat{p}_j(x,s|a_{j-1})\Sigma_{j-1}(s) + \kappa_j\widehat{p}_j(x,s|a_j)\Sigma_j(s),\quad x\in[a_{j-1}^+,a_j^-],\ 1<j<m, \tag{3.15b}
\]
\[
\widehat{\rho}_m(x,s) = \widehat{p}_m(x,s) + \kappa_{m-1}\widehat{p}_m(x,s|a_{m-1})\Sigma_{m-1}(s),\quad x\in[a_{m-1}^+,L^-], \tag{3.15c}
\]
where
\[
\Sigma_j(s) = \widehat{\rho}_j(a_j^-,s) + \widehat{\rho}_{j+1}(a_j^+,s). \tag{3.16}
\]
The functions Σ_j(s) can be determined self-consistently by setting x = a_k^± for k = 1, ..., m − 1 and performing various summations. More specifically, substituting equation (3.15b) into the right-hand side of (3.16) for 1 < j < m gives
\[
\Sigma_j(s) = \Sigma^p_j(s) + \kappa_{j-1}\widehat{p}_j(a_j,s|a_{j-1})\Sigma_{j-1}(s) + \kappa_j\widehat{p}_j(a_j,s|a_j)\Sigma_j(s)
+ \kappa_j\widehat{p}_{j+1}(a_j,s|a_j)\Sigma_j(s) + \kappa_{j+1}\widehat{p}_{j+1}(a_j,s|a_{j+1})\Sigma_{j+1}(s) \tag{3.17a}
\]
for 1 < j < m − 1 and \(\Sigma^p_j(s) \equiv \widehat{p}_j(a_j,s) + \widehat{p}_{j+1}(a_j,s)\). On the other hand, equations (3.15b) and (3.15a) for j = 2 imply that
\[
\Sigma_1(s) = \Sigma^p_1(s) + \kappa_1\widehat{p}_1(a_1,s|a_1)\Sigma_1(s)
+ \kappa_1\widehat{p}_2(a_1,s|a_1)\Sigma_1(s) + \kappa_2\widehat{p}_2(a_1,s|a_2)\Sigma_2(s), \tag{3.17b}
\]
while equations (3.15c) and (3.15b) for j = m − 1 yield
\[
\Sigma_{m-1}(s) = \Sigma^p_{m-1}(s) + \kappa_{m-1}\widehat{p}_m(a_{m-1},s|a_{m-1})\Sigma_{m-1}(s)
+ \kappa_{m-2}\widehat{p}_{m-1}(a_{m-1},s|a_{m-2})\Sigma_{m-2}(s) + \kappa_{m-1}\widehat{p}_{m-1}(a_{m-1},s|a_{m-1})\Sigma_{m-1}(s). \tag{3.17c}
\]
+ m−1
778
+
779
+ k=1
780
+ Θjk(s)Σk(s) = −Σp
781
+ j(s),
782
+ (3.18)
783
+ where Θ(s) is a tridiagonal matrix with non-zero elements
784
+ Θj,j(s) = dj(s) ≡ κj[�pj+1(aj, s|aj) + �pj(aj, s|aj)] − 1, j = 1, . . . m − 1,
785
+ (3.19a)
786
+ Θj,j−1(s) = cj(s) ≡ κj−1�pj(aj, s|aj−1),
787
+ j = 2, . . . m − 1,
788
+ (3.19b)
789
+ Θj,j−1(s) = bj(s) ≡ κj+1�pj+1(aj, s|aj+1),
790
+ j = 1, . . . , m − 2.
791
+ (3.19c)
792
+ Assuming that the matrix Θ(s) is invertible, we obtain the formal solution
793
+ Σj(s) = −
794
+ m−1
795
+
796
+ k=1
797
+ Θ−1
798
+ jk (s)Σp
799
+ k(s).
800
+ (3.20)
801
+ 9
802
+
803
Substituting into equations (3.15a)–(3.15c) gives
\[
\widehat{\rho}_j(x,s) = \widehat{p}_j(x,s) - \sum_{k=1}^{m-1}\left[\kappa_{j-1}\widehat{p}_j(x,s|a_{j-1})\Theta^{-1}_{j-1,k}(s) + \kappa_j\widehat{p}_j(x,s|a_j)\Theta^{-1}_{jk}(s)\right]
\times\left[\widehat{p}_k(a_k,s) + \widehat{p}_{k+1}(a_k,s)\right]. \tag{3.21}
\]
An alternative way to solve for Σ_j(s) is to use transfer matrices analogous to the analysis of the PDE in section 2. For simplicity, suppose that the particle starts in the first layer at a point x_0 ∈ [0, a_1] so that \(\widehat{p}_j(x,s) = \widehat{p}_1(x,s|x_0)\delta_{j,1}\). It follows that equations (3.17a)–(3.17c) can be rewritten in the iterative form
\[
\begin{pmatrix}\Sigma_j \\ \Sigma_{j+1}\end{pmatrix} = W_j(s)\begin{pmatrix}\Sigma_{j-1} \\ \Sigma_j\end{pmatrix},\qquad W_j(s) = \begin{pmatrix} 0 & 1 \\ -\dfrac{c_j(s)}{b_j(s)} & -\dfrac{d_j(s)}{b_j(s)}\end{pmatrix} \tag{3.22}
\]
for 1 < j < m − 1. In particular,
\[
\begin{pmatrix}\Sigma_{m-2} \\ \Sigma_{m-1}\end{pmatrix} = N(s)\begin{pmatrix}\Sigma_1 \\ \Sigma_2\end{pmatrix},\qquad N(s) = \prod_{k=2}^{m-2}W_k(s), \tag{3.23}
\]
with, see equation (3.17b),
\[
\Sigma_2(s) = -\frac{1}{b_1(s)}\left(\widehat{p}_1(a_1,s|x_0) + d_1(s)\Sigma_1(s)\right). \tag{3.24}
\]
Finally, having determined Σ_2, ..., Σ_{m−1} in terms of Σ_1, we can calculate Σ_1 by imposing equation (3.17c), after rewriting it in the more compact form
\[
\Sigma_{m-2}(s) = -\frac{d_{m-1}(s)}{c_{m-1}(s)}\Sigma_{m-1}(s). \tag{3.25}
\]
We thus obtain the following self-consistency condition for Σ_1:
\[
\left(1,\ \frac{d_{m-1}(s)}{c_{m-1}(s)}\right)N(s)\begin{pmatrix}\Sigma_1(s) \\ -\dfrac{1}{b_1(s)}\left(\widehat{p}_1(a_1,s|x_0) + d_1(s)\Sigma_1(s)\right)\end{pmatrix} = 0. \tag{3.26}
\]
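For m = 3 the two routes can be compared directly: Θ(s) is the 2 × 2 matrix with entries d_1, b_1, c_2, d_2, while (3.24)–(3.25) give Σ_1, Σ_2 by elimination. A numerical sketch (restricted, for brevity, to identical layers with made-up parameter values, so that the same single-layer Green's function appears in every entry):

```python
import numpy as np

# Identical layers (hypothetical values): D_j = D, kappa_j = kappa, width aw
s, D, kappa, aw, x0 = 0.5, 1.0, 0.6, 1.0, 0.3
al, sq = np.sqrt(s / D), np.sqrt(s * D)

def phat(x, y):
    """Single-layer Green's function (3.8)-(3.9) on [0, aw], rate kappa at both ends."""
    F    = lambda z: sq * np.cosh(al * z) + 2 * kappa * np.sinh(al * z)
    Fbar = lambda z: sq * np.cosh(al * (aw - z)) + 2 * kappa * np.sinh(al * (aw - z))
    A = 1.0 / (sq * (4 * kappa * sq * np.cosh(al * aw)
                     + (s * D + 4 * kappa**2) * np.sinh(al * aw)))
    return A * F(min(x, y)) * Fbar(max(x, y))

P = phat(aw, x0)                                  # \hat p_1(a_1, s | x_0)
d = kappa * (phat(aw, aw) + phat(0.0, 0.0)) - 1.0 # diagonal entries d_1 = d_2
b = kappa * phat(aw, 0.0)                         # off-diagonal entries b_1 = c_2

# Route 1: solve the tridiagonal system (3.18), here 2 x 2
Sigma = np.linalg.solve(np.array([[d, b], [b, d]]), np.array([-P, 0.0]))

# Route 2: elimination using (3.24)-(3.25)
Sigma2_elim = -P * b / (b * b - d * d)
Sigma1_elim = -(d / b) * Sigma2_elim

assert np.allclose(Sigma, [Sigma1_elim, Sigma2_elim])
```

Both routes return the same pair (Σ_1, Σ_2), as they must.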
3.3. Equivalence of the renewal and diffusion equations. We now have two alternative methods of solution in Laplace space, one based on the diffusion equations (2.2a)–(2.2c) and the other based on the renewal equations (3.15a)–(3.15c). Both methods involve transfer matrices that can be iterated to express the solution in the final layer in terms of the solution in the first layer. It is useful to check that the renewal equations (3.15a)–(3.15c) are indeed equivalent to the Laplace transformed diffusion equations (2.1a)–(2.1c). (This is simpler than showing that the iterative solutions are equivalent.) Clearly, the composite density \(\widehat{\rho}(x,s)\) satisfies the diffusion equation in the bulk and the exterior boundary conditions, so we only have to check the boundary conditions across the interior interfaces. First, differentiating equations (3.15a) and (3.15b) for j = 2 with respect to x and setting x = a_1^± gives
\[
\partial_x\widehat{\rho}_1(a_1^-,s) = \partial_x\widehat{p}_1(a_1,s|x_0) + \kappa_1\partial_x\widehat{p}_1(a_1,s|a_1)\Sigma_1(s), \tag{3.27a}
\]
\[
\partial_x\widehat{\rho}_2(a_1^+,s) = \kappa_1\partial_x\widehat{p}_2(a_1,s|a_1)\Sigma_1(s) + \kappa_2\partial_x\widehat{p}_2(a_1,s|a_2)\Sigma_2(s). \tag{3.27b}
\]
Imposing the Robin boundary conditions (3.7) implies that
\[
D_1\partial_x\widehat{p}_1(a_1,s|x_0) = -2\kappa_1\widehat{p}_1(a_1,s|x_0),\qquad D_2\partial_x\widehat{p}_2(a_1,s|a_2) = 2\kappa_1\widehat{p}_2(a_1,s|a_2).
\]
On the other hand, equations (3.10a) and (3.10b) yield
\[
D_1\partial_x\widehat{p}_1(a_1,s|a_1) = -2\kappa_1\widehat{p}_1(a_1,s|a_1) + 1,\qquad D_2\partial_x\widehat{p}_2(a_1,s|a_1) = 2\kappa_1\widehat{p}_2(a_1,s|a_1) - 1.
\]
Substituting into equations (3.27a) and (3.27b), we have
\[
D_1\partial_x\widehat{\rho}_1(a_1^-,s) = -2\kappa_1\widehat{p}_1(a_1,s|x_0) - \kappa_1[2\kappa_1\widehat{p}_1(a_1,s|a_1) - 1]\Sigma_1(s), \tag{3.28a}
\]
\[
D_2\partial_x\widehat{\rho}_2(a_1^+,s) = \kappa_1[2\kappa_1\widehat{p}_2(a_1,s|a_1) - 1]\Sigma_1(s) + 2\kappa_2\kappa_1\widehat{p}_2(a_1,s|a_2)\Sigma_2(s). \tag{3.28b}
\]
Subtracting equations (3.28a) and (3.28b), and using equation (3.17b), implies that
\[
D_2\partial_x\widehat{\rho}_2(a_1^+,s) - D_1\partial_x\widehat{\rho}_1(a_1^-,s) = 2\kappa_1\Big[\kappa_1\widehat{p}_2(a_1,s|a_1)\Sigma_1(s) + \kappa_2\widehat{p}_2(a_1,s|a_2)\Sigma_2(s)
+ \widehat{p}_1(a_1,s|x_0) + \kappa_1\widehat{p}_1(a_1,s|a_1)\Sigma_1(s) - \Sigma_1(s)\Big] = 0. \tag{3.29}
\]
Similarly, adding equations (3.28a) and (3.28b) gives
\[
D_2\partial_x\widehat{\rho}_2(a_1^+,s) + D_1\partial_x\widehat{\rho}_1(a_1^-,s) = 2\kappa_1\Big[\kappa_1\widehat{p}_2(a_1,s|a_1)\Sigma_1(s) + \kappa_2\widehat{p}_2(a_1,s|a_2)\Sigma_2(s)
- \widehat{p}_1(a_1,s|x_0) - \kappa_1\widehat{p}_1(a_1,s|a_1)\Sigma_1(s)\Big]. \tag{3.30}
\]
On the other hand, setting x = a_1^± in equations (3.15a) and (3.15b) for j = 2 shows that
\[
\widehat{\rho}_1(a_1^-,s) = \widehat{p}_1(a_1,s|x_0) + \kappa_1\widehat{p}_1(a_1,s|a_1)\Sigma_1(s), \tag{3.31a}
\]
\[
\widehat{\rho}_2(a_1^+,s) = \kappa_1\widehat{p}_2(a_1,s|a_1)\Sigma_1(s) + \kappa_2\widehat{p}_2(a_1,s|a_2)\Sigma_2(s). \tag{3.31b}
\]
Hence, we obtain the expected semi-permeable boundary conditions at x = a_1,
\[
D_2\partial_x\widehat{\rho}_2(a_1^+,s) = D_1\partial_x\widehat{\rho}_1(a_1^-,s) = \kappa_1[\widehat{\rho}_2(a_1^+,s) - \widehat{\rho}_1(a_1^-,s)]. \tag{3.32}
\]
A similar analysis can be carried out at the other interfaces.
We have thus established the equivalence of the renewal equations (3.14a)–(3.14c) and the Laplace transformed diffusion equations (2.2a)–(2.2c). Hence, snapping out BM X(t) on G is the single-particle realization of the stochastic process whose probability density evolves according to the multi-layer diffusion equation.
4. First-passage time problem. One of the useful features of working in Laplace space is that one can solve various first passage time problems without having to calculate any inverse Laplace transforms. We will illustrate this by considering the escape of the Brownian particle from one of the ends at x = 0, L. For simplicity, we again assume that the particle starts in the first layer. Let Q(x_0, t) denote the survival probability that a particle starting at x_0 ∈ (0, a_1) has not been absorbed at either end over the interval [0, t). It follows that
\[
Q(x_0,t) = \int_0^L \rho(x,t)\,dx = \sum_{j=1}^{m}\int_{a_{j-1}}^{a_j}\rho_j(x,t)\,dx. \tag{4.1}
\]
(We drop the explicit dependence of ρ and ρ_j on the initial position x_0 for notational convenience.) Differentiating both sides of equation (4.1) with respect to t and using equations (2.1a)–(2.1c) shows that
\[
\frac{dQ(x_0,t)}{dt} = \sum_{j=1}^m\int_{a_{j-1}}^{a_j}\frac{\partial\rho_j(x,t)}{\partial t}\,dx = \sum_{j=1}^m\int_{a_{j-1}}^{a_j}D_j\frac{\partial^2\rho_j(x,t)}{\partial x^2}\,dx
= \sum_{j=1}^m D_j\left[\frac{\partial\rho_j(a_j,t)}{\partial x} - \frac{\partial\rho_j(a_{j-1},t)}{\partial x}\right]
\]
\[
= D_m\frac{\partial\rho_m(a_m,t)}{\partial x} - D_1\frac{\partial\rho_1(a_0,t)}{\partial x} \equiv -J_L(x_0,t) - J_0(x_0,t). \tag{4.2}
\]
We have used flux continuity across each interior interface so that the survival probability decreases at a rate equal to the sum of the outward fluxes at the ends x = 0, L, which are denoted by J_0 and J_L, respectively. Laplace transforming equation (4.2) and imposing the initial condition Q(x_0, 0) = 1 gives
\[
s\widehat{Q}(x_0,s) - 1 = -\widehat{J}_0(x_0,s) - \widehat{J}_L(x_0,s). \tag{4.3}
\]
Assuming that κ_0 + κ_m > 0, the particle is eventually absorbed at one of the ends with probability one, which means that \(\lim_{t\to\infty}Q(x_0,t) = \lim_{s\to 0}s\widehat{Q}(x_0,s) = 0\). Hence, \(\widehat{J}_0(x_0,0) + \widehat{J}_L(x_0,0) = 1\). Let π_0(x_0) and π_L(x_0) denote the splitting probabilities for absorption at x = 0 and x = L, respectively, and denote the corresponding conditional MFPTs by T_0(x_0) and T_L(x_0). It can then be shown that
\[
\pi_0(x_0) = \widehat{J}_0(x_0,0),\qquad \pi_L(x_0) = \widehat{J}_L(x_0,0), \tag{4.4}
\]
and
\[
\pi_0(x_0)T_0(x_0) = -\left.\frac{\partial}{\partial s}\widehat{J}_0(x_0,s)\right|_{s=0},\qquad \pi_L(x_0)T_L(x_0) = -\left.\frac{\partial}{\partial s}\widehat{J}_L(x_0,s)\right|_{s=0}. \tag{4.5}
\]
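To illustrate how (4.4)–(4.5) are used in practice, consider the textbook single-layer case with both ends totally absorbing, for which the Laplace transformed flux at x = 0 is \(\widehat{J}_0(x_0,s) = \sinh(\sqrt{s/D}(L-x_0))/\sinh(\sqrt{s/D}L)\). (This sketch is only a sanity check of the small-s procedure against known closed-form results, not the multi-layer solution itself.)

```python
import numpy as np

D, L, x0 = 1.0, 1.0, 0.3

def J0(s):
    """Laplace-transformed flux at x = 0 for BM on [0, L] with two absorbing ends."""
    al = np.sqrt(s / D)
    return np.sinh(al * (L - x0)) / np.sinh(al * L)

# Splitting probability pi_0 = J0(0+), cf. equation (4.4)
pi0 = J0(1e-12)
assert abs(pi0 - (1 - x0 / L)) < 1e-6

# pi_0 T_0 = -dJ0/ds at s = 0, cf. equation (4.5); one-sided differences
# at h and h/2 with Richardson extrapolation to remove the O(h) error
h = 1e-4
d1 = (J0(h) - J0(1e-12)) / h
d2 = (J0(h / 2) - J0(1e-12)) / (h / 2)
deriv = 2 * d2 - d1
exact = x0 * (L - x0) * (2 * L - x0) / (6 * D * L)   # known closed-form result
assert abs(-deriv - exact) < 1e-6
```

The same two-step recipe (evaluate at s → 0, then differentiate in s) applies verbatim to the multi-layer fluxes obtained below.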
Hence, analyzing the statistics of escape from the domain [0, L] reduces to determining the small-s behavior of the solutions \(\partial_x\widehat{\rho}_1(0,s)\) and \(\partial_x\widehat{\rho}_m(L,s)\). We will proceed using the renewal equation approach of section 3.
4.1. Identical layers. A considerable simplification of the iterative equation (3.22) occurs in the case of identical layers with D_j = D, κ_j = κ and a_j = ja for all j = 1, ..., m. The solution (3.8) for partially reflected BM is now the same in each layer. That is, \(\widehat{p}_j(x,s|x_0) = \widehat{p}(x-(j-1)a,s|x_0-(j-1)a)\) for x, x_0 ∈ [a_{j−1}, a_j], with
\[
\widehat{p}(x,s|x_0) = \begin{cases} A F(x,s)\overline{F}(x_0,s), & 0 \leq x \leq x_0, \\ A F(x_0,s)\overline{F}(x,s), & x_0 \leq x \leq a, \end{cases} \tag{4.6}
\]
\[
F(x,s) = \sqrt{sD}\cosh(\sqrt{s/D}\,x) + 2\kappa\sinh(\sqrt{s/D}\,x), \tag{4.7a}
\]
\[
\overline{F}(x,s) = \sqrt{sD}\cosh(\sqrt{s/D}\,[a-x]) + 2\kappa\sinh(\sqrt{s/D}\,[a-x]), \tag{4.7b}
\]
\[
A = \frac{1}{\sqrt{sD}}\,\frac{1}{4\kappa\sqrt{sD}\cosh(\sqrt{s/D}\,a) + [sD + 4\kappa^2]\sinh(\sqrt{s/D}\,a)}. \tag{4.7c}
\]
In addition, equations (3.22)–(3.26) for identical layers imply that
\[
N(s) = W(s)^{m-3},\qquad W(s) = \begin{pmatrix} 0 & 1 \\ -1 & -g(a,s)\end{pmatrix}, \tag{4.8}
\]
with
\[
g(y,s) \equiv \frac{2\kappa\widehat{p}(a,s|y) - 1}{\kappa\widehat{p}(a,s|0)} = 2g_0(y,s) - g_1(s), \tag{4.9}
\]
where
\[
g_0(y,s) \equiv \frac{\widehat{p}(a,s|y)}{\widehat{p}(a,s|0)} = \frac{\sqrt{sD}\cosh(\sqrt{s/D}\,y) + 2\kappa\sinh(\sqrt{s/D}\,y)}{\sqrt{sD}}, \tag{4.10a}
\]
\[
g_1(s) \equiv \frac{1}{\kappa\widehat{p}(a,s|0)} = \frac{4\kappa\sqrt{sD}\cosh(\sqrt{s/D}\,a) + [sD+4\kappa^2]\sinh(\sqrt{s/D}\,a)}{\kappa\sqrt{sD}}. \tag{4.10b}
\]
The matrix W(s) can be diagonalized according to
\[
W(s) = U W_d(s) U^\dagger,\qquad W_d(s) = \mathrm{diag}(\lambda_+(s),\lambda_-(s)), \tag{4.11}
\]
with
\[
\lambda_\pm(s) = \frac{-g(a,s) \pm \sqrt{g(a,s)^2 - 4}}{2},\qquad \lambda_+ + \lambda_- = -g,\qquad \lambda_+\lambda_- = 1, \tag{4.12}
\]
and
\[
U = \begin{pmatrix} 1 & 1 \\ \lambda_+ & \lambda_-\end{pmatrix},\qquad U^\dagger = \begin{pmatrix} \dfrac{1}{1-\lambda_+^2} & -\dfrac{\lambda_+}{1-\lambda_+^2} \\[2mm] \dfrac{1}{1-\lambda_-^2} & -\dfrac{\lambda_-}{1-\lambda_-^2}\end{pmatrix},\qquad U^\dagger U = U U^\dagger = \begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix}. \tag{4.13}
\]
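The decomposition (4.11)–(4.13) is easy to verify numerically (a sketch with a sample value of g(a, s); recall that the columns of U are the eigenvectors (1, λ_±)^⊤ of W):

```python
import numpy as np

g = -3.2                                           # sample value of g(a, s)
lam_p = (-g + np.sqrt(g**2 - 4)) / 2
lam_m = (-g - np.sqrt(g**2 - 4)) / 2
W  = np.array([[0.0, 1.0], [-1.0, -g]])
U  = np.array([[1.0, 1.0], [lam_p, lam_m]])
Ud = np.array([[1 / (1 - lam_p**2), -lam_p / (1 - lam_p**2)],
               [1 / (1 - lam_m**2), -lam_m / (1 - lam_m**2)]])
Wd = np.diag([lam_p, lam_m])

assert abs(lam_p * lam_m - 1.0) < 1e-12            # lambda_+ lambda_- = 1
assert np.allclose(Ud @ U, np.eye(2))              # U^dagger U = I
assert np.allclose(U @ Wd @ Ud, W)                 # W = U W_d U^dagger
```

With λ_+λ_- = 1, powers of W reduce to powers of the two reciprocal eigenvalues, which is what makes the large-m analysis of section 4.2 tractable.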
Substituting (4.11) into (3.23) and (3.26) gives
\[
\left(1,\ g(a,s)\right)U(s)W_d(s)^{m-3}U^\dagger(s)\begin{pmatrix}\Sigma_1(s) \\ \Sigma_2(s)\end{pmatrix} = 0, \tag{4.14}
\]
and
\[
\Sigma_{m-1}(s) = (0,\ 1)\,U(s)W_d(s)^{m-3}U^\dagger(s)\begin{pmatrix}\Sigma_1(s) \\ \Sigma_2(s)\end{pmatrix}, \tag{4.15}
\]
with
\[
\Sigma_2(s) = -\frac{g_0(x_0,s)}{\kappa} - g(a,s)\Sigma_1(s). \tag{4.16}
\]
In addition, from equations (3.15a) and (3.15c) we have
\[
\widehat{J}_0(x_0,s) = f(x_0,s) + \kappa f(a,s)\Sigma_1(s),\qquad \widehat{J}_L(x_0,s) = \kappa f(a,s)\Sigma_{m-1}(s), \tag{4.17}
\]
where
\[
D\partial_x\widehat{p}(0,s|y) = f(y,s) \equiv \frac{2\kappa\left[\sqrt{sD}\cosh(\sqrt{s/D}\,[a-y]) + 2\kappa\sinh(\sqrt{s/D}\,[a-y])\right]}{4\kappa\sqrt{sD}\cosh(\sqrt{s/D}\,a) + [sD+4\kappa^2]\sinh(\sqrt{s/D}\,a)}, \tag{4.18}
\]
Fig. 4.1. Splitting probabilities for escape from a three-layer, homogeneous medium. Plots of π_0(x_0) and π_L(x_0) as a function of x_0 for various rates κ. Other parameters are D = 1 and a = 1.
and \(D\partial_x\widehat{p}(L,s|L-a) = -f(a,s)\).
For the sake of illustration, consider three layers (m = 3). Equation (4.14) implies that for κ > 0
\[
\Sigma_1(s) = \frac{1}{\kappa}\frac{g(a,s)g_0(x_0,s)}{1-g(a,s)^2},\qquad \Sigma_2(s) = -\frac{1}{\kappa}\frac{g_0(x_0,s)}{1-g(a,s)^2}. \tag{4.19}
\]
Using the limits
\[
\lim_{s\to 0}g_0(y,s) = 1 + 2\kappa y/D,\qquad \lim_{s\to 0}g_1(s) = 4(1+\kappa a/D), \tag{4.20}
\]
\[
\lim_{s\to 0}g(y,s) = -2(1+2\kappa[a-y]/D),\qquad \lim_{s\to 0}f(y,s) = \frac{1+2\kappa[a-y]/D}{2(1+\kappa a/D)}, \tag{4.21}
\]
we can thus determine the splitting probabilities π_0(x_0) and π_L(x_0). Example plots of π_0(x_0) and π_L(x_0) as a function of x_0 ∈ [0, a] are shown in Fig. 4.1 for a = D = 1. It can be checked that π_0(x_0) + π_L(x_0) = 1 for all x_0. Moreover, in the limit κ → ∞, we see that π_0(0) → 1 and π_L(0) → 0 as expected. Also note that for x_0 < 1/2 (x_0 > 1/2), π_0(x_0) is an increasing (a decreasing) function of κ.
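The limits (4.20)–(4.21) can be combined with (4.17) and (4.19) into explicit splitting probabilities for m = 3. The following sketch (parameters as in Fig. 4.1) checks the normalization π_0 + π_L = 1 and the κ → ∞ behavior:

```python
D, a = 1.0, 1.0   # parameters as in Fig. 4.1

def splitting(x0, kappa):
    """(pi_0, pi_L) for three identical layers, from the s -> 0 limits (4.20)-(4.21)."""
    g  = -2.0                                      # lim_{s->0} g(a, s)
    g0 = 1 + 2 * kappa * x0 / D                    # lim_{s->0} g_0(x0, s)
    f  = lambda y: (1 + 2 * kappa * (a - y) / D) / (2 * (1 + kappa * a / D))
    Sigma1 = g * g0 / (kappa * (1 - g * g))        # eq. (4.19) at s = 0
    Sigma2 = -g0 / (kappa * (1 - g * g))
    return f(x0) + kappa * f(a) * Sigma1, kappa * f(a) * Sigma2

for kappa in (0.1, 1.0, 10.0, 100.0):
    for x0 in (0.1, 0.5, 0.9):
        pi0, piL = splitting(x0, kappa)
        assert abs(pi0 + piL - 1.0) < 1e-12        # probabilities sum to one
assert splitting(0.0, 1e6)[0] > 0.999              # pi_0(0) -> 1 as kappa -> infinity
```

The monotonic dependence on κ noted above can be probed the same way by sweeping κ at fixed x_0.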
4.2. Large number of layers (m → ∞). For a large number of layers (m ≫ 1) we have
\[
W_d^{m-3} = \begin{pmatrix}\lambda_+^{m-3} & 0 \\ 0 & \lambda_-^{m-3}\end{pmatrix} = \lambda_-^{m-3}\begin{pmatrix}\epsilon & 0 \\ 0 & 1\end{pmatrix},\qquad \epsilon = \left(\frac{\lambda_+}{\lambda_-}\right)^{m-3}, \tag{4.22}
\]
with |ε| ≪ 1 since |λ_-| > |λ_+|. It follows that
\[
N(s) = U(s)W_d(s)^{m-3}U^\dagger(s) = \lambda_-(s)^{m-3}\{M_0(s) + \epsilon M_1(s)\}, \tag{4.23}
\]
where
\[
M_0 = \frac{1}{1-\lambda_-^2}\begin{pmatrix} 1 & -\lambda_- \\ \lambda_- & -\lambda_-^2\end{pmatrix},\qquad M_1 = \frac{1}{1-\lambda_+^2}\begin{pmatrix} 1 & -\lambda_+ \\ \lambda_+ & -\lambda_+^2\end{pmatrix}. \tag{4.24}
\]
The next step is to introduce the series expansions
\[
\Sigma_j(s) = \Sigma_j^{(0)}(s) + \epsilon\Sigma_j^{(1)}(s) + O(\epsilon^2),\qquad j = 1,2, \tag{4.25}
\]
with
\[
\Sigma_2^{(0)}(s) = -\frac{g_0(x_0,s)}{\kappa} - g(a,s)\Sigma_1^{(0)}(s),\qquad \Sigma_2^{(n)}(s) = -g(a,s)\Sigma_1^{(n)}(s)\ \text{for}\ n\geq 1. \tag{4.26}
\]
+ ǫ gives the O(1) and O(ǫ) equations
1369
+
1370
+ 1, g(a, s)
1371
+
1372
+ M0(s)
1373
+
1374
+ Σ(0)
1375
+ 1 (s)
1376
+ Σ(0)
1377
+ 2 (s)
1378
+
1379
+ = 0,
1380
+ (4.27a)
1381
+
1382
+ 1, g(a, s)
1383
+ � �
1384
+ M0(s)
1385
+
1386
+ Σ(1)
1387
+ 1 (s)
1388
+ Σ(1)
1389
+ 2 (s)
1390
+
1391
+ + M1(s)
1392
+
1393
+ Σ(0)
1394
+ 1 (s)
1395
+ Σ(0)
1396
+ 2 (s)
1397
+ ��
1398
+ = 0.
1399
+ (4.27b)
1400
Equation (4.27a) has the solution
\[
\Sigma_1^{(0)}(s) = -\frac{\lambda_-(s)g_0(x_0,s)}{\kappa(1+g(a,s)\lambda_-(s))} = \frac{g_0(x_0,s)}{\kappa\lambda_-(s)}, \tag{4.28}
\]
so that
\[
\Sigma_1^{(1)}(s) = \frac{\lambda_+(s)^4}{\lambda_-(s)^4}\,\frac{1-\lambda_+(s)^2}{1-\lambda_-(s)^2}\left[\Sigma_1^{(0)}(s) - \frac{g_0(x_0,s)}{\kappa\lambda_+(s)}\right]. \tag{4.29}
\]
Finally,
\begin{align*}
\Sigma_{m-1}(s) &= (0,1)N(s)\begin{pmatrix} \Sigma_1(s) \\ \Sigma_2(s) \end{pmatrix} \tag{4.30} \\
&= \lambda_-(s)^{m-3}(0,1)\{M_0(s) + \epsilon M_1(s)\}\begin{pmatrix} \Sigma_1^{(0)}(s) + \epsilon\Sigma_1^{(1)}(s) + O(\epsilon^2) \\ \Sigma_2^{(0)}(s) + \epsilon\Sigma_2^{(1)}(s) + O(\epsilon^2) \end{pmatrix} \\
&= \lambda_+(s)^{m-3}(0,1)\left\{M_0(s)\begin{pmatrix} \Sigma_1^{(1)}(s) \\ \Sigma_2^{(1)}(s) \end{pmatrix} + M_1(s)\begin{pmatrix} \Sigma_1^{(0)}(s) \\ \Sigma_2^{(0)}(s) \end{pmatrix} + O(\epsilon)\right\}.
\end{align*}
We have used the fact that the $O(1)$ solution $(\Sigma_1^{(0)}, \Sigma_2^{(0)})^{\top}$ is actually a null-vector of the matrix $M_0$, so the leading contribution to $\Sigma_{m-1}(s)$ is proportional to $\epsilon\lambda_-(s)^{m-3} = \lambda_+(s)^{m-3}$. Hence, $\Sigma_{m-1}(s) \to 0$ as $m \to \infty$ due to the fact that $|\lambda_+(s)| < 1$ for all $s$. Equations (4.4) and (4.17) then imply that $\pi_m(x_0) \to 0$ as $m \to \infty$, with the rate of decay determined by $\lambda_+(0)^{m-3}$.
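As an illustrative numerical aside (not part of the analysis), the null-vector property of $M_0$ in (4.24) is easy to confirm: for a generic value of $\lambda_-$ (the value below is arbitrary), $M_0$ is singular with null space spanned by $(\lambda_-, 1)^{\top}$.

```python
import numpy as np

def M0(lam):
    # M0 from equation (4.24), with lam standing for lambda_-
    return np.array([[1.0, -lam], [lam, -lam**2]]) / (1.0 - lam**2)

lam = 0.37                        # arbitrary test value with |lam| != 1
M = M0(lam)
print(np.linalg.det(M))           # ~ 0: M0 is singular
print(M @ np.array([lam, 1.0]))   # ~ [0, 0]: (lam, 1)^T spans the null space
```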
5. Generalized model of multi-layer diffusion. The analysis of the FPT problem in section 4 could also have been carried out using the solution of the diffusion equation constructed in section 2. However, one advantage of the renewal approach is that it is based on snapping out BM, which can be used to generate sample paths of single-particle diffusion in a multi-layer medium. Rather than exploring numerical aspects here, we consider another advantage of the renewal approach, namely, that it supports a more general model of semi-permeable membranes. This is based on an extension of snapping out BM that modifies the rule for killing each round of reflected BM within a layer. We proceed by applying the encounter-based model of absorption [31, 32, 7, 8] to reflected BM in each of the layers separately.
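Although numerical aspects are not pursued here, the following minimal Euler-type sketch indicates how sample paths of snapping out BM could be generated for a single semi-permeable interface. The boundary-layer regularization of the local time and all parameter values are illustrative assumptions, not a scheme prescribed by the analysis above.

```python
import numpy as np

def snapping_out_bm(x0, a, D, kappa, T, dt, h=1e-2, seed=1):
    """Crude Euler sketch of snapping out BM with one semi-permeable
    interface at x = a (illustrative scheme and parameters)."""
    rng = np.random.default_rng(seed)
    x, t = x0, 0.0
    side = 1.0 if x0 >= a else -1.0            # current side of the interface
    ell = 0.0                                  # local time of the current round
    thresh = rng.exponential(D / (2 * kappa))  # exponential threshold, rate 2*kappa/D
    path = [x]
    while t < T:
        x += np.sqrt(2 * D * dt) * rng.standard_normal()
        if side * (x - a) < 0:                 # reflect the current round at x = a
            x = a + side * abs(x - a)
        if abs(x - a) < h:                     # accumulate boundary local time
            ell += D * dt / h
        if ell > thresh:                       # kill round; restart on a random side
            side = rng.choice([-1.0, 1.0])
            x, ell = a, 0.0
            thresh = rng.exponential(D / (2 * kappa))
        t += dt
        path.append(x)
    return np.array(path)

path = snapping_out_bm(x0=0.3, a=0.0, D=1.0, kappa=1.0, T=1.0, dt=1e-4)
```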
5.1. Local time propagator for a single layer. As we mentioned in section 3.1, partially reflected BM in an interval can be implemented by introducing exponentially distributed local time thresholds at either end of the interval, which then determine when reflected BM is killed. Here we generalize the killing mechanism. Given the local times (3.1a) and (3.1b) of the $j$-th layer with totally reflecting boundaries, the local time propagator is defined according to [31]
\[
P_j(x,\ell,\ell',t|x_0)\,dx\,d\ell\,d\ell' = \mathbb{P}[x < X(t) < x+dx,\ \ell < \ell_{j-1}^+ < \ell + d\ell,\ \ell' < \ell_j^- < \ell' + d\ell' \,|\, X(0) = x_0]. \tag{5.1}
\]
Next, for each interface we introduce a pair of independent identically distributed random local time thresholds $\widehat{\ell}_j^{\pm}$ such that $\mathbb{P}[\widehat{\ell}_j^{\pm} > \ell] \equiv \Psi_j^{\pm}(\ell)$. The special case of exponential distributions is given by equations (3.3). The stochastic process in the $j$-th layer is then killed as soon as one of the local times $\ell_{j-1}^+$ and $\ell_j^-$ exceeds its corresponding threshold, which occurs at the FPT $T_j = \min\{\tau_j^+, \tau_j^-\}$; see equation (3.4). Since the corresponding local time thresholds $\widehat{\ell}_{j-1}^+$ and $\widehat{\ell}_j^-$ are statistically independent, the relationship between the resulting probability density $p_j(x,t|x_0)$ for partially reflected BM in the $j$-th layer and $P_j(x,\ell,\ell',t|x_0)$ can be established as follows:
\begin{align*}
p_j(x,t|x_0)\,dx &= \mathbb{P}[X(t) \in (x,x+dx),\ t < T_j \,|\, X_0 = x_0] \\
&= \mathbb{P}[X(t) \in (x,x+dx),\ \ell_{j-1}^+(t) < \widehat{\ell}_{j-1}^+,\ \ell_j^-(t) < \widehat{\ell}_j^- \,|\, X_0 = x_0] \\
&= \int_0^{\infty} d\ell\,\psi_{j-1}^+(\ell)\int_0^{\infty} d\ell'\,\psi_j^-(\ell')\,\mathbb{P}[X(t) \in (x,x+dx),\ \ell_{j-1}^+ < \ell,\ \ell_j^- < \ell' \,|\, X_0 = x_0] \\
&= \int_0^{\infty} d\ell\,\psi_{j-1}^+(\ell)\int_0^{\infty} d\ell'\,\psi_j^-(\ell')\int_0^{\ell} d\hat{\ell}\int_0^{\ell'} d\hat{\ell}'\,[P_j(x,\hat{\ell},\hat{\ell}',t|x_0)\,dx].
\end{align*}
We have also introduced the probability densities $\psi_j^{\pm}(\ell) = -\partial_{\ell}\Psi_j^{\pm}(\ell)$. Reversing the orders of integration yields the result
\[
p_j(x,t|x_0) = \int_0^{\infty} d\ell\,\Psi_{j-1}^+(\ell)\int_0^{\infty} d\ell'\,\Psi_j^-(\ell')\,P_j(x,\ell,\ell',t|x_0). \tag{5.2}
\]
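The order-reversal step behind (5.2) rests on the scalar identity $\int_0^{\infty}\psi(\ell)\int_0^{\ell}F(u)\,du\,d\ell = \int_0^{\infty}\Psi(u)F(u)\,du$ for $\psi = -\Psi'$. A quick numerical check with a toy choice of $\Psi$ and $F$ (both arbitrary, chosen so the exact value $1/(z+1)$ is known):

```python
import numpy as np

# Toy check: Psi(l) = exp(-z*l), psi = -Psi' = z*exp(-z*l), F(u) = exp(-u),
# so both sides of the order-reversal identity equal 1/(z+1).
z = 2.0
u = np.linspace(0.0, 50.0, 200001)
du = u[1] - u[0]
Psi = np.exp(-z * u)
psi = z * np.exp(-z * u)
F = np.exp(-u)

cumF = np.concatenate(([0.0], np.cumsum((F[1:] + F[:-1]) / 2) * du))  # int_0^l F(u) du
trap = lambda y: np.sum((y[1:] + y[:-1]) / 2) * du
lhs = trap(psi * cumF)
rhs = trap(Psi * F)
print(lhs, rhs)   # both ~ 1/(z+1) = 1/3
```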
An evolution equation for the local time propagator can be derived as follows [7, 8]. Since the local times only change at the boundaries $x = a_{j-1}, a_j$, the propagator satisfies the diffusion equation in the bulk of the domain
\[
\frac{\partial P_j}{\partial t} = D_j\frac{\partial^2 P_j}{\partial x^2}, \qquad x \in (a_{j-1}, a_j). \tag{5.3}
\]
The nontrivial step is determining the boundary conditions at $x = a_{j-1}, a_j$. Here we give a heuristic derivation based on a boundary layer construction. For concreteness, consider the left-hand boundary layer $[a_{j-1}, a_{j-1}+h]$ and define
\[
\ell_{j-1}^h(t) = \frac{D_j}{h}\int_0^t\left\{\int_{a_{j-1}}^{a_{j-1}+h}\delta(X_{t'} - x)\,dx\right\}dt'. \tag{5.4}
\]
By definition, $h\ell_{j-1}^h(t)/D_j$ is the residence or occupation time of the process $X(t)$ in the boundary layer up to time $t$. Although the width $h$ and the residence time in the boundary layer vanish in the limit $h \to 0$, the rescaling by $1/h$ ensures that $\lim_{h\to 0}\ell_{j-1}^h(t) = \ell_{j-1}^+(t)$. Moreover, from conservation of probability, the flux into the boundary layer over the residence time $h\,\delta\ell/D_j$ generates a corresponding shift in the probability $P_j$ within the boundary layer from $\ell \to \ell + \delta\ell$. That is, for $\ell > 0$,
\[
-J_j(a_{j-1}+h,\ell,\ell',t|x_0)\,h\,\delta\ell = [P_j(a_{j-1},\ell+\delta\ell,\ell',t|x_0) - P_j(a_{j-1},\ell,\ell',t|x_0)]\,h,
\]
where $J_j(x,\ell,\ell',t|x_0) = -D\partial_x P_j(x,\ell,\ell',t|x_0)$. Dividing through by $h\,\delta\ell$ and taking the limits $h \to 0$ and $\delta\ell \to 0$ yields
\[
-J_j(a_{j-1},\ell,\ell',t|x_0) = \partial_{\ell}P_j(a_{j-1},\ell,\ell',t|x_0), \qquad \ell > 0.
\]
Moreover, when $\ell = 0$ the probability flux $J_j(a_{j-1},0,\ell',t|x_0)$ is identical to that of a Brownian particle with a totally absorbing boundary at $x = a_{j-1}$, which we denote by $J_{j,\infty}(a_{j-1},\ell',t|x_0)$. In addition, it can be shown that $P_j(a_{j-1},0,\ell',t|x_0) = -J_{j,\infty}(a_{j-1},\ell',t|x_0)$. Applying a similar argument at the end $x = a_j$, we obtain the pair of boundary conditions
\[
D\partial_x P_j(a_{j-1},\ell,\ell',t|x_0) = P_j(a_{j-1},0,\ell',t|x_0)\delta(\ell) + \frac{\partial P_j(a_{j-1},\ell,\ell',t|x_0)}{\partial\ell}, \tag{5.5a}
\]
\[
-D\partial_x P_j(a_j,\ell,\ell',t|x_0) = P_j(a_j,\ell,0,t|x_0)\delta(\ell') + \frac{\partial P_j(a_j,\ell,\ell',t|x_0)}{\partial\ell'}. \tag{5.5b}
\]
Integrating these conditions over the local times recovers the totally reflecting marginal flux, and Laplace transforming them with respect to $\ell$ and $\ell'$ yields the Robin conditions used below.
The crucial step in the encounter-based approach is to note that for exponentially distributed local time thresholds, see equation (3.3), the right-hand side of equation (5.2) reduces to a double Laplace transform of the local time propagator:
\[
p_j(x,t|x_0) = \mathcal{P}_j(x, z_{j-1}^+, z_j^-, t|x_0), \qquad z_{j-1}^+ = \frac{2\kappa_{j-1}}{D_j}, \quad z_j^- = \frac{2\kappa_j}{D_j}, \tag{5.6}
\]
with
\[
\mathcal{P}_j(x,z,z',t|x_0) \equiv \int_0^{\infty} d\ell\, e^{-z\ell}\int_0^{\infty} d\ell'\, e^{-z'\ell'}\,P_j(x,\ell,\ell',t|x_0). \tag{5.7}
\]
Laplace transforming the propagator boundary conditions (5.5a) and (5.5b) then shows that the probability density $p_j$ of equation (5.6) is the solution to the Robin BVP given by equations (3.6a) and (3.6b). Hence, the probability density of partially reflected BM in the $j$-th layer is equivalent to the doubly Laplace transformed local time propagator with the pair of Laplace variables $z_{j-1}^+$ and $z_j^-$. Assuming that the Laplace transforms can be inverted, we can then incorporate non-exponential probability distributions $\Psi_{j-1}^+(\ell)$ and $\Psi_j^-(\ell')$ such that the corresponding marginal density is now
\[
p_j(x,t|x_0) = \int_0^{\infty} d\ell\,\Psi_{j-1}^+(\ell)\int_0^{\infty} d\ell'\,\Psi_j^-(\ell')\,\mathcal{L}_{\ell}^{-1}\mathcal{L}_{\ell'}^{-1}\mathcal{P}_j(x,z,z',t|x_0), \tag{5.8}
\]
where $\mathcal{L}^{-1}$ denotes the inverse Laplace transform. One major difference from the exponential case is that the stochastic process $X(t)$ is no longer Markovian.
5.2. Killing time densities. In order to sew together successive rounds of reflected BM in the case of general distributions $\Psi_j$ we will need the conditional FPT densities $f_{j-1}^+(x_0,t)$ and $f_j^-(x_0,t)$ for partially reflected BM in the $j$-th layer to be killed at the ends $x = a_{j-1}$ and $x = a_j$, respectively. The corresponding conditional killing times were defined in equation (3.4). The FPT densities are given by the outward probability fluxes at the two ends:
\[
f_{j-1}^+(x_0,t) = D_j\partial_x p_j(a_{j-1},t|x_0), \qquad f_j^-(x_0,t) = -D_j\partial_x p_j(a_j,t|x_0). \tag{5.9}
\]
As in previous sections, it is convenient to Laplace transform with respect to $t$. Laplace transforming equation (5.8) and using the Green's function (3.8) gives
\[
\widetilde{p}_j(x,s|x_0) = \int_0^{\infty} d\ell\,\Psi_{j-1}^+(\ell)\int_0^{\infty} d\ell'\,\Psi_j^-(\ell')\,\mathcal{L}_{\ell}^{-1}\mathcal{L}_{\ell'}^{-1}\widetilde{\mathcal{P}}_j(x,z,z',s|x_0), \tag{5.10}
\]
where
\[
\widetilde{\mathcal{P}}_j(x,z,z',s|x_0) = \begin{cases} A_j(z,z',s)F_j(x,z,s)F_j(x_0,z',s), & a_{j-1} \le x \le x_0, \\ A_j(z,z',s)F_j(x_0,z,s)F_j(x,z',s), & x_0 \le x \le a_j, \end{cases} \tag{5.11}
\]
with
\[
F_j(x,z,s) = \sqrt{s/D_j}\cosh(\sqrt{s/D_j}\,[x-a_{j-1}]) + z\sinh(\sqrt{s/D_j}\,[x-a_{j-1}]), \tag{5.12a}
\]
\[
F_j(x,z',s) = \sqrt{s/D_j}\cosh(\sqrt{s/D_j}\,[a_j-x]) + z'\sinh(\sqrt{s/D_j}\,[a_j-x]), \tag{5.12b}
\]
\[
A_j = \frac{1}{\sqrt{sD_j}}\,\frac{1}{(z+z')\sqrt{s/D_j}\cosh(\sqrt{s/D_j}\,L_j) + [s/D_j + zz']\sinh(\sqrt{s/D_j}\,L_j)}. \tag{5.12c}
\]
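The Robin boundary conditions stated below can be verified directly from (5.11)--(5.12c) by finite differences; a minimal numerical check at the left end, with arbitrary parameter values:

```python
import numpy as np

# Check D * dP/dx = D * z * P at x = a_{j-1} for the Green's function (5.11)-(5.12c).
D, a0, a1, s, z, zp, x0 = 1.0, 0.0, 1.5, 0.8, 0.6, 1.1, 1.0
L = a1 - a0
al = np.sqrt(s / D)

Fl = lambda x: al * np.cosh(al * (x - a0)) + z * np.sinh(al * (x - a0))
Fr = lambda x: al * np.cosh(al * (a1 - x)) + zp * np.sinh(al * (a1 - x))
A = 1.0 / (np.sqrt(s * D) * ((z + zp) * al * np.cosh(al * L)
                             + (s / D + z * zp) * np.sinh(al * L)))
P = lambda x: A * Fl(x) * Fr(x0)     # branch a_{j-1} <= x <= x0

eps = 1e-6
dPdx = (P(a0 + eps) - P(a0)) / eps   # one-sided derivative at x = a_{j-1}
print(D * dPdx, D * z * P(a0))       # the two sides agree
```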
Since $\widetilde{\mathcal{P}}_j(x,z,z',s|x_0)$ satisfies the Robin boundary conditions
\[
D_j\partial_x\widetilde{\mathcal{P}}_j(a_{j-1},z,z',s|x_0) = D_j z\,\widetilde{\mathcal{P}}_j(a_{j-1},z,z',s|x_0),
\]
\[
D_j\partial_x\widetilde{\mathcal{P}}_j(a_j,z,z',s|x_0) = -D_j z'\,\widetilde{\mathcal{P}}_j(a_j,z,z',s|x_0),
\]
it follows that
\begin{align*}
\widetilde{f}_{j-1}^+(x_0,s) &\equiv D_j\partial_x\widetilde{p}_j(a_{j-1},s|x_0) \\
&= D_j\int_0^{\infty} d\ell\,\Psi_{j-1}^+(\ell)\int_0^{\infty} d\ell'\,\Psi_j^-(\ell')\Big\{\partial_{\ell}\widetilde{P}_j(a_{j-1},\ell,\ell',s|x_0) + \widetilde{P}_j(a_{j-1},0,\ell',s|x_0)\delta(\ell)\Big\} \\
&= D_j\int_0^{\infty} d\ell\,\psi_{j-1}^+(\ell)\int_0^{\infty} d\ell'\,\Psi_j^-(\ell')\,\widetilde{P}_j(a_{j-1},\ell,\ell',s|x_0). \tag{5.13}
\end{align*}
Similarly,
\[
\widetilde{f}_j^-(x_0,s) \equiv -D_j\partial_x\widetilde{p}_j(a_j,s|x_0) = D_j\int_0^{\infty} d\ell\,\Psi_{j-1}^+(\ell)\int_0^{\infty} d\ell'\,\psi_j^-(\ell')\,\widetilde{P}_j(a_j,\ell,\ell',s|x_0). \tag{5.14}
\]
Evaluation of the FPT densities reduces to the problem of calculating the propagator $\widetilde{P}_j(a_k,\ell,\ell',s|x_0)$ by inverting the double Laplace transform $\widetilde{\mathcal{P}}_j(a_k,z,z',s|x_0)$ with respect to $z$ and $z'$, $k = j-1, j$, and then evaluating the double integrals in equations (5.13) and (5.14). In general, this is a non-trivial calculation. However, a major simplification occurs if we take one of the distributions $\Psi_{j-1}^+$ or $\Psi_j^-$ to be an exponential. First suppose that $\Psi_{j-1}^+(\ell) = e^{-2\kappa_{j-1}\ell/D_j}$. We then have a Robin boundary condition at $x = a_{j-1}$,
\[
\widetilde{f}_{j-1}^+(x_0,s) = 2\kappa_{j-1}\widetilde{p}_j(a_{j-1},s|x_0), \tag{5.15}
\]
whereas
\[
\widetilde{f}_j^-(x_0,s) = D_j\int_0^{\infty} d\ell'\,\psi_j^-(\ell')\,\widetilde{P}_j(a_j, z_{j-1}^+, \ell', s|x_0). \tag{5.16}
\]
From equation (5.10) we find that
\[
\widetilde{\mathcal{P}}_j(a_j, z_{j-1}^+, z', s|x_0) = \frac{1}{D_j}\,\frac{\Lambda_j(x_0,s)}{z' + h_j(s)}, \tag{5.17}
\]
where
\[
\Lambda_j(x_0,s) = \frac{\sqrt{s/D_j}\cosh(\sqrt{s/D_j}\,[x_0-a_{j-1}]) + z_{j-1}^+\sinh(\sqrt{s/D_j}\,[x_0-a_{j-1}])}{\sqrt{s/D_j}\cosh(\sqrt{s/D_j}\,L_j) + z_{j-1}^+\sinh(\sqrt{s/D_j}\,L_j)}, \tag{5.18}
\]
and
\[
h_j(s) = \sqrt{s/D_j}\,\frac{\sqrt{s/D_j}\tanh(\sqrt{s/D_j}\,L_j) + z_{j-1}^+}{\sqrt{s/D_j} + z_{j-1}^+\tanh(\sqrt{s/D_j}\,L_j)}. \tag{5.19}
\]
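For later reference, one finds from (5.19) that $h_j(s) \to z_{j-1}^+/(1 + z_{j-1}^+ L_j)$ as $s \to 0$, the value that enters (5.48). A quick numerical check with arbitrary parameters:

```python
import numpy as np

def h(s, D, z, L):
    # h_j(s) from equation (5.19), with z = z_{j-1}^+
    al = np.sqrt(s / D)
    return al * (al * np.tanh(al * L) + z) / (al + z * np.tanh(al * L))

D, z, L = 1.0, 0.8, 2.0
print(h(1e-10, D, z, L), z / (1 + z * L))   # small-s limit: z/(1 + z*L)
```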
Inverting the Laplace transform with respect to $z'$ then gives
\[
\widetilde{P}_j(a_j, z_{j-1}^+, \ell', s|x_0) = D_j^{-1}\Lambda_j(x_0,s)\,e^{-h_j(s)\ell'} \tag{5.20}
\]
and, hence,
\[
\widetilde{f}_j^-(x_0,s) = \Lambda_j(x_0,s)\,\widetilde{\psi}_j^-(h_j(s)). \tag{5.21}
\]
On the other hand,
\[
\widetilde{p}_j(a_j,s|x_0) = D_j^{-1}\Lambda_j(x_0,s)\,\widetilde{\Psi}_j^-(h_j(s)). \tag{5.22}
\]
We thus obtain the following boundary condition at $x = a_j$:
\[
\widetilde{f}_j^-(x_0,s) = \widetilde{K}_j^-(s)\,\widetilde{p}_j(a_j,s|x_0), \qquad \widetilde{K}_j^-(s) = \frac{D_j\,\widetilde{\psi}_j^-(h_j(s))}{\widetilde{\Psi}_j^-(h_j(s))}. \tag{5.23}
\]
Finally, using the convolution theorem, the boundary condition at $x = a_j$ in the time domain takes the form
\[
D_j\partial_x p_j(a_j,t|x_0) = -\int_0^t K_j^-(\tau)\,p_j(a_j,t-\tau|x_0)\,d\tau. \tag{5.24}
\]
That is, in the case of a non-Markovian density for killing partially reflected BM at one end of an interval, the corresponding boundary condition involves an effective time-dependent absorption rate $K_j^-(t)$, which acts as a memory kernel.
Now suppose that $\Psi_j^-(\ell) = e^{-2\kappa_j\ell/D_j}$ so that
\[
\widetilde{f}_j^-(x_0,s) = 2\kappa_j\widetilde{p}_j(a_j,s|x_0), \qquad \widetilde{f}_{j-1}^+(x_0,s) = D_j\int_0^{\infty} d\ell\,\psi_{j-1}^+(\ell)\,\widetilde{P}_j(a_{j-1},\ell,z_j^-,s|x_0). \tag{5.25}
\]
From equation (5.10) we have
\[
\widetilde{\mathcal{P}}_j(a_{j-1}, z, z_j^-, s|x_0) = \frac{1}{D_j}\,\frac{\Lambda_j(x_0,s)}{z + h_j(s)}, \tag{5.26}
\]
where now
\[
\Lambda_j(x_0,s) = \frac{\sqrt{s/D_j}\cosh(\sqrt{s/D_j}\,[a_j-x_0]) + z_j^-\sinh(\sqrt{s/D_j}\,[a_j-x_0])}{\sqrt{s/D_j}\cosh(\sqrt{s/D_j}\,L_j) + z_j^-\sinh(\sqrt{s/D_j}\,L_j)}, \tag{5.27}
\]
and
\[
h_j(s) = \sqrt{s/D_j}\,\frac{\sqrt{s/D_j}\tanh(\sqrt{s/D_j}\,L_j) + z_j^-}{\sqrt{s/D_j} + z_j^-\tanh(\sqrt{s/D_j}\,L_j)}. \tag{5.28}
\]
Using identical arguments to the previous case, we find that the boundary condition at $x = a_{j-1}$ is
\[
\widetilde{f}_{j-1}^+(x_0,s) = \widetilde{K}_{j-1}^+(s)\,\widetilde{p}_j(a_{j-1},s|x_0), \qquad \widetilde{K}_{j-1}^+(s) = \frac{D_j\,\widetilde{\psi}_{j-1}^+(h_j(s))}{\widetilde{\Psi}_{j-1}^+(h_j(s))}. \tag{5.29}
\]
5.3. Generalized snapping out BM and the first renewal equation. We now define a generalized snapping out BM by sewing together successive rounds of reflected BM along identical lines to section 3.2, except that now each round is killed according to the general process introduced in section 5.1. (For simplicity, we assume that the exterior boundaries at $x = 0, L$ are totally reflecting.) Although each round of partially reflected Brownian motion is non-Markovian, all history is lost following absorption and restart, so that we can construct a renewal equation. However, it is now more convenient to use a first rather than a last renewal equation. Again we consider a general probability density $\phi(x_0)$ of initial conditions $x_0 \in G$.
Let $f_{j-1}^+(t)$ and $f_j^-(t)$ denote the conditional FPT densities for partially reflected BM in the $j$-th layer to be killed at the ends $x = a_{j-1}$ and $x = a_j$, respectively, in the case of a general initial distribution $\phi(x_0)$. It follows that
\[
f_{j-1}^+(t) = \int_{a_{j-1}}^{a_j} f_{j-1}^+(x_0,t)\phi(x_0)\,dx_0 = D_j\int_{a_{j-1}}^{a_j}\partial_x p_j(a_{j-1},t|x_0)\phi(x_0)\,dx_0, \tag{5.30a}
\]
\[
f_j^-(t) = \int_{a_{j-1}}^{a_j} f_j^-(x_0,t)\phi(x_0)\,dx_0 = -D_j\int_{a_{j-1}}^{a_j}\partial_x p_j(a_j,t|x_0)\phi(x_0)\,dx_0, \tag{5.30b}
\]
with $f_{j-1}^+(x_0,t)$ and $f_j^-(x_0,t)$ defined in equations (5.9). We also set $f_0^+(t) \equiv 0$ and $f_m^-(t) \equiv 0$, since the exterior boundaries are totally reflecting. Generalizing previous work [9, 10], the first renewal equation in the $j$-th layer, $1 \le j \le m$, takes the form
\begin{align*}
\rho_j(x,t) &\equiv \int_G \rho_j(x,t|x_0)\phi(x_0)\,dx_0 \tag{5.31} \\
&= p_j(x,t) + \frac{1}{2}\sum_{k=1}^{m-1}\int_0^t\left[\rho_j(x,t-\tau|a_k^-) + \rho_j(x,t-\tau|a_k^+)\right]\left[f_k^-(\tau) + f_k^+(\tau)\right]d\tau
\end{align*}
for $x \in (a_{j-1}, a_j)$ and
\[
p_j(x,t) = \int_{a_{j-1}}^{a_j} p_j(x,t|x_0)\phi(x_0)\,dx_0. \tag{5.32}
\]
The first term on the right-hand side of equation (5.31) represents all sample trajectories that start in the $j$-th layer and have not been absorbed at the ends $x = a_{j-1}, a_j$ up to time $t$. The integral term represents all trajectories that were first absorbed (stopped) at a semi-permeable interface at time $\tau$ and then switched to either a positively or negatively reflected BM state with probability $1/2$, after which an arbitrary number of switches can occur before reaching $x \in (a_{j-1}, a_j)$ at time $t$. The probability that the first stopping event occurred at the $k$-th interface in the interval $(\tau, \tau + d\tau)$ is $[f_k^+(\tau) + f_k^-(\tau)]\,d\tau$. Laplace transforming the renewal equation (5.31) with respect to time $t$ gives
\[
\widetilde{\rho}_j(x,s) = \widetilde{p}_j(x,s) + \frac{1}{2}\sum_{k=1}^{m-1}\left[\widetilde{\rho}_j(x,s|a_k^-) + \widetilde{\rho}_j(x,s|a_k^+)\right]\left[\widetilde{f}_k^-(s) + \widetilde{f}_k^+(s)\right]. \tag{5.33}
\]
In order to determine the factors
\[
\Sigma_{jk}(x,s) = \widetilde{\rho}_j(x,s|a_k^-) + \widetilde{\rho}_j(x,s|a_k^+), \qquad 1 \le k < m, \tag{5.34}
\]
we substitute into equation (5.33) the initial density $\phi(x_0) = \frac{1}{2}[\delta(x_0 - a_k^-) + \delta(x_0 - a_k^+)]$. This gives
\begin{align*}
\Sigma_{jk}(x,s) &= \widetilde{p}_j(x,s|a_k)[\delta_{j,k} + \delta_{j,k+1}] + \tfrac{1}{2}\Sigma_{jk}(x,s)\big[\widetilde{f}_k^-(a_k^-,s) + \widetilde{f}_k^+(a_k^+,s)\big] \\
&\quad + \tfrac{1}{2}\Sigma_{j,k-1}(x,s)\,\widetilde{f}_{k-1}^+(a_k^-,s) + \tfrac{1}{2}\Sigma_{j,k+1}(x,s)\,\widetilde{f}_{k+1}^-(a_k^+,s). \tag{5.35}
\end{align*}
Comparison with equations (3.17a)--(3.17c) implies that the above equation can be rewritten in the matrix form
\[
\sum_{l=1}^{m-1}\Theta_{kl}(s)\Sigma_{jl}(x,s) = -\widetilde{p}_j(x,s|a_k)[\delta_{j,k} + \delta_{j,k+1}], \tag{5.36}
\]
where $\Theta(s)$ is a tridiagonal matrix with non-zero elements
\[
\Theta_{k,k}(s) = d_k(s) \equiv \tfrac{1}{2}\big[\widetilde{f}_k^-(a_k^-,s) + \widetilde{f}_k^+(a_k^+,s)\big] - 1, \qquad k = 1,\ldots,m-1, \tag{5.37a}
\]
\[
\Theta_{k,k-1}(s) = c_k(s) \equiv \tfrac{1}{2}\widetilde{f}_{k-1}^+(a_k^-,s), \qquad k = 2,\ldots,m-1, \tag{5.37b}
\]
\[
\Theta_{k,k+1}(s) = b_k(s) \equiv \tfrac{1}{2}\widetilde{f}_{k+1}^-(a_k^+,s), \qquad k = 1,\ldots,m-2. \tag{5.37c}
\]
Assuming that the matrix $\Theta(s)$ is invertible, we obtain the formal solution
\[
\Sigma_{jk}(x,s) = -\Theta^{-1}_{kj}(s)\,\widetilde{p}_j(x,s|a_j) - \Theta^{-1}_{k,j-1}(s)\,\widetilde{p}_j(x,s|a_{j-1}). \tag{5.38}
\]
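In practice, for each Laplace variable $s$ equation (5.36) is a tridiagonal linear system of size $m-1$. A minimal numpy sketch, with placeholder entries standing in for the FPT-density expressions (5.37a)--(5.37c):

```python
import numpy as np

m = 6                          # number of layers (placeholder)
n = m - 1                      # number of interior interfaces
rng = np.random.default_rng(0)

# Tridiagonal Theta(s) at one fixed s; d, c, b mimic (5.37a)-(5.37c).
d = -1.0 + 0.1 * rng.random(n)         # diagonal (dominant, so Theta is invertible)
c = 0.1 * rng.random(n - 1)            # sub-diagonal
b = 0.1 * rng.random(n - 1)            # super-diagonal
Theta = np.diag(d) + np.diag(c, -1) + np.diag(b, 1)

p = rng.random(n)                      # right-hand side ~ p_j(x, s | a_k) terms
Sigma = np.linalg.solve(Theta, -p)     # formal solution, cf. (5.38)
print(np.allclose(Theta @ Sigma, -p))  # True
```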
Substituting into equation (5.33) yields the result
\[
\widetilde{\rho}_j(x,s) = \widetilde{p}_j(x,s) + \frac{1}{2}\sum_{k=1}^{m-1}\left[\Theta^{-1}_{kj}(s)\,\widetilde{p}_j(x,s|a_j) + \Theta^{-1}_{k,j-1}(s)\,\widetilde{p}_j(x,s|a_{j-1})\right]\left[\widetilde{f}_k^-(s) + \widetilde{f}_k^+(s)\right]. \tag{5.39}
\]
Equivalence of first and last renewal equations for exponential killing. An important check of our analysis is to show that the solution (5.39) of the first renewal equation is equivalent to the solution (3.21) of the last renewal equation when each round of reflected BM is killed according to an independent exponential distribution for each local time threshold. Since $\widetilde{p}_j(x,s|x_0)$ then satisfies Robin boundary conditions at $x = a_{j-1}, a_j$, we find that
\[
\Theta_{k,k}(s) = \kappa_k\big[\widetilde{p}_{k+1}(a_k,s|a_k) + \widetilde{p}_k(a_k,s|a_k)\big] - 1, \tag{5.40a}
\]
\[
\Theta_{k,k-1}(s) = \kappa_{k-1}\widetilde{p}_k(a_{k-1},s|a_k) = \kappa_{k-1}\widetilde{p}_k(a_k,s|a_{k-1}), \tag{5.40b}
\]
\[
\Theta_{k,k+1}(s) = \kappa_{k+1}\widetilde{p}_{k+1}(a_{k+1},s|a_k) = \kappa_{k+1}\widetilde{p}_{k+1}(a_k,s|a_{k+1}). \tag{5.40c}
\]
We have used two important properties of partially reflected BM:
(i) Symmetry of the Green's function: $\widetilde{p}(x,s|x_0) = \widetilde{p}(x_0,s|x)$.
(ii) The solution for the functions $\Sigma_{jk}(x,s)$ is obtained by introducing the initial conditions (5.34). The FPT densities are thus evaluated at the initial points $a_k^{\pm}$. This means that when we impose the Robin boundary conditions we do not pick up the additional constant term in equations (3.10a) and (3.10b).
It follows that the solution (5.39) reduces to the form
\[
\widetilde{\rho}_j(x,s) = \widetilde{p}_j(x,s) + \sum_{k=1}^{m-1}\left[\widetilde{p}_j(x,s|a_{j-1})\Theta^{-1}_{k,j-1}(s) + \widetilde{p}_j(x,s|a_j)\Theta^{-1}_{kj}(s)\right]\kappa_k\big[\widetilde{p}_k(a_k,s) + \widetilde{p}_{k+1}(a_k,s)\big]. \tag{5.41}
\]
Finally, using the fact that $\kappa_k\Theta^{-1}_{kj}(s) = \kappa_j\Theta^{-1}_{jk}(s)$, we recover the solution (3.21).
Non-exponential killing. The above analysis shows that the same solution structure holds for both exponential and non-exponential killing, provided that we express the tridiagonal matrix $\Theta_{ij}$ in terms of the conditional FPT densities $\widetilde{f}_k^{\pm}(a_k^{\pm},s)$, $\widetilde{f}_{k-1}^+(a_k^-,s)$ and $\widetilde{f}_{k+1}^-(a_k^+,s)$. The latter are themselves determined from equations (5.13) and (5.14). One configuration that is analytically tractable is a 1D domain with a sequence of semi-permeable barriers whose distributions $\Psi_j^{\pm}$ alternate between exponential and non-exponential. For example, suppose that $\Psi_j^-(\ell) = e^{-2\kappa_j\ell/D_j}$ and $\Psi_j^+(\ell) = e^{-2\kappa_j\ell/D_{j+1}}$ for all odd $j = 1, 3, \ldots$, whereas $\Psi_j^{\pm}(\ell)$ are non-exponential for even $j = 2, 4, \ldots$. Combining the analysis of the FPT densities in section 5.2 with the analysis of the first renewal equation and its relationship with the last renewal equation, we obtain the following generalization of the interfacial boundary conditions (2.2b):
\[
D_j\partial_x\widetilde{\rho}_j(a_j^-,s) = D_{j+1}\partial_x\widetilde{\rho}_{j+1}(a_j^+,s) = \frac{1}{2}\big[\widetilde{K}_j^+(s)\widetilde{\rho}_{j+1}(a_j^+,s) - \widetilde{K}_j^-(s)\widetilde{\rho}_j(a_j^-,s)\big], \tag{5.42}
\]
with $\widetilde{K}_j^{\pm}(s) = 2\kappa_j$ for odd $j$ and
\[
\widetilde{K}_j^-(s) = \frac{D_j\,\widetilde{\psi}_j^-(h_j(s))}{\widetilde{\Psi}_j^-(h_j(s))}, \qquad \widetilde{K}_j^+(s) = \frac{D_{j+1}\,\widetilde{\psi}_j^+(h_{j+1}(s))}{\widetilde{\Psi}_j^+(h_{j+1}(s))} \tag{5.43}
\]
for even $j$, with $h_j(s)$ and $h_{j+1}(s)$ given by equations (5.19) and (5.28), respectively.
We thus have the setup shown in Fig. 5.1. Note, in particular, that the time-dependent permeability kernels of the even interfaces are asymmetric.
[Figure: interfaces at $x = a_1, \ldots, a_4$ in $[0, L]$, labelled $\kappa_1$, $K_2^{\pm}(t)$, $\kappa_2$, $K_4^{\pm}(t)$.]
Fig. 5.1. A 1D layered medium partitioned by a sequence of semi-permeable interfaces that alternate between symmetric constant permeabilities $\kappa_j$, $j = 1, 3, \ldots$ and asymmetric time-dependent permeabilities $K_j^{\pm}(t)$, $j = 2, 4, \ldots$.
Permeability kernels for the gamma distribution. For the sake of illustration, suppose that $\psi_j^{\pm}(\ell)$ for even $j$ are given by the gamma distributions
\[
\psi_j^{\pm}(\ell) = \frac{z_j^{\pm}(z_j^{\pm}\ell)^{\mu-1}e^{-z_j^{\pm}\ell}}{\Gamma(\mu)}, \qquad \mu > 0, \tag{5.44}
\]
where $\Gamma(\mu)$ is the gamma function. The corresponding Laplace transforms are
\[
\widetilde{\psi}_j^{\pm}(z) = \left(\frac{z_j^{\pm}}{z_j^{\pm} + z}\right)^{\mu}, \qquad \widetilde{\Psi}_j^{\pm}(z) = \frac{1 - \widetilde{\psi}_j^{\pm}(z)}{z}. \tag{5.45}
\]
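The transforms (5.45) are easily confirmed by direct numerical integration (the parameter values below are arbitrary):

```python
import numpy as np
from math import gamma

mu, zj, z = 2.5, 1.5, 0.7
l = np.linspace(1e-8, 80.0, 400001)
dl = l[1] - l[0]
psi = zj * (zj * l) ** (mu - 1) * np.exp(-zj * l) / gamma(mu)   # gamma density (5.44)
y = psi * np.exp(-z * l)
num = np.sum((y[1:] + y[:-1]) / 2) * dl                         # trapezoid rule
print(num, (zj / (zj + z)) ** mu)   # Laplace transform matches (5.45)
```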
If $\mu = 1$ then $\psi_j^{\pm}$ reduce to the exponential distributions with constant reactivity $\kappa_j$. The parameter $\mu$ thus characterizes the deviation of $\psi_j^{\pm}(\ell)$ from the exponential case. If $\mu < 1$ ($\mu > 1$) then $\psi_j^{\pm}(\ell)$ decreases more rapidly (slowly) as a function of the local time $\ell$. Substituting the gamma distributions into equations (5.43) yields
\[
\widetilde{K}_j^-(s) = \frac{D_j h_j(s)(z_j^-)^{\mu}}{[z_j^- + h_j(s)]^{\mu} - (z_j^-)^{\mu}}, \qquad \widetilde{K}_j^+(s) = \frac{D_{j+1} h_{j+1}(s)(z_j^+)^{\mu}}{[z_j^+ + h_{j+1}(s)]^{\mu} - (z_j^+)^{\mu}}. \tag{5.46}
\]
If $\mu = 1$ then $\widetilde{K}_j^{\pm}(s) = 2\kappa_j$ as expected. On the other hand, if $\mu = 2$, say, then
\[
\widetilde{K}_j^-(s) = \frac{2\kappa_j}{2 + D_j h_j(s)/2\kappa_j}, \qquad \widetilde{K}_j^+(s) = \frac{2\kappa_j}{2 + D_{j+1} h_{j+1}(s)/2\kappa_j}. \tag{5.47}
\]
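The $\mu = 2$ reduction can be confirmed algebraically or numerically: with $z_j^- = 2\kappa_j/D_j$, equation (5.46) collapses exactly to (5.47) (the values of $D_j$, $\kappa_j$ and $h$ below are arbitrary):

```python
Dj, kj, hval, mu = 1.3, 0.9, 0.45, 2
zj = 2 * kj / Dj
K_gamma = Dj * hval * zj**mu / ((zj + hval)**mu - zj**mu)   # (5.46) with mu = 2
K_mu2 = 2 * kj / (2 + Dj * hval / (2 * kj))                 # (5.47)
print(K_gamma, K_mu2)   # identical
```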
The corresponding time-dependent kernels $K_j^{\pm}(t)$ are normalizable, since
\[
\int_0^{\infty} K_j^-(t)\,dt = \widetilde{K}_j^-(0) = \frac{2\kappa_j}{2 + \kappa_j^{-1}\kappa_{j-1}/(1 + 2\kappa_{j-1}L_j/D_j)}, \tag{5.48a}
\]
\[
\int_0^{\infty} K_j^+(t)\,dt = \widetilde{K}_j^+(0) = \frac{2\kappa_j}{2 + \kappa_j^{-1}\kappa_{j+1}/(1 + 2\kappa_{j+1}L_{j+1}/D_{j+1})}. \tag{5.48b}
\]
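Combining the small-$s$ limit $h_j(0) = z_{j-1}^+/(1 + z_{j-1}^+ L_j)$ of (5.19) with (5.47) reproduces the closed form (5.48a); a numerical check with arbitrary parameter values:

```python
Dj, kj, kjm1, Lj = 1.0, 1.0, 0.5, 2.0
z = 2 * kjm1 / Dj                       # z_{j-1}^+
h0 = z / (1 + z * Lj)                   # h_j(0), from (5.19) as s -> 0
K0 = 2 * kj / (2 + Dj * h0 / (2 * kj))  # (5.47) evaluated at s = 0
K0_closed = 2 * kj / (2 + (kjm1 / kj) / (1 + 2 * kjm1 * Lj / Dj))  # (5.48a)
print(K0, K0_closed)   # agree
```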
However, the kernels are heavy-tailed with infinite moments. For example,
\[
\langle t \rangle_- \equiv \frac{1}{\widetilde{K}_j^-(0)}\int_0^{\infty} t K_j^-(t)\,dt = -\frac{1}{\widetilde{K}_j^-(0)}\lim_{s\to 0}\partial_s\widetilde{K}_j^-(s) = \frac{1}{\widetilde{K}_j^-(0)}\lim_{s\to 0}\frac{D_j h_j'(s)}{[2 + D_j h_j(s)/2\kappa_j]^2} = \infty. \tag{5.49}
\]
That is, all moments are infinite since all derivatives of $h_j(s)$ are singular at $s = 0$. An analogous result was previously found for a single interface in 1D and 3D [9, 10].
6. Discussion. In this paper we developed a probabilistic framework for analyzing single-particle diffusion in heterogeneous multi-layered media. Our approach was based on a multi-layered version of snapping out BM. We showed that the distribution of sample trajectories satisfied a last renewal equation that related the full probability density to the probability densities of partially reflected BM in each layer. The renewal equation was solved using a combination of Laplace transforms and transfer matrices. We also proved the equivalence of the renewal equation and the corresponding multi-layered diffusion equation in the case of constant permeabilities. We then used the renewal approach to incorporate a more general probabilistic model of semipermeable interfaces. This involved killing each round of partially reflected BM according to a non-Markovian encounter-based model of absorption at an interface. We constructed a corresponding first renewal equation that related the full probability density to the FPT densities for killing each round of reflected BM. In particular, we showed that non-Markovian models of absorption can generate asymmetric, heavy-tailed time-dependent permeabilities.

In developing the basic mathematical framework, we focused on relatively simple examples such as identical layers with constant permeabilities or alternating Markovian and non-Markovian interfaces. We also restricted our analysis to the Laplace domain rather than the time domain. However, it is clear that in order to apply the theory more widely, it will be necessary to develop efficient numerical schemes for solving the last or first renewal equations in Laplace space, and then inverting the Laplace transformed probability density to obtain the solution in the time domain. In the case of non-Markovian models of absorption at both ends of a layer, it will also be necessary to compute the double inverse Laplace transform of the local time propagator and evaluate the resulting double integral in equation (5.8). Another computational issue is developing an efficient numerical scheme for simulating sample trajectories of snapping out BM in heterogeneous multi-layer media.

Finally, from a modeling perspective, it would be interesting to identify plausible biophysical mechanisms underlying non-Markovian models of semi-permeable membranes. As previously highlighted within the context of encounter-based models of absorption [31, 32, 7, 8], various surface-based reactions are better modeled in terms of a reactivity that is a function of the local time. For example, the surface may become progressively activated by repeated encounters with a diffusing particle, or an initially highly reactive surface may become less active due to multiple interactions with the particle (passivation) [4, 23].
REFERENCES

[1] V. Aho, K. Mattila, T. Kühn, P. Kekäläinen, O. Pulkkine, R. B. Minussi, M. Vihinen-Ranta and J. Timonen. Diffusion through thin membranes: Modeling across scales. Phys. Rev. E 93 (2016) 043309
[2] I. Alemany, J. N. Rose, J. Garnier-Brun, A. D. Scott and D. J. Doorly. Random walk diffusion simulations in semi-permeable layered media with varying diffusivity. Scientific Reports 12 (2022) 10759
[3] S. Barbaro, C. Giaconia and A. Orioli. A computer oriented method for the analysis of non steady state thermal behaviour of buildings. Build. Environ. 23 (1988) 19-24
[4] C. H. Bartholomew. Mechanisms of catalyst deactivation. Appl. Catal. A: Gen. 212 (2001) 17-60
[5] A. N. Borodin and P. Salminen. Handbook of Brownian Motion: Facts and Formulae. Birkhauser Verlag, Basel-Boston-Berlin (1996)
[6] P. C. Bressloff. Diffusion in cells with stochastically-gated gap junctions. SIAM J. Appl. Math. 76 (2016) 1658-1682
[7] P. C. Bressloff. Diffusion-mediated absorption by partially reactive targets: Brownian functionals and generalized propagators. J. Phys. A 55 (2022) 205001
[8] P. C. Bressloff. Spectral theory of diffusion in partially absorbing media. Proc. R. Soc. A 478 (2022) 20220319
[9] P. C. Bressloff. A probabilistic model of diffusion through a semipermeable barrier. Proc. R. Soc. A 478 (2022) 20220615
[10] P. C. Bressloff. Renewal equation for single-particle diffusion through a semipermeable interface. Phys. Rev. E. In press (2023)
[11] P. R. Brink and S. V. Ramanan. A model for the diffusion of fluorescent probes in the septate giant axon of earthworm: axoplasmic diffusion and junctional membrane permeability. Biophys. J. 48 (1985) 299-309
[12] A. Bobrowski. Semigroup-theoretic approach to diffusion in thin layers separated by semi-permeable membranes. J. Evol. Equ. 21 (2021) 1019-1057
[13] P. T. Callaghan, A. Coy, T. P. J. Halpin, D. MacGowan, K. J. Packer and F. O. Zelaya. Diffusion in porous systems and the influence of pore morphology in pulsed gradient spin-echo nuclear magnetic resonance studies. J. Chem. Phys. 97 (1992) 651-662
[14] E. Carr and I. Turner. A semi-analytical solution for multilayer diffusion in a composite medium consisting of a large number of layers. Appl. Math. Model. 40 (2016) 7034-7050
[15] B. W. Connors and M. A. Long. Electrical synapses in the mammalian brain. Annu. Rev. Neurosci. 27 (2004) 393-418
[16] A. Coy and P. T. Callaghan. Pulsed gradient spin echo nuclear magnetic resonance for molecules diffusing between partially reflecting rectangular barriers. J. Chem. Phys. 101 (1994) 4599-4609
[17] F. de Monte. Transient heat conduction in one-dimensional composite slab. A 'natural' analytic approach. Int. J. Heat Mass Transf. 43 (2000) 3607-3619
[18] J.-P. Diard, N. Glandut, C. Montella and J.-Y. Sanchez. One layer, two layers, etc. An introduction to the EIS study of multilayer electrodes. Part 1: Theory. J. Electroanal. Chem. 578 (2005) 247-257
[19] O. K. Dudko, A. M. Berezhkovskii and G. H. Weiss. Diffusion in the presence of periodically spaced permeable membranes. J. Chem. Phys. 121 (2004) 11283
[20] W. J. Evans and P. E. Martin. Gap junctions: structure and function. Mol. Membr. Biol. 19 (2002) 121-136
[21] S. Regev and O. Farago. Application of underdamped Langevin dynamics simulations for the study of diffusion from a drug-eluting stent. Phys. A: Stat. Mech. Appl. 507 (2018) 231-239
[22] O. Farago. Algorithms for Brownian dynamics across discontinuities. J. Comput. Phys. 423 (2020) 109802
[23] M. Filoche, D. S. Grebenkov, J. S. Andrade and B. Sapoval. Passivation of irregular surfaces accessed by diffusion. Proc. Natl. Acad. Sci. 105 (2008) 7636-7640
[24] V. Freger. Diffusion impedance and equivalent circuit of a multilayer film. Electrochem. Commun. 7 (2005) 957-961
[25] M. Freidlin. Functional Integration and Partial Differential Equations. Annals of Mathematics Studies, Princeton University Press, Princeton, New Jersey (1985)
[26] D. A. Goodenough and D. L. Paul. Gap junctions. Cold Spring Harb. Perspect. Biol. 1 (2009) a002576
[27] G. L. Graff, R. E. Williford and P. E. Burrows. Mechanisms of vapor permeation through multilayer barrier films: lag time versus equilibrium permeation. J. Appl. Phys. 96 (2004) 1840-1849
[28] D. S. Grebenkov. Partially reflected Brownian motion: A stochastic approach to transport phenomena. In "Focus on Probability Theory", Ed. L. R. Velle, pp. 135-169. Hauppauge: Nova Science Publishers (2006)
[29] D. S. Grebenkov. Pulsed-gradient spin-echo monitoring of restricted diffusion in multilayered structures. J. Magn. Reson. 205 (2010) 181-195
[30] D. S. Grebenkov, D. V. Nguyen and J.-R. Li. Exploring diffusion across permeable barriers at high gradients. I. Narrow pulse approximation. J. Magn. Reson. 248 (2014) 153-163
[31] D. S. Grebenkov. Paradigm shift in diffusion-mediated surface phenomena. Phys. Rev. Lett. 125 (2020) 078102
[32] D. S. Grebenkov. An encounter-based approach for restricted diffusion with a gradient drift. J. Phys. A 55 (2022) 045203
[33] P. Grossel and F. Depasse. Alternating heat diffusion in thermophysical depth profiles: multilayer and continuous descriptions. J. Phys. D: Appl. Phys. 31 (1998) 216
[34] Y. Gurevich, I. Lashkevich and C. G. de la Cruz. Effective thermal parameters of layered
2428
+ films:an application to pulsed photothermal techniques. Int. J. Heat Mass Transf. 52 (2009)
2429
+ 25
2430
+
2431
+ 4302-4307.
2432
+ [35] D. W. Hahn and M. N. Ozisik One-Dimensional Composite Medium Ch. 10 pp. 393-432.
2433
+ Wiley, Hoboken (2012)
2434
+ [36] R. Hickson, S. Barry and G. Mercer. Critical times in multilayer diffusion. Part 1: Exact
2435
+ solutions. Int. J. Heat Mass Transf. 52 (2009) 5776-5783.
2436
+ [37] R. Hickson, S. Barry and G. Mercer. Critical times in multilayer. diffusion. Part. 2: Ap-
2437
+ proximate solutions. Int. J. Heat Mass Transf. 52 (2009) 5784-5791.
2438
+ [38] K. Ito and H. P. McKean. Diffusion Processes and Their Sample Paths Springer-Verlag,
2439
+ Berlin (1965)
2440
+ [39] T. Kay and Giuggioli. Diffusion through permeable interfaces: Fundamental equations and
2441
+ their application to first-passage and local time statistics. Phys. Rev. Res. 4 (2022) L032039
2442
+ [40] V. M. Kenkre, L. Giuggiol and Z. Kalay. Molecular motion in cell membranes: analytic
2443
+ study of fence-hindered random walks. Phys. Rev. E 77 (2008) 051907
2444
+ [41] A. Lejay The snapping out Brownian motion. The Annals of Applied Probability 26 (2016)
2445
+ 1727-1742.
2446
+ [42] A. Lejay Monte Carlo estimation of the mean residence time in cells surrounded by thin layers.
2447
+ Mathematics and Computers in Simulation 143 (2018) 65-77
2448
+ [43] G. Liu, L. Barbour and B. C. Si. Unified multilayer diffusion model and application to diffu-
2449
+ sion experiment in porous media by method of chambers. Environ. Sci. Technol. 43 (2009)
2450
+ 2412-2416
2451
+ [44] X. Lu and P. Tervola. Transient heat conduction in the composites lab-analytical method. J.
2452
+ Phys. A: Math. Gen. 38 (2005) 81
2453
+ [45] G. N. Milshtein. The solving of boundary value problems by numerical integration of stochastic
2454
+ equations. Math. Comp. Sim. 38(1995) 77-85
2455
+ [46] N. Moutal and D. S. Grebenkov Diffusion across semi-permeable barriers: spectral proper-
2456
+ ties, efficient computation, and applications J. Sci. Comput. 81 (2019) 1630-1654
2457
+ [47] D. Novikov, E. Fieremans, J. Jensen and J. A. Helpern. Random walks with barriers. Nat.
2458
+ Phys. 7 (2011) 508-514
2459
+ [48] V. G. Papanicolaou. The probabilistic solution of the third boundary value problem for second
2460
+ order elliptic equations Probab. Th. Rel. Fields 87 (1990) 27-77
2461
+ [49] G. Pontrelli and F. de Monte. Mass diffusion through two-layer porous media: an applica-
2462
+ tion to the drug-eluting stent. Int. J. Heat Mass Transf. 50 (2007) 3658-3669.
2463
+ [50] J. G. Powles, M. Mallett, G. Rickayzen and W. Evans. Exact analytic solutions for dif-
2464
+ fusion impeded by an infinite array of partially permeable barriers. Proc. R. Soc. Lond. A
2465
+ 436 (1992) 391
2466
+ [51] S. V. Ramanan and P. R. Brink. Exact solution of a model of diffusion in an infinite chain
2467
+ or monlolayer of cells coupled by gap junctions. Biophys. J. 58 (1990) 631-639
2468
+ [52] C. D. Shackelford and S. M. Moore. Fickian diffusion of radio nuclides for engineered
2469
+ containment barriers: diffusion coefficients, porosities, and complicating issues. Eng. Geol.
2470
+ 152 (2013) 133-147. 123
2471
+ [53] J. E. Tanner. Transient diffusion in a system partitioned by permeable barriers. application to
2472
+ NMR measurements with a pulsed field gradient. J. Chem. Phys. 69 (1978) 1748.
2473
+ [54] H. Todo, T. Oshizaka, W. R. Kadhum and K. Sugibayashi. Mathematical model to predict
2474
+ skin concentration after topical application of drugs. Pharmaceutics 5 (2013) 634-651.
2475
+ [55] S. R. Yates, S. K. Papiernik, F. Gao and J. Gan. Analytical solutions for the transport of
2476
+ volatile organic chemicals in unsaturated layered systems. Water Resour. Res. 36 (2000)
2477
+ 1993-2000.
2478
+ 26
2479
+
FtE1T4oBgHgl3EQfEwPV/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
GdAzT4oBgHgl3EQfHfsT/content/tmp_files/2301.01044v1.pdf.txt ADDED
@@ -0,0 +1,1583 @@
+ Analysis of Label-Flip Poisoning Attack on Machine Learning Based Malware Detector
+
+ Kshitiz Aryal, Department of Computer Science, Tennessee Technological University, Cookeville, TN, USA
+ Maanak Gupta, Department of Computer Science, Tennessee Technological University, Cookeville, TN, USA
+ Mahmoud Abdelsalam, Department of Computer Science, North Carolina A&T State University, Greensboro, NC, USA
+
+ Abstract—With the increase in machine learning (ML) applications in different domains, incentives for deceiving these models have reached more than ever. As data is the core backbone of ML algorithms, attackers shifted their interest towards polluting the training data itself. Data credibility is at even higher risk with the rise of state-of-the-art research topics like open design principles, federated learning, and crowd-sourcing. Since the machine learning model depends on different stakeholders for obtaining data, there are no existing reliable automated mechanisms to verify the veracity of data from each source.
+ Malware detection is arduous due to its malicious nature, with the addition of metamorphic and polymorphic ability in the evolving samples. ML has proven to solve the zero-day malware detection problem, which is unresolved by traditional signature-based approaches. The poisoning of malware training data can allow malware files to go undetected by ML-based malware detectors, helping the attackers to fulfill their malicious goals. A feasibility analysis of the data poisoning threat in the malware detection domain is still lacking. Our work focuses on two major sections: training ML-based malware detectors and poisoning the training data using the label-poisoning approach. We analyze the robustness of different machine learning models against data poisoning with varying volumes of poisoned data.
+
+ Index Terms—Cybersecurity, Poisoning Attacks, Machine Learning, Malware Detectors, Adversarial Malware Analysis
+
+ I. INTRODUCTION
+
+ Machine Learning (ML) techniques have been emerging rapidly, providing computational intelligence to various applications. The ability of machine learning to generalize to unseen data has paved its way from labs to the real world. It has already gained unprecedented success in many fields like image processing [1], [2], natural language processing [3], [4], recommendation systems used by Google, YouTube and Facebook, cybersecurity [5], [6], robotics [7], drug research [8], [9], and many other domains. ML-based systems are achieving unparalleled performance through modern deep neural networks, bringing revolutions in AI-based services. Recent works have shown significant achievements in fields like self-driving cars and voice-controlled systems used by tech giants, such as Autopilot in Tesla, Apple Siri, Amazon Alexa, and Microsoft Cortana. With machine learning being applied to such critical applications, continuous security threats are never a bombshell. In addition to traditional security threats like malware attacks [10], phishing [11], man-in-the-middle attacks [12], denial-of-service [13], and SQL injection [14], adversaries are finding novel ways to sneak into ML models [15].
+ Data poisoning and evasion attacks [16]–[20] are the latest menaces against the security of machine learning models. Poisoning attacks enable attackers to control the model's behavior by manipulating a model's data, algorithms, or hyperparameters during the model training phase. An evasion attack, on the other hand, is carried out during test time by manipulating the test sample. Adversaries can craft legitimate inputs imperceptible to humans but force models to make wrong predictions. Szegedy et al. [21] discovered the vulnerability of deep learning architectures against adversarial attacks, and ever since, there have been several major successful adversarial attacks against machine learning architectures [22], [23]. Sophisticated attackers are motivated by very high incentives to manipulate the results of machine learning models. At the scale of data with which machine learning models are currently trained, it is impossible to verify each data point individually. In most scenarios, it is unlikely that an attacker gets access to training data. However, with many systems adopting online learning [24], crowd-sourcing [25] for training data, open design principles, and federated learning, poisoning attacks already pose a serious threat to ML models [26]. There have been instances [27] when big companies have been compromised by a data poisoning attack. Public malware databases like VirusTotal, which rely on crowdsourced malware files for training their algorithms, can be poisoned by attackers, while Google's mail spam filter can be thrown off track by wrong reporting of spam emails.
+ Data poisoning relates to adding training data that either leaves a backdoor in the model or negatively impacts the model's performance. Figure 1 shows the architecture of the poisoning attack. In the given figure, the addition of poisoned data in the training bag forces the model to learn and predict so that attackers benefit from it. This type of poisoning is not limited to particular domains but has extended across all ML applications. A label flipping attack is carried out to flip the prediction of machine learning detectors. Among all the existing approaches, we chose one of the simplest poisoning techniques, called label poisoning. We swap the existing training data labels in label poisoning to check the ML models' robustness.
+
+ 1 https://www.virustotal.com/
+
+ arXiv:2301.01044v1 [cs.CR] 3 Jan 2023
+
+ Fig. 1. General architecture for Poisoning Machine Learning Models
+
+ In this work, we perform a comparative analysis of different machine learning-based malware detectors' robustness against label-flipping data poisoning attacks. Unlike the existing approaches, we demonstrate the impact of simple label-switching data poisoning on different malware detectors. We first train eight different ML models widely used to detect malware, namely Stochastic Gradient Descent (SGD), Random Forest (RF), Logistic Regression (LR), K-Nearest Neighbor Classifier (KNN), Linear Support Vector Machine (SVM), Decision Tree (DT), Perceptron, and Multi-Layer Perceptron (MLP). This is followed by poisoning 10% and 20% of the training data by flipping the labels of data samples. All of the models are retrained after data poisoning, and the performance of each model is evaluated. The major contributions of this paper are as follows.
+ • We taxonomize the existing data poisoning attacks on machine learning models in terms of domains, approaches, and targets.
+ • We provide threat modeling for adversarial poisoning attacks against malware detectors. The threat is modeled in terms of the attack surface, the attacker's knowledge, the attacker's capability, and adversarial goals.
+ • We train eight different machine learning-based malware detectors from malware data obtained from VirusTotal and VirusShare. We compare the performance of these malware detectors with training and testing data in terms of accuracy, precision, and recall.
+ • Finally, we show a simple label-switching approach to poison the data without any knowledge of the training models. The performance of the malware detectors is analyzed while poisoning 10% and 20% of the total training data.
+
+ 2 https://virusshare.com/
+
+ Fig. 2. Taxonomy of poisoning attack on attack domain, approach and target
+
+ The rest of the paper is organized as follows. The existing literature on data poisoning attacks in different domains, including malware, is discussed in Section II. Section III provides the threat modeling for data poisoning attacks. An overview of the ML algorithms used to train the malware detectors in this paper is given in Section IV. Section V discusses the experimental methodology, elaborating on the algorithm and the testbed used for the experiment. The evaluation and discussion of the performed experiments are given in Section VI. Finally, Section VII concludes this work.
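The detectors above are compared in terms of accuracy, precision, and recall. As a quick illustration of what those numbers mean (our own sketch, not code from the paper), all three metrics follow directly from the binary confusion counts; the label convention (1 = malware, 0 = benign) is our assumption:

```python
def confusion_counts(y_true, y_pred, positive=1):
    # Count true positives, false positives, false negatives, true negatives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    return tp, fp, fn, tn

def accuracy(y_true, y_pred):
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    return (tp + tn) / len(y_true)

def precision(y_true, y_pred):
    tp, fp, _, _ = confusion_counts(y_true, y_pred)
    return tp / (tp + fp) if tp + fp else 0.0

def recall(y_true, y_pred):
    tp, _, fn, _ = confusion_counts(y_true, y_pred)
    return tp / (tp + fn) if tp + fn else 0.0

# Toy example: 1 = malware, 0 = benign
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0]
print(accuracy(y_true, y_pred), precision(y_true, y_pred), recall(y_true, y_pred))
```

A label-flip attack typically drags all three metrics down at once, since both false positives and false negatives are induced.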
+ II. LITERATURE REVIEW
+
+ Data poisoning attacks have been used against the machine learning domain for a long time. The existing literature on data poisoning attacks can be taxonomized in terms of attack domains, approach, and target (victim), as illustrated in Figure 2. The recently trending technologies like crowd-sourcing and federated learning are always vulnerable, as the veracity of individual data can never be verified. The recent victims of poisoning attacks span the security, network, and speech recognition domains. We also classify in Figure 2 the major approaches taken to produce or optimize poisoning attacks. The existing data poisoning approaches have targeted almost all machine learning algorithms, ranging from traditional algorithms like regression to modern deep neural network architectures.
+ Table I summarizes the existing literature on poisoning attacks. Biggio et al. [43] attacked a support vector machine using gradient ascent. To make poisoning attacks closer to the real world, Yang et al. [44] used a generative adversarial network with an autoencoder to poison deep neural nets. Gonzalez et al. [45] extended poisoning from binary learning to multi-class problems. Shafahi et al. [28] proposed a targeted clean-label poisoning attack on neural networks using an optimization-based crafting method. Shen et al. [31] performed an imperceptible poisoning attack on a deep neural network by clogging the back-propagation from gradient tensors during training while also minimizing the gradient norm. Jiang et al. [33] performed a flexible poisoning attack against linear and logistic regression. Kwon et al. [34] could selectively poison particular classes against deep neural networks. Cao et al. [30] proposed a distributed label-flipping poisoning approach to poison the DL model in a federated architecture. Miao et al. [46] poisoned the Dawid-Skene [47] model by exploiting the reliability degree of workers. Fang et al. [48] proposed a poisoning attack against a graph-based recommendation system by maximizing the hit ratio of target items using fake users.
+
+ [Figure 2 contents: Domains — image, crowd sourcing, graph, federated learning, security, recommendation, online learning, network, speech recognition; Approach — gradient, reinforcement learning, label flipping, generative adversarial network, empirical investigations, fake-user insertion; Target — neural network, support vector machine, regression, Truth-Finder, Dawid-Skene, graph embedding, spectrum.]
+
+ TABLE I. DATA POISONING ATTACKS
+ [Table I marks, for each publication — Shafahi et al. [28], Liu et al. [29], Cao et al. [30], Shen et al. [31], Zhang et al. [32], Jiang et al. [33], Kwon et al. [34], Zhang et al. [35], Bagdasaryan et al. [36], Li et al. [37], Sasaki et al. [38], Zhang et al. [39], Lovisotto et al. [40], Li et al. [41], Kravchik et al. [42], and this work — its attack domain (image, crowd sourcing, graph, federated learning, security, online learning), poisoning approach (gradient, reinforcement learning, label flipping, GAN, others), and target (neural network, SVM, regression, graph embedding, customized). The individual table entries are not recoverable from this extraction.]
+ Domains: poisoning domain for the crafted attack; Approach: approach used to poison the training data; Target: target of the poisoning attack.
+
+ In the given Table I, we can observe that only a handful of works have been carried out in the security domain. Sasaki et al. [38] proposed an attack framework for backdoor embedding, which prevented the detection of specific types of malware. They generated poisoning samples by solving an optimization problem and tested them against a logistic regression-based malware detector. To poison Android malware detectors, Li et al. [41] experimented with a backdoor poisoning attack against Drebin [49], DroidCat [50], MamaDroid [51] and DroidAPIMiner [52]. Kravchik et al. [42] attacked the cyber attack detectors deployed in an industrial control system; the back-gradient optimization technique used to pollute the training data successfully poisoned the neural network-based model. These works have focused on testing a single algorithm against some defense mechanism. However, none of them compared the feebleness of multiple algorithms against data poisoning attacks. In this work, we demonstrate the effectiveness of label-switch poisoning of the training data against eight machine learning algorithms widely used in malware detectors.
+ III. THREAT MODEL: KNOW THE ADVERSARY
+
+ All security threats are defined in terms of their goals and attack capabilities. Modeling the threat allows for identifying and better understanding the risk arriving with a threat. A poisoning attack is performed by manipulating the training data either in the initial learning or the incremental learning period. The threat model of a poisoning attack reflects the attacker's knowledge, goal, capabilities, and attack surface, as shown in Figure 3.
+
+ Fig. 3. Threat model for poisoning attack
+ [Figure 3 contents: attack surface — training data; attacker's knowledge — white box model, black box model; attacker's capability — data injection, data modification, logic corruption; attacker's goal — untargeted misclassification, targeted misclassification, confidence reduction.]
+
+ Attack Surface: The attack surface denotes how the adversary attacks the model under analysis. Machine learning algorithms require data to pass through different stages in the pipeline, and each stage offers some kind of vulnerability. In this work, we are only concerned with poisoning attacks, which make the training data the attack surface.
+ Attacker's Knowledge: The attacker's knowledge is the amount of information about the model under attack that an attacker has. The poisoning approach is determined by the amount of knowledge available to the attacker. The attacker's knowledge can be broadly classified into the two following categories:
+ • White box model: In the white box model, an attacker has complete information about the underlying target model, such as the algorithm used, training data, hyperparameters, and gradient information. It is easier to carry out a white box attack because the available information helps the attacker create a worst-case scenario for the target model.
+ • Black box model: In the black box model, an attacker only has information about the model's input and output. An attacker has no information about the internal structure of the model. Black box models can be divided further into complete black box models and gray box models. In the gray box model, the model's performance for each input the attacker provides can be known. As such, the gray box attack is considered relatively easier than the complete black box attack.
+ In this paper, we perform a black box attack on different malware detection models. Our experiments will prove the vulnerability of these models to random label poisoning attacks without having any information about the models.
+ Attacker's Capability: The attacker's capability represents the ability of an adversary to manipulate the data and model in different stages of the ML pipeline. It defines the sections that can be manipulated, the mechanism used for manipulation, and the constraints on the attacker. Poisoning can be carried out in a well-controlled environment if the attacker has complete information about the underlying model and training data. Attacker capabilities can be classified into the following categories:
+ • Data Injection: The ability to insert new data into the training dataset, leading machine learning models to learn on contaminated data.
+ • Data Modification: The ability to access and modify the training data as well as the data labels. Label flipping is a well-known approach carried out in the poisoning attack domain.
+ • Logic Corruption: The ability to manipulate the logic of ML models. This ability is out of scope for data poisoning and is considered a model poisoning approach.
+ Adversarial Goals: The attacker's objective is to deceive the ML model by injecting poisoned data. However, poisoning training data might differ depending on the goals of an attacker. Attacker goals can be categorized as:
+ • Untargeted Misclassification: An attacker tries to change the model's output to a value different from the original prediction. Untargeted misclassification is a relatively easier goal for attackers.
+ • Targeted Misclassification: An attacker's goal is to add a certain backdoor to the models so that particular samples are classified into a chosen class.
+ • Confidence Reduction: An attacker can also poison training data to reduce the confidence of the machine learning model for a particular prediction. In this approach, changing the classification label is unnecessary; reducing the confidence score is enough to meet the attacker's goal.
+ Our paper aims to cause the malware detector models to misclassify. However, since we are dealing with binary classification, this can be considered either targeted or untargeted misclassification.
+ IV. OVERVIEW OF MACHINE LEARNING ALGORITHMS
+
+ Almost all ML architectures have already been victimized by data poisoning attacks. In this section, we briefly describe the ML architectures on which we perform data poisoning attacks later in this paper.
+ Stochastic Gradient Descent: Stochastic gradient descent (SGD) is derived from the gradient descent algorithm, a popular ML optimization technique. A gradient gives the slope of a function and measures the degree of change of one variable in response to changes of another variable. Starting from an initial value, gradient descent runs iteratively to find the parameter values that minimize the given cost function. In stochastic gradient descent, a few samples are randomly selected in place of the whole dataset for each iteration. The term batch determines the number of samples used to calculate each iteration's gradient. In normal gradient descent optimization, the batch is taken to be the whole dataset, which becomes a problem when the dataset gets big. Stochastic gradient descent considers a small batch in each iteration to lower the computing cost of the gradient descent approach while working with a large dataset.
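The mini-batch idea described above can be sketched in a few lines of plain Python (our own toy illustration, not the paper's implementation, which would use a library classifier): each step estimates the gradient of a squared loss from a small random batch rather than the full dataset.

```python
import random

def sgd_fit_line(data, lr=0.05, batch_size=4, epochs=500, seed=0):
    """Mini-batch SGD for a 1-D linear model y ~ w*x + b under squared loss."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        batch = rng.sample(data, batch_size)   # small random subset, not the whole dataset
        gw = gb = 0.0
        for x, y in batch:
            err = (w * x + b) - y              # prediction error on one sample
            gw += 2 * err * x / batch_size     # batch-averaged d(loss)/dw
            gb += 2 * err / batch_size         # batch-averaged d(loss)/db
        w -= lr * gw                           # step against the gradient estimate
        b -= lr * gb
    return w, b

# Noise-free data generated from y = 2x + 1; SGD should recover w ~ 2, b ~ 1.
data = [(x / 10, 2 * (x / 10) + 1) for x in range(-10, 11)]
w, b = sgd_fit_line(data)
print(round(w, 2), round(b, 2))
```

The only difference from plain gradient descent here is the `rng.sample` line; replacing it with the full dataset recovers the batch algorithm at a higher per-step cost.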
+ Random Forest:
505
+ A random forest is a supervised ML
506
+ algorithm that is constructed from an ensemble of decision tree
507
+ algorithms. Its ensemble nature helps to provide a solution to
508
+ complex problems. The random forest is made up of a large
509
+ number of decision trees that have been trained via bagging
510
+ or bootstrap aggregation. The average mean of the output
511
+ of constituent decision trees is the random forest’s ultimate
512
+ forecast. The precision of the output improves as the number
513
+ of decision trees used grows. A random forest overcomes the
514
+ decision tree algorithm’s limitations by eliminating over-fitting
515
+ and enhancing precision.
516
+ Logistic Regression: The probability for classification prob-
517
+ lems is modeled using logistic regression, which divides
518
+ them into two possible outcomes. For classification, logistic
519
+ regression is an extension of the linear regression model. For
520
+ regression tasks, linear regression works well; however, it fails
521
+ to replicate for classification. The linear model considers the
522
+ class a number and finds the optimum hyperplane that mini-
523
+ mizes the distances between the points and the hyperplane. As
524
+ it interpolates between the points, it cannot be interpreted as
525
+ probabilities. Because there is no relevant threshold for class
526
+ separation, logistic regression is applied. It is a widely used
527
+ classification algorithm due to its ease of implementation and
528
+ strong performance in linearly separable classes.
529
K-Nearest Neighbors (KNN) Classifier: The KNN algorithm relies on the assumption that similar things exist in close proximity. It is a non-parametric, lazy learning algorithm: KNN makes no assumption about the underlying data distribution and does not build a model from the training data points, as all the training data are used in the testing phase. This results in fast training but a slow testing process, and the costly testing phase consumes more time and memory. In KNN, K is the number of nearest neighbors and is generally chosen to be odd. KNN, however, suffers from the curse of dimensionality: as the feature dimension grows, it requires more data and becomes prone to overfitting.
Support Vector Machine (SVM): A support vector machine is a popular supervised ML algorithm applied in both classification and regression tasks. SVM aims to find a hyperplane that separates the data points. There are several possible hyperplanes, and we need to determine the optimal one, which maximizes the margin between the two classes. Hyperplanes are the decision boundaries of the SVM, and the data points nearest to the hyperplane are the support vectors. Due to its effectiveness in high-dimensional spaces and its memory-efficient properties, SVM is widely adopted across domains.

Algorithm 1: Data Poisoning Algorithm
Input: Non-poisoned feature set
Output: Poisoned feature set
Data: Static features obtained from the malware and benign training set
1  for all the samples do
2      Train the machine learning models and measure the performance
3  for 10% each of the malware and benign data do
4      if the training label is not flipped then
5          label = training label of the given sample
6          if label == 0 then
7              Flip the label to 1
8          else if label == 1 then
9              Flip the label to 0
10 Train all the models and measure the performance
11 for 20% each of the malware and benign data do
12     if the training label is not flipped then
13         label = training label of the given sample
14         if label == 0 then
15             Flip the label to 1
16         else if label == 1 then
17             Flip the label to 0
18 Train all the models and measure the performance
Multi-Layer Perceptron: The term 'perceptron' is derived from the ability to perceive, see, and recognize images in a human-like manner. A perceptron machine is based on the neuron, a basic unit of computation, with a cell receiving a series of pairs of inputs and weights. Although the perceptron was originally thought capable of representing any circuit and logic, non-linear data cannot be represented by a perceptron with only one neuron. The Multi-Layer Perceptron was developed to overcome this limitation. In a multi-layer perceptron, the mapping between input and output is non-linear. It has input and output layers and several hidden layers stacked with numerous neurons. Because the inputs are merged with the initial weights in a weighted sum and passed through an activation function, the multi-layer perceptron falls under the category of feedforward algorithms. Unlike in a single perceptron, each linear combination is propagated to the following layer.
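The eight detectors described above map directly onto scikit-learn estimators. The paper only says standard hyper-parameters were used, so the defaults below (and the LinearSVC/MLP settings) are assumptions; a minimal sketch:

```python
# Sketch: the eight detectors evaluated in the paper, instantiated with
# scikit-learn. Exact hyper-parameters are an assumption (defaults used).
from sklearn.linear_model import SGDClassifier, LogisticRegression, Perceptron
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier

def build_detectors():
    """Return the eight classifiers keyed by the names used in Table II."""
    return {
        "Stochastic Gradient Descent": SGDClassifier(),
        "Decision Tree": DecisionTreeClassifier(),
        "Random Forest": RandomForestClassifier(),
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "KNN Classifier": KNeighborsClassifier(n_neighbors=5),  # odd K
        "Support Vector Machine": LinearSVC(),
        "Perceptron": Perceptron(),
        "Multi-Layer Perceptron": MLPClassifier(max_iter=300),
    }
```

Each estimator exposes the same `fit`/`predict` interface, so all eight can be trained and evaluated in one loop.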
V. EXPERIMENTAL METHODOLOGY
In this paper, we use a label-flipping approach to poison the training data. With a source class CS and a target class CT from a set of classes C, the dataset DI is poisoned. The detailed poisoning performed in this paper is shown in Algorithm 1. We perform label poisoning attacks of different volumes on the training data without guiding the poisoning mechanism through the machine learning architecture or the loss function. This is an efficient way to showcase the ability of random poisoning to hamper a model's performance. We train all eight malware detector models three times in total. As illustrated in Algorithm 1, we begin the model training with clean data without adding any noise. After recording each model's performance on clean data, we proceed to the first stage of poisoning our data: we take 10% of the shuffled training data belonging to each of the malware and benign classes and change their labels. We retrain all the models and again measure their performance. We repeat the same operation with 20% of the shuffled training data. The percentages of poisoned data were chosen arbitrarily for this experiment, as the goal is to show the impact on the models. The algorithm we followed in carrying out this experiment is not a novel approach but a generic approach to poisoning the data.
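The flipping step of Algorithm 1 can be sketched as a small function. This is a hedged illustration, not the authors' exact implementation: the 0/1 class encoding (0 = malware, 1 = benign) follows the paper's confusion matrices, and the NumPy details are assumptions.

```python
# Sketch of Algorithm 1's flipping step: flip the labels of a given
# fraction of each class, chosen at random from the clean labels.
import numpy as np

def flip_labels(y, fraction, seed=0):
    """Return a copy of y with `fraction` of each class's labels flipped."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    flipped = y.copy()
    for cls in (0, 1):
        idx = np.flatnonzero(y == cls)   # indices taken from the clean labels
        n_flip = int(fraction * len(idx))
        chosen = rng.choice(idx, size=n_flip, replace=False)
        flipped[chosen] = 1 - cls        # 0 -> 1, 1 -> 0
    return flipped
```

Calling this with `fraction=0.1` and `fraction=0.2` reproduces the two poisoning stages; indices are always drawn from the clean labels so no sample is flipped twice.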
A. Experimental Environment and Dataset
All the experiments are performed in Google Colab using Google's GPU. All implementations use Python libraries and Scikit-Learn. The training dataset [53] is obtained from the Kaggle repository, where data are collected from VirusTotal and VirusShare. The dataset comprises Windows PE malware and benign files processed through static executable analysis: 216,352 files (75,503 benign files and 140,849 malware files) with 54 features.
VI. EVALUATION RESULTS AND ANALYSIS
A. Data Pre-processing and Transformation
We begin our experiment by loading data from the Kaggle dataset [53]. To clean the data, we followed two different approaches. First, we dropped rows missing more than 50% of their values, whereas for rows with less than 50% missing values we replaced the null values with the arithmetic mean of the column. Second, we normalized the data by scaling the values to the range 0 to 1. Afterward, 85% of the data were used for training while the remaining 15% were used for testing. We trained the selected eight machine learning models with standard hyper-parameters for each model. We did not tweak many machine learning parameters to fine-tune the detection accuracy, which resulted in significant overfitting in a few models.
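A minimal sketch of the described cleaning, scaling, and 85/15 split, assuming the data sit in a pandas DataFrame with a label column (column names and the split seed are illustrative, not the paper's):

```python
# Sketch of the pre-processing: drop rows missing more than half their
# values, mean-impute the rest, scale features to [0, 1], then split 85/15.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

def preprocess(df, label_col="label"):
    feats = df.drop(columns=[label_col])
    keep = feats.isna().mean(axis=1) <= 0.5        # drop rows >50% missing
    feats, labels = feats[keep], df.loc[keep, label_col]
    feats = feats.fillna(feats.mean())             # mean-impute the rest
    X = MinMaxScaler().fit_transform(feats)        # scale to [0, 1]
    return train_test_split(X, labels.to_numpy(),
                            train_size=0.85, random_state=0)
```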
B. Performance Indicators
We evaluated the malware detectors' performance using the following metrics:
TABLE II
MALWARE DETECTION TRAINING RESULT (CLEAN DATA)

Algorithm                   | Train Acc | Train Prec | Train Rec | Train F1 | Test Acc | Test Prec | Test Rec | Test F1
Stochastic Gradient Descent |   93.41   |   92.49    |   88.29   |   90.34  |   72.98  |   58.60   |   78.77  |   67.20
Decision Tree               |   99.96   |   99.98    |   99.91   |   99.94  |   59.65  |   44.50   |   59.85  |   51.05
Random Forest               |   99.97   |   99.92    |   99.97   |   99.94  |   83.65  |   98.82   |   54.12  |   69.94
Logistic Regression         |   93.20   |   92.21    |   87.94   |   90.02  |   92.33  |   92.24   |   85.36  |   88.67
KNN Classifier              |   98.38   |   97.33    |   98.05   |   97.69  |   97.42  |   96.38   |   96.25  |   96.31
Support Vector Machine      |   93.15   |   92.44    |   87.51   |   89.91  |   92.03  |   90.89   |   85.94  |   88.34
Perceptron                  |   90.93   |   88.60    |   84.91   |   86.72  |   75.39  |   60.28   |   87.86  |   71.50
Multi-Layer Perceptron      |   91.28   |   91.07    |   83.16   |   86.94  |   71.93  |   57.45   |   77.66  |   66.04
TABLE III
MALWARE DETECTION PERFORMANCE WITH 10% POISONED DATA

Algorithm                   | Train Acc | Train Prec | Train Rec | Train F1 | Test Acc | Test Prec | Test Rec | Test F1
Stochastic Gradient Descent |   85.12   |   82.49    |   77.14   |   79.73  |   72.39  |   64.23   |   61.38  |   62.77
Decision Tree               |   96.77   |   99.44    |   92.01   |   95.58  |   51.92  |   38.33   |   43.98  |   40.96
Random Forest               |   96.77   |   98.92    |   92.51   |   95.61  |   80.13  |   82.68   |   60.22  |   69.68
Logistic Regression         |   84.51   |   82.29    |   75.39   |   78.69  |   83.26  |   81.06   |   72.91  |   76.77
KNN Classifier              |   89.49   |   85.47    |   87.10   |   86.28  |   86.59  |   83.10   |   81.15  |   82.11
Support Vector Machine      |   84.75   |   82.84    |   75.42   |   78.96  |   66.99  |   63.14   |   31.16  |   41.73
Perceptron                  |   77.94   |   67.78    |   79.69   |   73.25  |   40.16  |   25.89   |   31.00  |   73.25
Multi-Layer Perceptron      |   83.85   |   82.72    |   72.58   |   77.32  |   83.33  |   82.81   |   70.74  |   76.30
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP), Recall = TP / (TP + FN)
F1-score = 2 * (Precision * Recall) / (Precision + Recall)
A positive outcome corresponds to a malware sample, while a negative outcome corresponds to a benign sample. TP, TN, FP, and FN are true positives, true negatives, false positives, and false negatives, respectively. Accuracy is the percentage of correct predictions on the given data. Precision measures the ratio of true positives to all positive predictions. Recall measures the model's ability to identify true positives correctly. The F1 score is the harmonic mean of a classifier's precision and recall.
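The four reported metrics follow directly from the counts; a small helper makes the definitions above concrete:

```python
# The four metrics computed from TP/TN/FP/FN, with a malware sample
# treated as the positive outcome, matching the definitions in the text.
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```

Note that the F1 expression simplifies to 2*TP / (2*TP + FP + FN), which is sometimes more convenient.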
C. Results and Discussion
Table II shows the accuracy, precision, and recall for the training and testing data. Stochastic Gradient Descent, Decision Tree, Random Forest, and Perceptron appear overfitted to the training data compared to the other models. Since the data volume is fairly high, decision tree-based classifiers are prone to overfitting, and the shallow neural networks we used led the perceptron to overfit the data as well. However, classifiers like logistic regression, the KNN classifier, and the Support Vector Machine showed the best performance on all three metrics. We compare the performance on both the training and testing sets because we poisoned only the training data while preserving the test data from attack.
We flipped the labels of 10% of the training data as a poisoning attack. The performance metrics for each detector after poisoning 10% of the total data are displayed in Table III. The results show the robustness of decision tree and random forest-based malware detectors compared to the other malware detectors. We further poisoned 20% of the total training data to see the impact of increased poisoning on each model; the results are shown in Table IV. The left-most confusion matrix in each of Figures 4 to 11 shows the numbers of TP, TN, FP, and FN for each classifier on clean data, whereas the middle and right ones show results with 10% and 20% poisoning, respectively. In the confusion matrices, label '0' is for malware and label '1' is for benign samples. The top-left corner of the confusion matrix gives True Positives, the top-right corner False Positives, the bottom-left corner False Negatives, and the bottom-right corner True Negatives.
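With malware (label 0) treated as the positive class, the four counts can be read off scikit-learn's confusion matrix; a sketch with made-up labels (the label ordering trick below is an implementation detail, not from the paper):

```python
# Sketch: extracting TP/TN/FP/FN with malware (label 0) as the positive
# class. Ordering the labels as [1, 0] puts the positive class last, so
# ravel() yields (tn, fp, fn, tp). Example labels are illustrative.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 1, 1, 1]   # 0 = malware, 1 = benign
y_pred = [0, 0, 1, 1, 1, 0]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[1, 0]).ravel()
```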
TABLE IV
MALWARE DETECTION PERFORMANCE WITH 20% POISONED DATA

Algorithm                   | Train Acc | Train Prec | Train Rec | Train F1 | Test Acc | Test Prec | Test Rec | Test F1
Stochastic Gradient Descent |   78.56   |   75.65    |   70.21   |   72.83  |   62.69  |   54.86   |   50.72  |   52.71
Decision Tree               |   96.54   |   93.54    |   98.34   |   95.88  |   40.26  |   34.25   |   49.67  |   40.54
Random Forest               |   96.54   |   93.04    |   98.94   |   95.90  |   72.80  |   68.77   |   61.66  |   65.02
Logistic Regression         |   78.38   |   74.30    |   72.13   |   73.20  |   77.58  |   75.10   |   76.78  |   75.93
KNN Classifier              |   87.41   |   82.48    |   87.94   |   85.12  |   82.15  |   76.16   |   82.20  |   79.06
Support Vector Machine      |   78.58   |   74.45    |   72.60   |   73.51  |   75.39  |   74.74   |   60.37  |   66.79
Perceptron                  |   75.16   |   68.58    |   72.57   |   72.57  |   49.37  |   38.28   |   38.28  |   38.28
Multi-Layer Perceptron      |   77.60   |   75.45    |   67.10   |   71.03  |   76.85  |   74.81   |   65.66  |   69.94

Fig. 4. Confusion Matrix for Stochastic Gradient Descent Based Malware Detector
Fig. 5. Confusion Matrix for Decision Tree Based Malware Detector
Fig. 6. Confusion Matrix for Random Forest-Based Malware Detector
Fig. 7. Confusion Matrix for Logistic Regression Based Malware Detector
Fig. 8. Confusion Matrix for KNN Based Malware Detector
Fig. 9. Confusion Matrix for Support Vector Machine-Based Malware Detector
Fig. 10. Confusion Matrix for Perceptron Based Malware Detector
Fig. 11. Confusion Matrix for Multi-Layer Perceptron Based Malware Detector

D. Analysis and Observations
The goal of this work is to show the vulnerability of popular machine-learning models used for malware detection. The results in Tables II, III, and IV reflect the limitations of all the experimented machine learning models even under a basic label poisoning attack. Figure 12 shows the ROC curves comparing each model's performance on clean data, 10% poisoned data, and 20% poisoned data. In each ROC plot, the blue curve corresponds to clean data, the orange curve to 10% poisoned data, and the green curve to 20% poisoned data; the curve closest to the top-left corner is the one performing best. We can infer from the graphs that logistic regression, K-Nearest Neighbors, Support Vector Machine, and Multi-Layer Perceptron are the best models on clean data. The distance between the three curves, however, represents the robustness of a model toward the poisoning attack: if the separation between the clean-data and poisoned-data curves is small, the poisoning attack has minimal impact on the model's performance. In the ROC graphs, we observe that Random Forest, Logistic Regression, K-Nearest Neighbors, and Multi-Layer Perceptron have curves close to each other, demonstrating their robustness against poisoned data. Random Forest's robustness can be attributed to its ensemble nature, which helps it capture better insight into the data. The robustness of logistic regression and K-Nearest Neighbors may be due to the low dimensionality of our training data. Further, we observe that some models, such as SVM and the perceptron, perform better with 20% poisoned data than with 10%. This gain is due to the unrestricted data poisoning: since we do not guide our poisoning approach according to the models, adding more poisoned data beyond some threshold can slightly improve a model's performance. In the end, even the least sophisticated attacks, like label poisoning, cause substantial performance decay in the models. This further alerts us to the potentially catastrophic consequences of more sophisticated attacks based on gradients or reinforcement learning.
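The Fig. 12 comparison can be reproduced in spirit on synthetic data: train one representative model on clean, 10%-, and 20%-poisoned labels and compare test AUCs. The dataset, model choice, and seeds here are assumptions, not the paper's setup:

```python
# Sketch: AUC under increasing label-flip poisoning, in the spirit of
# Fig. 12. Synthetic data stands in for the malware feature set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=0)

rng = np.random.default_rng(0)
aucs = {}
for frac in (0.0, 0.1, 0.2):                     # clean, 10%, 20% poisoning
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(frac * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]        # flip the chosen labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    aucs[frac] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

Only the training labels are poisoned; the test set stays clean, mirroring the paper's protocol.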
Fig. 12. ROC Curves for Malware Detectors under Poisoning Environments

VII. CONCLUSION
In this work, we perform a feasibility analysis of label-flipping poisoning attacks on ML-based malware detectors. We evaluated eight different ML models that are widely used in malware detection. Noting the shortage of work on poisoning attacks in the malware domain, this paper analyses the robustness of ML-based malware detectors against different volumes of poisoned data. We observed a decay in the performance of all the models when poisoning 10% and 20% of the total training data. The significant decrease in the models' performance shows the severe vulnerability of malware detectors to guided poisoning approaches. We also observed differences in the effect of poisoning attacks across the different models. Our work is carried out within the limited scope of one generic poisoning algorithm and a single malware dataset, so a few future research directions are clearly visible. The malware detectors can be tested against more advanced poisoning approaches using numerous datasets from industry. The poisoning can be tested in a more realistic environment by poisoning the executable files. The research community still lacks exhaustive studies on the vulnerabilities of malware detectors and on how to make detectors more robust against these poisoning attacks.
REFERENCES
[1] D. Ciregan, U. Meier, and J. Schmidhuber, "Multi-column deep neural networks for image classification," in 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 3642–3649.
[2] J. Schmidhuber, U. Meier, and D. Ciresan, "Multi-column deep neural networks for image classification," in 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 2012.
[3] K. Chowdhary, "Natural Language Processing," Fundamentals of Artificial Intelligence, pp. 603–649, 2020.
[4] J. Hirschberg and C. D. Manning, "Advances in Natural Language Processing," Science, vol. 349, no. 6245, pp. 261–266, 2015.
[5] C.-F. Tsai, Y.-F. Hsu, C.-Y. Lin, and W.-Y. Lin, "Intrusion detection by machine learning: A review," Expert Systems with Applications, vol. 36, no. 10, pp. 11994–12000, 2009.
[6] N. Peiravian and X. Zhu, "Machine learning for android malware detection using permission and api calls," in 2013 IEEE 25th International Conference on Tools with Artificial Intelligence, 2013, pp. 300–305.
[7] J. Kober and J. Peters, "Learning motor primitives for robotics," in 2009 IEEE International Conference on Robotics and Automation, 2009.
[8] R. Manicavasaga, P. B. Lamichhane, P. Kandel, and D. A. Talbert, "Drug repurposing for rare orphan diseases using machine learning techniques," in The International FLAIRS Conference Proceedings, vol. 35, 2022.
[9] A. Dhakal, C. McKay, J. J. Tanner, and J. Cheng, "Artificial intelligence in the prediction of protein–ligand interactions: recent advances and future directions," Briefings in Bioinformatics, vol. 23, no. 1, p. bbab476, 2022.
[10] M. H. R. Khouzani, S. Sarkar, and E. Altman, "Maximum Damage Malware Attack in Mobile Wireless Networks," IEEE/ACM Transactions on Networking, vol. 20, no. 5, pp. 1347–1360, 2012.
[11] S. Gupta, A. Singhal, and A. Kapoor, "A literature survey on social engineering attacks: Phishing attack," in 2016 International Conference on Computing, Communication and Automation (ICCCA), 2016.
[12] F. Callegati, W. Cerroni, and M. Ramilli, "Man-in-the-Middle Attack to the HTTPS Protocol," IEEE Security Privacy, vol. 7, no. 1, 2009.
[13] C. Schuba, I. Krsul, M. Kuhn, E. Spafford, A. Sundaram, and D. Zamboni, "Analysis of a denial of service attack on TCP," in Proceedings. 1997 IEEE Symposium on Security and Privacy (Cat. No.97CB36097), 1997.
[14] W. G. Halfond, J. Viegas, A. Orso et al., "A classification of sql-injection attacks and countermeasures," in Proceedings of the IEEE International Symposium on Secure Software Engineering, vol. 1. IEEE, 2006.
[15] I. Yilmaz and R. Masum, "Expansion of cyber attack data from unbalanced datasets using generative techniques," arXiv preprint arXiv:1912.04549, 2019.
[16] B. Kolosnjaji, A. Demontis, B. Biggio, D. Maiorca, G. Giacinto, C. Eckert, and F. Roli, "Adversarial malware binaries: Evading deep learning for malware detection in executables," in IEEE European Signal Processing Conference, 2018, pp. 533–537.
[17] F. Kreuk, A. Barak, S. Aviv-Reuven, M. Baruch, B. Pinkas, and J. Keshet, "Adversarial examples on discrete sequences for beating whole-binary malware detection," arXiv preprint arXiv:1802.04528, 2018.
[18] L. Demetrio, B. Biggio, G. Lagorio, F. Roli, and A. Armando, "Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries," arXiv preprint arXiv:1901.03583, 2019.
[19] O. Suciu, S. E. Coull, and J. Johns, "Exploring adversarial examples in malware detection," in 2019 IEEE Security and Privacy Workshops, 2019.
[20] K. Aryal, M. Gupta, and M. Abdelsalam, "A Survey on Adversarial Attacks for Malware Analysis," arXiv preprint arXiv:2111.08223, 2021.
[21] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013.
[22] S. M. P. Dinakarrao, S. Amberkar, S. Bhat, A. Dhavlle, H. Sayadi, A. Sasan, H. Homayoun, and S. Rafatirad, "Adversarial attack on microarchitectural events based malware detectors," in Proceedings of the 56th Annual Design Automation Conference 2019, 2019, pp. 1–6.
[23] W. Hu and Y. Tan, "Generating adversarial malware examples for black-box attacks based on GAN," arXiv preprint arXiv:1702.05983, 2017.
[24] S. Shalev-Shwartz et al., "Online learning and online convex optimization," Foundations and Trends in Machine Learning, vol. 4, 2012.
[25] A. Rai, K. K. Chintalapudi, V. N. Padmanabhan, and R. Sen, "Zee: Zero-effort Crowdsourcing for Indoor Localization," in Proceedings of the 18th Annual International Conference on Mobile Computing and Networking, 2012, pp. 293–304.
[26] K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, C. Kiddon, J. Konečný, S. Mazzocchi, B. McMahan et al., "Towards federated learning at scale: System design," Proceedings of Machine Learning and Systems, vol. 1, pp. 374–388, 2019.
[27] "Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter," https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter, 2016.
[28] A. Shafahi, W. R. Huang, M. Najibi, O. Suciu, C. Studer, T. Dumitras, and T. Goldstein, "Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks," Advances in Neural Information Processing Systems, vol. 31, 2018.
[29] X. Liu, S. Si, X. Zhu, Y. Li, and C.-J. Hsieh, "A unified framework for data poisoning attack to graph-based semi-supervised learning," arXiv preprint arXiv:1910.14147, 2019.
[30] D. Cao, S. Chang, Z. Lin, G. Liu, and D. Sun, "Understanding distributed poisoning attack in federated learning," in 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS). IEEE, 2019.
[31] J. Shen, X. Zhu, and D. Ma, "TensorClog: An imperceptible poisoning attack on deep neural network applications," IEEE Access, vol. 7, 2019.
[32] J. Zhang, J. Chen, D. Wu, B. Chen, and S. Yu, "Poisoning Attack in Federated Learning using Generative Adversarial Nets," in 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). IEEE, 2019.
[33] W. Jiang, H. Li, S. Liu, Y. Ren, and M. He, "A Flexible Poisoning Attack Against Machine Learning," in ICC 2019 - 2019 IEEE International Conference on Communications (ICC). IEEE, 2019, pp. 1–6.
[34] H. Kwon, H. Yoon, and K.-W. Park, "Selective poisoning attack on deep neural network to induce fine-grained recognition error," in IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering, 2019, pp. 136–139.
[35] H. Zhang, T. Zheng, J. Gao, C. Miao, L. Su, Y. Li, and K. Ren, "Data poisoning attack against knowledge graph embedding," in Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19. International Joint Conferences on Artificial Intelligence Organization, 7 2019, pp. 4853–4859. [Online]. Available: https://doi.org/10.24963/ijcai.2019/674
[36] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov, "How to backdoor federated learning," in International Conference on Artificial Intelligence and Statistics. PMLR, 2020, pp. 2938–2948.
[37] M. Li, Y. Sun, H. Lu, S. Maharjan, and Z. Tian, "Deep reinforcement learning for partially observable data poisoning attack in crowdsensing systems," IEEE Internet of Things Journal, vol. 7, 2020.
[38] S. Sasaki, S. Hidano, T. Uchibayashi, T. Suganuma, M. Hiji, and S. Kiyomoto, "On embedding backdoor in malware detectors using machine learning," in IEEE International Conference on Privacy, Security and Trust, 2019, pp. 1–5.
[39] X. Zhang, X. Zhu, and L. Lessard, "Online Data Poisoning Attack," in Learning for Dynamics and Control. PMLR, 2020, pp. 201–210.
[40] G. Lovisotto, S. Eberz, and I. Martinovic, "Biometric backdoors: A poisoning attack against unsupervised template updating," in 2020 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2020.
[41] C. Li, X. Chen, D. Wang, S. Wen, M. E. Ahmed, S. Camtepe, and Y. Xiang, "Backdoor attack on machine learning based android malware detectors," IEEE Trans. on Dependable and Secure Computing, 2021.
[42] M. Kravchik, B. Biggio, and A. Shabtai, "Poisoning attacks on cyber attack detectors for industrial control systems," in Proceedings of the 36th Annual ACM Symposium on Applied Computing, 2021.
[43] B. Biggio, B. Nelson, and P. Laskov, "Poisoning attacks against support vector machines," arXiv preprint arXiv:1206.6389, 2012.
[44] C. Yang, Q. Wu, H. Li, and Y. Chen, "Generative Poisoning Attack Method Against Neural Networks," arXiv preprint arXiv:1703.01340, 2017.
[45] L. Muñoz-González, B. Biggio, A. Demontis, A. Paudice, V. Wongrassamee, E. C. Lupu, and F. Roli, "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization," in Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017.
1555
+ [46] C. Miao, Q. Li, L. Su, M. Huai, W. Jiang, and J. Gao, “Attack under
1556
+ Disguise: An Intelligent Data Poisoning Attack Mechanism in Crowd-
1557
+ sourcing,” in Proceedings of the 2018 World Wide Web Conference,
1558
+ 2018.
1559
+ [47] A. P. Dawid and A. M. Skene, “Maximum Likelihood Estimation of
1560
+ Observer Error-Rates Using the EM Algorithm,” Journal of the Royal
1561
+ Statistical Society: Series C (Applied Statistics), vol. 28, no. 1, 1979.
1562
+ [48] M. Fang, G. Yang, N. Z. Gong, and J. Liu, “Poisoning attacks to
1563
+ graph-based recommender systems,” in Proceedings of the 34th Annual
1564
+ Computer Security Applications Conference, 2018, pp. 381–392.
1565
+ [49] D. Arp, M. Spreitzenbarth, M. Hubner, H. Gascon, K. Rieck, and
1566
+ C. Siemens, “Drebin: Effective and explainable detection of android
1567
+ malware in your pocket.” in NDSS, vol. 14, 2014, pp. 23–26.
1568
+ [50] H. Cai, N. Meng, B. Ryder, and D. Yao, “Droidcat: Effective android
1569
+ malware detection and categorization via app-level profiling,” IEEE
1570
+ Transactions on Information Forensics and Security, vol. 14, no. 6, 2018.
1571
+ [51] E. Mariconti, L. Onwuzurike, P. Andriotis, E. De Cristofaro, G. Ross,
1572
+ and G. Stringhini, “Mamadroid: Detecting android malware by building
1573
+ markov chains of behavioral models,” preprint arXiv:1612.04433, 2016.
1574
+
1575
+ [52] Y. Aafer, W. Du, and H. Yin, “Droidapiminer: Mining api-level features
1576
+ for robust malware detection in android,” in International Conference
1577
+ on Security and Privacy in Communication Systems.
1578
+ Springer, 2013.
1579
+ [53] “Malware
1580
+ detection,”
1581
+ https://www.kaggle.com/competitions/
1582
+ malware-detection/data.
1583
+
GdAzT4oBgHgl3EQfHfsT/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

GtAzT4oBgHgl3EQfUvxQ/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:42001c617a2cfa4034219c76568dbfe5ea05626070f92a6f3d8fca8b9cdf868d
+ size 4063277
I9E1T4oBgHgl3EQfGAMz/content/tmp_files/2301.02908v1.pdf.txt ADDED
@@ -0,0 +1,1168 @@
arXiv:2301.02908v1 [math.FA] 7 Jan 2023

DYNAMICAL PROPERTIES AND SOME CLASSES OF NON-POROUS SUBSETS OF LEBESGUE SPACES

STEFAN IVKOVIĆ, SERAP ÖZTOP, AND SEYYED MOHAMMAD TABATABAIE∗

Abstract. In this paper, we introduce several classes of non-σ-porous subsets of a general Lebesgue space. We also study some linear dynamics of operators and show that the set of all non-hypercyclic vectors of a sequence of weighted translation operators on L^p-spaces is not σ-porous.
1. Introduction

σ-porous sets, as a collection of very thin subsets of metric spaces, were introduced and studied for the first time in [8] through research on the boundary behavior of functions, and were then applied in differentiation and Banach space theory in [3, 14]. The concepts related to porosity have been active topics in recent decades because they can be adapted to many known notions in several kinds of metric spaces; see the monograph [21]. σ-porous subsets of R are null and of first category, while in every complete metric space without isolated points these two categories are different [20]. On the other hand, linear dynamics, including hypercyclicity in operator theory, has received attention in recent years; see the books [2, 11] and, for instance, [6, 16, 17]. Recently, F. Bayart in [1], through a study of hypercyclic shifts (previously studied in [15]; see also [10]), proved that the set of non-hypercyclic vectors of some classes of weighted shift operators on ℓ^2(Z) is a non-σ-porous set. This gives a new example of a first category set which is not σ-porous. In this work, using ideas from the proof of [1, Theorem 1], we first introduce a class of non-σ-porous subsets of general Lebesgue spaces, and then we extend the main result of [1] to sequences of weighted translation operators on general Lebesgue spaces in the context of discrete groups and hypergroups. In particular, we prove that if p ≥ 1, K is a discrete hypergroup, (a_n) is a sequence with distinct terms in K, and w : K → (0, ∞) is a bounded measurable function such that

  Σ_{n∈N} 1/(w(a_0)w(a_1)···w(a_n)) χ_{{a_{n+1}}} ∈ L^p(K),

then the set of all non-hypercyclic vectors of the sequence (Λ_n)_n is not σ-porous, where the operators Λ_n are given in Definition 3.8. We also study the non-σ-porosity of the set of non-hypercyclic vectors of weighted composition operators on L^∞(Ω) for a general measure space Ω equipped with a nonnegative Radon measure, and on L^p(R, τ), where τ is the Lebesgue measure on R. We show that if G is a locally compact group, µ is a left Haar measure on G, a ∈ G, and w : G → (0, ∞) is a weight such that

  ( 1/(w(a)w(a^2)···w(a^n)) )_n ∈ L^∞(G, µ),

then the set of all non-hypercyclic vectors of the weighted translation operator T_{a,w,∞} on L^∞(G, µ) is not σ-porous.

2010 Mathematics Subject Classification. 47A16, 28A05, 43A15, 43A62.
∗Corresponding author.
Key words and phrases. non-σ-porous sets, Lebesgue spaces, σ-porous operators, locally compact groups, locally compact hypergroups, hypercyclic vectors.
2. Non-σ-porous subsets of Lebesgue spaces

In this section, we will introduce some classes of non-σ-porous subsets of Lebesgue spaces related to a fixed function. First, we recall the definition of the main notion of this paper.

Definition 2.1. Let 0 < λ < 1. A subset E of a metric space X is called λ-porous at x ∈ E if for each δ > 0 there is an element y ∈ B(x; δ) \ {x} such that

  B(y; λ d(x, y)) ∩ E = ∅.

E is called λ-porous if it is λ-porous at every element of E. Also, E is called σ-λ-porous if it is a countable union of λ-porous subsets of X.
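As a quick illustration of this definition (our worked example, not taken from the paper), consider the standard thin set built from the dyadic points of [0, 1]:

```latex
% Example (ours): E = \{0\} \cup \{2^{-n} : n \in \mathbb{N}\} is (1/3)-porous at 0.
% For each n, the gap (2^{-(n+1)}, 2^{-n}) contains no point of E; its midpoint
% y_n = 3 \cdot 2^{-(n+2)} satisfies d(0, y_n) = 3 \cdot 2^{-(n+2)}, and
\[
  B\bigl(y_n;\ \tfrac{1}{3}\, d(0,y_n)\bigr)
  \;=\; \bigl(2^{-(n+1)},\ 2^{-n}\bigr),
  \qquad
  \bigl(2^{-(n+1)},\ 2^{-n}\bigr) \cap E = \emptyset .
\]
% Since y_n \to 0, every ball B(0;\delta) contains some y_n, so E is
% \lambda-porous at 0 with \lambda = 1/3.
```

The results below go in the opposite direction: they exhibit large classes of sets in Lebesgue spaces for which no such system of gaps can exist, even up to countable unions.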
The following lemma plays a key role in the proofs of the main results of this section. This fact is a special case of [19, Lemma 2]; see also [1, Lemma 2].

Lemma 2.2. Let F be a non-empty family of non-empty closed subsets of a complete metric space X such that for each F ∈ F and each x ∈ X and r > 0 with B(x; r) ∩ F ≠ ∅, there exists an element J ∈ F such that

  ∅ ≠ J ∩ B(x; r) ⊆ F ∩ B(x; r)

and F ∩ B(x; r) is not λ-porous at any element of J ∩ B(x; r). Then, every set in F is not σ-λ-porous.

The next result is a development of [1, Theorem 1]. As in [1], the proof of this theorem is based on Lemma 2.2.
Theorem 2.3. Let p ≥ 1, Ω be a locally compact Hausdorff space, µ be a nonnegative Radon measure on Ω, and A ⊆ Ω be a Borel set such that

  |f|χ_A ≤ ∥f∥_p a.e.  (f ∈ L^p(Ω, µ)).   (2.1)

Then, for each measurable function g on Ω with gχ_A ∈ L^p(Ω, µ), the set

  Γ_g := { f ∈ L^p(Ω, µ) : |f| ≥ |g|χ_A a.e. }

is not σ-porous in L^p(Ω, µ).
Proof. Fix an arbitrary number 0 < λ ≤ 1/2, and pick 0 < β < λ. Denote

  F := { Γ_g : gχ_A ∈ L^p(Ω, µ) }.

We will show that the collection F satisfies the conditions of Lemma 2.2. Let g ∈ L^p(Ω, µ). Without loss of generality, we can assume that g is a nonnegative function. Trivially, Γ_g ≠ ∅. Let (f_n) be a sequence in Γ_g with f_n → f in L^p(Ω, µ). Then, by (2.1), |f| ≥ gχ_A a.e., and so f ∈ Γ_g. Therefore, every element of the collection F is a closed subset of L^p(Ω, µ). Now, assume that f ∈ L^p(Ω, µ) and r > 0 with B(f; r) ∩ Γ_g ≠ ∅. We find a measurable function h with 0 ≤ hχ_A ∈ L^p(Ω, µ) such that

  ∅ ≠ B(f; r) ∩ Γ_h ⊆ B(f; r) ∩ Γ_g,

and B(f; r) ∩ Γ_g is not λ-porous at elements of B(f; r) ∩ Γ_h.

Since (|f| + β^{-1}gχ_A)^p ∈ L^1(Ω, µ) and µ is a Radon measure, the mapping ν defined by

  ν(B) := ∫_B (|f| + β^{-1}gχ_A)^p dµ  (for every Borel set B ⊆ Ω)

is a Radon measure [9]. Hence, there are some 0 < ε < 1, a function k ∈ B(f; r) ∩ Γ_g and a compact subset D of Ω with µ(D) > 0 such that

  ∥k − f∥_p < ε^{1/p} r

and

  ∫_{D^c} (|f| + β^{-1}gχ_A)^p dµ < (1 − ε) r^p.   (2.2)

Pick some α with ∥k − f∥_p < α < ε^{1/p} r, and denote

  δ := (ε^{1/p} r − α) / (2 µ(D)^{1/p}).

Now, we define two functions h, ξ : Ω → C by

  h := (gχ_A + δ)χ_D + β^{-1}gχ_A χ_{Ω\D}

and

  ξ := (|k| + δ)η χ_D + h χ_{Ω\D},

where

  η(x) := k(x)/|k(x)| if k(x) ≠ 0, and η(x) := 1 if k(x) = 0,

for all x ∈ Ω. Since D is compact, we have hχ_A ∈ L^p(Ω, µ). Also, for each x ∈ D,

  |k(x) − ξ(x)| = |k(x) − (|k(x)| + δ)η(x)| = |k(x) − k(x) − δ η(x)| = δ,

and therefore

  ∥(ξ − k)χ_D∥_p = δ µ(D)^{1/p} = (ε^{1/p} r − α)/2.

This implies that

  ∥(ξ − f)χ_D∥_p ≤ ∥(ξ − k)χ_D∥_p + ∥(k − f)χ_D∥_p ≤ (ε^{1/p} r − α)/2 + α < ε^{1/p} r.

Hence,

  ∥ξ − f∥_p^p = ∫_D |ξ − f|^p dµ + ∫_{Ω\D} |ξ − f|^p dµ
             < ε r^p + ∫_{Ω\D} |β^{-1}gχ_A − f|^p dµ
             ≤ ε r^p + ∫_{Ω\D} (β^{-1}gχ_A + |f|)^p dµ
             < ε r^p + (1 − ε) r^p = r^p,

and so ξ ∈ B(f; r). Moreover,

  |ξ(x)| = |k(x)| + δ ≥ g(x) + δ = h(x)  a.e. on D ∩ A,

and for each x ∈ (Ω \ D) ∩ A we have |ξ(x)| = h(x). This shows that ξ ∈ Γ_h, and so

  ∅ ≠ B(f; r) ∩ Γ_h ⊆ B(f; r) ∩ Γ_g

because h ≥ g. Now, let u ∈ B(f; r) ∩ Γ_h and put r′ := min{δ, λ(r − ∥f − u∥_p)}. Let v ∈ B(u; r′). We define the function γ : Ω → C by

  γ(x) := v(x) if x ∈ D, and γ(x) := (|v(x)| + β|u(x) − v(x)|) θ(x) if x ∈ Ω \ D,

where

  θ(x) := v(x)/|v(x)| if v(x) ≠ 0, and θ(x) := 1 if v(x) = 0.

Therefore, for each x ∈ Ω \ D we have

  |γ(x) − v(x)| = β |u(x) − v(x)|  and  |γ(x)| ≥ β |u(x)|.

Easily,

  ∥γ − v∥_p^p = ∥(γ − v)χ_D∥_p^p + ∥(γ − v)χ_{Ω\D}∥_p^p
             = β^p ∥(u − v)χ_{Ω\D}∥_p^p
             ≤ β^p ∥u − v∥_p^p < λ^p ∥u − v∥_p^p,

and hence

  γ ∈ B(v; λ∥u − v∥_p) ⊆ B(f; r).

In addition,

  |γ(x)| ≥ β|u(x)| ≥ β h(x) = g(x)  for a.e. x ∈ (Ω \ D) ∩ A

and

  |γ(x)| = |v(x)| ≥ |u(x)| − δ ≥ g(x)  for a.e. x ∈ D ∩ A,

because ∥u − v∥_p ≤ δ and also |u| ≥ h. Therefore,

  B(v; λ∥u − v∥_p) ∩ B(f; r) ∩ Γ_g ≠ ∅,

and this completes the proof. □
Remark 2.4. Note that, in general, condition (2.1) in the statement of Theorem 2.3 does not imply that Ω is a discrete space. In particular, if in condition (2.1) we set A := Ω, then it implies that L^p(Ω, µ) ⊆ L^∞(Ω, µ), and this inclusion is equivalent to

  α := inf{µ(E) : µ(E) > 0} > 0,   (2.3)

and, equivalently, to L^p(Ω, µ) ⊆ L^q(Ω, µ) for each q > p; see [18]. If, in addition, supp µ = Ω, then condition (2.3) implies that for each x ∈ Ω,

  µ({x}) = inf{µ(F) : F is a compact neighborhood of x} > 0.

In particular, if Ω is a locally compact group (or hypergroup) and µ is a left Haar measure on it, then condition (2.1) implies that Ω is a discrete topological space.
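A concrete instance of condition (2.1), in the spirit of this remark (our example, not from the paper):

```latex
% With \Omega = \mathbb{N}, \mu the counting measure and A := \Omega,
\[
  |f(n)|^{p} \;\le\; \sum_{j\in\mathbb{N}} |f(j)|^{p} \;=\; \|f\|_{p}^{p}
  \qquad (n \in \mathbb{N},\ f \in \ell^{p}),
\]
% so |f| \le \|f\|_p everywhere and (2.1) holds with A = \Omega; consistently
% with the remark, the counting measure satisfies (2.3) with \alpha = 1 and
% \mathbb{N} is discrete.
```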
The next result is a direct conclusion of Theorem 2.3.

Corollary 2.5. Let Ω be a discrete topological space and ϕ := (ϕ_j)_{j∈Ω} ⊆ [1, ∞), so that ϕ_j ≥ 1 for each j. Put µ_ϕ := Σ_{j∈Ω} ϕ_j δ_j, where δ_j is the point-mass measure at j. Then, for each g ∈ L^p(Ω, µ_ϕ), the set

  Γ_g := { f ∈ L^p(Ω, µ_ϕ) : |f| ≥ |g| }

is not σ-porous in L^p(Ω, µ_ϕ).

Proof. Just note that for each k ∈ Ω and f ∈ L^p(Ω, µ_ϕ),

  ∥f∥_p^p = Σ_{j∈Ω} |f(j)|^p µ_ϕ({j}) ≥ |f(k)|^p ϕ_k ≥ |f(k)|^p. □

In particular, if a set is endowed with the counting measure, we get the following fact.

Corollary 2.6. Let p ≥ 1 and A be a non-empty set. Then, for each g ∈ ℓ^p(A), the set

  Γ_g := { f ∈ ℓ^p(A) : |f| ≥ |g| }

is not σ-porous in ℓ^p(A).

The situation for L^∞-spaces is different.
Theorem 2.7. Let Ω be a locally compact Hausdorff space and µ be a nonnegative Radon measure on Ω. Then, for each g ∈ L^∞(Ω, µ), the set

  Γ_g := { f ∈ L^∞(Ω, µ) : |f| ≥ |g| a.e. }

is not σ-porous in L^∞(Ω, µ).

Proof. As in the proof of Theorem 2.3, fix 0 < λ ≤ 1/2, and set

  F := { Γ_g : g ∈ L^∞(Ω, µ) }.

This collection satisfies the conditions of Lemma 2.2. Trivially, Γ_g is a closed subset of L^∞(Ω, µ) for all g ∈ L^∞(Ω, µ). Assume that 0 ≤ g ∈ L^∞(Ω, µ), and let f ∈ L^∞(Ω, µ) and r > 0. If B(f; r) ∩ Γ_g ≠ ∅, we choose some k ∈ B(f; r) ∩ Γ_g and find some ε ∈ (0, 1) such that ∥k − f∥_∞ < εr. Pick some δ ∈ (0, (1 − ε)r), and set

  h := g + δ  and  ξ := (|k| + δ)η,

where η is as in the proof of Theorem 2.3. Then, we get

  ∥ξ − f∥_∞ ≤ ∥ξ − k∥_∞ + ∥k − f∥_∞ ≤ δ + εr < (1 − ε)r + εr = r,

so ξ ∈ B(f; r), and ξ ∈ Γ_h since |k| ≥ g. Next, let u ∈ B(f; r) ∩ Γ_h, and set r′ := min{δ, λ(r − ∥f − u∥_∞)}. Pick some v ∈ B(u; r′). Then, ∥u − v∥_∞ < r′ ≤ δ, so |v| ≥ |u| − δ ≥ h − δ = g a.e., and hence v ∈ Γ_g. Thus,

  v ∈ B(v; λ∥u − v∥_∞) ∩ B(f; r) ∩ Γ_g.

This completes the proof. □
Remark 2.8. Theorem 2.3 is also valid for the sequence space c_0, because the sequences with finitely many non-zero coefficients approximate sequences in c_0.

At the end of this section, we give a class of non-σ-porous subsets of the L^p-space on the real line. In the proof of this result, which is also based on Lemma 2.2, we apply some functions defined in the proof of Theorem 2.3.
Theorem 2.9. Let p ≥ 1, and let τ be the Lebesgue measure on R. For each g ∈ L^p(R, τ) put

  Θ_g := { f ∈ L^p(R, τ) : ∥fχ_{[m,m+1]}∥_p ≥ ∥gχ_{[m,m+1]}∥_p for all m ∈ Z }.

Then, Θ_g is not σ-porous in L^p(R, τ).
Proof. Let 0 < λ ≤ 1/2, and 0 < β < λ. Denote

  F := { Θ_g : g ∈ L^p(R, τ) }.

We prove that the collection F satisfies the conditions of Lemma 2.2. Let 0 ≤ g ∈ L^p(R, τ). Then, easily, Θ_g ≠ ∅ and it is closed in L^p(R, τ). Now, assume that f ∈ L^p(R, τ) and r > 0 with B(f; r) ∩ Θ_g ≠ ∅. Then, there exist a large enough number N ∈ N, some 0 < ε < 1 and a function k ∈ B(f; r) ∩ Θ_g such that

  ∥k − f∥_p < ε^{1/p} r

and

  ∫_{[−N,N]^c} (|f| + β^{-1}g)^p dτ < (1 − ε) r^p.

Pick some α with ∥k − f∥_p < α < ε^{1/p} r, and denote δ := (ε^{1/p} r − α) / (2(2N)^{1/p}). Put

  A_1 := {m ∈ [N] : g = 0 a.e. on [m, m+1]},  A_2 := [N] \ A_1

and

  B_1 := {m ∈ [N] : k = 0 a.e. on [m, m+1]},  B_2 := [N] \ B_1,

where [N] := {−N, ..., N − 1}, and then define

  ρ := Σ_{m∈A_1} χ_{[m,m+1]} + Σ_{m∈A_2} gχ_{[m,m+1]} / ∥gχ_{[m,m+1]}∥_p

and

  η := Σ_{m∈B_1} χ_{[m,m+1]} + Σ_{m∈B_2} kχ_{[m,m+1]} / ∥kχ_{[m,m+1]}∥_p.

Now, we define h, ξ : R → C by

  h := gχ_{[−N,N]} + δρ + β^{-1}g χ_{[−N,N]^c}

and

  ξ := |k| χ_{[−N,N]} + δη + h χ_{[−N,N]^c}.

Clearly, h ∈ L^p(R, τ). For each x ∈ [−N, N] we have |k(x) − ξ(x)| = δ|η(x)|, and so

  ∥(k − ξ)χ_{[−N,N]}∥_p^p = δ^p ∥ηχ_{[−N,N]}∥_p^p = δ^p Σ_{m∈[N]} ∥ηχ_{[m,m+1]}∥_p^p = δ^p · 2N.

Hence, ∥(k − ξ)χ_{[−N,N]}∥_p = δ(2N)^{1/p}. Now, similarly to the proof of Theorem 2.3, we have ξ ∈ B(f; r). Moreover,

  ∥ξχ_{[m,m+1]}∥_p = ∥kχ_{[m,m+1]}∥_p + δ ≥ ∥gχ_{[m,m+1]}∥_p + δ = ∥hχ_{[m,m+1]}∥_p

for all m ∈ [N]. Also, for each m ∉ [N],

  ∥ξχ_{[m,m+1]}∥_p = ∥hχ_{[m,m+1]}∥_p ≥ ∥gχ_{[m,m+1]}∥_p.

So,

  ξ ∈ B(f; r) ∩ Θ_h ⊆ B(f; r) ∩ Θ_g.

Now, let u ∈ B(f; r) ∩ Θ_h and put r′ := min{δ, λ(r − ∥f − u∥_p)}. Let v ∈ B(u; r′). We define the function γ : R → C by

  γ(x) := v(x) if x ∈ [−N, N], and γ(x) := (|v(x)| + β|u(x) − v(x)|) θ(x) if x ∈ [−N, N]^c,

where

  θ(x) := v(x)/|v(x)| if v(x) ≠ 0, and θ(x) := 1 if v(x) = 0.

Similarly to the proof of Theorem 2.3, we have γ ∈ B(v; λ∥u − v∥_p). Now, for each m ∉ [N],

  |γ|χ_{(m,m+1)} = (|v| + β|u − v|)χ_{(m,m+1)} ≥ β|u|χ_{(m,m+1)}.

Hence,

  ∥γχ_{[m,m+1]}∥_p ≥ β ∥uχ_{[m,m+1]}∥_p ≥ β ∥hχ_{[m,m+1]}∥_p

since u ∈ B(f; r) ∩ Θ_h. However, in this case we have (m, m+1) ⊆ [−N, N]^c, so hχ_{(m,m+1)} = β^{-1}gχ_{(m,m+1)}. Thus, β∥hχ_{[m,m+1]}∥_p = ∥gχ_{[m,m+1]}∥_p. If m ∈ [N], we have γχ_{[m,m+1]} = vχ_{[m,m+1]} because γχ_{[−N,N]} = vχ_{[−N,N]} and [m, m+1] ⊆ [−N, N]. We get

  | ∥uχ_{[m,m+1]}∥_p − ∥vχ_{[m,m+1]}∥_p | ≤ ∥(u − v)χ_{[m,m+1]}∥_p ≤ ∥u − v∥_p < δ

because v ∈ B(u; r′), hence

  ∥γχ_{[m,m+1]}∥_p = ∥vχ_{[m,m+1]}∥_p ≥ ∥uχ_{[m,m+1]}∥_p − δ ≥ ∥hχ_{[m,m+1]}∥_p − δ = ∥gχ_{[m,m+1]}∥_p.

Therefore,

  γ ∈ B(v; λ∥u − v∥_p) ∩ B(f; r) ∩ Θ_g,

and the proof is complete. □
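For a concrete instance of Theorem 2.9 (our example, with an assumed choice of g):

```latex
% With g = \chi_{[0,1]} we have \|g\chi_{[m,m+1]}\|_p = 1 for m = 0 and
% \|g\chi_{[m,m+1]}\|_p = 0 for every other m \in \mathbb{Z}, so
\[
  \Theta_{g} \;=\; \Bigl\{\, f \in L^{p}(\mathbb{R},\tau)
    \;:\; \int_{0}^{1} |f|^{p}\, d\tau \,\ge\, 1 \,\Bigr\},
\]
% and the theorem asserts that this set is not \sigma-porous in
% L^{p}(\mathbb{R},\tau).
```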
+ In this section, we will apply the results of the previous section, to prove that
490
+ the set of all non-hypercyclic vectors of some sequences of weighted translation
491
+ operators is non-σ-porous.
492
+ Definition 3.1. Let X be a Banach space. A sequence (Tn)n∈N0 of operators
493
+ in B(X) is called hypercyclic if there is an element x ∈ X (called hypercyclic
494
+ vector) such that the orbit {Tn(x) : n ∈ N0} is dense in X. The set of all
495
+ hypercyclic vectors of a sequence (Tn)n∈N0 is denoted by HC((Tn)n∈N0). An
496
+ operator T ∈ B(X) is called hypercyclic if the sequence (T n)n∈N0 is hyper-
497
+ cyclic.
498
+
499
+ DYNAMICAL PROPERTIES AND SOME CLASSES OF NON-POROUS SUBSETS
500
+ 9
501
+ Let G be a locally compact group and a ∈ G.
502
+ Then, for each function
503
+ f : G → C we define Laf : G → C by Laf(x) := f(a−1x) for all x ∈ G. Note
504
+ that if p ≥ 1, then the left translation operator
505
+ La : Lp(G) → Lp(G),
506
+ f �→ Laf
507
+ is not hypercyclic because ∥La∥ ≤ 1. Hypercyclicity of weigted translation
508
+ operators on Lp(G) and regarding an aperiodic element a was studied in [5]
509
+ (an element a ∈ G is called aperiodic if the closed subgroup of G generated by
510
+ a is not compact).
511
+ Definition 3.2. Let G be a locally compact group with a left Haar measure
512
+ µ. Fix p ≥ 1. We denote Lp(G) := Lp(G, µ). Assume that w : G → (0, ∞)
513
+ is a bounded measurable function (called a weight) and a ∈ G. Then, the
514
+ weighted translation operator Ta,w,p : Lp(G) → Lp(G) is defined by
515
+ Ta,w,p(f) := w Laf,
516
+ (f ∈ Lp(G)).
517
+ For each n ∈ N we denote ϕn := w Law . . . Lan−1w, where a0 := e, the
518
+ identity element of G.
519
+ Theorem 3.3. Let p ≥ 1, G be a discrete group and a ∈ G. Let µ be a left
520
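For readers who want to experiment, the following minimal Python sketch (ours, not the paper's; the concrete weight and vector are assumptions) models T_{a,w,p} on finitely supported functions on the additive group Z with a = 1, where L_a f(x) = f(x − a):

```python
# Minimal model (ours) of the weighted translation T_{a,w} f = w * (L_a f)
# on the group Z with a = 1, so (T f)(x) = w(x) * f(x - 1).
# Finitely supported vectors are dicts {index: value}.

def left_translate(f, a):
    """L_a f as a dict: (L_a f)(x) = f(x - a)."""
    return {x + a: v for x, v in f.items()}

def weighted_translation(f, w, a=1):
    """One application of T_{a,w} f = w * (L_a f)."""
    return {x: w(x) * v for x, v in left_translate(f, a).items()}

def lp_norm(f, p):
    """The l^p norm of a finitely supported vector."""
    return sum(abs(v) ** p for v in f.values()) ** (1.0 / p)

# Assumed data: weight w = 2 everywhere, f = indicator of {0}.
# After n applications the mass sits at the point n with value 2^n.
w = lambda x: 2.0
g = {0: 1.0}
for _ in range(3):
    g = weighted_translation(g, w)
```

Iterating T_{a,w} this way produces exactly the orbit (T^n f)_n whose density properties the results of this section control.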
+ Haar measure on G with µ({e}) ≥ 1 and (γn)n be an unbounded sequence of
521
+ non-negative integers. Let w : G → (0, ∞) be a bounded function such that for
522
+ some finite nonempty set F ⊆ G and some N > 0 we have
523
+ aγnF ∩ F = ∅
524
+ (n ≥ N),
525
+ and
526
+ β := inf
527
+ � γn
528
+
529
+ k=1
530
+ w(akt) : n ≥ N, t ∈ F
531
+
532
+ > 0.
533
+ Then, the set
534
+ Λ :=
535
+
536
+ f ∈ Lp(G, µ) : ∥T γn
537
+ a,w,pf − χF ∥p ≥ µ(F)
538
+ 1
539
+ p for all n ≥ N
540
+
541
+ is non-σ-porous.
542
+
543
+ 10
544
+ S. IVKOVI´C, S. ¨OZTOP, AND S.M. TABATABAIE
545
+ Proof. Let Γ := {f ∈ Lp(G, µ) : |f| ≥
546
+ 1
547
+ βχF }. Then, Γ is not σ-porous in
548
+ Lp(G, µ) thanks to Theorem 2.7. Also, for each f ∈ Γ and n ≥ N we have
549
+ ∥T γn
550
+ a,w,pf − χF∥p
551
+ p =
552
+
553
+ G
554
+ |
555
+ n
556
+
557
+ k=1
558
+ w(a−γn+kx) f(a−γnx) − χF (x)|p dµ(x)
559
+ =
560
+
561
+ G
562
+ |
563
+ γn
564
+
565
+ k=1
566
+ w(akx) f(x) − χF(aγnx)|p dµ(x)
567
+ =
568
+
569
+ G
570
+ |
571
+ γn
572
+
573
+ k=1
574
+ w(akx) f(x) − χa−γnF (x)|p dµ(x)
575
+
576
+
577
+ F
578
+ |
579
+ γn
580
+
581
+ k=1
582
+ w(akx) f(x) − χa−γnF (x)|p dµ(x)
583
+ =
584
+
585
+ F
586
+ |
587
+ γn
588
+
589
+ k=1
590
+ w(akx) f(x)|p dµ(x)
591
+
592
+
593
+ F
594
+ |β 1
595
+ β |p dµ(x)
596
+ = µ(F).
597
+ This completes the proof.
598
+
599
Example 3.4. Let G be the additive group Z with the counting measure. Let F be a finite non-empty subset of Z and put N := max{|j| : j ∈ F}. Let w := (w_n)_{n∈Z} ⊆ (0, ∞) be a bounded sequence with w_n ≥ 1 for all n ≥ N. Then the conditions required in the previous theorem hold with respect to F and a := 1.

The following fact is a direct conclusion of the previous theorem.

Corollary 3.5. Let p ≥ 1, G be a discrete group and a ∈ G with infinite order. Let µ be the counting measure on G and (γ_n)_n be an unbounded sequence of non-negative integers. Let w : G → (0, ∞) be a bounded function such that for some t ∈ G,

  inf{ Π_{k=1}^{γ_n} w(a^k t) : n ∈ N } > 0.

Then, the set

  { f ∈ L^p(G, µ) : ∥T_{a,w,p}^{γ_n} f − χ_{{t}}∥_p ≥ 1 for all n }

is non-σ-porous.
Theorem 3.6. Let p ≥ 1, G be a discrete group, and a ∈ G. Let µ be a left Haar measure on G with µ({e}) ≥ 1. Let (γ_n)_n be an unbounded sequence of non-negative integers and let w : G → (0, ∞) be a bounded function such that

  inf_{n∈N} Π_{k=1}^{γ_n} w(a^k) > 0.

Then, the set

  Γ := { f ∈ L^p(G, µ) : |f(e)| inf_{n∈N} Π_{k=1}^{γ_n} w(a^k) ≥ 1 }

is non-σ-porous. In particular, setting T_n := T_{a,w,p}^{γ_n} for all n, the set of all non-hypercyclic vectors of the sequence (T_n)_n is not σ-porous in L^p(G, µ).

Proof. Since µ({e}) ≥ 1, applying Theorem 2.3 the set Γ is non-σ-porous, because

  [ inf_{n∈N} Π_{k=1}^{γ_n} w(a^k) ]^{-1} χ_{{e}} ∈ L^p(G, µ).

Let f ∈ Γ. If n is a nonnegative integer, then for every x ∈ G we have

  ∥T_n f∥_p ≥ | ϕ_{γ_n}(x) L_{a^{γ_n}} f(x) |,

and so, setting x = a^{γ_n}, we have

  ∥T_n f∥_p ≥ | ϕ_{γ_n}(a^{γ_n}) L_{a^{γ_n}} f(a^{γ_n}) |
           = ( Π_{k=1}^{γ_n} w(a^k) ) |f(e)|
           ≥ |f(e)| inf_{m∈N} Π_{k=1}^{γ_m} w(a^k) ≥ 1.

This implies that the set {T_n f : n ∈ N} is not dense in L^p(G, µ), and so Γ is a subset of the set of all non-hypercyclic vectors of (T_n)_n. This completes the proof. □
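The key estimate in the proof above, ∥T_n f∥_p ≥ |f(e)| Π_{k=1}^{γ_n} w(a^k), can be checked numerically for G = Z and a = 1 (a sketch under assumed data — the weight, the vector and the exponents below are our choices, not the paper's):

```python
import math

# G = Z with the counting measure, a = 1: T f(x) = w(x) f(x - 1), so
# T^m f(x) = w(x) w(x-1) ... w(x-m+1) f(x - m).  At x = m the coefficient
# of f(0) is w(1) w(2) ... w(m), which yields the bound of Theorem 3.6.

def shift_power(f, w, m):
    """Apply T f(x) = w(x) f(x-1) to the dict f, m times."""
    g = dict(f)
    for _ in range(m):
        g = {x + 1: w(x + 1) * v for x, v in g.items()}
    return g

def lp_norm(f, p):
    return sum(abs(v) ** p for v in f.values()) ** (1.0 / p)

w = lambda x: 1.5 if x > 0 else 0.5      # assumed bounded weight
f = {0: 2.0, 4: 1.0}                     # |f(e)| = |f(0)| = 2
checks = []
for m in (1, 3, 5):                      # stand-ins for the exponents γ_n
    bound = abs(f[0]) * math.prod(w(k) for k in range(1, m + 1))
    checks.append(lp_norm(shift_power(f, w, m), 2) >= bound)
```

Since the orbit norms are bounded below away from 0, such a vector cannot have a dense orbit, which is exactly how the theorem produces non-hypercyclic vectors.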
Now, we recall the definition of hypergroups, which are generalizations of locally compact groups; see the monograph [4] and the basic paper [12] for more details. In locally compact hypergroups the convolution of two Dirac measures is not necessarily a Dirac measure. Let K be a locally compact Hausdorff space. We denote by M(K) the space of all regular complex Borel measures on K, and by δ_x the Dirac measure at the point x. The support of a measure µ ∈ M(K) is denoted by supp(µ).

Definition 3.7. Suppose that K is a locally compact Hausdorff space, (µ, ν) ↦ µ ∗ ν is a bilinear positive-continuous mapping from M(K) × M(K) into M(K) (called convolution), and x ↦ x^− is an involutive homeomorphism on K (called involution) with the following properties:

(i) M(K) with ∗ is a complex associative algebra;
(ii) if x, y ∈ K, then δ_x ∗ δ_y is a probability measure with compact support;
(iii) the mapping (x, y) ↦ supp(δ_x ∗ δ_y) from K × K into C(K) is continuous, where C(K) is the set of all non-empty compact subsets of K equipped with the Michael topology;
(iv) there exists a (necessarily unique) element e ∈ K (called the identity) such that for all x ∈ K, δ_x ∗ δ_e = δ_e ∗ δ_x = δ_x;
(v) for all x, y ∈ K, e ∈ supp(δ_x ∗ δ_y) if and only if x = y^−.

Then, K ≡ (K, ∗, −, e) is called a locally compact hypergroup.

A nonzero nonnegative regular Borel measure m on K is called a (left) Haar measure if for each x ∈ K, δ_x ∗ m = m. For each x, y ∈ K and measurable function f : K → C we denote

  f(x ∗ y) := ∫_K f d(δ_x ∗ δ_y),

whenever this integral exists.

Definition 3.8. Suppose that a := (a_n)_{n∈N_0} is a sequence in a hypergroup K, and w is a weight function on K. For each n ∈ N_0 we define the bounded linear operator Λ_{n+1} on L^p(K) by

  Λ_{n+1}f(x) := w(a_0 ∗ x) w(a_1 ∗ x) ··· w(a_n ∗ x) f(a_{n+1} ∗ x)  (f ∈ L^p(K))

for all x ∈ K. Also, we let Λ_0 be the identity operator on L^p(K).

Some linear dynamical properties of this sequence of operators were studied in [13]. The sequence {Λ_n}_n is a generalization of the usual powers of a single weighted translation operator on L^p(G), where G is a locally compact group. In fact, any locally compact group G with the mapping

  (µ, ν) ↦ ∫_G ∫_G δ_{xy} dµ(x) dν(y)  (µ, ν ∈ M(G))

as convolution, and x ↦ x^{-1} from G onto G as involution, is a locally compact hypergroup. Let η := (a_n)_{n∈N_0} be a sequence in G, and let w be a weight on G. Then for each f ∈ L^p(G), n ∈ N_0 and x ∈ G, we have

  Λ_{n+1}f(x) = w(a_0 x) w(a_1 x) ··· w(a_n x) f(a_{n+1} x).

In particular, let a ∈ G and for each n ∈ N_0 put a_n := a^{−n}. Then, Λ_n = T_{a,w,p}^n for all n ∈ N. In this case, the operator T_{a,w,p} is hypercyclic if and only if the sequence (Λ_n)_n is hypercyclic.

Let K be a discrete hypergroup with the convolution ∗ between Radon measures on K and the involution ·^− : K → K. Then, by [12, Theorem 7.1A], the measure µ on K given by

  µ({x}) := 1 / (δ_x ∗ δ_{x^−}({e}))  (x ∈ K)   (3.1)

is a left Haar measure on K.
Proposition 3.9. Let K be a discrete hypergroup, µ be the Haar measure (3.1), and p ≥ 1. Then for each g ∈ L^p(K, µ), the set

  { f ∈ L^p(K, µ) : |f| ≥ |g| }

is not σ-porous in L^p(K, µ).

Proof. Just note that for each x ∈ K we have µ({x}) ≥ 1 because

  1 = δ_x ∗ δ_{x^−}(K) ≥ δ_x ∗ δ_{x^−}({e}).

Hence, the measure space (K, µ) satisfies the condition of Corollary 2.5. □

Let a := (a_n)_{n∈N} be a sequence in a discrete hypergroup K such that a_n ≠ a_m for each m ≠ n, and let w : K → (0, ∞) be bounded. We define h_{a,w} : K → C by

  h_{a,w} := Σ_{n∈N_0} 1/(w(a_0)w(a_1)···w(a_n)) χ_{{a_{n+1}}}.
Theorem 3.10. Let p ≥ 1, and let K be a discrete hypergroup endowed with the left Haar measure (3.1). Let a := (a_n)_{n∈N_0} ⊆ K have distinct terms, and let w be a weight on K such that h_{a,w} ∈ L^p(K). Then, the set of all non-hypercyclic vectors of the sequence (Λ_n)_n is not σ-porous.

Proof. First, thanks to Proposition 3.9, the set

  E := { f ∈ L^p(K) : |f(a_{n+1})| ≥ 1/(w(a_0)w(a_1)···w(a_n)) for all n }

is not σ-porous, because it equals the set { f ∈ L^p(K) : |f| ≥ h_{a,w} }. Now, for each f ∈ E,

  ∥Λ_{n+1}f∥_p ≥ sup_{x∈K} w(a_0 ∗ x) w(a_1 ∗ x) ··· w(a_n ∗ x) |f(a_{n+1} ∗ x)|
              ≥ w(a_0) w(a_1) ··· w(a_n) |f(a_{n+1})| ≥ 1

for all n ∈ N_0. This implies that 0 does not belong to the closure of {Λ_n f : n ∈ N} in L^p(K), and so E ⊆ [HC((Λ_n)_n)]^c. This completes the proof. □
+ Since any group is a hypergroup, we can give the fact below.
795
+ Corollary 3.11. Let p ≥ 1, and G be a discrete group. Let a ∈ G be of
+ infinite order, let (γn)n∈N0 ⊆ N have distinct terms, and let w : G → (0, ∞) be a
+ weight such that
+ (1/(w(aγ0)w(aγ1) . . . w(aγn)))n ∈ ℓp(G).
+ Then, the set of all non-hypercyclic vectors of the sequence (T γn_{a,w,p})n is not
+ σ-porous in ℓp(G).
+ Now, we can state the next corollary, which is a generalization of [1, Theorem 1].
+ Corollary 3.12. Let p ≥ 1, (γn)n ⊆ N be strictly increasing and (wn)n∈Z be
+ a bounded sequence in (0, ∞) such that
+ (1/(wγ0wγ1wγ2 . . . wγn))n ∈ ℓp(Z).
+ S. IVKOVI´C, S. ¨OZTOP, AND S.M. TABATABAIE
+ Then, the set of all non-hypercyclic vectors of the sequence (Tn)n is not
+ σ-porous, where
+ (Tn+1a)k := wγ0wγ1wγ2 . . . wγn ak+γn+1   (k ∈ N0)
+ for all a := (aj)j ∈ ℓp(Z).
+ Applying Theorem 2.7, we can treat a more general situation in the case
+ p = ∞. Let Ω be a locally compact Hausdorff space endowed with a nonnegative
+ Radon measure µ. Let w : Ω → (0, ∞) be a bounded measurable function,
+ and α : Ω → Ω be a bi-measurable mapping such that ∥f ◦ α±1∥∞ = ∥f∥∞
+ for all f ∈ L∞(Ω, µ). Then, we define Tα,w,∞ : L∞(Ω, µ) → L∞(Ω, µ) by
+ Tα,w,∞(f) := w (f ◦ α)   (f ∈ L∞(Ω, µ)).
+ If Ω is a locally compact group and a ∈ Ω, setting αa(x) := ax for all x ∈ Ω,
+ we denote Ta,w,∞ := Tαa,w,∞. Note that α−1 means the inverse function of α,
+ and for each k ∈ N, α−k := (α−1)k.
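The norm identity behind the proofs below, ∥T n_{α,w,∞}(f)∥∞ = ∥∏_{k=1}^{n}(w ◦ α−k) f∥∞, can be checked on a toy Ω. The sketch below uses a finite set with a permutation α as a hypothetical stand-in for the bi-measurable, sup-norm-preserving α assumed above; all the concrete numbers are arbitrary.

```python
n_pts = 7
alpha = [(i + 3) % n_pts for i in range(n_pts)]       # a permutation of {0,...,6}
alpha_inv = [alpha.index(i) for i in range(n_pts)]
w = [1.0 + 0.2 * i for i in range(n_pts)]             # arbitrary positive weight
f = [((-1) ** i) * (i + 1.0) for i in range(n_pts)]   # arbitrary function

def T(g):
    """One application of the weighted composition: (Tg)(x) = w(x) g(alpha(x))."""
    return [w[x] * g[alpha[x]] for x in range(n_pts)]

n = 4
g = f
for _ in range(n):
    g = T(g)                                          # T^n f by iteration

# right-hand side: (prod_{k=1}^n w(alpha^{-k}(x))) * f(x)
rhs = []
for x in range(n_pts):
    prod, y = 1.0, x
    for _ in range(n):
        y = alpha_inv[y]
        prod *= w[y]
    rhs.append(prod * f[x])

sup = lambda v: max(abs(t) for t in v)
assert abs(sup(g) - sup(rhs)) < 1e-12                 # the two sup-norms agree
```

Since α is a bijection here, substituting x ↦ α−n(x) turns one expression into the other pointwise, which is exactly why the sup-norms coincide.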
+ Theorem 3.13. Let Tα,w,∞ be the weighted composition operator defined as
+ above and let {γn}n ⊆ N be a fixed unbounded sequence. Suppose that there
+ exists a sequence {An}n of disjoint subsets of Ω with µ(An) > 0 for all n such that
+ jα,w := Σ_{n∈N} [1/((w ◦ α−γn)(w ◦ α−γn+1) . . . (w ◦ α−1))] χAn ∈ L∞(Ω, µ).
+ Then, the set {f ∈ L∞(Ω, µ) : ∥T γn_{α,w,∞}(f)∥∞ ≥ 1 for all n} is not σ-porous.
+ In particular, the set of all non-hypercyclic vectors of the sequence {T γn_{α,w,∞}}n
+ is not σ-porous.
+ Proof. Let E := {f ∈ L∞(Ω, µ) : |f| ≥ jα,w}. Then, E is not σ-porous thanks
+ to Theorem 2.7. For each f ∈ E and n ∈ N we have
+ ∥T γn_{α,w,∞}(f)∥∞ = ∥ ∏_{k=1}^{γn} (w ◦ αγn−k) (f ◦ αγn) ∥∞
+ = ∥ ∏_{k=1}^{γn} (w ◦ α−k) f ∥∞
+ ≥ ∥ ∏_{k=1}^{γn} (w ◦ α−k) χAn f ∥∞
+ ≥ ∥ ∏_{k=1}^{γn} (w ◦ α−k) χAn jα,w ∥∞
+ = 1.
+ This completes the proof. □
+ Corollary 3.14. Let G be a locally compact group and µ be a left Haar measure
+ on G. Let a ∈ G and w : G → (0, ∞) be a bounded measurable function. If
+ (1/(w(a)w(a2) . . . w(an)))n ∈ L∞(G, µ),
+ then the set of all non-hypercyclic vectors of the operator Ta,w,∞ on L∞(G, µ)
+ is not σ-porous.
+ Corollary 3.15. If (wn)n∈Z is a bounded sequence such that
+ (1/(w1 . . . wn))n ∈ ℓ∞,
+ then the set of all non-hypercyclic vectors of the sequence (Tγn,w)n is not
+ σ-porous in ℓ∞.
+ Theorem 3.16. Let Tα,w,∞ be the weighted composition operator on L∞(Ω, µ)
+ and let F ⊆ Ω be a Borel set with 0 < µ(F) < ∞. Suppose there exists a constant
+ N > 0 such that for all n ≥ N,
+ αn(F) ∩ F = ∅,   (3.2)
+ and
+ β := inf{ ∏_{k=1}^{n} (w ◦ α−k)(t) : n ≥ N, t ∈ F } ̸= 0.
+ Then, the set
+ {f ∈ L∞(Ω, µ) : ∥T n_{α,w,∞}f − χF∥∞ ≥ 1 for all n ≥ N}
+ is not σ-porous in L∞(Ω, µ).
+ Proof. Let Γ := {f ∈ L∞(Ω, µ) : |f| ≥ (1/β)χF }. Then by Theorem 2.7, Γ is not
+ σ-porous in L∞(Ω, µ). Also, for each f ∈ Γ we have
+ ∥T n_{α,w,∞}f − χF ∥∞ = ∥ ∏_{k=1}^{n} (w ◦ αn−k) (f ◦ αn) − χF ∥∞
+ = ∥ ∏_{k=1}^{n} (w ◦ α−k) f − χF ◦ αn ∥∞
+ = ∥ ∏_{k=1}^{n} (w ◦ α−k) f − χαn(F) ∥∞
+ ≥ ∥ ∏_{k=1}^{n} (w ◦ α−k) f χF − χαn(F) χF ∥∞
+ = ∥ ∏_{k=1}^{n} (w ◦ α−k) f χF ∥∞
+ ≥ β ∥fχF∥∞ ≥ β (1/β) ∥χF∥∞ = 1.
+ This completes the proof. □
+ Example 3.17. Let Ω := R and µ be the Lebesgue measure. Put α(t) := t−1
+ for all t ∈ R and F := [0, 1]. If w ∈ Cb(R) is such that |w(t)| ≥ 1 for all
+ t ≥ k > 0 and inf{|w(t)| : t ∈ [0, 1]} > 0, then the required conditions in the
+ previous theorem hold with respect to F.
+ With a similar proof, one can prove the next fact without condition (3.2).
+ Theorem 3.18. Let Tα,w,∞ be the weighted composition operator on L∞(Ω, µ)
+ and let F ⊆ Ω be a Borel set with 0 < µ(F) < ∞ such that
+ inf{ ∏_{k=1}^{n} (w ◦ α−k)(t) : n ≥ N, t ∈ F } ̸= 0.
+ Then, the set
+ {f ∈ L∞(Ω, µ) : ∥T n_{α,w,∞}f∥∞ ≥ 1 for all n ≥ N}
+ is not σ-porous in L∞(Ω, µ). In particular, the set of all non-hypercyclic
+ vectors of the operator Tα,w,∞ is not σ-porous.
+ In the sequel, we give an application of Theorem 2.9 to the hypercyclicity
+ of shift operators on Lp(R, τ).
+ Theorem 3.19. Consider the weighted translation operator Tα,w on Lp(R, τ)
+ given by Tα,wf := w · (f ◦ α), where 0 < w, w−1 ∈ Cb(R) and α(t) = t + 1. For
+ each n ∈ N put An := [n, n + 1] = αn([0, 1]). Set
+ yα,w := Σ_{n∈N} [1 / inf_{t∈An} ∏_{k=1}^{n} (w ◦ α−k)(t)] χAn
+ and assume that yα,w ∈ Lp(R, τ) (in particular, inf_{t∈An} ∏_{k=1}^{n} (w ◦ α−k)(t) > 0
+ for all n ∈ N). Then, the set
+ {f ∈ Lp(R, τ) : ∥T n_{α,w}(f)∥p ≥ 1 for all n ∈ N}
+ is not σ-porous.
+ Proof. By Theorem 2.9, the set
+ E := {f ∈ Lp(R, τ) : ∥fχAn∥p ≥ ∥yα,wχAn∥p for all n ∈ N}
+ is not σ-porous, because it equals
+ {f ∈ Lp(R, τ) : ∥fχ[m,m+1]∥p ≥ ∥yα,wχ[m,m+1]∥p for all m ∈ Z},
+ as yα,wχ[m,m+1] = 0 for all m ∈ Z with m ≤ 0. Now, note that for each f ∈ E
+ and n ∈ N,
+ ∥T n_{α,w}(f)∥p^p = ∫_R [∏_{k=1}^{n} (w ◦ αn−k)(t)]^p |(f ◦ αn)(t)|^p dτ
+ = ∫_R [∏_{k=1}^{n} (w ◦ α−k)(t)]^p |f(t)|^p dτ
+ ≥ ∫_{An} [∏_{k=1}^{n} (w ◦ α−k)(t)]^p |f(t)|^p dτ
+ ≥ inf_{t∈An} [∏_{k=1}^{n} (w ◦ α−k)(t)]^p ∥yα,wχAn∥p^p
+ = inf_{t∈An} [∏_{k=1}^{n} (w ◦ α−k)(t)]^p · [1 / inf_{t∈An} (∏_{k=1}^{n} (w ◦ α−k)(t))^p] τ(An) = 1.
+ □
+ Assume now that there exists some l ∈ Z such that
+ β := inf{ ∏_{k=1}^{n} (w ◦ α−k)(t) : t ∈ [l, l + 1], n ∈ N } > 0.
+ Put
+ F := {f ∈ Lp(R, τ) : ∥fχ[m,m+1]∥p ≥ ∥(1/β)χ[l,l+1]χ[m,m+1]∥p for all m ∈ Z}.
+ So by Theorem 2.9, F is not σ-porous. For every f ∈ F we have
+ ∥T n_{α,w}(f)∥p^p = ∫_R [∏_{k=1}^{n} (w ◦ αn−k)(t)]^p |(f ◦ αn)(t)|^p dτ
+ = ∫_R [∏_{k=1}^{n} (w ◦ α−k)(t)]^p |f(t)|^p dτ
+ ≥ ∫_{[l,l+1]} [∏_{k=1}^{n} (w ◦ α−k)(t)]^p |f(t)|^p dτ
+ ≥ 1.
+ Hence, the set
+ {f ∈ Lp(R, τ) : ∥T n_{α,w}(f)∥p ≥ 1 for all n ∈ N}
+ is not σ-porous.
+ Next, suppose that α is an aperiodic function on R (this means that for each
+ compact set C ⊂ R, there exists a constant N > 0 such that αn(C) ∩ C = ∅
+ for all n ≥ N) and that β > 0, where β is as above. Then, the set
+ {f ∈ Lp(R, τ) : ∥T n_{α,w}(f) − χ[l,l+1]∥p ≥ 1 for all n ≥ N}
+ is not σ-porous. Indeed, for all f ∈ F and n ≥ N we have
+ ∥T n_{α,w}(f) − χ[l,l+1]∥p ≥ ∥ ∏_{k=1}^{n} (w ◦ α−k) fχ[l,l+1] ∥p
+ by similar calculations as in the proof of Theorem 3.16. Moreover,
+ ∥ ∏_{k=1}^{n} (w ◦ α−k) fχ[l,l+1] ∥p ≥ β ∥fχ[l,l+1]∥p ≥ 1.
+ References
+ 1. F. Bayart, Porosity and hypercyclic operators, Proc. Amer. Math. Soc. 133(11) (2005) 3309-3316.
+ 2. F. Bayart and É. Matheron, Dynamics of Linear Operators, Cambridge Tracts in Math. 179, Cambridge University Press, Cambridge, 2009.
+ 3. C. L. Belna, M. J. Evans and P. D. Humke, Symmetric and ordinary differentiation, Proc. Amer. Math. Soc. 72(2) (1978) 261-267.
+ 4. W. R. Bloom and H. Heyer, Harmonic Analysis of Probability Measures on Hypergroups, De Gruyter, Berlin, 1995.
+ 5. C-C. Chen and C-H. Chu, Hypercyclic weighted translations on groups, Proc. Amer. Math. Soc. 139 (2011) 2839-2846.
+ 6. C-C. Chen, S. Öztop and S. M. Tabatabaie, Disjoint dynamics on weighted Orlicz spaces, Complex Anal. Oper. Theory 14(72) (2020). https://doi.org/10.1007/s11785-020-01034-x
+ 7. C.-C. Chen and S. M. Tabatabaie, Chaotic operators on hypergroups, Oper. Mat. 12(1) (2018) 143-156.
+ 8. E. P. Dolženko, Boundary properties of arbitrary functions, Izv. Akad. Nauk SSSR Ser. Mat. 31 (1967) 3-14.
+ 9. G. B. Folland, Real Analysis: Modern Techniques and Their Applications, Second Edition, John Wiley and Sons, Inc., New York, 1999.
+ 10. K-G. Grosse-Erdmann, Hypercyclic and chaotic weighted shifts, Studia Math. 139 (2000) 47-68.
+ 11. K-G. Grosse-Erdmann and A. Peris, Linear Chaos, Universitext, Springer, 2011.
+ 12. R. I. Jewett, Spaces with an abstract convolution of measures, Adv. Math. 18 (1975) 1-101.
+ 13. V. Kumar and S. M. Tabatabaie, Hypercyclic sequences of weighted translations on hypergroups, Semigroup Forum 103 (2021) 916-934.
+ 14. D. Preiss and L. Zajíček, Fréchet differentiation of convex functions in a Banach space with a separable dual, Proc. Amer. Math. Soc. 91(2) (1984) 202-204.
+ 15. H. Salas, Hypercyclic weighted shifts, Trans. Amer. Math. Soc. 347 (1995) 993-1004.
+ 16. Y. Sawano, S. M. Tabatabaie and F. Shahhoseini, Disjoint dynamics of weighted translations on solid spaces, Topology Appl. 298 (2021) 107709. DOI:10.1016/j.topol.2021.107709
+ 17. S. M. Tabatabaie and S. Ivković, Linear dynamics of cosine operator functions on solid Banach function spaces, Positivity 25 (2021) 1437-1448.
+ 18. A. Villani, Another note on the inclusion Lp(µ) ⊂ Lq(µ), Amer. Math. Monthly 92 (1985) 485-487.
+ 19. L. Zajíček, Porosity and σ-porosity, Real Anal. Exchange 13 (1987/1988) 314-350.
+ 20. L. Zajíček, Small non-σ-porous sets in topologically complete metric spaces, Colloq. Math. 77(2) (1998) 293-304.
+ 21. L. Zajíček, On σ-porous sets in abstract spaces, Abstr. Appl. Anal. 5 (2005) 509-534.
+ Mathematical Institute of the Serbian Academy of Sciences and Arts, p.p. 367, Kneza Mihaila 36, 11000 Beograd, Serbia.
+ Email address: [email protected]
+ Department of Mathematics, Faculty of Science, Istanbul University, Istanbul, Turkey.
+ Email address: [email protected]
+ Department of Mathematics, University of Qom, Qom, Iran.
+ Email address: [email protected]
IdE2T4oBgHgl3EQfowif/content/tmp_files/2301.04022v1.pdf.txt ADDED
@@ -0,0 +1,2070 @@
+ Distributed Sparse Linear Regression under Communication Constraints∗
+ Rodney Fonseca and Boaz Nadler
+ Department of Computer Science and Applied Mathematics,
+ Weizmann Institute of Science, Rehovot, Israel
+ Abstract: In multiple domains, statistical tasks are performed in distributed settings, with data split among several end machines that are connected to a fusion center. In various applications, the end machines have limited bandwidth and power, and thus a tight communication budget. In this work we focus on distributed learning of a sparse linear regression model, under severe communication constraints. We propose several two round distributed schemes, whose communication per machine is sublinear in the data dimension. In our schemes, individual machines compute debiased lasso estimators, but send to the fusion center only very few values. On the theoretical front, we analyze one of these schemes and prove that with high probability it achieves exact support recovery at low signal to noise ratios, where individual machines fail to recover the support. We show in simulations that our scheme works as well as, and in some cases better, than more communication intensive approaches.
+ MSC2020 subject classifications: Primary 62J07, 62J05; secondary 68W15.
+ Keywords and phrases: Divide and conquer, communication-efficient, debiasing, high-dimensional.
25
+ 1. Introduction
26
+ In various applications, datasets are stored in a distributed manner among sev-
27
+ eral sites or machines (Fan et al., 2020, chap. 1.2). Often, due to communication
28
+ constraints as well as privacy restrictions, the raw data cannot be shared be-
29
+ tween the various machines. Such settings have motivated the development of
30
+ methods and supporting theory for distributed learning and inference. See, e.g.,
31
+ the reviews by Huo and Cao (2019), Gao et al. (2022) and references therein.
32
+ ∗This research was supported by a grant from the Council for Higher Education Compet-
33
+ itive Program for Data Science Research Centers. RF acknowledges support provided by the
34
+ Mor´a Miriam Rozen Gerber Fellowship for Brazilian postdocs.
35
+ 1
36
+ arXiv:2301.04022v1 [cs.LG] 9 Jan 2023
37
+ In this paper we consider distributed learning of a sparse linear regression model. Specifically, we assume that the response y ∈ R and the vector X ∈ Rd of explanatory variables are linearly related via
+ y = X⊤θ∗ + w,   (1)
+ where w ∼ N(0, σ2), σ > 0 is the noise level, and θ∗ ∈ Rd is an unknown vector of coefficients. We further assume X ∈ Rd is random with mean zero and covariance matrix Σ. We focus on a high-dimensional setting d ≫ 1, and assume that θ∗ is sparse with only K ≪ d nonzero coefficients. The support set of θ∗ ∈ Rd is denoted by S = {i ∈ [d] : |θ∗i| > 0}.
+ Given N samples {(Xi, yi)}_{i=1}^{N} from the model (1), common tasks are to estimate the vector θ∗ and its support set S. Motivated by contemporary applications, we consider these tasks in a distributed setting where the data are randomly split among M machines. Specifically, we consider a star topology network, whereby the end machines communicate only with a fusion center.
57
+ As reviewed in Section 2, estimating θ∗ and its support in the above or simi-
58
+ lar distributed settings were studied by several authors, see for example Mateos,
59
+ Bazerque and Giannakis (2010); Chen and Xie (2014); Lee et al. (2017); Battey
60
+ et al. (2018); Chen et al. (2020); Liu et al. (2021); Barghi, Najafi and Mota-
61
+ hari (2021) and references therein. Most prior works on distributed regression
62
+ required communication of at least O(d) bits per machine, as in their schemes
63
+ each machine sends to the fusion center its full d-dimensional estimate of the
64
+ unknown vector θ∗. Some works in the literature denote this as communication
65
+ efficient, in the sense that for a machine holding n samples, an O(d) communi-
66
+ cation is still significantly less than the size O(n · d) of its data.
67
+ The design and analysis of communication efficient distributed schemes is
68
+ important, as in various distributed settings the communication channel is the
69
+ critical bottleneck. Moreover, in some practical cases, such as mobile devices and
70
+ sensor networks, the end machines may have very limited bandwidth. Thus, in
71
+ high dimensional settings with d ≫ 1, it may not even be feasible for each ma-
72
+ chine to send messages of length O(d). In this work, we study such a restricted
73
+ communication setting, assuming that each machine is allowed to send to the fu-
74
+ sion center only a limited number of bits, significantly lower than the dimension
75
+ d. Our goals are to develop low communication distributed schemes to estimate
76
+ θ∗ and its support and to theoretically analyze their performance.
77
+ We make the following contributions. On the methodology side, in Section 4
78
+ we present several two round distributed schemes. The schemes vary slightly by
79
+ the messages sent, but in all of them, the fusion center estimates the support set of θ∗ in the first round and the vector θ∗ in the second round. In our schemes, each machine computes its own debiased lasso estimate. However, it sends to the center only the indices of its top few largest values, possibly along with their signs. Hence, the communication per machine is significantly less than d bits. In the simplest variant, the fusion center estimates the support of θ∗ by voting, selecting the few indices that were sent by the largest number of machines.
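The first-round voting rule can be illustrated with a minimal simulation. The sketch below is illustrative only: the per-machine debiased lasso estimates are replaced by synthetic signal-plus-noise vectors, and all the dimensions, counts, and the signal strength are arbitrary assumptions.

```python
from collections import Counter
import random

random.seed(0)
d, K, M, L = 1000, 5, 50, 10            # dimension, sparsity, machines, indices sent
support = set(range(K))                 # true support: indices 0..K-1

def top_indices(est, L):
    """Indices of the L largest entries of |est| -- the message one machine sends."""
    return sorted(range(len(est)), key=lambda i: -abs(est[i]))[:L]

votes = Counter()
for _ in range(M):
    # stand-in for a debiased lasso estimate: signal 3.0 on the support, plus noise
    est = [(3.0 if i in support else 0.0) + random.gauss(0, 1) for i in range(d)]
    votes.update(top_indices(est, L))

# fusion center: keep the K indices with the most votes
est_support = {i for i, _ in votes.most_common(K)}
assert est_support == support
```

Note that the winning support indices typically collect far fewer than M/2 votes here, yet still dominate every non-support index, which is the phenomenon the theoretical analysis below formalizes.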
+ Next, on the theoretical side, we prove in Section 5 that under suitable conditions, with high probability the first round of our scheme achieves exact support recovery with communication per machine sublinear in d. Specifically, we present support guarantees under two different parameter regimes. Theorem 2 considers a case with a relatively large number of machines. Here, each machine sends a short message of O(K ln d) bits. Next, Theorem 3 considers a setting with relatively few machines, M = O(ln d). Here, to achieve exact support recovery each machine sends a much longer message, of length O(dα) for some suitable α < 1. This is still sublinear in d, and much less than the communication required if a machine were to send its full d-dimensional estimated vector. The proofs of our theorems rely on recent results regarding the distribution of debiased lasso estimators, combined with sharp bounds on tails of binomial random variables. Exact support recovery follows by showing that with high probability, all non-support indices receive fewer votes than support indices.
+ In Section 6 we present simulations comparing our schemes to previously proposed methods. These illustrate that with our algorithms, the fusion center correctly detects the support of θ∗ and consequently accurately estimates θ∗, even at low signal to noise ratios where each machine is unable to do so. Furthermore, this is achieved with very little communication per machine compared to the dimension d. One insight from both the simulations and our theoretical analysis is that for the fusion center to detect the correct support, it is not necessary to require M/2 votes as suggested in Barghi, Najafi and Motahari (2021) and Chen and Xie (2014). Instead, as few as O(ln d) votes suffice to distinguish support from non-support indices. Interestingly, under a broad range of parameter values, our schemes work as well as, and in some cases better than more communication intensive approaches. Our simulations also highlight the importance and advantages of a second round of communication. Specifically, even though a single-round scheme based on averaging debiased lasso estimates, as proposed by Lee et al. (2017), is minimax rate optimal and finds the correct support, it nonetheless may output an estimate with a larger mean squared error than that of our scheme. We conclude with a summary and discussion in Section 7. Proofs appear in the Appendix.
+ For an integer k ≥ 1, we denote [k] = {1, 2, . . . , k}. The indicator
126
+ function is denoted as I(A), which equals one if condition A holds and zero
127
+ otherwise. The ℓq norm of a vector Y ∈ Rn for q ≥ 1 is ∥Y ∥q = (�n
128
+ i=1 |Yi|q)1/q,
129
+ whereas ∥Y ∥0 = �n
130
+ i=1 I(Yi ̸= 0) is its number of nonzero entries. We denote
131
+ by |Y | the vector whose entries are (|Y1|, |Y2|, . . . , |Yn|). For a d × d matrix
132
+ A = {aij}d
133
+ i,j=1, we denote ∥A∥∞ = max1≤i≤d
134
+ �d
135
+ j=1 |aij|. We further denote by
136
+ σmin(A) and σmax(A) its smallest and largest singular values, respectively. For
137
+ a subset J ⊂ [d], AJ is the d×|J| matrix whose columns are those in the subset
138
+ J. Similarly, AJ,J is the |J|×|J| submatrix whose rows and columns correspond
139
+ to the indices in J. The cumulative distribution function (CDF) of a standard
140
+ Gaussian is denoted by Φ(·) whereas Φc(·) = 1−Φ(·). We write an ≳ bn for two
141
+ sequences {an}n≥1 and {bn}n≥1 if there are positive constants C and n0 such
142
+ that an ≥ Cbn for all n > n0.
143
+ 2. Previous works
144
+ Distributed linear regression schemes under various settings, not necessarily
145
+ involving sparsity, have been proposed and theoretically studied in multiple
146
+ fields, including sensor networks, statistics and machine learning, see for example
147
+ (Guestrin et al., 2004; Predd, Kulkarni and Poor, 2006; Boyd et al., 2011; Zhang,
148
+ Duchi and Wainwright, 2013; Heinze et al., 2014; Rosenblatt and Nadler, 2016;
149
+ Jordan, Lee and Yang, 2019; Chen et al., 2020; Dobriban and Sheng, 2020; Zhu,
150
+ Li and Wang, 2021; Dobriban and Sheng, 2021).
151
+ Mateos, Bazerque and Giannakis (2010) were among the first to study dis-
152
+ tributed sparse linear regression in a general setting without a fusion center,
153
+ where machines are connected and communicate with each other. They devised
154
+ a multi-round scheme whereby all the machines reach a consensus and jointly
155
+ approximate the centralized solution, that would have been computed if all data
156
+ were available at a single machine. Several later works focused on the setting
157
+ which we also consider in this paper, where machines are connected in a star
158
+ topology to a fusion center, and only one or two communication rounds are
159
+ made. In a broader context of generalized sparse linear models, Chen and Xie
160
+ (2014) proposed a divide-and-conquer approach where each machine estimates
161
+ θ∗ by minimizing a penalized objective with a sparsity inducing penalty, such
162
+ as ∥θ∥1. Each machine sends its sparse estimate to the fusion center, which es-
166
+ timates the support by voting over the indices of the individual estimates of
167
+ the M machines. Finally, the center estimates θ∗ by a weighted average of these
168
+ M estimates. For sparse linear regression, with each machine computing a lasso
169
+ estimate of θ∗, their method suffers from the well known bias of the lasso, which
170
+ is not reduced by averaging.
171
+ To overcome the bias of the lasso, in recent years several debiased lasso es-
172
+ timators were derived and theoretically studied, see Zhang and Zhang (2014);
173
+ van de Geer et al. (2014); Javanmard and Montanari (2018). For distributed
174
+ learning, debiased estimators have been applied in various settings, including
175
+ hypothesis testing, quantile regression and more, see for example Lee et al.
176
+ (2017); Battey et al. (2018); Liu et al. (2021); Lv and Lian (2022).
177
+ In particular, Lee et al. (2017) proposed a single round scheme whereby each
178
+ machine computes its own debiased lasso estimator, and sends it to the fusion
179
+ center. The center averages these debiased estimators and thresholds the result
180
+ to estimate θ∗ and recover its support. Lee et al. (2017) proved that the re-
181
+ sulting estimator achieves the same error rate as the centralized solution, and
182
+ is minimax rate optimal. However, their scheme requires a communication of
183
+ O(d) bits per machine and is thus not applicable in the restricted communica-
184
+ tion setting considered in this manuscript. Moreover, as we demonstrate in the
185
+ simulation section, unless the signal strength is very low, our two round scheme
186
+ in fact achieves a smaller mean squared error, with a much lower communica-
187
+ tion. This highlights the potential sub-optimality of lasso and debiased lasso in
188
+ sparse regression problems with sufficiently strong signals.
189
+ Most related to our paper is the recent work by Barghi, Najafi and Motahari
190
+ (2021). In their method, each machine computes a debiased lasso estimator
191
+ ˆθ, but sends to the fusion center only the indices i for which |ˆθi| is above a
192
+ certain threshold. The support set estimated by the fusion center consists of all
193
+ indices that were sent by at least half of the machines, i.e., indices that received
194
+ at least M/2 votes. Focusing on the consistency of feature selection, Barghi,
195
+ Najafi and Motahari (2021) derive bounds on the type-I and type-II errors
196
+ of the estimated support set. Their results, however, are given as rates with
197
+ unspecified multiplicative constants. As we show in this work, both theoretically
198
+ and empirically, consistent support estimation is possible with a much lower
199
+ voting threshold. Furthermore, requiring at least M/2 votes implies that their
200
+ scheme achieves exact support recovery only for much stronger signals.
201
+ We remark that voting is a natural approach for distributed support esti-
202
+ mation under communication constraints. Amiraz, Krauthgamer and Nadler
206
+ (2022) analyzed voting-based distributed schemes in the context of a simpler
207
+ problem of sparse Gaussian mean estimation. They proved that even at low
208
+ signal strengths, their schemes achieve exact support recovery with high prob-
209
+ ability using communication sublinear in the dimension. Their setting can be
210
+ viewed as a particular case of sparse linear regression but with a unitary design
211
+ matrix. Their proofs, which rely on this property, do not extend to our setting.
212
3. The lasso and debiased lasso estimators

For our paper to be self-contained, we briefly review the lasso and debiased lasso and some of their theoretical properties. The lasso (Tibshirani, 1996) is perhaps the most popular method to fit high-dimensional sparse linear models. Given a regularization parameter λ > 0 and n samples (Xi, yi), stacked in a design matrix X ∈ Rn×d and a response vector Y ∈ Rn, the lasso estimator is given by
\[
\tilde{\theta} = \tilde{\theta}(X, Y, \lambda) = \arg\min_{\theta \in \mathbb{R}^d} \left\{ \frac{1}{2n}\|Y - X\theta\|_2^2 + \lambda\|\theta\|_1 \right\}. \tag{2}
\]
The lasso has two desirable properties. First, computationally, Eq. (2) is a convex problem for which there are fast solvers. Second, from a theoretical standpoint, it enjoys strong recovery guarantees, assuming the data follow the model (1) with an exactly or approximately sparse θ∗; see, for example, (Candes and Tao, 2005; Bunea, Tsybakov and Wegkamp, 2007; van de Geer and Bühlmann, 2009; Hastie, Tibshirani and Wainwright, 2015). However, the lasso has two major drawbacks: it may output significantly biased estimates, and it does not have a simple asymptotic distribution. The latter is needed for confidence intervals and hypothesis testing. To overcome these limitations, and in particular to derive confidence intervals for high-dimensional sparse linear models, several authors developed debiased lasso estimators (Zhang and Zhang, 2014; van de Geer et al., 2014; Javanmard and Montanari, 2014a,b, 2018).
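The lasso objective in Eq. (2) can be minimized by simple first-order methods. The sketch below uses ISTA (proximal gradient descent with soft-thresholding) as a minimal stand-in for the fast solvers mentioned above; the function names and iteration count are our own illustrative choices, not part of the paper.

```python
import numpy as np

def soft_threshold(z, t):
    """Entrywise soft-thresholding, the proximal map of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, Y, lam, n_iter=2000):
    """Minimize (1/2n)||Y - X theta||_2^2 + lam * ||theta||_1 via ISTA."""
    n, d = X.shape
    # Step size 1/L, with L the Lipschitz constant of the smooth part's gradient.
    L = np.linalg.norm(X, 2) ** 2 / n
    theta = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ theta - Y) / n
        theta = soft_threshold(theta - grad / L, lam / L)
    return theta
```

For a regularization λ larger than ∥X⊤Y/n∥∞, the iteration stays at zero, matching the well-known fact that a large enough penalty yields the all-zero lasso solution.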
For random X with a known population covariance matrix Σ, Javanmard and Montanari (2014a) proposed (1/n)Σ⁻¹X⊤(Y − X˜θ) as a debiasing term. As Σ is often unknown, both van de Geer et al. (2014) and Javanmard and Montanari (2014b) developed methods to estimate its inverse Ω = Σ⁻¹. In our work, we estimate Ω using the approach of van de Geer et al. (2014), who assume that Ω is sparse. In their method, presented in Algorithm 1, ˆΩ is constructed by fitting a lasso regression with regularization λΩ > 0 to each column of X against all the other columns. Hence, it requires solving d separate lasso problems.
Given the lasso estimate ˜θ of Eq. (2) and the matrix ˆΩ, the debiased lasso is
\[
\hat{\theta} = \hat{\theta}(Y, X, \lambda, \lambda_{\Omega}) = \tilde{\theta} + \frac{1}{n}\,\hat{\Omega} X^{\top}(Y - X\tilde{\theta}). \tag{3}
\]
An appealing property of ˆθ is that, under some conditions, it is asymptotically unbiased with a Gaussian distribution. For our analysis, we shall use the following result (Javanmard and Montanari, 2018, Theorem 3.13).
Theorem 1. Consider the linear model Y = Xθ∗ + W, where W ∼ N(0, σ²In×n) and X ∈ Rn×d has independent Gaussian rows with zero mean and covariance matrix Σ ∈ Rd×d. Suppose that Σ satisfies the following conditions:

i. For all i ∈ [d], Σii ≤ 1.
ii. For some constants Cmax, Cmin > 0,
\[
0 < C_{\min} < \sigma_{\min}(\Sigma) \le \sigma_{\max}(\Sigma) < C_{\max}. \tag{4}
\]
iii. For C0 = (32Cmax/Cmin) + 1 and a constant ρ > 0,
\[
\max_{J \subseteq [d],\, |J| \le C_0 K} \|\Sigma^{-1}_{J,J}\|_{\infty} \le \rho.
\]

Let KΩ be the maximum row-wise sparsity of Ω = Σ⁻¹, that is,
\[
K_{\Omega} = \max_{i \in [d]} \left|\{ j \in [d] : \Omega_{ij} \ne 0,\ j \ne i \}\right|.
\]
Let ˜θ be the lasso estimator computed using λ = κσ√((ln d)/n) for κ ∈ [8, κmax], and let ˆθ be the debiased lasso estimator in Eq. (3) with ˆΩ computed by Algorithm 1 with λΩ = κΩ√((ln d)/n) for some suitably large κΩ > 0. Let ˆΣ = X⊤X/n denote the empirical covariance matrix. Then there exist constants c, c∗, C depending solely on Cmin, Cmax, κmax and κΩ such that, for n ≥ c max{K, KΩ} ln d, the following holds:
\[
\sqrt{n}(\hat{\theta} - \theta^*) = Z + R, \qquad Z \mid X \sim N(0, \sigma^2 \hat{\Omega}\hat{\Sigma}\hat{\Omega}^{\top}), \tag{5}
\]
where Z = n⁻¹ᐟ²ˆΩX⊤W and R = √n(ˆΩˆΣ − I)(θ∗ − ˜θ), and with probability at least 1 − 2de^{−c∗n/K} − de^{−cn} − 6d⁻²,
\[
\|R\|_{\infty} \le \frac{C\sigma \ln d}{\sqrt{n}} \left( \rho\sqrt{K} + \min\{K, K_{\Omega}\} \right). \tag{6}
\]

Assumptions (i) and (ii) in this theorem are common in the literature. Assumption (iii) is satisfied, for example, by circulant matrices Σij = ς^{|i−j|},
Algorithm 1 Computation of a precision matrix estimate ˆΩ
Input: design matrix X ∈ Rn×d, regularization parameter λΩ > 0.
Output: precision matrix estimate ˆΩ ∈ Rd×d.
xi ∈ Rn denotes the i-th column of X; X−i ∈ Rn×(d−1) denotes the design matrix with the i-th column removed.
1: for i = 1, . . . , d do
2:   Fit a lasso with response xi, design matrix X−i and regularization parameter λΩ.
3:   Let ˜γi = {˜γi,j}_{j=1, j≠i}^{d} ∈ Rd−1 be the estimated regression coefficients of step 2.
4:   Compute ˜τi² = (2n)⁻¹∥xi − X−i˜γi∥₂² + λΩ∥˜γi∥₁, i ∈ [d].
5: end for
6: Construct the d × d matrix
\[
\tilde{C} = \begin{pmatrix}
1 & -\tilde{\gamma}_{1,2} & \cdots & -\tilde{\gamma}_{1,d} \\
-\tilde{\gamma}_{2,1} & 1 & \cdots & -\tilde{\gamma}_{2,d} \\
\vdots & \vdots & \ddots & \vdots \\
-\tilde{\gamma}_{d,1} & -\tilde{\gamma}_{d,2} & \cdots & 1
\end{pmatrix}.
\]
7: return ˆΩ = diag{˜τ₁⁻², . . . , ˜τd⁻²} ˜C.
ς ∈ (0, 1). The quantity R in Eq. (5) can be viewed as a bias term. By Theorem 1, this bias is small if the sample size and dimension are suitably large, which in turn implies that ˆθi is approximately Gaussian. The following lemma, proven in the Appendix, bounds the error of this approximation. It will be used in analyzing the probability of exact support recovery of our distributed scheme.
Lemma 1. Under the assumptions of Theorem 1, for any τ > 0,
\[
\left| \Pr\left( \frac{\sqrt{n}(\hat{\theta}_i - \theta^*_i)}{\sigma\sqrt{c_{ii}}} \le \tau \right) - \Phi(\tau) \right| \le \frac{\delta_R}{\sigma\sqrt{c_{ii}}}\,\varphi(\tau) + 2de^{-c_* n/K} + de^{-cn} + \frac{6}{d^2}, \tag{7}
\]
where φ(·) denotes the Gaussian density function, cii = (ˆΩˆΣˆΩ⊤)ii, and δR is the upper bound on the bias term in Eq. (6), namely
\[
\delta_R = \frac{C\sigma \ln d}{\sqrt{n}} \left( \rho\sqrt{K} + \min\{K, K_{\Omega}\} \right). \tag{8}
\]
4. Distributed sparse regression with restricted communication

As described in Section 1, we consider a distributed setting with M machines connected in a star topology to a fusion center. For simplicity, we assume that each machine m has a sample (Xm, Y m) of n = N/M i.i.d. observations from the model (1), where Y m ∈ Rn and Xm ∈ Rn×d. In describing our schemes, we further assume that the noise level σ is known. If σ is unknown, it may
Algorithm 2 Distributed voting-based scheme for support estimation
Input: Data (Xm, Y m) ∈ Rn×(d+1), threshold τ and regularization parameters λΩ and λ.
Output: Support estimate ˆS.
At each local machine m = 1, . . . , M
1: Compute a lasso estimator ˜θm via Eq. (2) with regularization parameter λ.
2: Compute a precision matrix estimate ˆΩm ∈ Rd×d by Algorithm 1 with Xm and λΩ.
3: Compute a debiased lasso estimate ˆθm ∈ Rd, Eq. (3), with data (Xm, Y m), λ and ˆΩm.
4: Calculate the empirical covariance matrix ˆΣm = n⁻¹(Xm)⊤Xm.
5: Use ˆΩm and ˆΣm to compute the standardized estimator ˆξm ∈ Rd, Eq. (9).
6: Set Sm = {i : |ˆξm_i| > τ} and send it to the fusion center.
At the fusion center
7: For each i ∈ [d], compute Vi = Σ_{m=1}^{M} I(i ∈ Sm).
8: Sort V_{j1} ≥ V_{j2} ≥ · · · ≥ V_{jd}.
9: return ˆS = {j1, . . . , jK}.
be consistently estimated, for example, by the scaled lasso of Sun and Zhang (2012); see also (Javanmard and Montanari, 2018, Corollary 3.10).

We present several two-round distributed schemes to estimate the sparse vector θ∗ of Eq. (1) under the constraint of limited communication between the M machines and the fusion center. Here we present the simplest scheme and discuss other variants in Section 4.1. In all variants, the fusion center estimates the support of θ∗ in the first round, and θ∗ itself in the second round.
409
+ the support of θ∗ in the first round, and θ∗ itself in the second round.
410
+ The first round of our scheme is described in Algorithm 2, whereas the full two
411
+ round scheme is outlined in Algorithm 3. In the first round, each machine m ∈
412
+ [M] computes the following quantities using its own data (Xm, Y m): (i) a lasso
413
+ estimate ˜θm by Eq. (2); (ii) a matrix ˆΩm by Algorithm 1; and (iii) a debiased
414
+ lasso ˆθm by Eq. (3). Up to this point, this is identical to Lee et al. (2017). The
415
+ main difference is that in their scheme, each machine sends to the center its
416
+ debiased lasso estimate ˆθm ∈ Rd, incurring O(d) bits of communication.
417
+ In contrast, in our scheme each machine sends only a few indices. Towards
418
+ this end and in light of Eq. (7) of Lemma 1, each machine computes a normalized
419
+ vector ˆξm whose coordinates are given by
420
+ ˆξm
421
+ k =
422
+ √nˆθm
423
+ k
424
+ σ(ˆΩm ˆΣm(ˆΩm)⊤)1/2
425
+ kk
426
+ ,
427
+ ∀k ∈ [d].
428
+ (9)
In the simplest variant, each machine sends to the center only the indices k such that |ˆξm_k| > τ, for some suitable threshold τ > 0.

Given the messages sent by the M machines, the fusion center counts the number of votes received by each index. If the sparsity level K is known, its
Algorithm 3 Two-round distributed scheme to estimate θ∗
Input: Data (Xm, Y m) ∈ Rn×(d+1), sparsity K, threshold τ and regularizations λΩ and λ.
Output: A two-round estimate ˆθ ∈ Rd of θ∗.
First round
1: The fusion center estimates ˆS with Algorithm 2.
Second round
2: The fusion center sends ˆS to all M machines.
At each local machine m = 1, . . . , M
3: Let Xm_ˆS ∈ Rn×K be the K columns of Xm corresponding to indices in ˆS.
4: Compute ˆβm = arg min_β ∥Xm_ˆS β − Y m∥₂².
5: Send ˆβm to the fusion center.
At the fusion center
6: Given ˆβ¹, . . . , ˆβM, compute the estimate ˆθ according to Eq. (11).
7: return ˆθ.
estimated support set ˆS consists of the K indices with the largest number of votes. Otherwise, as discussed in Remark 4.5 below, the center may estimate the support set as the indices whose number of votes exceeds a suitable threshold.
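The per-machine thresholding and the vote counting at the fusion center (steps 6–9 of Algorithm 2) can be sketched as follows. Here `xi_m` stands for one machine's standardized vector ˆξm; the function names are our own illustration, not the authors' code.

```python
import numpy as np

def machine_message(xi_m, tau):
    """Step 6: send the indices whose standardized estimate exceeds tau."""
    return set(np.flatnonzero(np.abs(xi_m) > tau))

def fusion_center_support(messages, d, K):
    """Steps 7-9: count votes per index, keep the K most-voted indices."""
    votes = np.zeros(d, dtype=int)
    for S_m in messages:
        for i in S_m:
            votes[i] += 1
    # argsort is ascending, so the last K positions hold the largest vote counts.
    return set(np.argsort(votes, kind="stable")[-K:])
```

A usage round consists of collecting `machine_message(xi_m, tau)` from each of the M machines and passing the list to `fusion_center_support`.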
Next, we describe the second round. At its start, the fusion center sends the estimated support ˆS to all M machines. Next, each machine computes the standard least squares regression solution, restricted to the set ˆS, namely
\[
\hat{\beta}^m = \arg\min_{\beta} \|X^m_{\hat{S}}\beta - Y^m\|_2^2, \tag{10}
\]
where Xm_ˆS ∈ Rn×|ˆS| consists of the columns of Xm corresponding to the indices in ˆS. Each machine then sends its vector ˆβm to the fusion center. Finally, the fusion center estimates θ∗ by averaging these M vectors,
\[
\hat{\theta}_i = \begin{cases} \frac{1}{M}\sum_{m=1}^{M} \hat{\beta}^m_i & i \in \hat{S}, \\ 0 & \text{otherwise}. \end{cases} \tag{11}
\]
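The second round of Eqs. (10)–(11) amounts to an ordinary least squares fit on the columns in ˆS at each machine, followed by averaging at the center. A minimal numpy sketch, with hypothetical variable names of our choosing:

```python
import numpy as np

def second_round(X_list, Y_list, S_hat, d):
    """Per-machine least squares on the columns in S_hat, then average (Eq. (11))."""
    S = sorted(S_hat)
    betas = []
    for X_m, Y_m in zip(X_list, Y_list):
        # Eq. (10): beta_m = argmin_beta || X_m[:, S] beta - Y_m ||_2^2
        beta_m, *_ = np.linalg.lstsq(X_m[:, S], Y_m, rcond=None)
        betas.append(beta_m)
    theta_hat = np.zeros(d)
    theta_hat[S] = np.mean(betas, axis=0)  # zeros outside the estimated support
    return theta_hat
```

In the noiseless case with S ⊆ ˆS, every machine recovers the restricted coefficients exactly, so the average does as well.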
In the next section we present several variants of this basic two-round scheme. Before that, we make a few remarks and observations.

Remark 4.1. The communication of the first round (Algorithm 2) depends on the threshold τ. A high threshold leads to only a few sent indices. However, at low signal strengths, the signal coordinates may not have the highest values |ˆξm_k| and thus may not be sent. Hence, for successful support recovery by the fusion center, a lower threshold leading to many more sent coordinates is required. Since the maximum of d standard Gaussian variables scales as √(2 ln d), to comply with the communication constraints, the threshold τ should also scale as O(√(ln d)). In Section 5, we present suitable thresholds and sufficient conditions on the number of machines and on the signal strength, which guarantee support recovery by Algorithm 2, with high probability and little communication per machine.
Remark 4.2. With known K, the communication per machine of the second round is O(K ln d) bits. For suitable choices of the threshold τ in the first round, this is negligible, or at most comparable, relative to the communication of the first round.
Remark 4.3. A two-round scheme in which, given an estimated set ˆS, the second round is identical to ours was discussed by Battey et al. (2018) in Eq. (A.2) of their supplementary material. The difference is that in their first round, similar to Lee et al. (2017), each machine sends its full debiased lasso vector, with a communication of O(d) bits. Battey et al. (2018) showed that, under certain conditions, their two-round estimator attains an optimal rate. In Section 5, we prove that for a sufficiently high SNR, our method achieves the same rate, but using much less communication.
Remark 4.4. With a higher communication per machine in the second round, it is possible for the fusion center to compute the exact centralized least squares solution corresponding to the set ˆS, denoted ˆθLS. Specifically, suppose that each machine sends to the center both the vector (Xm_ˆS)⊤Y m of length |ˆS| and the |ˆS| × |ˆS| matrix (Xm_ˆS)⊤Xm_ˆS. The center may then compute ˆθLS as follows:
\[
\hat{\theta}^{LS} = \left( \sum_{m=1}^{M} (X^m_{\hat{S}})^{\top} X^m_{\hat{S}} \right)^{-1} \sum_{m=1}^{M} (X^m_{\hat{S}})^{\top} Y^m. \tag{12}
\]
With K known and |ˆS| = K, such a second round has a communication of O(K²) bits. If the sparsity K is non-negligible, this is much higher than the O(K) bits of our original scheme. In particular, if K = O(d^{1/2}), the resulting communication is comparable to that of sending the full debiased lasso vector.
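Pooling the per-machine sufficient statistics of Remark 4.4 reproduces the centralized least squares solution exactly. In the illustrative sketch below (our code, not the paper's), each machine contributes its Gram matrix and cross-product, and the center solves the pooled normal equations of Eq. (12):

```python
import numpy as np

def centralized_ls(X_list, Y_list, S_hat):
    """Eq. (12): exact least squares on S_hat from per-machine sufficient statistics."""
    S = sorted(S_hat)
    k = len(S)
    gram = np.zeros((k, k))
    xty = np.zeros(k)
    for X_m, Y_m in zip(X_list, Y_list):
        Xs = X_m[:, S]
        gram += Xs.T @ Xs   # (X^m_S)^T X^m_S, a k x k matrix
        xty += Xs.T @ Y_m   # (X^m_S)^T Y^m, a length-k vector
    return np.linalg.solve(gram, xty)
```

Because the normal equations are additive across machines, the result coincides with solving least squares on the stacked data, up to numerical precision.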
Remark 4.5. In practice, the sparsity K is often unknown. Instead of step 9 in Algorithm 2, one alternative is to estimate S by thresholding the number of votes: for some threshold τvotes > 0, ˆS could be set as all indices i such that Vi > τvotes. Lemma 3 in Appendix A shows that, under suitable conditions, non-support indices have a small probability of receiving more than 2 ln d votes. Hence, τvotes = 2 ln d is a reasonable choice for this threshold.
555
+
556
+ R. Fonseca and B. Nadler/Distributed Sparse Linear Regression
557
+ 12
558
+ 4.1. Variations of Algorithm 2
559
+ Various adaptations of Algorithm 2 are possible and may offer better perfor-
560
+ mance. One example is a top L algorithm where each machine sends to the cen-
561
+ ter the indices of the L largest entries of |ˆξm|, for some parameter K ≤ L ≪ d.
562
+ A similar approach was proposed in Amiraz, Krauthgamer and Nadler (2022)
563
+ for the simpler problem of sparse normal means estimation. One advantage of
564
+ this variant is that its communication per machine is fixed and known a priori
565
+ O(L ln d). This is in contrast to the above thresholding based scheme, whose
566
+ communication per machine is random.
A different variant is to use sums of signs to estimate the support. Here machines send both the indices corresponding to the largest entries in |ˆξm| and their signs. Hence, in step 6 of Algorithm 2 the message sent by machine m is
\[
S^m = \left\{ \left(i, \operatorname{sign}(\hat{\xi}^m_i)\right) : |\hat{\xi}^m_i| > \tau \right\}.
\]
Next, the fusion center computes for each index i ∈ [d] its corresponding sum of received signs, i.e.,
\[
V^{\mathrm{sign}}_i = \sum_{m=1}^{M} \operatorname{sign}(\hat{\xi}^m_i)\, I\left( \left(i, \operatorname{sign}(\hat{\xi}^m_i)\right) \in S^m \right). \tag{13}
\]
For known K, the estimated support set consists of the K indices with the largest values of |V^sign_i|. This algorithm uses a few more bits than a voting scheme. However, sums of signs are expected to better distinguish between support and non-support coefficients when the number of machines is large. The reason is that at non-support indices j ∉ S, the random variable V^sign_j has approximately zero mean, unlike the sum of votes Vj, whereas at support indices |V^sign_i| ≈ Vi, since support indices are unlikely to be sent to the fusion center with the opposite sign of θ∗_i. In the simulation section we illustrate the improved performance of a sign-based over a votes-based distributed scheme.
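The sign-based fusion rule of Eq. (13) can be sketched as follows, with `xi` the M × d array of standardized estimates (an illustration of ours; the variable names are not from the paper). Note how opposite-signed noise votes cancel in V^sign, while consistently signed support votes accumulate:

```python
import numpy as np

def sign_vote_support(xi, tau, K):
    """Eq. (13): sum signs of entries exceeding tau, keep the top-K |V^sign|."""
    # Each machine contributes sign(xi[m, i]) only for the indices it would send.
    sent = np.abs(xi) > tau
    V_sign = np.sum(np.sign(xi) * sent, axis=0)
    return set(np.argsort(np.abs(V_sign), kind="stable")[-K:])
```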
5. Theoretical results

In this section, we present a theoretical analysis of one of our schemes. Specifically, Theorems 2 and 3 both show that under suitable conditions, with high probability Algorithm 2 achieves exact support recovery with little communication per machine. In Theorem 2, the number of machines is relatively large, and the communication per machine is linear in the sparsity K. In Theorem 3, the number of machines M is logarithmic in d, in which case the communication per machine is much higher, though still sublinear in d. Both theorems are based on the Gaussian approximation in Lemma 1 and on probability bounds for binomial random variables. Their proofs appear in Appendix A.
To put our theorems in context, let us briefly review previous results on exact support recovery in the simpler (non-distributed) sparse linear regression setting. A key quantity characterizing the ability to exactly recover the support is the signal strength, defined as θmin = min_{i∈S} |θ∗_i|. As proven by Wainwright (2009), under suitable conditions on the design matrix, the lasso estimator based on n samples and an appropriately chosen regularization parameter λn achieves exact support recovery with high probability, provided that θmin ≳ √((ln d)/n). The same rate θmin ≳ √((ln d)/n) is also sufficient for support recovery using a debiased lasso estimator (see, e.g., Section 2.2 of Javanmard and Montanari (2014b)). In a distributed setting, Lee et al. (2017) proved that with high probability, their scheme achieves exact support recovery when θmin ≳ √((ln d)/(nM)). While this result matches the centralized setting, their scheme requires each machine to send to the center its d-dimensional debiased lasso estimate, incurring O(d) communication per machine. Hence, an interesting range for the signal strength, for the study of support recovery under communication constraints, is
\[
\sqrt{\frac{\ln d}{nM}} \;\lesssim\; \theta_{\min} \;\lesssim\; \sqrt{\frac{\ln d}{n}}.
\]
In this range, individual machines may be unable to exactly recover the support using the lasso or debiased lasso estimators.
To derive support recovery guarantees, we assume the smallest nonzero coefficient of θ∗ is sufficiently large, namely |θ∗_i| ≥ θmin for all i ∈ S and some suitable θmin > 0. For our analysis below, conditional on the design matrices X¹, . . . , XM at the M machines, it will be convenient to make the following change of variables from θmin to the (data-dependent) SNR parameter r,
\[
\theta_{\min} = \theta_{\min}(d, \sigma, r, n, c_{\Omega}) = \sigma\sqrt{\frac{2 c_{\Omega}}{n}\, r \ln d}, \tag{14}
\]
where cΩ is defined as
\[
c_{\Omega} = \max_{i \in [d],\, m \in [M]} \left( \hat{\Omega}^m \hat{\Sigma}^m (\hat{\Omega}^m)^{\top} \right)_{ii}. \tag{15}
\]
Recall from Eq. (9) that, by Theorem 1, σ²(n⁻¹ˆΩm ˆΣm(ˆΩm)⊤)_ii is the asymptotic variance of ˆθm_i. Hence, σ²cΩ/n is the largest variance among all d coordinates of the M debiased estimators computed by the M machines. In terms of the SNR parameter r, the range of interest is thus 1/M < r < 1.
Recall that our scheme is based on thresholding the normalized debiased lasso estimators ˆξm_k of Eq. (9). We denote the corresponding normalized signal by
\[
\vartheta^m_k = \frac{\sqrt{n}\,\theta^*_k}{\sigma\left( \hat{\Omega}^m \hat{\Sigma}^m (\hat{\Omega}^m)^{\top} \right)^{1/2}_{kk}}, \qquad \forall k \in [d]. \tag{16}
\]
Lemma 1 states that, under suitable conditions, ˆξm_k − ϑm_k has approximately a standard Gaussian distribution. This property plays an important role in our theoretical results. Eq. (7) of Lemma 1 provides a bound on the error between the CDF of ˆξm_k − ϑm_k and that of a standard Gaussian. For a threshold τ, let ϵ(τ) be the largest of these error bounds over all d coordinates in all M machines,
\[
\epsilon(\tau) = \max_{k \in [d],\, m \in [M]} \left\{ \frac{\delta_R\, \varphi(\tau - \vartheta^m_k)}{\sigma\left( \hat{\Omega}^m \hat{\Sigma}^m (\hat{\Omega}^m)^{\top} \right)^{1/2}_{kk}} + 2de^{-c_* n/K} + de^{-cn} + \frac{6}{d^2} \right\}. \tag{17}
\]
Recall that δR, defined in Eq. (8), is an upper bound on the bias ˆθm_k − θ∗_k. By Lemma 5.4 of van de Geer et al. (2014), if the row sparsity of Ω satisfies KΩ = o(n/ln d) and ˆΩm is computed with regularization λΩ ∝ √((ln d)/n), then (ˆΩm ˆΣm(ˆΩm)⊤)_kk ≥ Ωkk + oP(1) ≥ Cmax⁻¹ + oP(1) as (ln d)/n → 0. Hence, when n and d are large, all terms on the right-hand side of Eq. (17) are small, and the Gaussian approximation is accurate.
To prove that our scheme recovers S with high probability, we assume that:

(C1) The n samples in each of the M machines are i.i.d. from the model (1), and the conditions of Theorem 1 all hold. Additionally, all machines use the same regularization parameters λ and λΩ to compute the lasso (2) and debiased lasso (3) estimators, respectively.

(C2) |θ∗_i| ≥ θmin(d, σ, r, n, cΩ) for all i ∈ S, where θmin and cΩ are defined in Eqs. (14) and (15), respectively.

The following theorem provides a recovery guarantee for Algorithm 2, where the sparsity K is assumed to be known to the fusion center.
Theorem 2. Suppose Algorithm 2 is run with threshold τ = √(2 ln d). Assume that d is sufficiently large and that the SNR in Eq. (14) satisfies
\[
\frac{1}{4}\,\frac{\ln^2\!\left(48\sqrt{\pi}\,\ln^{3/2} d\right)}{\ln^2 d} < r < 1.
\]
Additionally, assume conditions C1 and C2 hold and the approximation error in Eq. (17) satisfies ϵ(τ) ≤ 1/d. Then, if the number of machines satisfies
\[
\frac{8 \ln d}{\dfrac{\sqrt{2(1-\sqrt{r})^2 \ln d}}{2(1-\sqrt{r})^2 \ln d + 1} - \epsilon(\tau)\, d^{(1-\sqrt{r})^2}}\; d^{(1-\sqrt{r})^2} \;\le\; M \;\le\; \frac{d}{3}, \tag{18}
\]
with probability at least 1 − (K+1)/d, Algorithm 2 achieves exact support recovery. Additionally, the expected communication per machine is O(K ln d) bits.
Let us make a few remarks regarding this theorem. The upper bound M ≤ d/3 is rather artificial and stems from the fact that in our proof we assume M < d. It is possible to derive support recovery guarantees also for the case M > d, though this setting seems unlikely in practice. The lower bound on the number of machines is required to guarantee that, with high probability, all support indices receive more votes than any non-support coordinate. The lower bound on the SNR r ensures that the lower bound on the number of machines in Eq. (18) is indeed smaller than d/3, so that the range of possible values for M is not empty. A similar lower bound on r appeared in Amiraz, Krauthgamer and Nadler (2022) after their Theorem 1.B.

Another important remark is that the threshold τ = √(2 ln d) in Theorem 2 is relatively high, so each machine sends only a few indices to the center. However, to guarantee support recovery, this requires a relatively large number of machines, M = polylog(d) · d^{(1−√r)²}. In Theorem 3, we give sufficient conditions to still achieve a high probability of exact support recovery when the number of machines is much smaller, of order only logarithmic in d. The price to pay is a higher communication per machine, which nonetheless is still sub-linear in d, namely much lower than the communication required to send the whole debiased lasso vector. For the next theorem, we assume that a lower bound on the SNR is known to all machines, which set a threshold that depends on it.
Theorem 3. Suppose Algorithm 2 is run with threshold τ = √(2r ln d). Assume that d is sufficiently large and that the SNR in Eq. (14) satisfies
\[
\frac{\ln(16 \ln d)}{\ln d} < r < 1.
\]
Additionally, assume conditions C1 and C2 hold and the approximation error in Eq. (17) satisfies ϵ(τ) < 1/(4d^r). If the number of machines satisfies
\[
\frac{16 \ln d}{1 - 2\epsilon(\tau)} \le M \le d^r, \tag{19}
\]
then with probability at least 1 − (K+1)/d, Algorithm 2 achieves exact support recovery, with expected communication per machine O(d^{1−r} ln d) bits.
Beyond support recovery, another quantity of interest is the accuracy of the distributed estimator ˆθ of Eq. (11). The following corollary, proven in Appendix A, shows that once S is exactly recovered, ˆθ is close to the oracle least squares estimator ˆθLS, computed with all the data in a single machine and with knowledge of the true support. Consequently, ˆθ is also close to the true vector θ∗.

Corollary 1. Assume the conditions of Theorem 2 hold. Let N = nM denote the total sample size over all M machines. If M = O(NK/(max{K, ln N})²), then
\[
\|\hat{\theta} - \hat{\theta}^{LS}\|_2 = O_P\!\left( \frac{\sqrt{M}\,\max\{K, \ln N\}}{N} \right)
\quad \text{and} \quad
\|\hat{\theta} - \theta^*\|_2 = O_P\!\left( \sqrt{\frac{K}{N}} \right),
\]
as d, N → ∞ and (ln d)/(N/M) → 0, where ˆθ is defined in Eq. (11) and ˆθLS is the least squares solution using all N samples and with a known S, as in Eq. (12), appended by zeros at all coordinates j ∉ S.
Corollary 1 shows that in a high-dimensional sparse setting, for a sufficiently strong signal, Algorithm 3 with threshold τ = √(2 ln d) achieves the same error rate as the oracle estimator. Let us put this result in a broader context. If the support S were known, then each machine could compute its least squares solution restricted to S and send it to the center for averaging. As discussed in Rosenblatt and Nadler (2016), in a general setting of M-estimators, if the number of machines is not too large, averaging is optimal and to leading order coincides with the centralized solution. Yet, while being rate optimal, averaging does lead to a loss of accuracy and is not as efficient as the oracle estimator; see Dobriban and Sheng (2021).

As mentioned in Remark 4.3, Battey et al. (2018) also proposed a two-round estimator that attains the optimal rate in Corollary 1, but it requires each machine to send at least d values to the fusion center. In contrast, ˆθ is computed with a much lower communication cost. Similar results can also be established for Algorithm 3 under the conditions of Theorem 3.
5.1. Comparison to other works

Theorems 2 and 3 can be viewed as analogous to Theorems 2.A and 2.B of Amiraz, Krauthgamer and Nadler (2022), who studied distributed estimation of a sparse vector under communication constraints. A key difference is that in their setting, the d coordinates of the estimated vector at each machine are all independent and unbiased. This allowed them to analyze a top-L scheme, since the probability of sending any non-support index was the same and could easily be bounded by L/d. As such, their proofs do not directly apply to our setting. In our case, the debiased lasso ˆθm still has a small bias term, and its d coordinates are in general correlated. This implies that the probabilities of sending non-support indices are not all identical. In our analysis, we bypass this issue by instead analyzing the thresholding rule of step 6 of Algorithm 2, where each ˆξm_i, for i ∈ [d], is compared separately to a fixed threshold. This way, we do not need to account for the complex dependence among different coordinates of the debiased lasso.
Barghi, Najafi and Motahari (2021) considered a similar distributed scheme to estimate the support of θ∗. They did not normalize the debiased lasso estimate at each machine, and, more importantly, their estimated support set consists only of those indices that received at least M/2 votes. The authors performed a theoretical analysis of their scheme, though various quantities are described only up to unspecified multiplicative constants. We remark that, both theoretically and empirically, the SNR must be quite high for support indices to obtain at least M/2 votes. In our work, we present explicit expressions for the minimal SNR in Eq. (14) that suffices for exact support recovery, requiring far fewer votes at the fusion center. The simulations in the next section illustrate the advantages of our scheme compared to that of Barghi, Najafi and Motahari (2021).
6. Simulations

We present simulations that illustrate the performance of our proposed methods in comparison to other distributed schemes. We focus on methods based on debiased lasso estimates, and specifically consider the following five distributed schemes to estimate θ∗ and its support.

• thresh-votes: Algorithm 2 with a threshold of τ = √(2 ln d).
• top-L-votes: The top-L algorithm presented in Section 4.1. Each machine sends the indices of its top L values of |ˆξm_i| to the fusion center.
• top-L-signs: Each machine sends both the indices and signs of the top L values of |ˆξm_i|. The center forms ˆS using sums of signs as in Eq. (13).
• BNM21: The algorithm proposed by Barghi, Najafi and Motahari (2021) with a threshold τ = √(2 ln d). It is similar to Algorithm 2; the difference is that ˆS consists of the indices i ∈ [d] with Vi ≥ M/2.
• AvgDebLasso: Based on Lee et al. (2017), each machine sends its debiased lasso estimate ˆθm. The center computes ˆθavg = (1/M)Σ_{m=1}^{M} ˆθm and estimates the support as the indices with the K largest values |ˆθavg_i|, i ∈ [d].
In all these algorithms, each machine computes a debiased lasso estimator using its own data. The methods differ in the content and length of the messages sent to the fusion center, and hence in the manner in which the fusion center estimates S and θ∗. For a fair comparison, we run all methods with the same regularization parameters λΩ = 2√((ln d)/n) and λ = 8√((ln d)/n) in each machine to compute the precision matrix ˆΩm and the lasso estimator ˜θm, respectively.
We performed simulations both with known sparsity, where the methods were run as described above, and with unknown sparsity. In the latter case, for thresh-votes, top-L-votes and top-L-signs, the center computes ˆS as the indices i such that Vi or |V^sign_i| is larger than 2 ln d; see Remark 4.5 in Section 4. For AvgDebLasso, we set ˆS as the indices i such that |ˆθavg_i| > 11(ln d)/n. This scaling is motivated by Theorem 16 of Lee et al. (2017), whereby in our simulation setting this term is larger than the term O(√(ln d/(nM))) in their bound for ∥ˆθavg − θ∗∥∞. The factor 11 was manually tuned for good results. BNM21 is unchanged, as it does not require knowledge of K.
970
+ We evaluate the accuracy of an estimated support set ˆS by the F-measure,
971
+ F-measure = 2 · precision · recall
972
+ precision + recall ,
973
+ where precision = |S ∩ ˆS|/| ˆS| and recall = |S ∩ ˆS|/K. An F-measure equal to
974
+ one indicates that exact support recovery was achieved.
975
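As a quick illustration, the F-measure of a support estimate can be computed as follows (a minimal sketch of the formula above, not code from the paper; the index sets in the example are hypothetical):

```python
def f_measure(S_true, S_hat):
    """F-measure of an estimated support set S_hat against the true support S_true."""
    if not S_hat:
        return 0.0
    overlap = len(S_true & S_hat)
    precision = overlap / len(S_hat)
    recall = overlap / len(S_true)  # |S_true| = K
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: K = 4 true indices, the estimate recovers 3 of them.
S = {3, 17, 42, 99}
S_hat = {3, 17, 42, 7}
print(f_measure(S, S_hat))  # precision = recall = 3/4, so F-measure = 0.75
```

An F-measure of one is returned exactly when $\hat S = S$, matching the exact-recovery criterion in the text.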
Given a support estimate $\hat S$, the vector $\theta^*$ is estimated as follows. For all four methods excluding AvgDebLasso, we perform a second round and compute the estimator $\hat\theta$ given by Eq. (11). AvgDebLasso is a single-round scheme; its estimate of $\theta^*$ consists of $\hat\theta^{avg}$ restricted to the indices $i \in \hat S$. The error of an estimate $\hat\theta$ is measured by its $\ell_2$-norm $\|\hat\theta - \theta^*\|_2$. As a benchmark for the achievable accuracy, we also computed the oracle centralized estimator $\hat\theta^{LS}$ that knows the support $S$ and estimates $\theta^*$ by least squares on the whole data.
We generated data as follows. The design matrix $X^m \in \mathbb{R}^{n\times d}$ in machine $m$ has $n$ rows i.i.d. $N(0,\Sigma)$, with $\Sigma_{i,j} = 0.5^{|i-j|}$. We then computed for each machine its matrix $\hat\Omega^m$ and the quantity $c_\Omega$ in Eq. (15). Next, we generated a $K$-sparse vector $\theta^* \in \mathbb{R}^d$, whose nonzero indices are sampled uniformly at random from $[d]$. Its nonzero coefficients have random $\pm 1$ signs, and their magnitudes are chosen from $K$ equally spaced values $\{\theta_{\min},\ldots,2\theta_{\min}\}$, where
$$\theta_{\min} = \sqrt{\frac{2 c_\Omega \ln d}{n}}.$$
These matrices and the vector $\theta^*$ are then kept fixed. For a simulation with SNR parameter $r$, we set $\sigma = 1/\sqrt{r}$. Finally, in each realization we generated the response $Y^m \in \mathbb{R}^n$ according to the model (1).

Fig 1: Results for known sparsity, averaged over 500 realizations, as a function of SNR in the range $r \in [1/M, 1]$. (a) F-measure; (b) $\ell_2$ error, on a log scale.
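The construction of the sparse vector $\theta^*$ described above can be sketched as follows (our own simplified illustration, not code from the paper; for brevity, $c_\Omega$ is passed in as a known constant rather than computed from the estimated precision matrices):

```python
import math
import random

def make_sparse_theta(d, K, n, c_omega, rng):
    """K-sparse vector with a uniformly random support, random +/-1 signs,
    and magnitudes equally spaced in [theta_min, 2*theta_min]."""
    theta_min = math.sqrt(2 * c_omega * math.log(d) / n)
    support = rng.sample(range(d), K)  # K distinct indices from [d]
    if K > 1:
        mags = [theta_min * (1 + k / (K - 1)) for k in range(K)]
    else:
        mags = [theta_min]
    theta = [0.0] * d
    for idx, mag in zip(support, mags):
        theta[idx] = rng.choice([-1, 1]) * mag
    return theta, set(support)

# Parameters matching the first simulation; c_omega = 1.0 is a placeholder value.
rng = random.Random(0)
theta, S = make_sparse_theta(d=5000, K=5, n=250, c_omega=1.0, rng=rng)
```

The smallest and largest nonzero magnitudes equal $\theta_{\min}$ and $2\theta_{\min}$ by construction, so the minimal-signal condition of the theory is met with equality.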
Our first simulation compared the performance of the various schemes as a function of the SNR, with a known sparsity $K = 5$. We fixed the dimension $d = 5000$, the sample size in each machine $n = 250$, and the number of machines $M = 100$. The top-$L$ method was run with $L = K$ (see Sec. 4.1). Figure 1a displays the F-measure of each method, averaged over 500 realizations. As expected, at low SNR values, AvgDebLasso achieved the best performance in terms of support recovery. However, for stronger signals with $r > 0.4$, both top-K-votes and thresh-votes achieved an F-measure of one, in accordance with our theoretical results regarding exact recovery. In particular, at sufficiently high SNR, our methods estimate the support as accurately as AvgDebLasso, but with 2-3 orders of magnitude less communication. The scheme of BNM21 achieves good performance only at higher SNR.

Fig 2: Results for unknown sparsity, averaged over 500 realizations, as a function of SNR in the range $r \in [1/M, 1]$. (a) F-measure; (b) $\ell_2$ error on a log scale.
Figure 1b shows the errors $\|\hat\theta - \theta^*\|_2$, averaged over 500 realizations. At low SNR, $1/M < r < 0.1$, AvgDebLasso has the smallest error. However, for $r > 0.4$, thresh-votes and top-K-votes yield more accurate estimates. Thus, Figure 1b shows the benefit of an accurate support estimate followed by a distributed least squares in a second round. Indeed, at these SNR levels, our methods exactly recover the support. Consequently, the second round reduces to a distributed ordinary least squares restricted to the correct support set $S$. In accordance with Corollary 1, Algorithm 2 then has the same error rate as the oracle.

Next, we present simulation results for unknown sparsity, as a function of the SNR in the range $r \in [1/M, 1]$. As seen in Figure 2, throughout this SNR range, AvgDebLasso with a threshold of $11(\ln d)/n$ achieves an F-measure close to one and $\ell_2$ errors close to those of the oracle. In contrast, thresh-votes achieves accurate estimates only for $r > 0.6$. These results illustrate that even when the sparsity is unknown, our schemes can accurately estimate the vector $\theta^*$ and its support, albeit at a higher SNR as compared to the case of known sparsity.

Finally, we present simulation results as a function of the number of samples $n \in [n_{\min}, n_{\max}] = [100, 400]$ with $M = 100$ machines, and as a function of the number of machines $M \in [M_{\min}, M_{\max}] = [40, 160]$ with $n = 250$ samples per machine. Initially, for each machine $m \in [M_{\max}]$, we generated its full data matrix $X^m$ with $n_{\max}$ samples. We then computed the corresponding matrix $\hat\Omega^m$ and the quantity $c_\Omega$ of Eq. (15), and generated the sparse vector $\theta^*$ as described above. To save on run-time, in simulations with a smaller number of samples $n < n_{\max}$ we nonetheless used the decorrelation matrices $\hat\Omega^m$ that correspond to $n_{\max}$ samples. This can be viewed as a semi-supervised setting, as also mentioned in Javanmard and Montanari (2018). We fixed $r = 0.4$ and $d = 5000$, and the sparsity $K = 5$ was known to the center. Figure 3 shows the resulting $\ell_2$ errors, averaged over 1000 simulations. In this simulation, top-K-votes and thresh-votes are close to the centralized least squares oracle, since their support estimates are accurate, as can be seen in Figure 1 for $r = 0.4$. Both plots in Figure 3 show a linear dependence on a log-log scale with a slope of approximately $-1/2$, namely the resulting errors decay as $1/\sqrt{n}$ and $1/\sqrt{M}$, respectively. This is in agreement with our theoretical result in Corollary 1.

Fig 3: $\ell_2$ error vs. sample size $n$ (a) and vs. number of machines $M$ (b), both on a log-log scale, at an SNR of $r = 0.4$. Values are averaged over 1000 realizations.
6.1. The advantages of sending signs

We now illustrate the advantages of using sums of signs instead of sums of votes, in terms of both support recovery and parameter estimation. In this simulation, we fixed $n = 250$, $d = 5000$, $M = 100$ and $K = 25$. The results in Figure 4 show that using sums of signs is more accurate than using sums of votes at low SNR values in the range $r \in [1/M, 0.1]$. Figure 4 also shows that using $L = 5K$ instead of $L = K$ significantly improves the accuracy of the support estimator, at the expense of increased communication. This illustrates the potential trade-offs between the accuracy of support estimation and communication.

Fig 4: Results for schemes using sums of signs and sums of votes, averaged over 500 realizations, as a function of $r \in [1/M, 0.1]$. (a) F-measure; (b) $\log_{10}$ of $\ell_2$ error.
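For concreteness, the center-side tallying behind the votes and signs rules can be sketched as follows (our own illustrative code, not from the paper; `messages` is a hypothetical list of per-machine messages, each holding the (index, sign) pairs of that machine's top-$L$ coordinates, and the toy threshold stands in for the $2\ln d$ rule of Remark 4.5):

```python
from collections import defaultdict

def aggregate(messages, threshold):
    """Tally votes V_i and signed votes V_i^sign from per-machine messages,
    and return the support estimates under the two unknown-sparsity rules."""
    votes = defaultdict(int)   # V_i: number of machines that sent index i
    signed = defaultdict(int)  # V_i^sign: sum of the reported signs for index i
    for msg in messages:
        for idx, sign in msg:
            votes[idx] += 1
            signed[idx] += sign
    # votes rule: V_i > threshold; signs rule: |V_i^sign| > threshold
    S_votes = {i for i, v in votes.items() if v > threshold}
    S_signs = {i for i, v in signed.items() if abs(v) > threshold}
    return S_votes, S_signs

# Hypothetical toy example with M = 4 machines and L = 2 indices per message.
messages = [[(3, +1), (8, -1)], [(3, +1), (5, +1)], [(3, +1), (8, +1)], [(3, +1), (5, -1)]]
S_votes, S_signs = aggregate(messages, threshold=2)
print(S_votes, S_signs)  # both rules recover {3}: it received 4 consistent votes
```

The example shows why signs help: index 8 collects two votes, but its signs cancel, so the signed tally suppresses it more decisively than the raw vote count.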
7. Summary and Discussion

The development and analysis of distributed statistical inference schemes having low communication are important contemporary problems. Given its simplicity and ubiquity, the sparse linear regression model has attracted significant attention in the literature. Most previous inference schemes for this model require communication per machine of at least $O(d)$ bits. In this work we proved theoretically, and showed via simulations, that under suitable conditions, accurate distributed inference for sparse linear regression is possible with a much lower communication per machine.

Over the past years, several authors have studied distributed statistical inference under communication constraints. Specifically, for sparse linear regression, Braverman et al. (2016) proved that without a lower bound on the SNR, to obtain a risk comparable to that of the minimax lower bound, a communication of at least $\Omega(M\min(n,d)/\log d)$ bits is required. Acharya et al. (2019) proved that, under certain conditions, rate-optimal estimates of a linear regression model can be computed using total communication sublinear in the dimension. However, as they mention in their Appendix B.3, a precise characterization of the ability to recover the support with communication sublinear in $d$, and of its dependence on other parameters such as the SNR and the number of machines, is still an open problem. In our theoretical results, we presented explicit expressions for the minimal SNR at which our scheme is guaranteed to achieve exact recovery with high probability and with sublinear communication. While we did not address the open problem of tight lower bounds, our results highlight the potential tradeoffs between SNR, communication and number of machines.
We believe that using more refined techniques, our theoretical analysis can be extended and improved. For example, since the $d$ coordinates of a debiased lasso estimator are correlated, sharp concentration bounds for dependent variables, like those of Lopes and Yao (2022), could improve our analysis and extend it to other schemes such as top-L. In our analysis, we focused on a setting where both the noise and the covariates have a Gaussian distribution. Lee et al. (2017) and Battey et al. (2018), for example, considered sub-Gaussian distributions for these terms. Our results can be adapted to this case, but a careful control of the various constants in the probability bounds is needed to derive explicit expressions.

Finally, our low-communication schemes could also be applied to other problems, such as sparse M-estimators, sparse covariance estimation and distributed estimation of jointly sparse signals. We leave these for future research.
Appendix A: Proofs

A.1. Proof of Lemma 1

Proof. Consider the debiased lasso estimator $\hat\theta_i$ given in Eq. (3). Making the change of variables $t = \sigma\sqrt{c_{ii}}\,\tau/\sqrt{n}$ gives that
$$\Pr\left(\frac{\sqrt{n}(\hat\theta_i - \theta^*_i)}{\sigma\sqrt{c_{ii}}} \le \tau\right) = \Pr\left(\hat\theta_i - \theta^*_i \le t\right). \qquad (20)$$
It follows from Eq. (5) that $\Pr(\hat\theta_i - \theta^*_i \le t) = \Pr(Z_i \le \sqrt{n}\,t - R_i)$. By the law of total probability,
$$\begin{aligned}
\Pr\left(Z_i \le \sqrt{n}\,t - R_i\right) &= \Pr\left(\{Z_i \le \sqrt{n}\,t - R_i\} \cap \{|R_i| \le \delta_R\}\right) + \Pr\left(\{Z_i \le \sqrt{n}\,t - R_i\} \cap \{|R_i| > \delta_R\}\right) \\
&\le \Pr\left(Z_i \le \sqrt{n}\,t + \delta_R\right) + \Pr\left(|R_i| > \delta_R\right).
\end{aligned} \qquad (21)$$
From Eq. (5) it follows that $\Pr(Z_i \le \sqrt{n}\,t + \delta_R) = \Phi\left(\frac{\sqrt{n}\,t + \delta_R}{\sigma\sqrt{c_{ii}}}\right)$. Hence, from Eqs. (20) and (21) we get that
$$\Pr\left(\frac{\sqrt{n}(\hat\theta_i - \theta^*_i)}{\sigma\sqrt{c_{ii}}} \le \tau\right) \le \Phi\left(\frac{\sqrt{n}\,t + \delta_R}{\sigma\sqrt{c_{ii}}}\right) + \Pr\left(|R_i| > \delta_R\right). \qquad (22)$$
The second term on the right-hand side of Eq. (22) can be bounded by Eq. (6). This gives the last three terms on the right-hand side of Eq. (7).

Let us analyze $\Phi\left(\frac{\sqrt{n}\,t + \delta_R}{\sigma\sqrt{c_{ii}}}\right)$. For any fixed $x$ and $\delta > 0$, by the mean value theorem, $|\Phi(x+\delta) - \Phi(x)| \le \delta\,\varphi(x^*)$, where $x^* \in (x, x+\delta)$. Since $\varphi(x)$ is a decreasing function for $x > 0$, we have $\varphi(x^*) \le \varphi(x)$. Thus,
$$|\Phi(x+\delta) - \Phi(x)| \le \delta\,\varphi(x).$$
Applying this result with $x = \frac{\sqrt{n}\,t}{\sigma\sqrt{c_{ii}}}$ and $\delta = \frac{\delta_R}{\sigma\sqrt{c_{ii}}}$ gives
$$\left|\Phi\left(\frac{\sqrt{n}\,t + \delta_R}{\sigma\sqrt{c_{ii}}}\right) - \Phi\left(\frac{\sqrt{n}\,t}{\sigma\sqrt{c_{ii}}}\right)\right| \le \frac{\delta_R}{\sigma\sqrt{c_{ii}}}\,\varphi\left(\frac{\sqrt{n}\,t}{\sigma\sqrt{c_{ii}}}\right).$$
Combining the above with Eq. (22), and replacing $t = \sigma\sqrt{c_{ii}}\,\tau/\sqrt{n}$, proves Eq. (7).
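The inequality $|\Phi(x+\delta) - \Phi(x)| \le \delta\,\varphi(x)$ for $x > 0$ used above is easy to verify numerically (our own sanity check, not part of the paper, with the standard normal CDF written via `math.erf`):

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# For x > 0 and delta > 0, the increment of Phi is at most delta * phi(x),
# since phi is decreasing on (0, infinity).
for x in [0.1, 0.5, 1.0, 2.0, 3.0]:
    for delta in [0.01, 0.1, 0.5]:
        assert Phi(x + delta) - Phi(x) <= delta * phi(x) + 1e-15
```

The small additive tolerance only guards against floating-point round-off; the mathematical inequality itself is strict for $x > 0$.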
A.2. Proofs of Theorem 2 and Theorem 3

Let us first provide an overview of the proofs. Recall that we consider a distributed setting with $M$ machines, each with its own data, where each machine sends an independent message containing a few indices to the fusion center. For any index $i \in [d]$ and machine $m$, let $p^m_i$ denote the probability that index $i$ is sent by machine $m$, namely that $|\hat\xi^m_i| > \tau$. Since the data at different machines are statistically independent, the total number of votes $V_i$ received at the fusion center for index $i$ is distributed as $V_i \sim \sum_{m=1}^{M}\mathrm{Ber}(p^m_i)$. Our proof strategy is as follows: we compute an upper bound on $p^m_j$ for non-support indices $j \notin S$, and a lower bound on $p^m_i$ for support indices $i \in S$. Next, we employ tail bounds for binomial random variables. Combining these implies that under suitable conditions on the SNR and the number of machines, with high probability, the number of votes $V_i$ can perfectly distinguish between support and non-support indices.

To carry out this proof outline, we now introduce some auxiliary lemmas and results. The following are standard Gaussian tail bounds:
$$\frac{t}{\sqrt{2\pi}(t^2+1)}\,e^{-t^2/2} \le 1 - \Phi(t) \le \frac{1}{\sqrt{2\pi}\,t}\,e^{-t^2/2}, \qquad \forall t > 0. \qquad (23)$$
We also use the following inequality for a binomial variable $V \sim \mathrm{Bin}(M, p)$ (Boucheron, Lugosi and Massart, 2013, Exercise 2.11). For any $0 < p \le a < 1$,
$$\Pr(V > Ma) \le \left[\left(\frac{p}{a}\right)^a\left(\frac{1-p}{1-a}\right)^{1-a}\right]^M = e^{MF(a,p)}, \qquad (24)$$
where
$$F(a,p) = a\ln\left(\frac{p}{a}\right) + (1-a)\ln\left(\frac{1-p}{1-a}\right). \qquad (25)$$
The following result appeared in (Amiraz, Krauthgamer and Nadler, 2022, Lemma A.3). It is used in our proof to show that, with high probability, support indices receive a relatively large number of votes.

Lemma 2. Assume that $\min_{i\in S}|\theta^*_i|$ is sufficiently large so that for some suitable $p_{\min} > 0$, for all $i \in S$ and $m \in [M]$, $p^m_i \ge p_{\min}$. If $p_{\min} \ge \frac{8\ln d}{M}$, then
$$\Pr\left(\min_{i\in S} V_i < 4\ln d\right) \le \frac{K}{d}.$$
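The Chernoff-type bound of Eq. (24) can be checked directly against the exact binomial tail (our own numeric sanity check, not part of the paper; the parameter values below are illustrative):

```python
import math

def F(a, p):
    """Exponent F(a, p) = a ln(p/a) + (1 - a) ln((1 - p)/(1 - a)) from Eq. (25)."""
    return a * math.log(p / a) + (1 - a) * math.log((1 - p) / (1 - a))

def exact_tail(M, p, k):
    """Pr(V > k) for V ~ Bin(M, p), computed exactly."""
    return sum(math.comb(M, j) * p**j * (1 - p)**(M - j) for j in range(k + 1, M + 1))

M, p = 100, 1 / 100          # the regime of Lemma 3: p = 1/M
for k in [5, 10, 17]:        # thresholds Ma with a = k/M >= p
    a = k / M
    assert exact_tail(M, p, k) <= math.exp(M * F(a, p))
```

In this regime the exact tail is orders of magnitude below the bound, which is why the union bound over all $d$ non-support indices in Lemma 3 still leaves only an $O(1/d)$ failure probability.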
The next lemma shows that under suitable conditions, non-support indices receive relatively few votes in total.

Lemma 3. Assume that $d \ge 4$ and $M > 2\ln d$. In addition, assume that $p^m_j \le \frac{1}{M}$ for all non-support indices $j \notin S$ and all machines $m \in [M]$. Then
$$\Pr\left(\max_{j\notin S} V_j > 2\ln d\right) \le \frac{1}{d}. \qquad (26)$$

Proof. Recall that the number of votes received by an index $j \notin S$ at the fusion center is distributed as $V_j \sim \sum_{m=1}^{M}\mathrm{Ber}(p^m_j)$. Since $p^m_j \le \frac{1}{M}$ for all $j \notin S$, $V_j$ is stochastically dominated by
$$V \sim \mathrm{Bin}(M, p), \qquad \text{where} \qquad p = 1/M. \qquad (27)$$
Thus, by a union bound,
$$\Pr\left(\max_{j\notin S} V_j > t\right) \le (d-K)\cdot\Pr(V > t) \le d\cdot\Pr\left(V > M\,\frac{t}{M}\right).$$
We now apply Eq. (24) with $t \ge 1$, so that the value $a = t/M = tp$ indeed satisfies $a \ge p$. With $F(a,p)$ defined in Eq. (25), this gives
$$\Pr\left(\max_{j\notin S} V_j > t\right) \le d\cdot e^{MF(tp,p)}. \qquad (28)$$
Next, we upper bound $F(tp,p)$. Since $\ln(1+x) \le x$ holds for all $x \ge 0$,
$$F(tp,p) = -tp\ln(t) + (1-tp)\ln\left(1 + \frac{tp-p}{1-tp}\right) \le -tp\ln(t) + tp - p < -tp\ln(t) + tp = -tp\ln(t/e).$$
Inserting this into Eq. (28) with $t = 2\ln d$, $p = 1/M$ and $M > 2\ln d$ gives
$$\Pr\left(\max_{j\notin S} V_j > 2\ln d\right) \le d\,e^{MF(tp,p)} \le d\,e^{-2\ln(d)\ln(2\ln(d)/e)}.$$
For Eq. (26) to hold, we thus require that
$$2\ln(d)\ln\left(\frac{2\ln d}{e}\right) \ge \ln\left(d^2\right). \qquad (29)$$
This holds for $d \ge \exp\{e/2\} \approx 3.89$.
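A small Monte Carlo experiment illustrates the conclusion of Lemma 3: when each of $d$ non-support coordinates collects $\mathrm{Bin}(M, 1/M)$ votes, the maximum vote count stays well below $2\ln d$ (our own illustration, not from the paper, with the parameter values of the simulations in Section 6):

```python
import math
import random

# Monte Carlo illustration of Lemma 3: with p_j^m <= 1/M, the maximum number of
# votes over d non-support coordinates rarely exceeds 2 ln d.
d, M = 5000, 100
p = 1 / M
rng = random.Random(1)
max_votes = 0
for _ in range(d):  # one Bin(M, 1/M) vote count per non-support index
    v = sum(rng.random() < p for _ in range(M))
    max_votes = max(max_votes, v)
print(max_votes, 2 * math.log(d))  # max_votes is typically far below 2 ln d ~ 17.0
```

Each vote count has mean one, so even the maximum over 5000 coordinates concentrates around a small constant, which is what separates non-support indices from support indices that collect at least $4\ln d$ votes.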
Proof of Theorem 2. Our goal is to show that the event $\{\hat S = S\}$ occurs with high probability. Recall that $\hat S$ is determined at the fusion center as the $K$ indices with the largest number of votes. Further recall that $p^m_i$ denotes the probability that machine $m$ sends index $i$. Our strategy is to show that these probabilities are sufficiently large for support indices and sufficiently small for non-support indices. This allows us to apply Lemmas 2 and 3 to prove the required result. To derive bounds on $p^m_i$ we employ Lemma 1.

First, we prove that the condition of Lemma 2 holds, i.e., that
$$p^m_i \ge \frac{8\ln d}{M} \qquad \text{for all } i \in S, \qquad (30)$$
where $p^m_i = \Pr(|\hat\xi^m_i| > \tau)$, and $\hat\xi^m_i = \sqrt{n}\,\hat\theta^m_i\big/\big[\sigma\big(\hat\Omega^m\hat\Sigma^m(\hat\Omega^m)^\top\big)^{1/2}_{ii}\big]$ is the standardized debiased lasso estimator defined in Eq. (9). Without loss of generality, assume that $\theta^*_i > 0$; otherwise, we could do the same calculations for $-\hat\xi^m_i$. Clearly,
$$p^m_i = \Pr(|\hat\xi^m_i| > \tau) \ge \Pr(\hat\xi^m_i > \tau) = \Pr\left(\frac{\sqrt{n}\,(\hat\theta^m_i - \theta^*_i)}{\sigma\big(\hat\Omega^m\hat\Sigma^m(\hat\Omega^m)^\top\big)^{1/2}_{ii}} > \tau - \vartheta^m_i\right),$$
where $\vartheta^m_i$ is defined in Eq. (16). Since condition C1 holds, applying Eq. (7) of Lemma 1 gives that
$$p^m_i \ge \Phi^c(\tau - \vartheta^m_i) - \epsilon(\tau), \qquad (31)$$
where $\epsilon(\tau)$ is the error defined in Eq. (17).

Next, by condition C2, with the definition of $c_\Omega$ in Eq. (15) and Eq. (14),
$$\vartheta^m_i = \frac{\sqrt{n}\,\theta^*_i}{\big(\hat\Omega^m\hat\Sigma^m(\hat\Omega^m)^\top\big)^{1/2}_{ii}} \ge \frac{\sqrt{n}\,\theta_{\min}}{\sqrt{c_\Omega}} = \sqrt{2r\ln d}.$$
At a threshold $\tau = \sqrt{2\ln d}$ we thus obtain
$$p^m_i \ge \Phi^c\left((1-\sqrt{r})\sqrt{2\ln d}\right) - \epsilon(\tau).$$
By the Gaussian tail bound (23),
$$\Phi^c\left(\sqrt{2(1-\sqrt{r})^2\ln d}\right) \ge C(r,d)\,d^{-(1-\sqrt{r})^2},$$
where
$$C(r,d) = \frac{\sqrt{2(1-\sqrt{r})^2\ln(d)}}{\sqrt{2\pi}\,\{2(1-\sqrt{r})^2\ln(d)+1\}}.$$
Therefore, for condition (30) to hold, it suffices that
$$M \ge \frac{8\ln d}{C(r,d)\,d^{-(1-\sqrt{r})^2} - \epsilon(\tau)} = \frac{8\ln d}{C(r,d) - \epsilon(\tau)\,d^{(1-\sqrt{r})^2}}\;d^{(1-\sqrt{r})^2},$$
which is precisely condition (18) of the theorem. Notice that the requirement $\epsilon(\tau) < 1/d$ guarantees that the denominator in the fraction above is positive. The lower bound on $r$ in the theorem guarantees that the range of possible values for $M$ is non-empty.

Next, we prove that the conditions of Lemma 3 hold. The condition $M > 2\ln d$ is satisfied given the requirement of Eq. (18). The next condition to verify is $p^m_j \le 1/M$ for all $j \notin S$. Since $p^m_j = \Pr(|\hat\xi^m_j| > \tau)$ and $\vartheta^m_j = 0$ for $j \notin S$, then
$$p^m_j - 2\Phi^c(\tau) = \left[\Pr(\hat\xi^m_j > \tau) - \Phi^c(\tau)\right] + \left[\Pr(\hat\xi^m_j < -\tau) - \Phi^c(\tau)\right].$$
According to Eq. (5) of Theorem 1, apart from a bias term, $\hat\xi^m_j$ and $-\hat\xi^m_j$ have the same distribution because $\vartheta^m_j = 0$. Hence, applying Eq. (7) of Lemma 1 to each of the bracketed terms above separately gives that
$$p^m_j \le 2\Phi^c(\tau) + 2\epsilon(\tau),$$
where $\epsilon(\tau)$ was defined in Eq. (17). By Eq. (23), for $\tau = \sqrt{2\ln d}$ we have $2\Phi^c(\tau) \le \frac{2}{\sqrt{2\pi}\sqrt{2\ln d}}\,\frac{1}{d} = \frac{1}{\sqrt{\pi\ln d}}\,\frac{1}{d}$. Since $\epsilon(\tau) < 1/d$, we obtain that for all $j \notin S$,
$$p^m_j \le \frac{1}{\sqrt{\pi\ln d}}\,\frac{1}{d} + \frac{2}{d} \le \frac{3}{d} \le \frac{1}{M}, \qquad (32)$$
where the last inequality follows from the assumption that $M \le d/3$ in Eq. (18). Therefore, all conditions of Lemma 3 are satisfied.

Since Lemmas 2 and 3 hold, we apply their results with a union bound to derive a lower bound on the probability of exact support recovery, as follows:
$$\Pr(\hat S = S) \ge \Pr\left(\left\{\max_{j\notin S} V_j \le 4\ln d\right\} \cap \left\{\min_{i\in S} V_i \ge 4\ln d\right\}\right) \ge 1 - \Pr\left(\max_{j\notin S} V_j \ge 4\ln d\right) - \Pr\left(\min_{i\in S} V_i \le 4\ln d\right) \ge 1 - \frac{K+1}{d}.$$
We remark that Lemma 3 provides a bound on $\Pr\left(\max_{j\notin S} V_j > 2\ln d\right)$. Clearly, the probability at the higher threshold $4\ln d$ above is much smaller.

Finally, we analyze the communication per machine. Let $B^m$ denote the number of bits sent by machine $m$. Note that $B^m$ is a sum of Bernoulli random variables $B^m_k \sim \mathrm{Ber}(p^m_k)$, times a factor $\propto \ln d$ corresponding to the number of bits necessary to represent indices in $[d]$. The random variable $B^m_k$ is an indicator of whether machine $m$ sends index $k$ to the center. Then
$$\mathbb{E}(B^m) = O\left(\sum_{k=1}^{d}\mathbb{E}(B^m_k)\,\ln d\right) = O\left(\left(\sum_{i\in S} p^m_i + \sum_{j\notin S} p^m_j\right)\ln d\right). \qquad (33)$$
Since $p^m_j \le 3/d$ for all $j \notin S$, then $\sum_{j\notin S} p^m_j \le 3$. Additionally, $\sum_{i\in S} p^m_i \le K$. Therefore, $\mathbb{E}(B^m) = O(K\ln d)$.
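To make condition (18) concrete, the minimal number of machines implied by the requirement $p^m_i \ge 8\ln d/M$ can be evaluated numerically (our own sketch, not from the paper; for simplicity the correction term $\epsilon(\tau)$ is set to zero):

```python
import math

def Phi_c(t):
    """Gaussian survival function Phi^c(t) = 1 - Phi(t)."""
    return 0.5 * math.erfc(t / math.sqrt(2))

def min_machines(d, r):
    """Smallest M for which p_min = Phi^c((1 - sqrt(r)) * sqrt(2 ln d))
    satisfies p_min >= 8 ln d / M, ignoring the epsilon(tau) correction."""
    p_min = Phi_c((1 - math.sqrt(r)) * math.sqrt(2 * math.log(d)))
    return 8 * math.log(d) / p_min

# At d = 5000: as the SNR r grows toward 1, p_min grows toward Phi^c(0) = 1/2,
# so far fewer machines are needed for reliable voting.
print(min_machines(5000, 0.4), min_machines(5000, 0.9))
```

This matches the qualitative message of Theorem 2: the required number of machines scales like $d^{(1-\sqrt{r})^2}$, shrinking rapidly as the SNR increases.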
Proof of Theorem 3. The proof is similar to that of Theorem 2. We first show that the conditions of Lemmas 2 and 3 hold. Then, we derive a lower bound on the probability that Algorithm 2 achieves exact support recovery. Recall that here the threshold is $\tau = \sqrt{2r\ln d}$, where $r$ is the SNR introduced in Eq. (14).

Let us start by proving that the condition of Lemma 2 holds. We need to show that $p^m_i \ge \frac{8\ln d}{M}$ for all $i \in S$. As in Eq. (31) in the proof of Theorem 2,
$$p^m_i \ge \Phi^c(\tau - \vartheta^m_i) - \epsilon(\tau),$$
where $\vartheta^m_i$ is given by Eq. (16). By condition C2, $\vartheta^m_i \ge \sqrt{n}\,\theta_{\min}/\sqrt{c_\Omega} = \sqrt{2r\ln d}$, where $\theta_{\min}$ and $c_\Omega$ are given by Eqs. (14) and (15), respectively. Since $\tau = \sqrt{2r\ln d}$, then $\tau - \vartheta^m_i \le 0$ and
$$p^m_i \ge \Phi^c(0) - \epsilon(\tau) \ge \frac{1}{2} - \epsilon(\tau).$$
The inequality $p^m_i \ge \frac{8\ln d}{M}$ follows directly from the requirement on $M$ in Eq. (19), i.e., $M \ge \frac{8\ln d}{1/2 - \epsilon(\tau)}$.

Next, we verify the conditions of Lemma 3. Its first condition, that $M > 4\ln d$, holds given Eq. (19). For the second condition, we need to show that $p^m_j \le 1/M$ for all $j \notin S$. As in the proof of Theorem 2,
$$p^m_j \le 2\Phi^c(\tau) + 2\epsilon(\tau).$$
Plugging the value $\tau = \sqrt{2r\ln d}$ into the Gaussian tail bound (23) gives
$$p^m_j \le \frac{1}{\sqrt{\pi r\ln d}}\,\frac{1}{d^r} + 2\epsilon(\tau) \overset{(i)}{\le} \frac{1}{d^r}, \qquad (34)$$
where inequality (i) follows from the assumptions that $\epsilon(\tau) < 1/(4d^r)$ and $r > \ln(16\ln d)/(\ln d)$. The latter assumption implies that $\frac{1}{\sqrt{\pi r\ln(d)}} \le \frac{1}{2}$ for sufficiently large $d$. Since we assume that $M \le d^r$ in Eq. (19), then $p^m_j \le 1/d^r \le 1/M$ for all $j \notin S$. Hence, both conditions of Lemma 3 are satisfied.

Applying a union bound and the results of Lemmas 2 and 3, it follows that
$$\Pr(\hat S = S) \ge \Pr\left(\left\{\max_{j\notin S} V_j \le 4\ln d\right\} \cap \left\{\min_{i\in S} V_i \ge 4\ln d\right\}\right) \ge 1 - \frac{K+1}{d}.$$
Finally, let us analyze the average communication per machine. Let $B$ denote the number of bits sent by a single machine. Following the same steps used to compute Eq. (33), the expectation of $B$ may be bounded as $\mathbb{E}(B) \le O\left(\left(K + \frac{d-K}{d^r}\right)\ln d\right)$, where the factor $1/d^r$ is due to Eq. (34). Hence, the expected communication of a single machine is $O\left(d^{1-r}\ln d\right)$ bits.
A.3. Proof of Corollary 1

Proof. We proceed similarly to Battey et al. (2018) in the proof of their Corollary A.3. By the law of total probability, for any constant $C' > 0$,
$$\Pr\left(\|\hat\theta - \hat\theta^{LS}\|_2 > C'\,\frac{\sqrt{M}\max\{K,\ln N\}}{N}\right) \le \Pr\left(\left\{\|\hat\theta - \hat\theta^{LS}\|_2 > C'\,\frac{\sqrt{M}\max\{K,\ln N\}}{N}\right\} \cap \{\hat S = S\}\right) + \Pr(\hat S \ne S).$$
By Theorem 2, $\Pr(\hat S \ne S) \le (K+1)/d$. In addition, when $\hat S = S$, then $\hat\theta_j = \hat\theta^{LS}_j = 0$ for all $j \notin S$. Consequently, $\|\hat\theta - \hat\theta^{LS}\|_2 = \|\hat\theta_S - \hat\theta^{LS}_S\|_2$. Furthermore, the first term on the right-hand side above may be bounded by
$$\Pr\left(\|\hat\theta_S - \hat\theta^{LS}_S\|_2 > C'\,\frac{\sqrt{M}\max\{K,\ln N\}}{N}\right).$$
The next step is to apply a result from the proof of Theorem A.1 of Battey et al. (2018), which appeared in the last line of their proof. For clarity, we state it here as a lemma.

Lemma 4. Consider the linear model in dimension $K$,
$$y = X^\top\beta^* + \sigma w,$$
where $w \sim N(0,1)$, $X \sim N(0,\Sigma)$, and $\Sigma \in \mathbb{R}^{K\times K}$ satisfies $0 < C_{\min} \le \sigma_{\min}(\Sigma) \le \sigma_{\max}(\Sigma) \le C_{\max} < \infty$. Suppose $N$ i.i.d. samples from this model are uniformly distributed to $M$ machines, with $n > K$. Denote by $\hat\beta^m$ the least squares solution at the $m$-th machine and by $\hat\beta^{LS}$ the centralized least squares solution. If the number of machines satisfies $M = O\left(\frac{NK}{(\max\{K,\ln N\})^2}\right)$, then
$$\Pr\left(\left\|\frac{1}{M}\sum_m \hat\beta^m - \hat\beta^{LS}\right\|_2 > C'\,\frac{\sqrt{M}\max\{K,\ln N\}}{N}\right) \le cMe^{-\max\{K,\ln N\}} + Me^{-c\frac{N}{M}},$$
where $c, C' > 0$ are constants that do not depend on $K$ or $N$.

Applying this lemma to our case gives
$$\Pr\left(\|\hat\theta_S - \hat\theta^{LS}_S\|_2 > C'\,\frac{\sqrt{M}\max\{K,\ln N\}}{N}\right) \le cMe^{-\max\{K,\ln N\}} + Me^{-c\frac{N}{M}}.$$
Since $M = O\left(\frac{NK}{(\max\{K,\ln N\})^2}\right)$, it follows that $\|\hat\theta - \hat\theta^{LS}\|_2 = O_P\left(\sqrt{K/N}\right)$. As the oracle estimator has rate $\|\hat\theta^{LS} - \theta^*\|_2 = O_P\left(\sqrt{K/N}\right)$, by the triangle inequality $\|\hat\theta - \theta^*\|_2 = O_P\left(\sqrt{K/N}\right)$ as well.

References

Acharya, J., De Sa, C., Foster, D. J. and Sridharan, K. (2019). Distributed learning with sublinear communication. arXiv preprint arXiv:1902.11259.
Amiraz, C., Krauthgamer, R. and Nadler, B. (2022). Distributed sparse normal means estimation with sublinear communication. Information and Inference: A Journal of the IMA iaab030.
Barghi, H., Najafi, A. and Motahari, S. A. (2021). Distributed sparse feature selection in communication-restricted networks. arXiv preprint arXiv:2111.02802.
Battey, H., Fan, J., Liu, H., Lu, J. and Zhu, Z. (2018). Distributed testing and estimation under sparse high dimensional models. Annals of Statistics 46 1352-1382.
Boucheron, S., Lugosi, G. and Massart, P. (2013). Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, Oxford.
Boyd, S., Parikh, N., Chu, E., Peleato, B. and Eckstein, J. (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning 3 1-122.
Braverman, M., Garg, A., Ma, T., Nguyen, H. L. and Woodruff, D. P. (2016). Communication lower bounds for statistical estimation problems via a distributed data processing inequality. In Proceedings of the Forty-Eighth Annual ACM Symposium on Theory of Computing 1011-1020.
Bunea, F., Tsybakov, A. and Wegkamp, M. (2007). Sparsity oracle inequalities for the Lasso. Electronic Journal of Statistics 1 169-194.
Candes, E. J. and Tao, T. (2005). Decoding by linear programming. IEEE Transactions on Information Theory 51 4203-4215.
Chen, X. and Xie, M. (2014). A split-and-conquer approach for analysis of extraordinarily large data. Statistica Sinica 24 1655-1684.
Chen, X., Liu, W., Mao, X. and Yang, Z. (2020). Distributed high-dimensional regression under a quantile loss function. Journal of Machine Learning Research 21 1-43.
Dobriban, E. and Sheng, Y. (2020). WONDER: Weighted one-shot distributed ridge regression in high dimensions. Journal of Machine Learning Research 21 1-52.
Dobriban, E. and Sheng, Y. (2021). Distributed linear regression by averaging. Annals of Statistics 49 918-943.
Fan, J., Li, R., Zhang, C.-H. and Zou, H. (2020). Statistical Foundations of Data Science. CRC Press, Boca Raton.
Gao, Y., Liu, W., Wang, H., Wang, X., Yan, Y. and Zhang, R. (2022). A review of distributed statistical inference. Statistical Theory and Related Fields 6 89-99.
Guestrin, C., Bodik, P., Thibaux, R., Paskin, M. and Madden, S. (2004). Distributed regression: an efficient framework for modeling sensor network data. In Third International Symposium on Information Processing in Sensor Networks 1-10.
Hastie, T., Tibshirani, R. and Wainwright, M. (2015). Statistical Learning with Sparsity: The Lasso and Generalizations. CRC Press, Boca Raton.
Heinze, C., McWilliams, B., Meinshausen, N. and Krummenacher, G. (2014). LOCO: Distributing ridge regression with random projections. arXiv preprint arXiv:1406.3469.
Huo, X. and Cao, S. (2019). Aggregated inference. Wiley Interdisciplinary Reviews: Computational Statistics 11 e1451.
Javanmard, A. and Montanari, A. (2014a). Hypothesis testing in high-dimensional regression under the Gaussian random design model: Asymptotic theory. IEEE Transactions on Information Theory 60 6522-6554.
Javanmard, A. and Montanari, A. (2014b). Confidence intervals and hypothesis testing for high-dimensional regression. Journal of Machine Learning Research 15 2869-2909.
Javanmard, A. and Montanari, A. (2018). Debiasing the lasso: Optimal sample size for Gaussian designs. Annals of Statistics 46 2593-2622.
Jordan, M. I., Lee, J. D. and Yang, Y. (2019). Communication-efficient distributed statistical inference. Journal of the American Statistical Association
2024
+ 114 668–681.
2025
+ Lee, J. D., Liu, Q., Sun, Y. and Taylor, J. E. (2017). Communication-
2026
+ efficient sparse regression. The Journal of Machine Learning Research 18 1–
2027
+ 30.
2028
+ Liu, M., Xia, Y., Cho, K. and Cai, T. (2021). Integrative high dimensional
2029
+ multiple testing with heterogeneity under data sharing constraints. Journal
2030
+ of Machine Learning Research 22 1–26.
2031
+ Lopes, M. E. and Yao, J. (2022). A sharp lower-tail bound for Gaussian
2032
+ maxima with application to bootstrap methods in high dimensions. Electronic
2033
+ Journal of Statistics 16 58–83.
2034
+ Lv, S. and Lian, H. (2022). Debiased distributed learning for sparse partial
2035
+ linear models in high dimensions. Journal of Machine Learning Research 23
2036
+ 1–32.
2037
+ Mateos, G., Bazerque, J. A. and Giannakis, G. B. (2010). Distributed
2038
+ sparse linear regression. IEEE Transactions on Signal Processing 58 5262–
2039
+
2040
+ R. Fonseca and B. Nadler/Distributed Sparse Linear Regression
2041
+ 33
2042
+ 5276.
2043
+ Predd, J. B., Kulkarni, S. B. and Poor, H. V. (2006). Distributed learning
2044
+ in wireless sensor networks. IEEE Signal Processing Magazine 23 56–69.
2045
+ Rosenblatt, J. D. and Nadler, B. (2016). On the optimality of averaging in
2046
+ distributed statistical learning. Information and Inference: A Journal of the
2047
+ IMA 5 379–404.
2048
+ Sun, T. and Zhang, C.-H. (2012). Scaled sparse linear regression. Biometrika
2049
+ 99 879–898.
2050
+ Tibshirani, R. (1996). Regression shrinkage and selection via the Lasso. Jour-
2051
+ nal of the Royal Statistical Society: Series B 58 267–288.
2052
+ van de Geer, S. A. and B¨uhlmann, P. (2009). On the conditions used to
2053
+ prove oracle results for the Lasso. Electronic Journal of Statistics 3 1360–
2054
+ 1392.
2055
+ van de Geer, S., B¨uhlmann, P., Ritov, Y. and Dezeure, R. (2014).
2056
+ On asymptotically optimal confidence regions and tests for high-dimensional
2057
+ models. The Annals of Statistics 42 1166–1202.
2058
+ Wainwright, M. J. (2009). Sharp thresholds for high-dimensional and noisy
2059
+ sparsity recovery using ℓ1 - constrained quadratic programming (Lasso). IEEE
2060
+ Transactions on Information Theory 55 2183–2202.
2061
+ Zhang, Y., Duchi, J. C. and Wainwright, M. J. (2013). Communication-
2062
+ efficient algorithms for statistical optimization. Journal of Machine Learning
2063
+ Research 14 3321–3363.
2064
+ Zhang, C. H. and Zhang, S. S. (2014). Confidence intervals for low dimen-
2065
+ sional parameters in high dimensional linear models. Journal of the Royal
2066
+ Statistical Society: Series B 76 217–242.
2067
+ Zhu, X., Li, F. and Wang, H. (2021). Least-square approximation for a dis-
2068
+ tributed system. Journal of Computational and Graphical Statistics 30 1004–
2069
+ 1018.
2070
+
IdE2T4oBgHgl3EQfowif/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
JdE0T4oBgHgl3EQfSACX/content/tmp_files/2301.02216v1.pdf.txt ADDED
@@ -0,0 +1,1950 @@
+ arXiv:2301.02216v1 [gr-qc] 5 Jan 2023
+ Logarithm Corrections and Thermodynamics for Horndeski gravity like Black Holes
+ Riasat Ali,1, ∗ Zunaira Akhtar,2, † Rimsha Babar,3, ‡ G. Mustafa,4, § and Xia Tiecheng1, ¶
+ 1Department of Mathematics, Shanghai University, Shanghai-200444, Shanghai, People's Republic of China
+ 2Department of Mathematics, University of the Punjab, Quaid-e-Azam Campus, Lahore 54590, Pakistan
+ 3Division of Science and Technology, University of Education, Township, Lahore-54590, Pakistan
+ 4Department of Physics, Zhejiang Normal University, Jinhua 321004, People's Republic of China
+ In this paper, we compute the Hawking temperature of Horndeski like black holes by applying the quantum tunneling approach. We apply the semi-classical WKB approximation to the Lagrangian field equation involving the generalized uncertainty principle (GUP) and compute the tunneling rate as well as the Hawking temperature. For a vanishing quantum gravity parameter, we recover the results without the correction parameter, i.e., the original tunneling radiation. Moreover, we study the thermal fluctuations of the considered geometry and examine the stability of the system by the heat capacity technique. We also investigate the behaviour of the thermodynamic quantities under the influence of thermal fluctuations. The graphical analysis shows that the corresponding system is thermodynamically stable in the presence of these correction terms.
+ Keywords: Horndeski like black holes; quantum gravity; tunneling radiation; thermal fluctuations; corrected entropy; phase transition.
+ I. INTRODUCTION
+ Tunneling is a semi-classical mechanism in which particles are radiated from the outer horizon of a black hole (BH). Several works have shown keen interest in obtaining the Hawking temperature (TH) via the tunneling method for different BHs. The key quantity for computing TH is the imaginary part of the classical action, which leads to the tunneling radiation of boson particles emerging from Horndeski like BHs.
+ The quantum tunneling and TH of charged fermions in BHs have been studied in [1], where it was shown that the tunneling rate and TH depend on the electric and magnetic charges, acceleration, rotation, mass and NUT parameter of the charged pair of BHs. The tunneling strategy for a Reissner-Nordström-de Sitter-like BH solution with a global monopole has been analyzed in [2], where the authors observed that the modified TH depends on the global monopole parameter. BH thermodynamics has been examined in [3] for geometries with acceleration, NUT and rotation parameters; the authors studied thermodynamical quantities such as the area, entropy, surface gravity and TH, and computed the tunneling spectrum of bosonic particles from the modified BH horizon by utilizing the Proca field equation. Hawking evaluated the tunneling probability from a BH [4] using a theoretical technique, which was later elaborated by Parikh and Wilczek [5, 6]. The importance of this radiation is that vacuum fluctuations produce particle and anti-particle pairs near the horizon. In Hawking's picture, the particle is able to escape from the BH while the anti-particle cannot radiate through the horizon. Parikh and Wilczek formulated a mathematical approach based on the WKB approximation. This phenomenon uses the geometrical optics approximation, which is another view of the eikonal approximation in wave theory [7]. The emitted particles originate at the horizon, and with their emission the BH mass is reduced by the amount of energy they carry.
+ In the Parikh-Wilczek method, a precise tunneling rate was established, but problems such as information loss, unitarity and temperature divergence remained open. Many authors have applied the tunneling strategy and the semi-classical phenomenon to the horizons of different BHs; some of the important contributions can be found in [8]-[31]. The radiated particles for many BHs have been analyzed, including the influence of the BH geometry through its various parameters. It is possible to study modified thermodynamic properties of a BH by considering the influence of the generalized uncertainty principle (GUP) [32]. The GUP encodes high-energy corrections to BH thermodynamics by incorporating a minimal length from quantum gravity theory. By considering the GUP influences, it is viable to examine the modified thermodynamics of BHs.
+ ∗Electronic address: [email protected]
+ †Electronic address: [email protected]
+ ‡Electronic address: [email protected]
+ §Electronic address: [email protected]
+ ¶Electronic address: [email protected]
+ It is a well known fact that thermal fluctuations are a result of statistical perturbations in dense matter. With the emission of Hawking radiation from the BH, the size of the BH decreases and consequently its temperature increases. Faizal and his colleague [33] have studied the thermodynamics and thermal fluctuations of generalizations of the Schwarzschild BH (i.e., the Reissner-Nordström, Kerr and charged AdS BHs) with the help of first-order corrections and discussed the stability of these BHs. The thermodynamics of the rotating Kerr-AdS BH and its phase transition have been studied by Pourhassan and Faizal [34], who concluded that the entropy corrections are very helpful for examining the geometry of small BHs. By applying the stability tests of the heat capacity and the Hessian matrix, the phase transition as well as the thermodynamics of non-minimal regular BHs in the presence of a cosmological constant have been investigated [35], and the authors concluded that the local and global stability of the corresponding BHs increases for higher values of the correction parameters. Zhang and Pradhan [36, 37] have investigated the corrected entropy and the second-order phase transition via thermal fluctuations for charged accelerating BHs.
+ Moreover, the thermodynamics and geometrical analysis of new Schwarzschild-type BHs have been studied [38, 39]. Using the tunneling approach under the influence of quantum gravity, the Hawking temperature for different types of BHs has been discussed in [40]-[44]. Sharif and Zunaira [45, 46] have computed the thermodynamics, quasi-normal modes and thermal fluctuations of charged BHs with the help of Weyl corrections. They found that the system is unstable for small BH radii under the influence of first-order corrections, and by using the heat capacity and Hessian matrix techniques they also studied the stability conditions of the system. The authors in [47, 48] have investigated the thermodynamics, phase transition and local/global stability of NUT BHs with charged, accelerating as well as rotating pairs. Ilyas et al. [49-52] discussed the energy conditions and calculated new solutions for stellar structures by taking the black hole geometry as the exterior spacetime in the background of different modified theories of gravity. Recently, Ditta et al. [53] discussed the thermal stability and Joule-Thomson expansion of a regular BTZ-like black hole.
+ The main intention of this paper is to investigate the tunneling radiation without self-gravity and back-reaction effects and to derive the modified tunneling rate. The tunneling radiation is evaluated under charge-energy conservation and the influences of the Horndeski parameter and the GUP parameter. The modified TH depends on the Horndeski parameter as well as the GUP parameter, and we also investigate the behaviour of the thermodynamic quantities via thermal fluctuations.
+ This paper presents the analysis of quantum tunneling, TH, and the stability and instability conditions for Horndeski like BHs. The paper is outlined as follows: in Sec. II, we study the tunneling radiation of bosonic particles for the 4D Horndeski like BH and calculate the effects of the GUP parameter on the tunneling rate and TH. In section III, we give the graphical presentation of the tunneling radiation for this type of BH and analyze the stable and unstable configurations of the Horndeski like BH. In section IV, we investigate the behaviour of the thermodynamic quantities under the effects of thermal fluctuations. In section V, we present the discussion and conclusion of the whole analysis.
+ II. HORNDESKI LIKE BLACK HOLES
+ Hui and his coauthor Nicolis [58] argued that the no-hair theorems cannot be applied to a Galileon field, as it is coupled to gravity through peculiar derivative interactions. Further, they demonstrated that a static and spherically symmetric spacetime describing the geometry of a black hole cannot sustain nontrivial Galileon profiles. Babichev and Charmousis [59] examined the no-hair theorem of Ref. [58] in Horndeski theories and beyond. Furthermore, they provided the Lagrangian of Horndeski theory, which can be expressed as a generalized Galileon Lagrangian, defined as
+ S = ∫ √−g { Q₂(χ) + Q₃(χ)□φ + Q₄(χ)R + Q₄,χ[(□φ)² − (∇ǫ∇εφ)(∇ǫ∇εφ)] + Q₅(χ)Gǫε∇ǫ∇εφ − (1/6)Q₅,χ[(□φ)³ − 3(□φ)(∇ǫ∇εφ)(∇ǫ∇εφ) + 2(∇ǫ∇εφ)(∇ε∇γφ)(∇γ∇ǫφ)] } d⁴x,  (1)
+ where Q₂, Q₃, Q₄ and Q₅ are arbitrary functions of the scalar field φ and χ = −∂ǫφ∂ǫφ/2 represents the canonical kinetic term. Additionally, in the current analysis f,χ stands for ∂f(χ)/∂χ, Gǫε is the Einstein tensor, R is the Ricci scalar, and the other relations are defined as
+ (∇ǫ∇εφ)² ≡ ∇ǫ∇εφ∇ε∇ǫφ,  (∇ǫ∇εφ)³ ≡ ∇ǫ∇εφ∇ε∇ρφ∇ρ∇ǫφ.  (2)
+ The scalar field admits the Galilean shift symmetry ∂ǫφ → ∂ǫφ + bǫ in flat spacetime for Q₂ ∼ Q₃ ∼ χ and Q₄ ∼ Q₅ ∼ χ², which resembles the Galilean symmetry [60]. In the current study, we investigate the tunneling radiation of spin-1 massive boson particles from the Horndeski like BH. For this purpose, we adopt the procedure already reported in [57] for the Horndeski spacetime. Finally, we have the following spacetime:
+ ds² = −(1 − 2rM(r)/Σ)dt² + (1/(∇(r)Σ))dr² + Σ²dθ² − (A/Σ)sin²θ dφ² − (4ar/Σ)M(r)sin²θ dtdφ,  (3)
+ with Σ² = a²cos²θ + r², ∇(r) = a² − 2rM(r) + r², M(r) = M − (1/2)Q ln(r/r₀) and A = (a² + r²)² − ∇a²sin²θ, while a, Q and M represent the rotation parameter, the Horndeski parameter and the mass of the BH, respectively. If Q → 0, the metric (3) goes over to the Kerr BH [61], and if Q = a = 0 the metric (3) reduces to the Schwarzschild metric. The line element (3) can be re-written as
+ ds² = −f(r)dt² + g⁻¹(r)dr² + I(r)dφ² + h(r)dθ² + 2R(r)dtdφ,  (4)
+ where
+ f(r) = 1 − 2rM(r)/Σ,  g⁻¹(r) = 1/(∇Σ),  h(r) = Σ²,  I(r) = −(A/Σ)sin²θ,  R(r) = −(2ar/Σ)M(r)sin²θ.
+ We study the tunneling radiation of spin-1 particles from the four-dimensional Horndeski like BH. By utilizing the Hamilton-Jacobi ansatz and the WKB approximation in the modified field equation for the Horndeski spacetime, the tunneling phenomenon is successfully applied. We study the modified field equation on the four-dimensional spacetime in the background of the rotation and Horndeski parameters and solve it for the radial function. As a result, we get the tunneling probability of the radiated particles and derive the modified TH of Horndeski like BHs. The modified field equation is expressed as [27, 30]
+ ∂µ(√−g Ψνµ) + √−g (m²/ℏ²)Ψν + √−g (i/ℏ)AµΨνµ + √−g (i/ℏ)eFνµΨµ + ℏ²β∂₀∂₀∂₀(√−g g⁰⁰Ψ⁰ν) − ℏ²β∂ᵢ∂ᵢ∂ᵢ(√−g gⁱⁱΨⁱν) = 0,  (5)
+ where Ψνµ, m and g denote the anti-symmetric tensor, the mass of the bosonic particle and the determinant of the coefficient matrix, with
+ Ψνµ = (1 − ℏ²β∂ν²)∂νΨµ − (1 − ℏ²β∂µ²)∂µΨν + (1 − ℏ²β∂ν²)(i/ℏ)eAνΨµ − (1 − ℏ²β∂µ²)(i/ℏ)eAµΨν,  and  Fνµ = ∇νAµ − ∇µAν,
+ where β, e, ∇µ and Aµ are the GUP (quantum gravity) parameter, the charge of the bosonic particle, the covariant derivative and the BH potential, respectively. The components Ψνµ can be computed as
+ Ψ⁰ = (−IΨ₀ + RΨ₃)/(fI + R²),  Ψ¹ = (1/g⁻¹)Ψ₁,  Ψ² = (1/h)Ψ₂,  Ψ³ = (RΨ₀ + fΨ₃)/(fI + R²),
+ Ψ⁰¹ = (−IΨ₀₁ + RΨ₁₃)/((R² + fI)g⁻¹),  Ψ⁰² = −IΨ₀₂/((R² + fI)h),  Ψ⁰³ = (f² − fI)Ψ₀₃/(fI + R²)²,
+ Ψ¹² = (1/(g⁻¹h))Ψ₁₂,  Ψ¹³ = (1/(g⁻¹(fI + R²)))Ψ₁₃,  Ψ²³ = (fΨ₂₃ + RΨ₀₂)/((fI + R²)h).
+ In order to observe the bosonic tunneling, we assume the Lagrangian gravity equation. Further, we utilize the WKB approximation in the Lagrangian gravity equation and compute a set of field equations. Furthermore, we use separation of variables in the action to obtain the required solutions. The WKB approximation is defined [?] as
+ Ψν = ην exp[ (i/ℏ)K₀(t, r, φ, θ) + Σₙ ℏⁿKₙ(t, r, φ, θ) ].  (6)
+ Inserting this ansatz, we get the set of equations given in Appendix A. Utilizing the variable separation technique, we can take
+ K₀ = −(E − Lω)t + W(r) + Lφ + ν(θ),  (7)
+ where E and L denote the energy and angular momentum of the particle, respectively, the latter corresponding to the angle φ. After substituting Eq. (7) into Eqs. (22)-(25), we arrive at a matrix equation of the form
+ U(η₀, η₁, η₂, η₃)ᵀ = 0,
+ where U is a 4 × 4 matrix whose elements are given as follows:
+ U₀₀ = −(I/(g⁻¹(fI + R²)))(W₁² + βW₁⁴) − (I/((fI + R²)h))(L² + βL⁴) − (fI/((fI + R²)²))(ν₁² + βν₁⁴) − m²I/(fI + R²),
+ U₀₁ = −(I/(g⁻¹(fI + R²)))[(E − Lω) + β(E − Lω)³ + eA₀ + βeA₀(E − Lω)²]W₁ + (R/(g⁻¹(fI + R²)))(ν₁ + βν₁³),
+ U₀₂ = −(I/(h(fI + R²)))[(E − Lω) + β(E − Lω)³ − eA₀ − βeA₀(E − Lω)²]L,
+ U₀₃ = −(R/(g⁻¹(fI + R²)))(W₁² + βW₁⁴) − (fI/(h(fI + R²)²))[β(E − Lω)³ − βeA₀(E − Lω)² + (E − Lω) − eA₀]ν₁ + m²R/(fI + R²)²,
+ U₁₂ = (1/(g⁻¹h))(W₁ + βW₁³)L,
+ U₁₁ = −(I/(g⁻¹(fI + R²)))[(E − Lω)² − eA₀(E − Lω) + β(E − Lω)⁴ − βeA₀(E − Lω)³] + (R/(g⁻¹(fI + R²)))(ν₁ + βν₁³)(E − Lω) − (1/(g⁻¹h))(L² + βL⁴) − (1/(g⁻¹(fI + R²)))(ν₁ + βν₁³) − m²/g⁻¹ − (eA₀I/(g⁻¹(fI + R²)))[(E − Lω) + β(E − Lω)³ − eA₀ − βeA₀(E − Lω)²] + (eA₀R/(g⁻¹(fI + R²)))(ν₁ + βν₁³),
+ U₁₃ = −(R/(g⁻¹(fI + R²)))(W₁ + βW₁³)(E − Lω) + (1/(g⁻¹(fI + R²)²))(W₁ + βW₁³)ν₁ + (ReA₀/(g⁻¹(fI + R²)))(W₁ + βW₁³),
+ U₂₀ = (I/(h(fI + R²)))[(E − Lω)L + β(E − Lω)L³] + (R/(h(fI + R²)))[(E − Lω) + β(E − Lω)³]ν₁ − (IeA₀/(h(fI + R²)))(L + βL³),
+ U₂₂ = (I/(h(R² + fI)))[(E − Lω)² − eA₀(E − Lω) + β(E − Lω)⁴ − βeA₀(E − Lω)³] − (1/(g⁻¹h))(W₁² + βW₁⁴) + (R/(h(R² + fI)))[β(E − Lω)³ − βeA₀(E − Lω)² − eA₀ + (E − Lω)]ν₁ − (f/(h(R² + fI)))(ν₁² + βν₁⁴) − m²/h − (eA₀I/(h(fI + R²)))[(E − Lω) + β(E − Lω)³ − eA₀ − βeA₀(E − Lω)²],
+ U₂₃ = (f/(h(fI + R²)))(L + βL³)ν₁,
+ U₃₀ = ((fI − f²)/(fI + R²)²)(ν₁ + βν₁³)E + (R/(h(fI + R²)))(L² + βL⁴) + m²R/(fI + R²) − (eA₀(fI − f²)/(fI + R²)²)(ν₁ + βν₁³),
+ U₃₁ = (1/(g⁻¹(fI + R²)))(ν₁ + βν₁³)W₁,
+ U₃₂ = (R/(h(R² + fI)))(L + βL³)E + (f/(h(R² + fI)))(ν₁ + βν₁³)L,
+ U₃₃ = ((fI − f²)/(fI + R²))[(E − Lω)² − eA₀(E − Lω) + β(E − Lω)⁴ − βeA₀(E − Lω)³] − (1/(g⁻¹(R² + fI)))(W₁² + βW₁⁴) − (f/((R² + fI)h))(L² + βL⁴) − m²f/(fI + R²) − (eA₀(fI − f²)/(fI + R²))[(E − Lω) + β(E − Lω)³ − eA₀(E − Lω)²],
+ with ∂tK₀ = −(E − Lω), ∂φK₀ = L, W₁ = ∂rK₀ and ν₁ = ∂θK₀. For a non-trivial solution, the determinant of the matrix U must vanish; solving for the radial part then gives
+ ImW± = ± ∫ √( [ (E − LΩ − eA₀)² + Z₁(1 + βZ₂/Z₁) ] / ( (fI + R²)gI⁻¹ ) ) dr = ±iπ (E − LΩ − eA₀)(1 + βA) / (2k(r₊)),  (8)
+ where
+ Z₁ = ((E − Lω)ν₁g⁻¹R)/(fI + R²) + (fg⁻¹/(fI + R²))ν₁² − g⁻¹m²,
+ Z₂ = (g⁻¹I/(fI + R²))[(E − LΩ)⁴ + (eA₀)²(E − LΩ)² − 2eA₀(E − LΩ)³] + (g⁻¹R/(h(fI + R²)))[(E − LΩ)³ − eA₀(E − LΩ)²]ν₁ − (fg⁻¹/(fI + R²))ν₁⁴ − W₁⁴,
+ and A is an arbitrary parameter. In this particular case, we take the radial component of the action of the particle; for this purpose we set the determinant of the matrix components equal to zero. In this way, we obtain the tunneling radiation (incorporating both the Horndeski gravity and the quantum gravity contributions) for the BH; the tunneling rate and the TH depend on the Horndeski gravity and quantum gravity corrections of this particular physical object. In this method we are not concerned with higher orders in Planck's constant, and only the leading-order result is retained. The generalized tunneling rate depends on the BH metric, the Horndeski parameter and the GUP parameter. For the Horndeski like BH it can be written as
+ T = T_emission/T_absorption = exp[ −2π(E − LΩ − eA₀)/k(r₊) ] [1 + βA],  (9)
+ with
+ k(r₊) = (−a² + Qr₊ + r₊²)/(2r₊(a² + r₊²)).  (10)
+ In the presence of the GUP terms, we calculate the TH of the Horndeski gravity BH by means of the Boltzmann factor T_B = exp[ −(E − Lω − eA₀)/T_H ], which gives
+ T_H = (−a² + Qr₊ + r₊²)/(4πr₊(a² + r₊²)) [1 − βA].  (11)
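The relation between the tunneling rate and the temperature can be illustrated with a short numerical sketch (the function names, the sample values and the reading of k(r₊) given in Eq. (10) above are our assumptions): for β = 0 the exponent −2π(E − LΩ − eA₀)/k(r₊) of Eq. (9) is exactly the Boltzmann exponent with T_H = k(r₊)/(2π).

```python
import math

def surface_gravity(r_plus, a, Q):
    # k(r+) as read off Eq. (10) -- an assumption of this sketch.
    return (-a**2 + Q * r_plus + r_plus**2) / (2 * r_plus * (a**2 + r_plus**2))

def tunneling_rate(E_net, r_plus, a, Q):
    # Eq. (9) with beta = 0; E_net stands for E - L*Omega - e*A0.
    return math.exp(-2 * math.pi * E_net / surface_gravity(r_plus, a, Q))

# For beta = 0 the rate is the Boltzmann factor exp(-E_net/T_H) with T_H = k/(2*pi).
r_plus, a, Q, E_net = 2.0, 0.3, 0.5, 0.05
T_H = surface_gravity(r_plus, a, Q) / (2 * math.pi)
assert abs(tunneling_rate(E_net, r_plus, a, Q) - math.exp(-E_net / T_H)) < 1e-12
```

This only checks the internal consistency of Eqs. (9)-(11) at β = 0; it does not test the GUP-corrected piece.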
+ The above result shows that the TH depends on the Horndeski parameter Q, the GUP parameter β, the rotation parameter a, the arbitrary parameter A and the horizon radius r₊ of the BH. When β = 0, we obtain the general TH of [57]. In the absence of the charge, i.e., Q = 0, the above temperature reduces to the Kerr BH temperature [58, 59]. For β = 0 and a = 0, the temperature reduces to that of the Reissner-Nordström BH. Moreover, when Q = 0 = a, we recover the temperature of the Schwarzschild BH [60]. The quantum corrections slow down the increase of TH throughout the radiation phenomenon.
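These limits can be checked numerically; the following sketch (the function name and its defaults are ours, not from the paper) evaluates Eq. (11) and verifies the Schwarzschild limit T_H = 1/(8πM) at Q = a = β = 0 with r₊ = 2M.

```python
import math

def hawking_temperature(r_plus, a=0.0, Q=0.0, beta=0.0, A=0.0):
    # GUP-corrected Hawking temperature of Eq. (11); beta = 0 gives the
    # uncorrected temperature, and A is the arbitrary parameter of Eq. (9).
    return (-a**2 + Q * r_plus + r_plus**2) \
        / (4 * math.pi * r_plus * (a**2 + r_plus**2)) * (1 - beta * A)

# Schwarzschild limit: Q = a = 0 and r+ = 2M give T_H = 1/(4*pi*r+) = 1/(8*pi*M).
M = 1.0
assert abs(hawking_temperature(2 * M) - 1 / (8 * math.pi * M)) < 1e-12

# The quantum correction (beta > 0, A > 0) lowers the temperature.
assert hawking_temperature(2 * M, beta=0.1, A=1.0) < hawking_temperature(2 * M)
```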
+ A. TH versus r₊
+ We present the graphical behaviour of TH w.r.t. r₊ for the 4D Horndeski like metric. Moreover, we discuss the physical significance of these plots under the Horndeski gravity and GUP parameters and study the stability and instability of the corresponding TH. For β = 0, the tunneling radiation is independent of the GUP parameter. In the left plot of Fig. 1, the TH increases with increasing β in the small horizon region 0 ≤ r₊ ≤ 5, which indicates a stable state of the BH up to r₊ → ∞. In the right plot of Fig. 1, the rotation parameter and β are fixed; taking different values of the hairy parameter of Horndeski gravity, we get a completely unstable BH configuration with negative temperature.
+ Figure 1: TH w.r.t. the horizon r₊. Left: a = 5, Q = 0.5, Ξ = 1 with β = 10 (black), β = 20 (blue), β = 30 (red). Right: a = 0.5, Ξ = 1, β = 5 with Q = 0.5 (black), Q = 1 (blue), Q = 1.5 (red).
+ III. THERMODYNAMICS AND EFFECTS OF FIRST ORDER CORRECTIONS
+ Thermal fluctuations play an important role in the study of BH thermodynamics. In the framework of Euclidean quantum gravity, the temporal coordinate is rotated towards the complex plane. To check the effects of these corrections on the entropy, we first find the Hawking temperature and the usual entropy of the given system with the help of the first law of thermodynamics,
+ S = π(a² + r₊²),  T = (−a² + Qr₊ + r₊²)/(4πr₊(a² + r₊²)).  (12)
+ To obtain the entropy corrected by these thermal fluctuations, the partition function Z(µ) in terms of the density of states ρ(E) is given as [37]
+ Z(µ) = ∫₀^∞ exp(−µE)ρ(E)dE,  (13)
+ where T₊ = 1/µ and E is the mean energy of the thermal radiation. By using the inverse Laplace transform, the density of states takes the form
+ ρ(E) = (1/2πi) ∫_{µ₀−i∞}^{µ₀+i∞} Z(µ)exp(µE)dµ = (1/2πi) ∫_{µ₀−i∞}^{µ₀+i∞} exp(S̃(µ))dµ,  (14)
+ where S̃(µ) = µE + ln Z(µ) represents the modified entropy of the considered system, which depends on the Hawking temperature. Moreover, the entropy is expanded with the help of the steepest descent method,
+ S̃(µ) = S + (1/2)(µ − µ₀)² ∂²S̃(µ)/∂µ²|_{µ=µ₀} + higher-order terms.  (15)
+ Using the conditions ∂S̃/∂µ = 0 and ∂²S̃/∂µ² > 0, the corrected entropy relation under the first-order corrections is obtained. Neglecting the higher-order terms, the expression of the entropy is
+ S̃ = S − δ ln(ST²),  (16)
+ where δ is called the correction parameter; the usual entropy of the considered system is recovered by fixing δ = 0, i.e., without the influence of these corrections. Furthermore, inserting Eq. (12) into (16), we have
+ ˜S = (a2 + r2
719
+ +) − δ log
720
+ ��
721
+ a2 − r+ (Q + r+)
722
+ �2
723
+ 16πr2
724
+ +
725
+
726
+ a2 + r2
727
+ +
728
+
729
+
730
+ .
731
+ (17)
732
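Substituting Eq. (12) into Eq. (16) can be verified directly: S T² collapses to (a² − r₊(Q + r₊))²/(16π r₊²(a² + r₊²)). A quick numerical sanity check (the parameter values are arbitrary samples):

```python
import math

# Arbitrary sample values of the rotation parameter, Horndeski charge,
# horizon radius, and correction parameter
a, Q, r, delta = 0.2, 0.4, 1.3, 0.5

# Eq. (12): usual entropy and Hawking temperature
S = math.pi * (a**2 + r**2)
T = (-a**2 + Q*r + r**2) / (4*math.pi*r*(a**2 + r**2))

# Eq. (16): first-order corrected entropy
S_corr = S - delta * math.log(S * T**2)

# Eq. (17): explicit closed form after substituting Eq. (12)
S_expl = math.pi*(a**2 + r**2) - delta*math.log(
    (a**2 - r*(Q + r))**2 / (16*math.pi*r**2*(a**2 + r**2)))

assert abs(S_corr - S_expl) < 1e-12
```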
+ Figure 2: Corrected entropy versus r+ for a = 0.2, Q = 0.4 (curves for δ = 0, 0.4, 0.6).
+ In Fig. 2, the corrected entropy increases monotonically throughout the considered domain. Note that the usual entropy (black curve) increases only for small values of the horizon radius, whereas the corrected entropy increases smoothly; thus the correction terms are more effective for small BHs. We now use the corrected entropy to examine the other thermodynamic quantities under thermal fluctuations. In this way, the Helmholtz free energy (F = -\int\tilde{S}\,dT) leads to the form
+ F = \frac{\left[a^{4}-r_{+}^{2}\left(-4a^{2}+2Qr_{+}+r_{+}^{2}\right)\right]\left\{\delta\log\left[\dfrac{\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}}{r_{+}^{2}\left(a^{2}+r_{+}^{2}\right)}\right]-a^{2}-\delta\log(16\pi)-r_{+}^{2}\right\}}{4\pi r_{+}^{2}\left(a^{2}+r_{+}^{2}\right)^{2}}. \quad (18)
+ Figure 3: Helmholtz free energy versus r+ for a = 0.2, Q = 0.4 (curves for δ = 0, 0.4, 0.6).
+ Fig. 3 shows the Helmholtz free energy versus the horizon radius. The free energy gradually decreases for the different values of the correction parameter \delta, while the uncorrected curve shows the opposite, increasing behaviour. This means the considered system shifts towards an equilibrium state, so no more work can be extracted from it. The internal energy (E = F + T\tilde{S}) for the corresponding geometry is given by [37]
+ E = \Big\{ r_{+}\left(a^{2}+r_{+}^{2}\right)\left(r_{+}(Q+r_{+})-a^{2}\right)\Big[\delta\Big(\log(16\pi)-\log\Big[\frac{\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}}{r_{+}^{2}\left(a^{2}+r_{+}^{2}\right)}\Big]\Big)+\pi\left(a^{2}+r_{+}^{2}\right)\Big]
+ \;+\left[a^{4}-r_{+}^{2}\left(-4a^{2}+2Qr_{+}+r_{+}^{2}\right)\right]\Big[-\delta\log\Big[\frac{\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}}{r_{+}^{2}\left(a^{2}+r_{+}^{2}\right)}\Big]+a^{2}+\delta\log(16\pi)+r_{+}^{2}\Big]\Big\}\Big[4\pi r_{+}^{2}\left(a^{2}+r_{+}^{2}\right)^{2}\Big]^{-1}. \quad (19)
+ Figure 4: Internal energy versus r+ for a = 0.2, Q = 0.4 (curves for δ = 0.2, 0.4, 0.6, 0.8).
+ The graphical behaviour of the internal energy for different values of the horizon radius is shown in Fig. 4. For small radii the uncorrected curve gradually decreases and even becomes negative, while the corrected internal energy remains positive. This means the considered BH absorbs more and more heat from its surroundings to maintain its state. Since a BH is treated as a thermodynamic system, another important thermodynamic quantity is the pressure, which is closely connected to the volume V = \frac{2\pi\left(r_{+}^{2}+a^{2}\right)\left(2r_{+}^{2}+a^{2}\right)}{3r_{+}}. The BH pressure (P = -dF/dV) under the effect of thermal fluctuations takes the form
+ P = \Big\{ 2r_{+}^{2}\left(a^{2}+r_{+}^{2}\right)\left(-4a^{2}+3Qr_{+}+2r_{+}^{2}\right)\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}\Phi
+ \;-2\left(r_{+}(Q+r_{+})-a^{2}\right)\left[a^{4}-r_{+}^{2}\left(-4a^{2}+2Qr_{+}+r_{+}^{2}\right)\right]\left[-a^{4}\delta+Qr_{+}^{3}\left(a^{2}+\delta\right)-r_{+}^{2}\left(a^{4}+3a^{2}\delta\right)+Qr_{+}^{5}+r_{+}^{6}\right]
+ \;+4r_{+}^{2}\left[a^{4}-r_{+}^{2}\left(-4a^{2}+2Qr_{+}+r_{+}^{2}\right)\right]\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}\Phi
+ \;+2\left(a^{2}+r_{+}^{2}\right)\left[a^{4}-r_{+}^{2}\left(-4a^{2}+2Qr_{+}+r_{+}^{2}\right)\right]\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}\Phi \Big\}
+ \times\Big[4\pi r_{+}^{3}\left(a^{2}+r_{+}^{2}\right)^{3}\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}\Big]^{-1}, \quad (20)
+ where \Phi \equiv -\delta\log\!\left[\dfrac{\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}}{r_{+}^{2}\left(a^{2}+r_{+}^{2}\right)}\right]+a^{2}+\delta\log(16\pi)+r_{+}^{2}.
+ Figure 5: Pressure versus r+ for a = 0.2, Q = 0.4 (curves for δ = 0, 0.2, 0.4, 0.6).
+ In Fig. 5, the pressure of the uncorrected system essentially coincides with the equilibrium state, while for the different values of the correction parameter the pressure of the considered system increases significantly. Another important thermodynamic quantity, the enthalpy (H = E + PV), is given in Appendix B.
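The chain of definitions used here (T from Eq. (12), the corrected entropy, the volume V, F = −∫ S̃ dT, and P = −dF/dV) can also be checked numerically without the lengthy closed forms. The sketch below is illustrative only: the reference radius r0, the sample radius, and δ are arbitrary choices, and F is fixed only up to an additive constant (which drops out of dF/dV):

```python
import math

# Illustrative numerical sketch: build F = -∫ S̃ dT along r_+ and obtain the
# pressure via the chain rule P = -(dF/dr)/(dV/dr).  a, Q match the figures;
# delta, r0, and the sample radius are arbitrary.
a, Q, delta = 0.2, 0.4, 0.4

def T(r):       # Hawking temperature, as in Eq. (12)
    return (-a**2 + Q*r + r**2) / (4*math.pi*r*(a**2 + r**2))

def S_corr(r):  # corrected entropy, as in Eq. (17)
    return math.pi*(a**2 + r**2) - delta*math.log(
        (a**2 - r*(Q + r))**2 / (16*math.pi*r**2*(a**2 + r**2)))

def V(r):       # thermodynamic volume
    return 2*math.pi*(r**2 + a**2)*(2*r**2 + a**2) / (3*r)

def deriv(f, r, h=1e-6):
    return (f(r + h) - f(r - h)) / (2*h)

def F(r, r0=0.5, n=2000):
    # F = -∫ S̃ dT, done as -∫ S̃(x) T'(x) dx (midpoint rule) from r0 to r
    total, step = 0.0, (r - r0)/n
    for i in range(n):
        x = r0 + (i + 0.5)*step
        total += S_corr(x)*deriv(T, x)*step
    return -total

r = 1.0
P = -deriv(F, r) / deriv(V, r)
print(P)
```

Since F(r) = −∫ S̃ dT by construction, dF/dr should equal −S̃(r) T′(r), which makes a convenient internal consistency check on the quadrature.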
+ Figure 6: Enthalpy versus r+ for a = 0.2, Q = 0.4 (curves for δ = 0, 0.4, 0.6, 0.8).
+ From Fig. 6 it can be observed that the usual enthalpy coincides with the corrected ones and decreases abruptly, even becoming negative. This means that exothermic reactions occur, i.e., a huge amount of energy is released into the surroundings. Taking the thermal fluctuations into account, the Gibbs free energy (G = H - T\tilde{S}) is given in Appendix B.
+ Figure 7: Gibbs free energy versus r+ for a = 0.2, Q = 0.4 (curves for δ = 0, 0.4, 0.6, 0.8).
+ The graphical analysis of the Gibbs free energy with respect to the horizon radius is shown in Fig. 7. The positivity of this energy signals non-spontaneous reactions, meaning the system requires more energy to reach the equilibrium state. After this detailed discussion of the thermodynamic quantities, another important issue is the stability of the system, which is checked via the specific heat (C_{\tilde{S}} = dE/dT), given as
+ C_{\tilde{S}} = 2\left(a^{2}+3r_{+}^{2}\right)\Big[ r_{+}\Big( r_{+}\big\{ a^{2}(\pi Q-5) + \delta(Q-4)\big(\log(16\pi)-L\big) \big\} + r_{+}\Big\{ -\pi a^{4} - 2\delta Q\,L + r_{+}\Big( -\delta(Q+1)L + r_{+}\big[ -\delta L + \pi a^{2} + \delta\log(16\pi) + r_{+}\left(\pi Q+\pi r_{+}+1\right) + 2Q \big] + a^{2}(2\pi Q-3) + \delta(Q+1)\log(16\pi) \Big) + 2a^{2}Q + 2\delta Q\log(16\pi) \Big\} \Big)
+ \;- a^{4}\big( -\delta L + \pi a^{2} + \delta\log(16\pi) \big) - a^{4}\big( -\delta L + a^{2} + \delta\log(16\pi) \big) \Big]\Big[ r_{+}\left(a^{2}+r_{+}^{2}\right)\left(-a^{4}-4a^{2}r_{+}^{2}+2Qr_{+}^{3}+r_{+}^{4}\right)\Big]^{-1}, \quad (21)
+ where L \equiv \log\!\left[\dfrac{\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}}{r_{+}^{2}\left(a^{2}+r_{+}^{2}\right)}\right].
+ Figure 8: Specific heat versus r+ for a = 0.2, Q = 0.4 (curves for δ = 0, 0.4, 0.6, 0.8).
+ In Fig. 8, the behaviour of the specific heat is shown with respect to the horizon radius for different choices of the correction parameter \delta. The uncorrected quantity (black) is negative, meaning the system is unstable, while the corrected specific heat is positive throughout the considered domain; this positivity indicates a stable region. It can be concluded that the correction terms make the system stable under thermal fluctuations.
+ IV. DISCUSSION AND RESULTS
+ In this paper, we have utilized the Lagrangian gravity equation to study the tunneling of bosonic particles through the horizon of a Horndeski-like BH. We have used the metric from Ref. [57]. We have considered a new version of the black hole with a Horndeski parameter Q and a rotation parameter a; owing to the presence of these parameters we call the metric a new type of spacetime. Our results are also expressed in terms of these parameters, and therefore differ from the previous literature on the thermodynamics of this black hole. In relativistic quantum mechanics, particle pairs are continuously produced and annihilated in the vacuum region. The tunneling radiation can be viewed as a quantum mechanical process in which the positive-energy boson particle escapes through the horizon while the negative-energy boson particle moves inward and is absorbed by the BH. The motion of the incoming and outgoing boson particles corresponds to a real and a complex particle action, respectively. The emission rate of this tunneling radiation for the Horndeski-like BH configuration is associated with the imaginary part of the action, which in turn is related to the Boltzmann factor; this factor gives T_H for the Horndeski-like BH. From our investigation we have observed that, for the rotating BH, the T_H at which boson particles tunnel through the horizon does not depend on the type of particle. In particular, for particles of different spin (zero, up, or down) the tunneling rate is the same in the semiclassical approximation, so the corresponding T_H must be the same for all types of particle. One can therefore say that the tunneling radiation is independent of the kind of particle, and this result also holds in different coordinate frames via the corresponding coordinate transformations. In this procedure, the tunneling of particles depends on the particle energy, momentum, quantum gravity parameter, hairy parameter of Horndeski gravity, and the BH surface gravity, while the temperature depends on the hairy parameter of Horndeski gravity, the rotation parameter, and the quantum gravity parameter. It is worth mentioning that for β = 0 we obtain the standard temperature of the Horndeski-like BH. In the absence of charge, i.e., Q = 0, the temperature reduces to the Kerr BH temperature. For β = 0 and a = 0, it reduces to the Reissner-Nordström BH temperature. Moreover, for Q = 0 = a we recover the temperature of the Schwarzschild BH. For values of β from 10 to 30 in the region 0 ≤ r+ ≤ 5, we have observed that the Horndeski-like BH is stable, while for Q from 0.5 to 1.5 it is unstable with negative temperature. Moreover, the temperature increases with increasing values of the quantum gravity parameter β.
+ Furthermore, we have computed T_H as well as the heat capacity for Horndeski-gravity-like BHs. First, T_H is calculated through the entropy, and the density of states is obtained with the help of the inverse Laplace transform. We have observed that the exact entropy of the system depends on the Hawking temperature, applying the method of steepest descent under suitable conditions. Graphically, we have studied the monotonically increasing entropy of the metric throughout the assumed domain. The usual entropy decreases for certain values of the radius, whereas the corrected expression increases smoothly; we also find that thermal fluctuations are more effective for small BHs.
+ The Helmholtz free energy gradually decreases for the different values of the correction parameter δ, while the usual curve shows the opposite, increasing behaviour; therefore the considered system shifts towards equilibrium. For small radii the internal energy gradually decreases and even becomes negative, while the corrected internal energy remains positive, so the considered BH absorbs more and more heat from its surroundings to maintain its state. The pressure of the uncorrected system remains close to the equilibrium state, and for the different values of the correction parameter the pressure increases significantly.
+ The usual enthalpy coincides with the corrected ones and decreases abruptly, even becoming negative, which indicates exothermic reactions, i.e., a huge amount of energy released into the surroundings. The positivity of the Gibbs free energy signals non-spontaneous reactions, meaning the system requires more energy to reach the equilibrium state. The behaviour of the specific heat has been examined for different choices of the correction parameter δ: the uncorrected quantity is negative, meaning the system is unstable, while the corrected specific heat is positive throughout the considered domain, indicating a stable region. It can be concluded that these correction terms make the system stable under thermal fluctuations.
+ Appendix A
+ We have utilized the Lagrangian equation in the WKB approximation to obtain the following solutions:
+ \frac{I}{g^{-1}\left(fI+R^{2}\right)}\Big[\eta_{1}(\partial_{0}K_{0})(\partial_{1}K_{0})+\beta\eta_{1}(\partial_{0}K_{0})^{3}(\partial_{1}K_{0})-\eta_{0}(\partial_{1}K_{0})^{2}-\beta\eta_{0}(\partial_{1}K_{0})^{4}+\eta_{1}eA_{0}(\partial_{1}K_{0})+\eta_{1}\beta eA_{0}(\partial_{0}K_{0})^{2}(\partial_{1}K_{0})\Big]
+ +\frac{R}{g^{-1}\left(fI+R^{2}\right)}\Big[\eta_{3}(\partial_{1}K_{0})^{2}+\beta\eta_{3}(\partial_{1}K_{0})^{4}-\eta_{1}(\partial_{1}K_{0})(\partial_{3}K_{0})-\beta\eta_{1}(\partial_{1}K_{0})(\partial_{3}K_{0})^{2}\Big]
+ +\frac{I}{h\left(fI+R^{2}\right)}\Big[\eta_{2}(\partial_{0}K_{0})(\partial_{2}K_{0})+\beta\eta_{2}(\partial_{0}K_{0})^{3}(\partial_{2}K_{0})-\eta_{0}(\partial_{2}K_{0})^{2}-\beta\eta_{0}(\partial_{2}K_{0})^{4}+\eta_{2}eA_{0}(\partial_{2}K_{0})+\eta_{2}eA_{0}\beta(\partial_{0}K_{0})^{2}(\partial_{1}K_{0})\Big]
+ +\frac{fI}{\left(fI+R^{2}\right)^{2}}\Big[\eta_{3}(\partial_{0}K_{0})(\partial_{3}K_{0})+\beta\eta_{3}(\partial_{0}K_{0})^{3}(\partial_{3}K_{0})-\eta_{0}(\partial_{3}K_{0})^{2}-\beta\eta_{0}(\partial_{3}K_{0})^{4}+\eta_{3}eA_{0}(\partial_{3}K_{0})+\eta_{3}eA_{0}(\partial_{0}K_{0})^{2}(\partial_{3}K_{0})\Big]
+ -m^{2}\,\frac{I\eta_{0}-R\eta_{3}}{fI+R^{2}} = 0, \quad (22)
+ -\frac{I}{g^{-1}\left(fI+R^{2}\right)}\Big[\eta_{1}(\partial_{0}K_{0})^{2}+\beta\eta_{1}(\partial_{0}K_{0})^{4}-\eta_{0}(\partial_{0}K_{0})(\partial_{1}K_{0})-\beta\eta_{0}(\partial_{0}K_{0})(\partial_{1}K_{0})^{3}+\eta_{1}eA_{0}(\partial_{0}K_{0})+\beta\eta_{1}eA_{0}(\partial_{0}K_{0})^{3}\Big]
+ +\frac{R}{g^{-1}\left(fI+R^{2}\right)}\Big[\eta_{3}(\partial_{0}K_{0})(\partial_{1}K_{0})+\beta\eta_{3}(\partial_{0}K_{0})(\partial_{1}K_{0})^{3}-\eta_{1}(\partial_{0}K_{0})(\partial_{3}K_{0})-\beta\eta_{1}(\partial_{0}K_{0})(\partial_{3}K_{0})^{3}\Big]
+ +\frac{1}{g^{-1}h}\Big[\eta_{2}(\partial_{1}K_{0})(\partial_{2}K_{0})+\beta\eta_{2}(\partial_{1}K_{0})(\partial_{2}K_{0})^{3}-\eta_{1}(\partial_{2}K_{0})^{2}-\beta\eta_{1}(\partial_{2}K_{0})^{4}\Big]
+ +\frac{1}{g^{-1}\left(fI+R^{2}\right)}\Big[\eta_{3}(\partial_{1}K_{0})(\partial_{3}K_{0})+\beta\eta_{3}(\partial_{1}K_{0})(\partial_{3}K_{0})^{3}-\eta_{1}(\partial_{3}K_{0})^{2}-\beta\eta_{1}(\partial_{3}K_{0})^{4}\Big] - \frac{m^{2}\eta_{1}}{g^{-1}}
+ +\frac{eA_{0}I}{g^{-1}\left(fI+R^{2}\right)}\Big[\eta_{1}(\partial_{0}K_{0})+\beta\eta_{1}(\partial_{0}K_{0})^{3}-\eta_{0}(\partial_{1}K_{0})-\beta\eta_{0}(\partial_{1}K_{0})^{3}+eA_{0}\eta_{1}+\beta\eta_{1}eA_{0}(\partial_{0}K_{0})^{2}\Big]
+ +\frac{eA_{0}R}{g^{-1}\left(fI+R^{2}\right)}\Big[\eta_{3}(\partial_{1}K_{0})+\beta\eta_{3}(\partial_{1}K_{0})^{3}-\eta_{1}(\partial_{3}K_{0})-\beta\eta_{1}(\partial_{1}K_{0})^{3}\Big] = 0, \quad (23)
+ \frac{I}{h\left(fI+R^{2}\right)}\Big[\eta_{2}(\partial_{0}K_{0})^{2}+\beta\eta_{2}(\partial_{0}K_{0})^{4}-\eta_{0}(\partial_{0}K_{0})(\partial_{2}K_{0})-\beta\eta_{0}(\partial_{0}K_{0})(\partial_{2}K_{0})^{3}+\eta_{2}eA_{0}(\partial_{0}K_{0})+\beta\eta_{2}eA_{0}(\partial_{0}K_{0})^{3}\Big]
+ +\frac{1}{g^{-1}h}\Big[\eta_{2}(\partial_{1}K_{0})^{2}+\beta\eta_{2}(\partial_{1}K_{0})^{4}-\eta_{1}(\partial_{1}K_{0})(\partial_{2}K_{0})-\beta\eta_{1}(\partial_{1}K_{0})(\partial_{2}K_{0})^{3}\Big]
+ +\frac{R}{h\left(fI+R^{2}\right)}\Big[\eta_{2}(\partial_{0}K_{0})(\partial_{3}K_{0})+\beta\eta_{2}(\partial_{0}K_{0})^{3}(\partial_{3}K_{0})-\eta_{0}(\partial_{0}K_{0})(\partial_{3}K_{0})-\beta\eta_{0}(\partial_{0}K_{0})^{3}(\partial_{3}K_{0})+\eta_{2}eA_{0}(\partial_{3}K_{0})+\beta\eta_{2}eA_{0}(\partial_{3}K_{0})^{3}\Big]
+ +\frac{f}{h\left(fI+R^{2}\right)}\Big[\eta_{3}(\partial_{2}K_{0})(\partial_{3}K_{0})+\beta\eta_{3}(\partial_{2}K_{0})^{3}(\partial_{3}K_{0})-\eta_{2}(\partial_{3}K_{0})^{2}-\beta\eta_{2}(\partial_{3}K_{0})^{4}\Big]
+ +\frac{eA_{0}I}{h\left(fI+R^{2}\right)}\Big[\eta_{2}(\partial_{0}K_{0})+\beta\eta_{2}(\partial_{0}K_{0})^{3}-\eta_{0}(\partial_{2}K_{0})-\beta\eta_{0}(\partial_{2}K_{0})^{3}+\eta_{2}eA_{0}+\eta_{2}\beta eA_{0}(\partial_{0}K_{0})^{2}\Big] - \frac{m^{2}\eta_{2}}{h} = 0, \quad (24)
+ \frac{fI-f^{2}}{\left(fI+R^{2}\right)^{2}}\Big[\eta_{3}(\partial_{0}K_{0})^{2}+\beta\eta_{3}(\partial_{0}K_{0})^{4}-\eta_{0}(\partial_{0}K_{0})(\partial_{3}K_{0})-\beta\eta_{0}(\partial_{0}K_{0})(\partial_{3}K_{0})^{3}+eA_{0}\eta_{3}(\partial_{0}K_{0})+\beta\eta_{3}eA_{0}(\partial_{0}K_{0})^{3}\Big]
+ +\frac{I}{h\left(fI+R^{2}\right)}\Big[\eta_{3}(\partial_{1}K_{0})^{2}+\beta\eta_{3}(\partial_{1}K_{0})^{4}-\eta_{1}(\partial_{1}K_{0})(\partial_{3}K_{0})-\beta\eta_{1}(\partial_{1}K_{0})(\partial_{3}K_{0})^{3}\Big]
+ +\frac{R}{h\left(fI+R^{2}\right)}\Big[\eta_{2}(\partial_{0}K_{0})(\partial_{2}K_{0})+\beta\eta_{2}(\partial_{0}K_{0})^{3}(\partial_{2}K_{0})-\eta_{0}(\partial_{2}K_{0})^{2}-\beta\eta_{0}(\partial_{2}K_{0})^{4}+eA_{0}\eta_{2}(\partial_{2}K_{0})+\beta\eta_{2}eA_{0}(\partial_{0}K_{0})^{2}(\partial_{2}K_{0})\Big]
+ +\frac{eA_{0}f}{h\left(fI+R^{2}\right)}\Big[\eta_{3}(\partial_{2}K_{0})^{2}+\beta\eta_{3}(\partial_{2}K_{0})^{4}-\eta_{2}(\partial_{2}K_{0})(\partial_{3}K_{0})-\beta\eta_{2}(\partial_{0}K_{0})(\partial_{3}K_{0})^{3}\Big] - m^{2}\,\frac{R\eta_{0}-f\eta_{3}}{fI+R^{2}}
+ +\frac{eA_{0}\left(fI-f^{2}\right)}{\left(fI+R^{2}\right)^{2}}\Big[\eta_{3}(\partial_{0}K_{0})+\beta\eta_{3}(\partial_{0}K_{0})^{3}-\eta_{0}(\partial_{3}K_{0})-\beta\eta_{0}(\partial_{3}K_{0})^{3}+\eta_{3}eA_{0}+\eta_{3}\beta eA_{0}(\partial_{0}K_{0})^{2}\Big] = 0. \quad (25)
+ Appendix B
+ The thermodynamic quantity enthalpy is given as
+ H = \Big\{ r_{+}\left(a^{2}+r_{+}^{2}\right)\Big[ r_{+}\left(a^{2}+r_{+}^{2}\right)\left(r_{+}(Q+r_{+})-a^{2}\right)\delta\big(\log(16\pi)-L\big) + \pi\left(a^{2}+r_{+}^{2}\right)\left[a^{4}-r_{+}^{2}\left(-4a^{2}+2Qr_{+}+r_{+}^{2}\right)\right]\Psi \Big]
+ \;+\Big( 2r_{+}^{2}\left(a^{2}+r_{+}^{2}\right)\left(-4a^{2}+3Qr_{+}+2r_{+}^{2}\right)\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}\Psi - 2\left(r_{+}(Q+r_{+})-a^{2}\right)\left[a^{4}-r_{+}^{2}\left(-4a^{2}+2Qr_{+}+r_{+}^{2}\right)\right]\left[-a^{4}\delta+Qr_{+}^{3}\left(a^{2}+\delta\right)-r_{+}^{2}\left(a^{4}+3a^{2}\delta\right)+Qr_{+}^{5}+r_{+}^{6}\right]
+ \;+ 4r_{+}^{2}\left[a^{4}-r_{+}^{2}\left(-4a^{2}+2Qr_{+}+r_{+}^{2}\right)\right]\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}\Psi + 2\left(a^{2}+r_{+}^{2}\right)\left[a^{4}-r_{+}^{2}\left(-4a^{2}+2Qr_{+}+r_{+}^{2}\right)\right]\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}\Psi \Big)\left(a^{2}-r_{+}(Q+r_{+})\right)^{-2} \Big\}
+ \times\Big[4\pi r_{+}^{3}\left(a^{2}+r_{+}^{2}\right)^{3}\Big]^{-1}, \quad (26)
+ where, here and in Eq. (27), L \equiv \log\!\left[\dfrac{\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}}{r_{+}^{2}\left(a^{2}+r_{+}^{2}\right)}\right] and \Psi \equiv -\delta L + a^{2} + \delta\log(16\pi) + r_{+}^{2}.
+ The Gibbs free energy is expressed as
+ G = \Big\{ -r_{+}^{2}\left(a^{2}+r_{+}^{2}\right)^{2}\left(r_{+}(Q+r_{+})-a^{2}\right)\Psi + r_{+}\left(a^{2}+r_{+}^{2}\right)\Big[ r_{+}\left(a^{2}+r_{+}^{2}\right)\left(r_{+}(Q+r_{+})-a^{2}\right)\delta\big(\log(16\pi)-L\big) + \pi\left(a^{2}+r_{+}^{2}\right)\left[a^{4}-r_{+}^{2}\left(-4a^{2}+2Qr_{+}+r_{+}^{2}\right)\right]\Psi
+ \;+\Big( 2r_{+}^{2}\left(a^{2}+r_{+}^{2}\right)\left(-4a^{2}+3Qr_{+}+2r_{+}^{2}\right)\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}\Psi - 2\left(r_{+}(Q+r_{+})-a^{2}\right)\left[a^{4}-r_{+}^{2}\left(-4a^{2}+2Qr_{+}+r_{+}^{2}\right)\right]\left[-a^{4}\delta+Qr_{+}^{3}\left(a^{2}+\delta\right)-r_{+}^{2}\left(a^{4}+3a^{2}\delta\right)+Qr_{+}^{5}+r_{+}^{6}\right]
+ \;+ 4r_{+}^{2}\left[a^{4}-r_{+}^{2}\left(-4a^{2}+2Qr_{+}+r_{+}^{2}\right)\right]\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}\Psi + 2\left(a^{2}+r_{+}^{2}\right)\left[a^{4}-r_{+}^{2}\left(-4a^{2}+2Qr_{+}+r_{+}^{2}\right)\right]\left(a^{2}-r_{+}(Q+r_{+})\right)^{2}\Psi \Big)\left(a^{2}-r_{+}(Q+r_{+})\right)^{-2} \Big]\Big\}
+ \times\Big[4\pi r_{+}^{3}\left(a^{2}+r_{+}^{2}\right)^{3}\Big]^{-1}. \quad (27)
1887
+ [1] M. Sharif, W. Javed, Eur. Phys. J. C 72, 1997(2012).
1888
+ [2] M. Sharif, W. Javed, Int. J. Mod. Phys. Conf. Ser. 23(2013)271.
1889
+ [3] M. Sharif, W. Javed, Can. J. Phys. 91(2013)236.
1890
+ [4] S. W. Hawking, Commun. Math. Phys. 43(1975)199.
1891
+ [5] M. K. Parikh and F. Wilczek, Phys. Rev. Lett. 85(2000)5042.
1892
+ [6] M. K. Parikh, Int. J. Mod. Phys. D 13(2004)2351.
1893
+ [7] J. T. Firouzjaee and G. F. R. Ellis, Gen. Rel. Grav. 47(2015)6.
1894
+ [8] W. Javed, R. Babar, Adv. High Energy Phys. 2019(2019)2759641.
1895
+ [9] W. Javed, R. Babar, Chinese Journal of Phys. 61(2019)138.
1896
+ [10] W. Javed, R. Babar, The Fifteenth Marcel Grossmann Meeting, World Scientific 3(2022)905.
1897
+ [11] W. Javed, R. Babar, Punjab University Journal of Mathematics 52(2020)6.
1898
+ [12] W. Javed, R. Babar, A. ¨Ovg¨un, Mod. Phys. Lett. A 34(2019)1950057.
1899
+ [13] R. Babar, W. Javed, A. ¨Ovg¨un, Mod. Phys. Lett. A 35(2020)2050104.
1900
+ [14] M. Sharif, W. Javed, J. Korean Phys. Soc. 57(2010)217.
1901
+ [15] A. ¨Ovg¨un, K. Jusufi, Eur. Phys. J. Plus. 132(2017)298.
1902
+ [16] A. ¨Ovg¨un, Int. J. Theor. Phys. 55(2016)2919.
1903
+ [17] K. Jusufi, A. Ovgun, G. Apostolovska, Adv. High Energy Phys. 2017(2017)8798657.
1904
+ [18] I. Sakalli, A. ¨Ovg¨un, K. Jusufi, Astrophys Space Sci. 361(2016)330.
1905
+ [19] I. Sakalli, A. ¨Ovg¨un, General Relativity and Gravitation 48(2016)1.
1906
+ [20] A. ¨Ovg¨un, I. Sakalli, Int. J. Theor. Phys. 57(2018)322.
1907
+ [21] A. ¨Ovg¨un, I. Sakalli, J. Saavedra, C. Leiva, Mod. Phys. Lett. A 35(2020)2050163.
1908
+ [22] I. Sakalli, A. Ovgun, J. Exp. Theor. Phys. 121(2015)404.
1909
+ [23] R. Ali, K. Bamba, M. Asgher, M. F. Malik, S. A. A. Shah, Symmetry. 12(2020)1165.
1910
+ [24] R. Ali, K. Bamba, M. Asgher, S. A. A. Shah, Int. J. Mod. Phys. D 30(2021)2150002.
1911
+ [25] R. Ali, M. Asgher, M. F. Malik, Mod. Phys. Lett. A 35(2020)2050225.
1912
+ [26] W. Javed, R. Ali, R. Babar, A. ¨Ovg¨un, Eur. Phys. J. Plus 134(2019)511.
1913
+ [27] W. Javed, R. Ali, R. Babar, A. ¨Ovg¨un, Chinese Phys. C 44(2020)015104.
1914
+ [28] K. Jusufi, A. ¨Ovg¨un, Astrophys Space Sci. 361(2016)207.
1915
+
1916
+ 14
1917
+ [29] R. Ali, M. Asgher, New Astronomy 93(2022)101759
1918
+ [30] R. Ali, R. Babar and P. K. Sahoo, Physics of the Dark Universe 35(2022)100948.
1919
+ [31] R. Ali, R. Babar, M. Asgher and S. A. A. Shah, Int. J. Geom. Methods Mod. Phys. 19(2022)2250017.
1920
+ [32] J. M. Bardeen, in Conference Proceedings of GR5 (Tbilisi, URSS, 1968), p. 174.
1921
+ [33] M. Faizal and M. M. Khalil, Int. J. Mod. Phys. A 30(2015)1550144.
1922
+ [34] B. Pourhassan and M. Faizal, Nucl. Phys. B 913(2016)834.
1923
+ [35] A. Jawad and M. U. Shahzad, Eur. Phys. J. C 77(2017)349.
1924
+ [36] M. Zhang, Nucl. Phys. B 935(2018)170.
1925
+ [37] P. Pradhan, Universe 5(2019)57.
1926
+ [38] K. Ghaderi, B. Malakolkalami, Nucl. Phys. B 903(2016)10.
1927
+ [39] W. X. Chen, Y. G. Zheng, arXiv preprint arXiv:2204.05470.
1928
+ [40] R. Ali, R. Babar, M. Asgher, and S. A. A. Shah, Annals of Physics, 432(2021)168572.
1929
+ [41] W. Javed, G. Abbas, R. Ali, Eur. Phys. J. C 77(2017)296.
1930
+ [42] A. ¨Ovg¨un, W. Javed, R. Ali, Adv. High Energy Phys. 2018(2018)11.
1931
+ [43] W. Javed, R. Ali, G. Abbas, Can. J. Phys. 97(2018)176.
1932
+ [44] R. Ali, K. Bamba, S. A. A. Shah, Symmetry. 631(2019)11.
1933
+ [45] M. Sharif and Z. Akhtar, Phys. Dark Universe 29(2020)100589.
1934
+ [46] M. Sharif and Z. Akhtar, Chin. J. Phys 71(2021)669.
1935
+ [47] W. Javed, Z. Yousaf and Z. Akhtar, Mod. Phys. Lett. A 33(2018) 1850089.
1936
+ [48] Z. Yousaf, K. Bamba, Z. Akhtar and W. Javed, Int. J. Geom. Methods Mod. 19(2022)2250102.
1937
+ [49] K. Bamba, M. Ilyas, M.Z.Bhatti, and Z. Yousaf, General Relativity and Gravitation, 49(8) (2017), pp.1-17.
1938
+ [50] M. Ilyas, Eur. Phys. J. C 78, 757 (2018).
1939
+ [51] M. Ilyas, International Journal of Modern Physics A 36.24 (2021): 2150165.
1940
+ [52] M. Ilyas, International Journal of Geometric Methods in Modern Physics 16.10 (2019): 1950149.
1941
+ [53] A. Ditta et al., Eur. Phys. J. C (2022) 82:756.
1942
+ [54] L. Hui, A. Nicolis, Phys. Rev. Lett. 110, 241104 (2013)
1943
+ [55] E. Babichev, C. Charmousis, A. Leh´ebel, JCAP 04, 027 (2017)
1944
+ [56] A. Nicolis, R. Rattazzi, E. Trincherini, Phys. Rev. D 79, 064036 (2009)
1945
+ [57] R. K. Walia, S. D. Maharaj and S. G. Ghosh, Eur. Phys. J. C 82(2022)547.
1946
+ [58] R. Kumar, S. G. Ghosh, A. Wang, Phys. Rev. D 101(2020)104001.
1947
+ [59] R. Kumar, S. G. Ghosh, Eur. Phys. J. C 78(2018)750.
1948
+ [60] D. Y. Chen, Q. Q. Jiang, X. T. Zua, Phys. Lett. B 665(2008)106.
1949
+ [61] R. P. Kerr, Phys. Rev. Lett. 11(1963)237.
1950
+
JdE0T4oBgHgl3EQfSACX/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
K9E0T4oBgHgl3EQfSgAu/content/2301.02222v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dcc91bc06566881867d61bf09246f423b290559483298b2486e7af4ae5441bfb
+ size 736781
K9E0T4oBgHgl3EQfSgAu/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b10d2018dd0ca5320939fcc871cc6f2f29f0b2c6c3f86dcb4d9965377ada870
+ size 222753
K9E0T4oBgHgl3EQfigH5/content/tmp_files/2301.02448v1.pdf.txt ADDED
@@ -0,0 +1,2484 @@
Springer Nature 2021 LATEX template

Optimal subsampling algorithm for composite quantile regression with distributed data

Xiaohui Yuan1, Shiting Zhou1† and Yue Wang1*†

1* School of Mathematics and Statistics, Changchun University of Technology, Changchun, 130012, Jilin, China.

*Corresponding author(s). E-mail(s): [email protected];
Contributing authors: [email protected];
† These authors contributed equally to this work.

Abstract

For massive data stored at multiple machines, we propose a distributed subsampling procedure for the composite quantile regression. By establishing the consistency and asymptotic normality of the composite quantile regression estimator from a general subsampling algorithm, we derive the optimal subsampling probabilities and the optimal allocation sizes under the L-optimality criterion. A two-step algorithm to approximate the optimal subsampling procedure is developed. The proposed methods are illustrated through numerical experiments on simulated and real datasets.

Keywords: Composite quantile regression, Distributed data, Massive data, Optimal subsampling
arXiv:2301.02448v1 [stat.CO] 6 Jan 2023

1 Introduction

With the rapid development of science and technology, extremely large datasets are ubiquitous and lay a heavy burden on storage and computation facilities. Many efforts have been made to deal with these challenges. There are three main directions from the view of statistical applications: divide-and-conquer, online updating, and subsampling. Among them, subsampling has been found to be useful for reducing computational burden and extracting information from massive data.

The idea of subsampling was first proposed by Jones (1956)[5]. A key tactic of subsampling methods is to specify nonuniform sampling probabilities so that more informative data points are included with higher probabilities. Examples include the leverage score-based subsampling in Ma et al. (2015)[6], the information-based optimal subdata selection in Wang et al. (2019)[12], and the optimal subsampling method under the A-optimality criterion in Wang et al. (2018)[11]. Recently, Fang et al. (2021)[2] applied subsampling to a weak-signal-assisted procedure for variable selection and statistical inference. Ai et al. (2021)[1] studied the optimal subsampling method for generalized linear models under the A-optimality criterion. Shao et al. (2022)[8] employed the optimal subsampling method in ordinary quantile regression.

Due to the large scale and fast arrival speed of data streams, massive data are often partitioned across multiple servers. For example, Walmart stores produce a large number of datasets from different locations around the world, which need to be processed. However, it is difficult to transmit these datasets to a central location, so it is common to analyze them on multiple machines. Qiu et al. (2020)[7] constructed a data stream classification model based on distributed processing. Sun et al. (2021)[10] proposed a data mining scheme for edge computing based on a distributed integration strategy. Zhang and Wang (2021)[17] proposed a distributed subdata selection method for the big data linear regression model. Zuo et al. (2021)[19] proposed a distributed subsampling procedure for logistic regression. Yu et al. (2022)[16] derived an optimal distributed Poisson subsampling procedure for maximum quasi-likelihood estimators with massive data.

In this paper, we investigate optimal distributed subsampling for composite quantile regression (CQR; Zou and Yuan (2008)[18]) with massive data. In a linear model, composite quantile regression can uniformly estimate the regression coefficients under heavy-tailed errors. Moreover, since the asymptotic variance of the composite quantile regression estimate does not depend on the moments of the error distribution, the CQR estimator is robust. The CQR method is widely used in many fields. For massive data, Jiang et al. (2018)[3] proposed a divide-and-conquer CQR method. Jin and Zhao (2021)[4] proposed a divide-and-conquer CQR neural network method. Wang et al. (2021)[13] proposed a distributed CQR method for massive data. Shao and Wang (2022)[9] and Yuan et al. (2022)[15] developed subsampling for composite quantile regression. To the best of our knowledge, there is almost no work on random subsampling for composite quantile regression with distributed data.

Based on the above motivation, we investigate optimal subsampling for composite quantile regression with massive data when the datasets are stored at different sites. We propose a distributed subsampling method in the context of CQR, and then study the optimal subsampling technology for the data on each machine. The main advantages of our method are as follows. First, we establish the convergence rate of the subsample-based estimator, which ensures the consistency of our proposed method. Second, it avoids the impact of different intercept terms in the datasets stored at different sites. Third, the computational speed of our subsampling method is much faster than the full data approach.

The rest of this article is organized as follows. In Section 2, we propose the distributed subsampling algorithm based on composite quantile regression, establish the asymptotic properties of the subsample-based estimators, and present a subsampling strategy with optimal subsampling probabilities and optimal allocation sizes. Simulation studies are given in Section 3. In Section 4, we study a real dataset. The content of the article is summarized in Section 5. All proofs are given in the Appendix.
2 Methods

2.1 Model and notation

Consider the following linear model

    y_{ik} = x_{ik}^T \beta_0 + \varepsilon_{ik},  i = 1, \ldots, n_k,  k = 1, \ldots, K,    (1)

where x_{ik} denotes a p-dimensional covariate vector, \beta_0 = (\beta_1, \ldots, \beta_p)^T \in \Theta is a p-dimensional vector of regression coefficients, n_k is the sample size of the kth dataset, n = \sum_{k=1}^K n_k is the total sample size, and K is the number of distributed datasets. Assume that the random error \varepsilon_{ik} has cumulative distribution function F(\cdot) and probability density function f(\cdot).

Let M be the composite level of the composite quantile regression, which does not depend on the sample size n. Given M, let \tau_m, m = 1, \ldots, M, be the specified quantile levels such that \tau_1 < \cdots < \tau_M. Write \theta_0 = (\theta_{01}, \ldots, \theta_{0(p+M)})^T = (\beta_0^T, b_0^T)^T and b_0 = (b_{01}, \ldots, b_{0M})^T, where b_{0m} = \inf\{u : F(u) \ge \tau_m\} for m = 1, \ldots, M. In this paper, we assume that the x_{ik}'s are nonrandom, and we are interested in inference about the unknown \theta_0 from the observed dataset

    D_n = \{D_{kn} = \{(x_{ik}^T, y_{ik}), i = 1, \ldots, n_k\}, k = 1, \ldots, K\}.

For \tau \in (0, 1) and u \in R, let \rho_\tau(u) = u\{\tau - I(u < 0)\} be the check loss function for the \tau-th quantile level. The CQR estimator of \theta based on the full dataset D_n is given by

    \hat\theta_F = (\hat\beta_F^T, \hat b_F^T)^T = \arg\min_{\beta, b} \sum_{k=1}^{K} \sum_{i=1}^{n_k} \sum_{m=1}^{M} \rho_{\tau_m}(y_{ik} - b_m - x_{ik}^T \beta).    (2)

Our aim is to construct a subsample-based estimator that can effectively approximate the full data estimator \hat\theta_F.
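Because each term in (2) is a piecewise-linear check loss, the full-data CQR fit can be computed exactly as a linear program. The sketch below is our illustration, not the paper's implementation: the helper name cqr_fit is ours, and it rewrites (2) with slack variables u_{im}, v_{im} >= 0 satisfying y_i = x_i^T beta + b_m + u_{im} - v_{im}, then solves the LP with SciPy's HiGHS backend on a toy noiseless dataset.

```python
import numpy as np
from scipy.optimize import linprog

def cqr_fit(X, y, taus):
    """Full-data CQR via LP: minimize sum_{i,m} tau_m*u_im + (1-tau_m)*v_im
    subject to y_i = x_i^T beta + b_m + u_im - v_im, u, v >= 0."""
    n, p = X.shape
    M = len(taus)
    nv = p + M + 2 * n * M                 # variables: beta, b, u, v
    c = np.zeros(nv)
    for m, t in enumerate(taus):
        c[p + M + m * n: p + M + (m + 1) * n] = t                      # u_im
        c[p + M + n * M + m * n: p + M + n * M + (m + 1) * n] = 1 - t  # v_im
    A = np.zeros((n * M, nv))
    rhs = np.zeros(n * M)
    for m in range(M):
        for i in range(n):
            row = m * n + i
            A[row, :p] = X[i]                          # x_i^T beta
            A[row, p + m] = 1.0                        # quantile intercept b_m
            A[row, p + M + m * n + i] = 1.0            # + u_im
            A[row, p + M + n * M + m * n + i] = -1.0   # - v_im
            rhs[row] = y[i]
    bounds = [(None, None)] * (p + M) + [(0, None)] * (2 * n * M)
    res = linprog(c, A_eq=A, b_eq=rhs, bounds=bounds, method="highs")
    return res.x[:p], res.x[p:p + M]

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))
beta0 = np.array([1.0, -2.0])
y = X @ beta0                 # noiseless toy data, so beta is recoverable
beta_hat, b_hat = cqr_fit(X, y, taus=[0.25, 0.5, 0.75])
```

With noiseless data the optimal objective is zero, which forces all slacks to vanish and pins down beta exactly; with real noisy data the same LP returns the CQR minimizer of (2).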
2.2 Subsampling algorithm and asymptotic properties

In this subsection, we propose a distributed subsampling algorithm to approximate \hat\theta_F. We first propose a subsampling method in Algorithm 1, which can reasonably select a subsample from distributed data.

Algorithm 1 Distributed Subsampling Algorithm:
• Sampling: Assign subsampling probabilities \{\pi_{ik}\}_{i=1}^{n_k} to the kth dataset D_k = \{(y_{ik}, x_{ik}), i = 1, \ldots, n_k\} with \sum_{i=1}^{n_k} \pi_{ik} = 1, where k = 1, \ldots, K. Given the total sampling size r, draw a random subsample of size r_k with replacement from D_k according to \{\pi_{ik}\}_{i=1}^{n_k}, where \{r_k\}_{k=1}^K are allocation sizes with \sum_{k=1}^K r_k = r. For i = 1, \ldots, r_k and k = 1, \ldots, K, we denote the corresponding responses, covariates, and subsampling probabilities as y_{ik}^*, x_{ik}^* and \pi_{ik}^*, respectively.
• Estimation: Based on the subsamples \{(y_{ik}^*, x_{ik}^*, \pi_{ik}^*), i = 1, \ldots, r_k\}_{k=1}^K, calculate the estimate \tilde\theta_s = (\tilde\beta_s^T, \tilde b_s^T)^T = \arg\min_\theta Q^*(\theta), where

    Q^*(\theta) = \frac{1}{n} \sum_{k=1}^{K} \frac{r}{r_k} \sum_{i=1}^{r_k} \sum_{m=1}^{M} \frac{\rho_{\tau_m}(y_{ik}^* - \beta^T x_{ik}^* - b_m)}{\pi_{ik}^*}.
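The two steps of the algorithm above can be sketched in code. This is our illustration under simplified interfaces (the helper names draw_subsamples and q_star are ours): each machine's data is sampled with replacement under its probabilities, and the inverse-probability-weighted objective Q*(theta) is evaluated on the pooled subsample.

```python
import numpy as np

def check_loss(u, tau):
    # rho_tau(u) = u * (tau - I(u < 0))
    return u * (tau - (u < 0))

def draw_subsamples(datasets, probs, sizes, rng):
    """Sampling step: with-replacement draws of size r_k from machine k.

    datasets[k] = (y_k, X_k); probs[k] sums to 1; sizes[k] = r_k."""
    subs = []
    for (y, X), pi, rk in zip(datasets, probs, sizes):
        idx = rng.choice(len(y), size=rk, replace=True, p=pi)
        subs.append((y[idx], X[idx], pi[idx]))
    return subs

def q_star(theta, subsamples, taus, n, r):
    """Estimation step: the weighted CQR objective Q*(theta)."""
    p = subsamples[0][1].shape[1]
    beta, b = theta[:p], theta[p:]
    total = 0.0
    for y_s, X_s, pi_s in subsamples:
        rk = len(y_s)
        resid = (y_s - X_s @ beta)[:, None] - b[None, :]          # (r_k, M)
        losses = check_loss(resid, np.asarray(taus)[None, :]).sum(axis=1)
        total += (r / rk) * np.sum(losses / pi_s)                 # IPW terms
    return total / n
```

Minimizing q_star over theta (with any convex non-smooth optimizer, or the LP reformulation) yields the subsample estimate.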
To establish the asymptotic properties of the subsample-based estimator \tilde\theta_s, we need the following assumptions.

(A.1) Assume that f(t) is continuous in t and 0 < f(b_{0m}) < +\infty for 1 \le m \le M. Let \tilde{x}_{ik,m} = (x_{ik}^T, e_m^T)^T, where e_m denotes an M \times 1 vector that has a one in its mth coordinate and zeros elsewhere. Define

    E_n = \frac{1}{n} \sum_{k=1}^{K} \sum_{i=1}^{n_k} \sum_{m=1}^{M} f(b_{0m})\, \tilde{x}_{ik,m} \tilde{x}_{ik,m}^T.    (3)

Assume that there exists a positive definite matrix E such that E_n \to E, and \max_{1\le k\le K,\, 1\le i\le n_k} \|x_{ik}\| = o(n^{1/2}).

(A.2) Assume that

    \max_{1\le k\le K,\, 1\le i\le n_k} \frac{\|x_{ik}\| + 1}{r_k \pi_{ik}} = o_p\!\left(\frac{n}{r^{1/2}}\right).    (4)

Define

    V_\pi = \frac{1}{n^2} \sum_{k=1}^{K} \frac{r}{r_k} \sum_{i=1}^{n_k} \frac{1}{\pi_{ik}} \left[\sum_{m=1}^{M} \{I(\varepsilon_{ik} < b_{0m}) - \tau_m\} \tilde{x}_{ik,m}\right]^{\otimes 2},    (5)

where for a vector a, a^{\otimes 2} = a a^T. Assume that there exists a positive definite matrix V such that V_\pi \stackrel{p}{\longrightarrow} V, where \stackrel{p}{\longrightarrow} means convergence in probability.

Theorem 1. If Assumptions (A.1) and (A.2) hold, then conditional on D_n, as n \to \infty and r \to \infty with r/n = o(1), we have

    \Sigma^{-1/2} \sqrt{r}\,(\tilde\theta_s - \theta_0) \stackrel{d}{\longrightarrow} N(0, I),    (6)

where \stackrel{d}{\longrightarrow} denotes convergence in distribution and \Sigma = E_n^{-1} V_\pi E_n^{-1}.
2.3 Optimal subsampling strategy

Given r, we need to specify the subsampling probabilities \{\pi_{ik}\}_{i=1}^{n_k} and the allocation sizes \{r_k\}_{k=1}^K in Algorithm 1. A naive choice is the uniform subsampling strategy with \pi_{ik} = 1/n_k and r_k = [r n_k/n], where [\cdot] denotes the rounding operation. However, the uniform subsampling method is not optimal. As suggested by Wang et al. (2018)[11], we adopt a nonuniform subsampling strategy and determine the optimal allocation sizes and optimal subsampling probabilities by minimizing the trace of \Sigma in Theorem 1.

Since \Sigma = E_n^{-1} V_\pi E_n^{-1}, the optimal allocation sizes and subsampling probabilities require the calculation of E_n, which depends on the unknown density function f(\cdot). Following Wang and Ma (2021)[14], we derive the optimal subsampling probabilities under the L-optimality criterion. Note that E_n and V_\pi are nonnegative definite, so simple matrix algebra yields tr(\Sigma) = tr(V_\pi E_n^{-2}) \le tr(E_n^{-2})\, tr(V_\pi). Moreover, \Sigma depends on r_k and \pi_{ik} only through V_\pi, and E_n is free of r_k and \pi_{ik}. Hence, we suggest determining the optimal allocation sizes and optimal subsampling probabilities by directly minimizing tr(V_\pi) rather than tr(\Sigma), which effectively speeds up our subsampling algorithm.

Theorem 2. If r_k and \pi_{ik}, i = 1, \ldots, n_k, k = 1, \ldots, K, are chosen as

    \pi_{ik}^{Lopt} = \pi_{ik}^{Lopt}(\theta_0) = \frac{\left\|\sum_{m=1}^{M} \{\tau_m - I(\varepsilon_{ik} < b_{0m})\} \tilde{x}_{ik,m}\right\|}{\sum_{i=1}^{n_k} \left\|\sum_{m=1}^{M} \{\tau_m - I(\varepsilon_{ik} < b_{0m})\} \tilde{x}_{ik,m}\right\|},    (7)

and

    r_k^{Lopt} = r\, \frac{\sum_{i=1}^{n_k} \left\|\sum_{m=1}^{M} \{\tau_m - I(\varepsilon_{ik} < b_{0m})\} \tilde{x}_{ik,m}\right\|}{\sum_{k=1}^{K} \sum_{i=1}^{n_k} \left\|\sum_{m=1}^{M} \{\tau_m - I(\varepsilon_{ik} < b_{0m})\} \tilde{x}_{ik,m}\right\|},    (8)

then tr(V_\pi)/n attains its minimum.
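The norms in (7) and (8) are cheap to compute once residuals are available: the stacked score \sum_m \psi_m \tilde{x}_{ik,m}, with \psi_m = \tau_m - I(\varepsilon_{ik} < b_{0m}), has x-part (\sum_m \psi_m) x_{ik} and intercept part (\psi_1, \ldots, \psi_M). The following sketch (ours; the helper name l_opt_design is an assumption) evaluates the L-optimal probabilities and allocation sizes from pilot residuals.

```python
import numpy as np

def l_opt_design(X_list, eps_list, b, taus, r):
    """L-optimal probabilities (7) and allocation sizes (8).

    X_list[k]: (n_k, p) covariates; eps_list[k]: (n_k,) pilot residuals;
    b: (M,) pilot quantile intercepts; taus: (M,) quantile levels."""
    taus = np.asarray(taus)
    b = np.asarray(b)
    norms = []
    for X, eps in zip(X_list, eps_list):
        psi = taus[None, :] - (eps[:, None] < b[None, :])   # (n_k, M)
        s = psi.sum(axis=1)                                  # sum_m psi_m
        # norm of the stacked vector (s_i * x_i, psi_i1, ..., psi_iM)
        g = np.sqrt(s**2 * np.sum(X**2, axis=1) + np.sum(psi**2, axis=1))
        norms.append(g)
    totals = np.array([g.sum() for g in norms])
    probs = [g / t for g, t in zip(norms, totals)]   # (7), per machine
    sizes = r * totals / totals.sum()                # (8), sums to r
    return probs, sizes
```

Each machine only needs to ship the scalar total of its norms, so the allocation step costs one round of communication.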
2.4 Two-step algorithm

Note that the optimal subsampling probabilities and allocation sizes depend on \varepsilon_{ik} = y_{ik} - x_{ik}^T \beta_0 and b_{0m}, m = 1, \ldots, M, so the L-optimal weights are not directly implementable. To deal with this problem, we use a pilot estimator \tilde\theta to replace \theta_0. In the following, we propose a two-step subsampling procedure in Algorithm 2.

Algorithm 2 Two-Step Algorithm:
• Step 1: Given r_0, run Algorithm 1 with subsampling size r_k = [r_0 n_k/n] and \pi_{ik} = 1/n_k to obtain a pilot estimator \tilde\theta, where [\cdot] denotes the rounding operation. Replace \theta_0 with \tilde\theta in (7) and (8) to get the allocation sizes r_k(\tilde\theta) and subsampling probabilities \pi_{ik}(\tilde\theta), for i = 1, \ldots, n_k and k = 1, \ldots, K.
• Step 2: Based on \{r_k(\tilde\theta)\}_{k=1}^K and \{\pi_{ik}(\tilde\theta)\}_{i=1}^{n_k} from Step 1, select a subsample \{(y_{ik}^*, x_{ik}^*, \pi_{ik}^*) : i = 1, \ldots, r_k(\tilde\theta)\}_{k=1}^K from the full data D_n, and minimize the weighted function

    Q^*(\theta) = \sum_{k=1}^{K} \frac{r}{r_k(\tilde\theta)} \sum_{i=1}^{r_k(\tilde\theta)} \sum_{m=1}^{M} \frac{\rho_{\tau_m}(y_{ik}^* - \beta^T x_{ik}^* - b_m)}{\pi_{ik}^*}

to get the two-step subsample estimate \hat\theta_{Lopt} = (\hat\beta_{Lopt}^T, \hat b_{Lopt}^T)^T = \arg\min_\theta Q^*(\theta).
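The two-step flow can be wired together as below. This is our sketch, not the paper's code: the weighted-CQR solver and the L-optimal design routine are passed in as functions (their names and signatures are assumptions), so any concrete minimizer of the weighted objective can be plugged in.

```python
import numpy as np

def two_step(datasets, r0, r, taus, fit_weighted_cqr, l_opt_design, rng):
    """Sketch of Algorithm 2 with pluggable components.

    fit_weighted_cqr(subsamples, taus, n, r) -> (beta, b) is assumed to
    minimize the inverse-probability-weighted CQR objective;
    l_opt_design implements (7) and (8) from pilot residuals."""
    n = sum(len(y) for y, _ in datasets)
    # Step 1: uniform pilot subsample with proportional allocation.
    pilot = []
    for y, X in datasets:
        rk = max(1, round(r0 * len(y) / n))
        idx = rng.choice(len(y), size=rk, replace=True)
        pilot.append((y[idx], X[idx], np.full(rk, 1.0 / len(y))))
    beta_t, b_t = fit_weighted_cqr(pilot, taus, n, r0)
    # Plug the pilot estimate into (7) and (8).
    eps_list = [y - X @ beta_t for y, X in datasets]
    probs, sizes = l_opt_design([X for _, X in datasets], eps_list, b_t, taus, r)
    sizes = np.maximum(1, np.round(sizes)).astype(int)
    # Step 2: optimal subsample and final weighted fit.
    subs = []
    for (y, X), pi, rk in zip(datasets, probs, sizes):
        idx = rng.choice(len(y), size=rk, replace=True, p=pi)
        subs.append((y[idx], X[idx], pi[idx]))
    return fit_weighted_cqr(subs, taus, n, r)
```

Only the pilot residual pass touches the full data; everything else operates on subsamples of size r_0 and r.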
For the subsample-based estimator \hat\theta_{Lopt} in Algorithm 2, we give its asymptotic distribution in the following theorem.

Theorem 3. If Assumptions (A.1) and (A.2) hold, then as r_0 \to \infty, r \to \infty and n \to \infty, we have

    \Sigma^{-1/2} \sqrt{r}\,(\hat\theta_{Lopt} - \theta_0) \stackrel{d}{\longrightarrow} N(0, I),    (9)

where \stackrel{d}{\longrightarrow} denotes convergence in distribution and \Sigma = E_n^{-1} V_\pi E_n^{-1}. Here

    V_\pi = \frac{1}{n^2} \sum_{k=1}^{K} \frac{r}{r_k^{Lopt}} \sum_{i=1}^{n_k} \frac{1}{\pi_{ik}^{Lopt}} \left[\sum_{m=1}^{M} \{I(\varepsilon_{ik} < b_{0m}) - \tau_m\} \tilde{x}_{ik,m}\right]^{\otimes 2},    (10)

where \pi_{ik}^{Lopt} and r_k^{Lopt} are given in (7) and (8), respectively.

For statistical inference about \theta_0, to avoid estimating f(b_{0m}), we propose the following iterative sampling procedure.

First, using \{\pi_{ik}^{Lopt}(\tilde\theta)\}_{i=1}^{n_k} proposed in Algorithm 2, we sample with replacement to obtain B subsamples, \{(y_{ik}^{*,j}, x_{ik}^{*,j}, \pi_{ik}^{*,j}), i = 1, \ldots, r_k^{Lopt}(\tilde\theta), k = 1, \ldots, K\} for j = 1, \ldots, B. Next, we calculate the jth estimate of \theta_0 through

    \hat\theta_{Lopt,j} = (\hat\beta_{Lopt,j}^T, \hat b_{Lopt,j}^T)^T = \arg\min_{\theta} \sum_{k=1}^{K} \frac{r}{r_k^{Lopt}(\tilde\theta)} \sum_{i=1}^{r_k^{Lopt}(\tilde\theta)} \sum_{m=1}^{M} \frac{\rho_{\tau_m}(y_{ik}^{*,j} - \beta^T x_{ik}^{*,j} - b_m)}{\pi_{ik}^{*,j}}.

The combined estimate can be obtained by

    \hat\theta_L = (\hat\beta_L^T, \hat b_L^T)^T = \frac{1}{B} \sum_{j=1}^{B} \hat\theta_{Lopt,j},    (11)

and its variance-covariance matrix \Omega = cov(\hat\theta_L) can be estimated by

    \hat\Omega = \frac{1}{r_{ef} B(B-1)} \sum_{j=1}^{B} (\hat\theta_{Lopt,j} - \hat\theta_L)^{\otimes 2},    (12)

where r_{ef} is the effective subsample size ratio (Wang and Ma, 2021[14]) given by

    r_{ef} = \frac{1}{K} \sum_{k=1}^{K} \left[1 - \frac{r_k B - 1}{2} \sum_{i=1}^{n_k} \{\pi_{ik}^{Lopt}(\tilde\theta)\}^2\right].

From Theorem 3, for any fixed B, the conditional distribution of \sqrt{rB}\,(\hat\theta_L - \theta_0) satisfies

    \{E_n^{-1} V_\pi E_n^{-1}\}^{-1/2} \sqrt{rB}\,(\hat\theta_L - \theta_0) \stackrel{d}{\longrightarrow} N(0, I).

The distribution of \hat\theta_{Lopt} can be approximated by the empirical distribution of \{\hat\theta_{Lopt,j}\}_{j=1}^B. For s = 1, \ldots, p + M, the 100(1-\alpha)% confidence interval of \theta_{0s} can be approximated by [\hat\theta_{L,s} - \hat\omega_{ss}^{1/2} z_{1-\alpha/2},\ \hat\theta_{L,s} + \hat\omega_{ss}^{1/2} z_{1-\alpha/2}], where \hat\theta_{L,s} is the sth element of \hat\theta_L, \hat\omega_{ss} is the (s, s)th element of \hat\Omega, and z_{1-\alpha/2} is the 1-\alpha/2 quantile of the standard normal distribution.
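The combination rule (11), the variance estimate (12) and the normal-quantile intervals above can be sketched as follows. This is our illustration (combine_estimates is our helper name), and the grouping (r_k B - 1)/2 in r_ef is our reading of the display for the effective subsample size ratio.

```python
import numpy as np
from statistics import NormalDist

def combine_estimates(theta_js, probs, sizes, B, alpha=0.05):
    """Combined estimate (11), variance estimate (12), and normal CIs.

    theta_js: (B, d) array of the B subsample estimates;
    probs[k]: sampling probabilities on machine k; sizes[k]: r_k."""
    theta_js = np.asarray(theta_js)
    theta_bar = theta_js.mean(axis=0)                       # (11)
    # effective subsample size ratio, averaged over machines
    r_ef = np.mean([1.0 - (rk * B - 1) / 2.0 * np.sum(pi**2)
                    for pi, rk in zip(probs, sizes)])
    dev = theta_js - theta_bar
    omega = dev.T @ dev / (r_ef * B * (B - 1))              # (12)
    z = NormalDist().inv_cdf(1 - alpha / 2)                 # z_{1-alpha/2}
    half = z * np.sqrt(np.diag(omega))
    return theta_bar, omega, (theta_bar - half, theta_bar + half)
```

Since only the B low-dimensional estimates enter (12), no density estimation of f(b_{0m}) is needed.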
3 Numerical studies

In this section, we conduct simulation studies to evaluate the performance of the proposed optimal subsampling algorithm. Simulations were performed on a laptop running Windows 10 with an Intel i7 processor and 16 GB of memory.

Full data are generated from the model

    y_{ik} = x_{ik}^T \beta_0 + \varepsilon_{ik},  i = 1, \ldots, n_k,  k = 1, \ldots, K,

with the true parameter \beta_0 = (1, 1, 1, 1, 1)^T. We consider the following four cases for the error term \varepsilon: (1) the standard normal distribution, N(0, 1); (2) the mixture normal distribution, 0.5N(0, 1) + 0.5N(0, 9); (3) the Student's t distribution with three degrees of freedom, t(3); (4) the standard Cauchy distribution, Cauchy(0, 1).

We consider the following four cases for the covariate x:
Case I: x_{ik} ~ N(0, \Sigma), where \Sigma = (0.5^{|s-t|})_{s,t}.
Case II: x_{ik} ~ N(0, \Sigma), where \Sigma = (0.5^{I(s \ne t)})_{s,t}.
Case III: x_{ik} ~ t_3(0, \Sigma), the multivariate t distribution with three degrees of freedom and \Sigma = (0.5^{|s-t|})_{s,t}.
Case IV: Set K = 5, x_{i1} ~ N_5(0, I), x_{i2} ~ N_5(0, \Sigma_1), x_{i3} ~ N_5(0, \Sigma_2), x_{i4} ~ t_3(0, \Sigma_1) and x_{i5} ~ t_5(0, \Sigma_1), where \Sigma_1 = (0.5^{|s-t|})_{s,t} and \Sigma_2 = (0.5^{I(s \ne t)})_{s,t}.

Note that in Cases I-III the covariate distributions are identical for all distributed datasets, while in Case IV the covariates have different distributions across the distributed datasets.

All simulations are based on 1000 replications. We set the sample size of each dataset as n_k = [n u_k / \sum_{k=1}^K u_k], where [\cdot] denotes the rounding operation and the u_k are generated from the uniform distribution over (1, 2), with K = 5 and 10, respectively. We use the quantile levels \tau_m = m/16, m = 1, \ldots, 15, for the composite quantile regression.
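The simulation design above (random machine sizes n_k and the Case I covariance) can be generated as follows; this is our sketch (make_design is our helper name), shown here for the N(0, 1) error only.

```python
import numpy as np

def make_design(n, K, p, rng):
    """Machine sizes n_k = [n*u_k / sum(u)] with u_k ~ Uniform(1, 2),
    and Case I covariates with covariance Sigma = (0.5^|s-t|)."""
    u = rng.uniform(1.0, 2.0, size=K)
    sizes = np.round(n * u / u.sum()).astype(int)
    idx = np.arange(p)
    sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])   # AR(1)-type matrix
    L = np.linalg.cholesky(sigma)
    beta0 = np.ones(p)
    datasets = []
    for nk in sizes:
        X = rng.normal(size=(nk, p)) @ L.T               # x ~ N(0, Sigma)
        y = X @ beta0 + rng.normal(size=nk)              # eps ~ N(0, 1)
        datasets.append((y, X))
    return datasets, sizes, sigma
```

Swapping the error draw for a mixture normal, t(3) or Cauchy sample reproduces the other three error cases.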
In Table 1, we report the simulation results on the subsample-based estimator of \beta_1 (the other \beta_i's are similar and omitted) with K = 5 and K = 10, respectively, including the estimated bias (Bias) and the standard deviation (SD) of the estimates, where r_0 = 200 and n = 10^6 in Case I. The biases and SDs of the proposed subsample estimate for Case IV with n = 10^6 and n = 10^7 are presented in Table 2. The subsample sizes are r = 200, 400, 600, 800 and 1000, respectively. It can be seen from the results that the subsample-based estimator is unbiased, and that its performance improves as r increases, which confirms the theoretical consistency of the subsampling methods.

For comparison, we consider the uniform subsampling method (Uniform) with \pi_{ik} = 1/n_k and r_k = [r n_k/n] for i = 1, \ldots, n_k and k = 1, \ldots, K. We calculate the empirical mean squared error (MSE) of the uniform subsampling estimator (Unif) and our optimal subsampling estimator (Lopt) based on 1000 repetitions of the simulation. Figures 1 and 2 present the MSEs of each method for Case I with K = 5 and K = 10, where n = 10^6. Figure 3 presents the MSEs of the subsampling estimators for Case IV with n = 10^6, n = 10^7 and \varepsilon ~ N(0, 1). From these results, we can see that the MSEs of our method (Lopt) are much smaller than those of the uniform subsampling method (Unif). The results also indicate that our method works well with heterogeneous covariates, i.e., when the covariates have different distributions in different data blocks.

In the following, we evaluate the computational efficiency of our two-step subsampling algorithm. The data generation mechanism is the same as above. For a fair comparison, we count the CPU time with one core, based on the mean computation time over 1000 repetitions of each subsample-based method. In Table 3, we report the results for Case I and the normal error with n = 10^6, K = 5, r_0 = 200 and different r; the computing time for the full data method is given in the last row. Note that uniform subsampling requires the least computing time, because its subsampling probabilities \pi_{ik} = 1/n_k and allocation sizes r_k = [r n_k/n] take no time to compute. Our subsampling algorithm has a great computational advantage over the full data method. To further investigate the computational gain of the subsampling approach, we increase the dimension p to 30 with the true parameter \beta_0 = (0.5, \ldots, 0.5)^T. Table 4 presents the computing times for Case I and the normal error with r_0 = 200, r = 1000, K = 5, and n = 10^4, 10^5, 10^6 and 10^7, respectively. It is clear that both subsampling methods take significantly less computing time than the full data approach.

To investigate the performance of \hat\Omega in (12), we compare the empirical mean squared error (EMSE, 1000^{-1} \sum_{s=1}^{1000} \|\hat\beta_L^{(s)} - \beta_0\|^2) and the average estimated mean squared error (AMSE) of \hat\beta_L in (11) with different B. In Table 5, we report the average lengths of the confidence intervals and the 95% coverage probabilities (CP) of our subsample-based estimator for \beta_1 (the other \beta_i's are similar and omitted) with n = 10^6, r = 1000 and K = 5. Figures 4-7 present the EMSEs and AMSEs of \hat\beta_L. For all cases, the AMSEs are very close to the EMSEs, and both become smaller as B increases.
4 A real data example

In this section, we apply our method to the USA airline data, which are publicly available at http://stat-computing.org/datastore/2009/the-data.html. The data include detailed information on the arrivals and departures of all commercial flights in the USA from 1987 to 2008, and they are stored in 22 separate files (K = 22). The raw dataset is as large as 10 GB on a hard drive. We use composite quantile regression to model the relationship between the arrival delay time, y, and three covariates: x_1, weekend/weekday status (binary; 1 if departure occurred during the weekend, 0 otherwise); x_2, the departure delay time; and x_3, the distance. Since y, x_2 and x_3 are on different scales, we normalize them first. In addition, we drop the NA values in the dataset, which leaves n = 115,257,291 observations with complete information on y and x. Table 6 shows the cleaned data.
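The cleaning just described (dropping NAs, building the weekend indicator, and standardizing y, x_2, x_3) might look like the following per-file sketch. This is entirely our illustration: the argument names stand in for the raw airline columns, which are not named in the text, and the day-of-week coding (6 = Saturday, 7 = Sunday) is an assumption.

```python
import numpy as np

def standardize(v):
    # center to mean 0 and scale to unit standard deviation
    return (v - v.mean()) / v.std()

def clean_block(day_of_week, dep_delay, distance, arr_delay):
    """Hypothetical cleaning for one yearly file: drop rows with NAs,
    build the weekend indicator x1, and normalize y, x2, x3."""
    cols = [day_of_week, dep_delay, distance, arr_delay]
    ok = ~np.any([np.isnan(c) for c in cols], axis=0)
    dow, x2, x3, y = (c[ok] for c in cols)
    x1 = (dow >= 6).astype(float)        # assumed: 6 = Saturday, 7 = Sunday
    return standardize(y), x1, standardize(x2), standardize(x3)
```

Applied independently to each of the 22 files, this yields the distributed datasets D_1, ..., D_22 used by the subsampling algorithm.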
We use the quantile levels \tau_m = m/16, m = 1, \ldots, 15, for the composite quantile regression. For comparison, the full-data estimate of the regression parameters is \hat\beta_F = (-0.0451, 0.9179, -0.0248)^T. The proposed point estimate \hat\beta_L and the corresponding confidence intervals with different r and B are presented in Table 7. It can be seen from Table 7 that the subsample estimator \hat\beta_L is close to \hat\beta_F. In Figure 8, we present the MSEs of both subsampling methods based on 1000 subsamples with r = 200, 400, 600, 800 and 1000, respectively. The MSEs of the optimal subsampling estimator are smaller than those of the uniform subsampling estimator.
5 Conclusion

We have studied the statistical properties of a subsampling algorithm for the composite quantile regression model with distributed massive data. We derived the optimal subsampling probabilities and optimal allocation sizes, and established the asymptotic properties of the subsample estimator. Simulations and a real data example were provided to check the performance of our method.

Appendix

Proof of Theorem 1

Define

    A_r^*(u) = \frac{1}{n} \sum_{k=1}^{K} \frac{r}{r_k} \sum_{i=1}^{r_k} \sum_{m=1}^{M} \frac{1}{\pi_{ik}^*} A_{ik,m}^*(u),

where A_{ik,m}^*(u) = \rho_{\tau_m}(\varepsilon_{ik}^* - b_{0m} - u^T \tilde{x}_{ik,m}^*/\sqrt{r}) - \rho_{\tau_m}(\varepsilon_{ik}^* - b_{0m}), \tilde{x}_{ik,m}^* = (x_{ik}^{*T}, e_m^T)^T, and \varepsilon_{ik}^* = y_{ik}^* - \beta_0^T x_{ik}^*, i = 1, \ldots, r_k. Since A_r^*(u) is a convex function of u whose minimizer is \sqrt{r}(\tilde\theta_s - \theta_0), we can focus on A_r^*(u) when evaluating the properties of \sqrt{r}(\tilde\theta_s - \theta_0).

Let \psi_\tau(u) = \tau - I(u < 0). By Knight's identity (Knight, 1998),

    \rho_\tau(u - v) - \rho_\tau(u) = -v\psi_\tau(u) + \int_0^{v} \{I(u \le s) - I(u \le 0)\}\, ds,

we can rewrite A_{ik,m}^*(u) as

    A_{ik,m}^*(u) = -\frac{1}{\sqrt{r}} u^T \tilde{x}_{ik,m}^* \{\tau_m - I(\varepsilon_{ik}^* - b_{0m} < 0)\} + \int_0^{u^T \tilde{x}_{ik,m}^*/\sqrt{r}} \{I(\varepsilon_{ik}^* - b_{0m} \le s) - I(\varepsilon_{ik}^* - b_{0m} \le 0)\}\, ds.
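Knight's identity is easy to sanity-check numerically. The sketch below (ours) evaluates the right-hand side with a midpoint-rule integral and compares it to the left-hand side for a few (u, v, tau) triples, including a negative v.

```python
import numpy as np

def rho(u, tau):
    # check loss: rho_tau(u) = u * (tau - I(u < 0))
    return u * (tau - (u < 0))

def knight_rhs(u, v, tau, grid=200_000):
    """Right-hand side of Knight's identity; the integral over [0, v]
    is evaluated by a midpoint rule (the sign works out for v < 0)."""
    psi = tau - (u < 0)
    ds = v / grid
    mids = (np.arange(grid) + 0.5) * ds
    integral = ds * np.sum((u <= mids).astype(float) - float(u <= 0))
    return -v * psi + integral
```

The discrepancy is bounded by the grid step at the single jump of the integrand, so the two sides agree to roughly |v|/grid.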
+ Thus, we have
722
+ A∗
723
+ r(u)
724
+ = −uT 1
725
+ √r
726
+ 1
727
+ n
728
+ K
729
+
730
+ k=1
731
+ r
732
+ rk
733
+ M
734
+
735
+ m=1
736
+ rk
737
+
738
+ i=1
739
+ 1
740
+ π∗
741
+ ik
742
+ {τm − I(ε∗
743
+ ik − b0m < 0)}˜x∗
744
+ ik,m
745
+ + 1
746
+ n
747
+ K
748
+
749
+ k=1
750
+ r
751
+ rk
752
+ M
753
+
754
+ m=1
755
+ rk
756
+
757
+ i=1
758
+ 1
759
+ π∗
760
+ ik
761
+ � uT ˜xik,m/√r
762
+ 0
763
+ {I(ε∗
764
+ ik − b0m ≤ s) − I(ε∗
765
+ ik − b0m ≤ 0)}ds
766
+ = uTZ∗
767
+ r + A∗
768
+ 2r(u),
769
+ (1)
770
+
771
+ Springer Nature 2021 LATEX template
772
+ Optimal subsampling algorithm for CQR with distributed data
773
+ 11
774
+ where
775
+ Z∗
776
+ r = − 1
777
+ √r
778
+ 1
779
+ n
780
+ K
781
+
782
+ k=1
783
+ r
784
+ rk
785
+ M
786
+
787
+ m=1
788
+ rk
789
+
790
+ i=1
791
+ 1
792
+ π∗
793
+ ik
794
+ {τm − I(ε∗
795
+ ik − b0m < 0)}˜x∗
796
+ ik,m,
797
+ A∗
798
+ 2r(u) = 1
799
+ n
800
+ K
801
+
802
+ k=1
803
+ r
804
+ rk
805
+ rk
806
+
807
+ i=1
808
+ 1
809
+ π∗
810
+ ik
811
+ A∗
812
+ k,i(u),
813
+ A∗
814
+ k,i(u) =
815
+ M
816
+
817
+ m=1
818
+ � uT ˜x∗
819
+ ik,m/√r
820
+ 0
821
+ {I(ε∗
822
+ ik − b0m ≤ s) − I(ε∗
823
+ ik − b0m ≤ 0)}ds.
824
+ Firstly, we prove the asymptotic normality of Z∗
825
+ r. Denote
826
+ η∗
827
+ ik = −
828
+ r
829
+ rknπ∗
830
+ ik
831
+ M
832
+
833
+ m=1
834
+ {τm − I(ε∗
835
+ ik − b0m < 0)}˜x∗
836
+ ik,m,
837
+ then Z∗
838
+ r can be written as Z∗
839
+ r =
840
+ 1
841
+ √r
842
+ �K
843
+ k=1
844
+ �rk
845
+ i=1 η∗
846
+ ik. Direct calculation yields
847
+ E(η∗
848
+ ik | Dn) = − r
849
+ rkn
850
+ nk
851
+
852
+ i=1
853
+ M
854
+
855
+ m=1
856
+ {τm − I(εik − b0m < 0)}˜xik,m = Op
857
+
858
+ rn−1/2
859
+ k
860
+ rkn
861
+
862
+ ,
863
+ cov(η∗
864
+ ik | Dn) = E{(η∗
865
+ ik)⊗2 | Dn} − {E(η∗
866
+ ik | Dn)}⊗2
867
+ =
868
+ nk
869
+
870
+ i=1
871
+ r2
872
+ r2
873
+ kn2πik
874
+
875
+ M
876
+
877
+ m=1
878
+ [τm − I(εik − b0m < 0)]˜xik,m
879
+ �⊗2
880
+ − {E(η∗
881
+ ik | Dn)}⊗2
882
+ =
883
+ nk
884
+
885
+ i=1
886
+ r2
887
+ r2
888
+ kn2πik
889
+
890
+ M
891
+
892
+ m=1
893
+ [τm − I(εik − b0m < 0)]˜xik,m
894
+ �⊗2
895
+ − op(1).
896
+ It is easy to verify that
897
+ E{E(η∗
898
+ ik | Dn)} = 0,
899
+ cov{E(η∗
900
+ ik | Dn)} =
901
+ r2
902
+ r2
903
+ kn2
904
+ nk
905
+
906
+ i=1
907
+ cov
908
+ � M
909
+
910
+ m=1
911
+ [τm − I(εik < b0m)] ˜xik,m
912
+
913
+ .
914
+ Denote the (s, t) th element of cov{E(η∗
915
+ ik | Dn)} as σst. Using the Cauchy
916
+ inequality, it is easy to obtain
917
+ | σst |≤ √σss
918
+ √σtt ≤
919
+ r2
920
+ r2
921
+ kn2
922
+ nk
923
+
924
+ i=1
925
+ M(∥xi∥2 + 1) = Op
926
+ �r2nk
927
+ r2
928
+ kn2
929
+
930
+ .
931
+
932
+ Springer Nature 2021 LATEX template
933
+ 12
934
+ Optimal subsampling algorithm for CQR with distributed data
935
+ By Assumption 1 and Chebyshev’s inequality,
936
+ E(η∗
937
+ ik | Dn) = Op
938
+
939
+ rn1/2
940
+ k
941
+ rkn
942
+
943
+ .
944
+ Under the conditional distribution given Dn, we check Lindeberg’s condi-
945
+ tions (Theorem 2.27 of van der Vaart, 1998). Specifically, for ϵ > 0, we want
946
+ to prove that
947
+ K
948
+
949
+ k=1
950
+ rk
951
+
952
+ i=1
953
+ E{∥r−1/2η∗
954
+ ik∥2I(∥η∗
955
+ ik∥ > √rϵ) | Dn} = op(1).
956
+ (2)
957
+ Note that
958
+ K
959
+
960
+ k=1
961
+ rk
962
+
963
+ i=1
964
+ E{∥r−1/2η∗
965
+ ik∥2I(∥η∗
966
+ ik∥ > √rϵ) | Dn}
967
+ =
968
+ K
969
+
970
+ k=1
971
+ rk
972
+
973
+ i=1
974
+ E
975
+ �����
976
+ r1/2
977
+ rknπ∗
978
+ ik
979
+ M
980
+
981
+ m=1
982
+ ˜x∗
983
+ ik,m{τm − I(εik − b0m < 0)}
984
+ ����
985
+ 2
986
+ ×I
987
+ �����
988
+ r−1/2
989
+ rknπ∗
990
+ ikϵ
991
+ M
992
+
993
+ m=1
994
+ ˜x∗
995
+ ik,m{τm − I(εik − b0m < 0)}
996
+ ���� > 1
997
+ �����Dn
998
+
999
+ =
1000
+ K
1001
+
1002
+ k=1
1003
+ nk
1004
+
1005
+ i=1
1006
+ r
1007
+ rkn2πik
1008
+ ����
1009
+ M
1010
+
1011
+ m=1
1012
+ {τm − I(εik − b0m < 0)}˜xik,m
1013
+ ����
1014
+ 2
1015
+ ×I
1016
+
1017
+ r1/2
1018
+ rknπikϵ
1019
+ ����
1020
+ M
1021
+
1022
+ m=1
1023
+ {τm − I(εik − b0m < 0)}˜xik,m
1024
+ ���� > 1
1025
+
1026
+ .
1027
+ (3)
By Assumption (A.2),
$$\max_{1\le k\le K}\max_{1\le i\le n_k}\frac{\|x_{ik}\| + 1}{r_k\pi_{ik}} = o_p\Big(\frac{n}{r^{1/2}}\Big), \qquad M^2\sum_{k=1}^{K}\sum_{i=1}^{n_k}\frac{(1+\|x_{ik}\|)^2}{n^2\pi_{ik}} = O_p(1),$$
the right-hand side of (3) satisfies
$$\begin{aligned}
&\sum_{k=1}^{K}\sum_{i=1}^{n_k}\frac{r}{r_k n^2\pi_{ik}}\Bigg\|\sum_{m=1}^{M}\{\tau_m - I(\varepsilon_{ik} < b_{0m})\}\tilde{x}_{ik,m}\Bigg\|^2 I\Bigg(\frac{r^{1/2}}{r_k n\pi_{ik}\epsilon}\Bigg\|\sum_{m=1}^{M}\{\tau_m - I(\varepsilon_{ik} < b_{0m})\}\tilde{x}_{ik,m}\Bigg\| > 1\Bigg)\\
&\le M^2\sum_{k=1}^{K}\sum_{i=1}^{n_k}\frac{r}{r_k n^2\pi_{ik}}(1+\|x_{ik}\|)^2\, I\Bigg(\frac{M(1+\|x_{ik}\|)r^{1/2}}{r_k n\pi_{ik}\epsilon} > 1\Bigg)\\
&\le I\Bigg(\max_{1\le k\le K}\max_{1\le i\le n_k}\frac{\|x_{ik}\| + 1}{r_k\pi_{ik}} > \frac{n\epsilon}{r^{1/2}M}\Bigg)\times M^2\sum_{k=1}^{K}\sum_{i=1}^{n_k}\frac{r(1+\|x_{ik}\|)^2}{r_k n^2\pi_{ik}} = o_p(1). \quad (4)
\end{aligned}$$
Thus, Lindeberg's conditions hold with probability approaching one.
Note that $\eta^*_{ik}$, $i = 1, \cdots, r_k$, are independent and identically distributed with mean $E(\eta^*_{ik} \mid D_n)$ and covariance $\mathrm{cov}(\eta^*_{ik} \mid D_n)$ when given $D_n$. Based on this result, as $r, n \to \infty$, we get
$$V_\pi^{-1/2}\Big\{Z^*_r - \sqrt{r}\sum_{k=1}^{K}E(\eta^*_{ik} \mid D_n)\Big\} \xrightarrow{d} N(0, I).$$
Since $\sqrt{r}\sum_{k=1}^{K}E(\eta^*_{ik} \mid D_n) = O_p\big(\frac{r^{1/2}}{n^{1/2}}\sum_{k=1}^{K}\frac{r n_k^{1/2}}{r_k n^{1/2}}\big) = o_p(1)$, it is easy to verify that
$$V_\pi^{-1/2} Z^*_r \xrightarrow{d} N(0, I). \quad (5)$$
Next, we prove that
$$A^*_{2r}(u) = \frac{1}{2}u^{T}Eu + o_p(1).$$
Write the conditional expectation of $A^*_{2r}(u)$ as
$$E\{A^*_{2r}(u) \mid D_n\} = \frac{r}{n}\sum_{k=1}^{K}\sum_{i=1}^{n_k}E\{A_{k,i}(u)\} + \frac{r}{n}\sum_{k=1}^{K}\sum_{i=1}^{n_k}\big[A_{k,i}(u) - E\{A_{k,i}(u)\}\big]. \quad (6)$$
By Assumption (A.1),
$$\max_{1\le k\le K}\max_{1\le i\le n_k}\|x_{ik}\| = o(\max(n_1^{1/2}, \cdots, n_K^{1/2})) = o(n^{1/2}),$$
we can get
$$\begin{aligned}
\frac{r}{n}\sum_{k=1}^{K}\sum_{i=1}^{n_k}E\{A_{k,i}(u)\}
&= \frac{r}{n}\sum_{k=1}^{K}\sum_{i=1}^{n_k}\sum_{m=1}^{M}\int_{0}^{u^{T}\tilde{x}_{ik,m}/\sqrt{r}}\{F(b_{0m}+s) - F(b_{0m})\}\,ds\\
&= \frac{\sqrt{r}}{n}\sum_{k=1}^{K}\sum_{i=1}^{n_k}\sum_{m=1}^{M}\int_{0}^{u^{T}\tilde{x}_{ik,m}}\{F(b_{0m}+t/\sqrt{r}) - F(b_{0m})\}\,dt\\
&= \frac{1}{2}u^{T}\Big\{\frac{1}{n}\sum_{k=1}^{K}\sum_{i=1}^{n_k}\sum_{m=1}^{M}f(b_{0m})\tilde{x}_{ik,m}\tilde{x}_{ik,m}^{T}\Big\}u + o(1)\\
&= \frac{1}{2}u^{T}Eu + o(1). \quad (7)
\end{aligned}$$
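For readability, the change of variables behind the middle steps of (7) can be spelled out; this is our own expansion of the condensed argument, using a first-order Taylor expansion of $F$ around $b_{0m}$:

```latex
% Substitute t = \sqrt{r}\, s in the inner integral, so ds = dt/\sqrt{r}:
\frac{r}{n}\int_{0}^{u^{T}\tilde{x}_{ik,m}/\sqrt{r}}\{F(b_{0m}+s)-F(b_{0m})\}\,ds
  = \frac{\sqrt{r}}{n}\int_{0}^{u^{T}\tilde{x}_{ik,m}}\{F(b_{0m}+t/\sqrt{r})-F(b_{0m})\}\,dt .
% With F(b_{0m}+t/\sqrt{r}) - F(b_{0m}) = f(b_{0m})\,t/\sqrt{r} + o(t/\sqrt{r}),
\frac{\sqrt{r}}{n}\int_{0}^{u^{T}\tilde{x}_{ik,m}} f(b_{0m})\,\frac{t}{\sqrt{r}}\,dt
  = \frac{f(b_{0m})}{2n}\,\big(u^{T}\tilde{x}_{ik,m}\big)^{2},
% and summing over k, i, m yields \tfrac{1}{2}u^{T}Eu + o(1).
```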
Furthermore, we have
$$E\Big(\frac{r}{n}\sum_{k=1}^{K}\sum_{i=1}^{n_k}\big[A_{k,i}(u) - E\{A_{k,i}(u)\}\big]\Big) = 0,$$
and
$$\mathrm{var}\Big(\frac{r}{n}\sum_{k=1}^{K}\sum_{i=1}^{n_k}\big[A_{k,i}(u) - E\{A_{k,i}(u)\}\big]\Big) \le \frac{r^2}{n^2}\sum_{k=1}^{K}\sum_{i=1}^{n_k}E\{A^2_{k,i}(u)\}. \quad (8)$$
Since $A_{k,i}(u)$ is nonnegative, it is easy to obtain
$$\begin{aligned}
A_{k,i}(u) &\le \Bigg|\sum_{m=1}^{M}\int_{0}^{u^{T}\tilde{x}_{ik,m}/\sqrt{r}}\{I(\varepsilon_{ik} \le b_{0m}+s) - I(\varepsilon_{ik} \le b_{0m})\}\,ds\Bigg|\\
&\le \sum_{m=1}^{M}\int_{0}^{u^{T}\tilde{x}_{ik,m}/\sqrt{r}}\big|I(\varepsilon_{ik} \le b_{0m}+s) - I(\varepsilon_{ik} \le b_{0m})\big|\,ds\\
&\le \frac{1}{\sqrt{r}}\sum_{m=1}^{M}|u^{T}\tilde{x}_{ik,m}|. \quad (9)
\end{aligned}$$
By Assumption (A.1),
$$\max_{1\le k\le K}\max_{1\le i\le n_k}\|x_{ik}\| = o(\max(n_1^{1/2}, \cdots, n_K^{1/2})) = o(n^{1/2});$$
together with (8) and (9), we get
$$\mathrm{var}\Big(\frac{r}{n}\sum_{k=1}^{K}\sum_{i=1}^{n_k}\big[A_{k,i}(u) - E\{A_{k,i}(u)\}\big]\Big)
\le \frac{M\|u\|}{\sqrt{n}}\Big(1 + \max_{1\le k\le K}\max_{1\le i\le n_k}\|x_{ik}\|\Big)\Bigg\{\sum_{k=1}^{K}\frac{r^{3/2}}{n^{3/2}}\sum_{i=1}^{n_k}E\{A_{k,i}(u)\}\Bigg\} = o(1). \quad (10)$$
Combining Chebyshev's inequality, it follows from (6), (7) and (10) that
$$E\{A^*_{2r}(u) \mid D_n\} = \frac{1}{2}u^{T}Eu + o_p(1). \quad (11)$$
Next, we derive the conditional variance of $A^*_{2r}(u)$, i.e., $\mathrm{var}\{A^*_{2r}(u) \mid D_n\}$. Observing that $A^*_{k,i}(u)$, $i = 1, \cdots, r_k$, are independent and identically distributed when given $D_n$,
$$\mathrm{var}\{A^*_{2r}(u) \mid D_n\} = \sum_{k=1}^{K}\frac{r^2}{(r_k n)^2}\sum_{i=1}^{r_k}\mathrm{var}\Big\{\frac{A^*_{k,i}(u)}{\pi^*_{ik}}\,\Big|\,D_n\Big\}
\le \sum_{k=1}^{K}\frac{r^2 r_k}{r_k^2 n^2}\,E\Big\{\Big(\frac{A^*_{k,i}(u)}{\pi^*_{ik}}\Big)^2\,\Big|\,D_n\Big\}. \quad (12)$$
By (9), the right-hand side of (12) satisfies
$$\begin{aligned}
\sum_{k=1}^{K}\frac{r^2 r_k}{r_k^2 n^2}\sum_{i=1}^{n_k}\frac{A^2_{k,i}(u)}{\pi_{ik}}
&\le \frac{r^2}{n^2}\sum_{k=1}^{K}\sum_{i=1}^{n_k}A_{k,i}(u)\,\frac{\frac{1}{\sqrt{r}}\sum_{m=1}^{M}|u^{T}\tilde{x}_{ik,m}|}{r_k\pi_{ik}}\\
&\le \Bigg\{\frac{r^{1/2}}{n}M\|u\|\max_{1\le k\le K}\max_{1\le i\le n_k}\frac{\|x_{ik}\| + 1}{r_k\pi_{ik}}\Bigg\}\,\frac{r}{n}\sum_{k=1}^{K}\sum_{i=1}^{n_k}A_{k,i}(u). \quad (13)
\end{aligned}$$
Together with (7), (13) and Assumption (A.2), we have
$$\mathrm{var}\{A^*_{2r}(u) \mid D_n\} = o_p(1). \quad (14)$$
Together with (11), (14) and Chebyshev's inequality, we can obtain
$$A^*_{2r}(u) = \frac{1}{2}u^{T}Eu + o_{p|D_n}(1). \quad (15)$$
Here $o_{p|D_n}(1)$ means that if $a = o_{p|D_n}(1)$, then $a$ converges to 0 in conditional probability given $D_n$ in probability; in other words, for any $\delta > 0$, $P(|a| > \delta \mid D_n) \xrightarrow{p} 0$ as $n \to \infty$. Since $0 \le P(|a| > \delta \mid D_n) \le 1$, it converges to 0 in probability if and only if $P(|a| > \delta) = E\{P(|a| > \delta \mid D_n)\} \to 0$. Thus, $a = o_{p|D_n}(1)$ is equivalent to $a = o_p(1)$.
It follows from (1) and (15) that
$$A^*_{2r}(u) = u^{T}Z^*_r + \frac{1}{2}u^{T}Eu + o_p(1).$$
Since $A^*_{2r}(u)$ is a convex function, we have
$$\sqrt{r}(\tilde{\theta}_s - \theta_0) = -E_n^{-1}Z^*_r + o_p(1).$$
Based on the above results, we can prove that
$$\{E_n^{-1}V_\pi E_n^{-1}\}^{-1/2}\sqrt{r}(\tilde{\theta}_s - \theta_0) = -\{E_n^{-1}V_\pi E_n^{-1}\}^{-1/2}E_n^{-1}Z^*_r + o_p(1).$$
By Slutsky's theorem, for any $a \in \mathbb{R}^{p+M}$, from (5) we have that
$$P[\{E_n^{-1}V_\pi E_n^{-1}\}^{-1/2}\sqrt{r}(\tilde{\theta}_s - \theta_0) \le a \mid D_n] \xrightarrow{p} \Phi_{p+M}(a), \quad (16)$$
where $\Phi_{p+M}(a)$ denotes the standard $(p+M)$-dimensional multivariate normal distribution function. Since the conditional probability in (16) is a bounded random variable, convergence in probability to a constant implies convergence in mean. Therefore, for any $a \in \mathbb{R}^{p+M}$,
$$P[\{E_n^{-1}V_\pi E_n^{-1}\}^{-1/2}\sqrt{r}(\tilde{\theta}_s - \theta_0) \le a]
= E\big(P[\{E_n^{-1}V_\pi E_n^{-1}\}^{-1/2}\sqrt{r}(\tilde{\theta}_s - \theta_0) \le a \mid D_n]\big) \to \Phi_{p+M}(a).$$
This completes the proof of Theorem 1.
Proof of Theorem 2
We can prove that
$$\begin{aligned}
\mathrm{tr}(V_\pi) &= \frac{1}{n^2}\sum_{k=1}^{K}\frac{r}{r_k}\sum_{i=1}^{n_k}\frac{1}{\pi_{ik}}\,\mathrm{tr}\Bigg[\Bigg\{\sum_{m=1}^{M}\{I(\varepsilon_{ik} < b_{0m}) - \tau_m\}\tilde{x}_{ik,m}\Bigg\}^{\otimes 2}\Bigg]\\
&= \frac{1}{n^2}\sum_{k=1}^{K}\frac{r}{r_k}\Bigg(\sum_{i=1}^{n_k}\pi_{ik}\Bigg)\Bigg(\sum_{i=1}^{n_k}\frac{1}{\pi_{ik}}\Bigg\|\sum_{m=1}^{M}\{I(\varepsilon_{ik} < b_{0m}) - \tau_m\}\tilde{x}_{ik,m}\Bigg\|^2\Bigg)\\
&\ge \frac{1}{n^2}\sum_{k=1}^{K}\frac{r}{r_k}\Bigg(\sum_{i=1}^{n_k}\Bigg\|\sum_{m=1}^{M}\{I(\varepsilon_{ik} < b_{0m}) - \tau_m\}\tilde{x}_{ik,m}\Bigg\|\Bigg)^2\\
&= \frac{1}{n^2}\Bigg(\sum_{k=1}^{K}r_k\Bigg)\Bigg\{\sum_{k=1}^{K}\frac{1}{r_k}\Bigg(\sum_{i=1}^{n_k}\Bigg\|\sum_{m=1}^{M}\{I(\varepsilon_{ik} < b_{0m}) - \tau_m\}\tilde{x}_{ik,m}\Bigg\|\Bigg)^2\Bigg\}\\
&\ge \frac{1}{n^2}\Bigg(\sum_{k=1}^{K}\sum_{i=1}^{n_k}\Bigg\|\sum_{m=1}^{M}\{I(\varepsilon_{ik} < b_{0m}) - \tau_m\}\tilde{x}_{ik,m}\Bigg\|\Bigg)^2,
\end{aligned}$$
where both inequalities follow from the Cauchy-Schwarz inequality, and equality holds if and only if $\pi_{ik} \propto \|\sum_{m=1}^{M}\{I(\varepsilon_{ik} < b_{0m}) - \tau_m\}\tilde{x}_{ik,m}\|$ and $r_k \propto \sum_{i=1}^{n_k}\|\sum_{m=1}^{M}\{I(\varepsilon_{ik} < b_{0m}) - \tau_m\}\tilde{x}_{ik,m}\|$, respectively. This completes the proof of Theorem 2.
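As a quick numerical sanity check on the Cauchy-Schwarz step above, the following sketch (our own illustration, not code from the paper) verifies that, for nonnegative scores $a_i$ standing in for the norms $\|\sum_m\{I(\varepsilon_{ik} < b_{0m}) - \tau_m\}\tilde{x}_{ik,m}\|$ and probabilities $\pi_i$ summing to one, the quantity $\sum_i a_i^2/\pi_i$ appearing in $\mathrm{tr}(V_\pi)$ is minimized at $\pi_i \propto a_i$, with minimum $(\sum_i a_i)^2$:

```python
import numpy as np

def weighted_inverse_sum(a, pi):
    """Return sum_i a_i^2 / pi_i for scores a and sampling probabilities pi."""
    return float(np.sum(a ** 2 / pi))

rng = np.random.default_rng(0)
a = rng.uniform(0.1, 1.0, size=50)       # hypothetical nonnegative scores
pi_opt = a / a.sum()                     # optimal: pi_i proportional to a_i
pi_unif = np.full_like(a, 1.0 / a.size)  # uniform sampling probabilities

opt_val = weighted_inverse_sum(a, pi_opt)    # attains the bound (sum_i a_i)^2
unif_val = weighted_inverse_sum(a, pi_unif)  # at least as large, by Cauchy-Schwarz
```

With `pi_opt`, each term $a_i^2/\pi_i$ collapses to $a_i \sum_j a_j$, so the sum equals $(\sum_i a_i)^2$ exactly; any other probability vector, such as `pi_unif`, gives a larger value.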
References

Ai M, Yu J, Zhang H, Wang H (2019) Optimal subsampling algorithms for big data regressions. Statistica Sinica 31: 749-772
Fang F, Zhao J, Ahmed SE, Qu A (2021) A weak-signal-assisted procedure for variable selection and statistical inference with an informative subsample. Biometrics 77(3): 996-1010
Jiang R, Hu X, Yu K, Qian W (2018) Composite quantile regression for massive datasets. Statistics 52(5): 980-1004
Jin J, Zhao Z (2021) Composite quantile regression neural network for massive datasets. Mathematical Problems in Engineering 2021
Jones HL (1956) Investigating the properties of a sample mean by employing random subsample means. Journal of the American Statistical Association 51(273): 54-83
Ma P, Mahoney MW, Yu B (2015) A statistical perspective on algorithmic leveraging. Journal of Machine Learning Research 16: 861-919
Qiu Y, Du G, Chai S (2020) A novel algorithm for distributed data stream using big data classification model. International Journal of Information Technology and Web Engineering 15(4): 1-17
Shao L, Song S, Zhou Y (2022) Optimal subsampling for large-sample quantile regression with massive data. Canadian Journal of Statistics. https://doi.org/10.1002/cjs.11697
Shao Y, Wang L (2022) Optimal subsampling for composite quantile regression model in massive data. Statistical Papers 63(4): 1139-1161
Sun X, Xu R, Wu L, Guan Z (2021) A differentially private distributed data mining scheme with high efficiency for edge computing. Journal of Cloud Computing 10(1): 1-12
Wang HY, Zhu R, Ma P (2018) Optimal subsampling for large sample logistic regression. Journal of the American Statistical Association 113(522): 829-844
Wang HY, Yang M, Stufken J (2019) Information-based optimal subdata selection for big data linear regression. Journal of the American Statistical Association 114(525): 393-405
Wang K, Li S, Zhang B (2021) Robust communication-efficient distributed composite quantile regression and variable selection for massive data. Computational Statistics & Data Analysis 161: 107262
Wang H, Ma Y (2021) Optimal subsampling for quantile regression in big data. Biometrika 108: 99-112
Yuan X, Li Y, Dong X, Liu T (2022) Optimal subsampling for composite quantile regression in big data. Statistical Papers 63(5): 1649-1676
Yu J, Wang H, Ai M, Zhang H (2022) Optimal distributed subsampling for maximum quasi-likelihood estimators with massive data. Journal of the American Statistical Association 117(537): 265-276
Zhang H, Wang H (2021) Distributed subdata selection for big data via sampling-based approach. Computational Statistics and Data Analysis 153: 107072
Zou H, Yuan M (2008) Composite quantile regression and the oracle model selection theory. Annals of Statistics 36(3): 1108-1126
Zuo L, Zhang H, Wang HY, Sun L (2021) Optimal subsample selection for massive logistic regression with distributed data. Computational Statistics 36(4): 2535-2562
Table 1: The proposed subsample estimate of β1 with n = 10^6 in Case I.

                           K = 5                 K = 10
Error        r         Bias      SD          Bias      SD
N(0,1)      200      0.0006    0.0769      0.0010    0.0737
            400     -0.0009    0.0554     -0.0008    0.0531
            600      0.0025    0.0425      0.0008    0.0423
            800      0.0009    0.0379      0.0004    0.0388
           1000      0.0004    0.0348     -0.0014    0.0338
mixNormal   200      0.0023    0.1405      0.0049    0.1336
            400     -0.0023    0.0970      0.0006    0.0934
            600     -0.0033    0.0797     -0.0004    0.0822
            800      0.0028    0.0688     -0.0019    0.0707
           1000     -0.0002    0.0600     -0.0033    0.0621
t(3)        200     -0.0021    0.0961      0.0009    0.0914
            400      0.0006    0.0665     -0.0004    0.0645
            600     -0.0015    0.0552     -0.0002    0.0505
            800     -0.0003    0.0477      0.0005    0.0462
           1000      0.0024    0.0415      0.0013    0.0423
Cauchy      200     -0.0108    0.1312      0.0070    0.1373
            400      0.0040    0.0959      0.0003    0.0954
            600      0.0023    0.0793     -0.0008    0.0778
            800      0.0011    0.0700     -0.0005    0.0674
           1000     -0.0014    0.0612     -0.0018    0.0637
Table 2: The proposed subsample estimate of β1 for Case IV and ε ∼ N(0, 1).

             n = 10^6               n = 10^7
  r       Bias      SD           Bias      SD
 200    0.0004    0.0551       0.0005    0.0555
 400   -0.0003    0.0394       0.0003    0.0392
 600    0.0002    0.0313      -0.0020    0.0312
 800    0.0012    0.0273      -0.0005    0.0267
1000    0.0012    0.0242      -0.0011    0.0256
Table 3: The CPU time for Case I and ε ∼ N(0, 1) with K = 5, n = 10^6 (seconds).

                                r
Methods       200      400      600      800     1000
Uniform     0.077    0.098    0.145    0.170    0.217
Proposed    0.446    0.494    0.552    0.615    0.689
Full data   421.03
Table 4: The CPU time for Case I and ε ∼ N(0, 1) with r = 1000, K = 5 and p = 30 (seconds).

                                n
Methods      10^4     10^5     10^6      10^7
Uniform     0.411    0.417    0.447     0.490
Proposed    0.586    0.620    0.922     5.393
Full data    4.43    61.60   676.08   4667.22
Table 5: The CPs and the average lengths (in parentheses) of the confidence interval of β1 with n = 10^6, r = 1000 and K = 5.

Error        B     Case I          Case II         Case III        Case IV
N(0,1)      20     0.930(0.030)    0.948(0.034)    0.932(0.014)    0.920(0.021)
            40     0.928(0.021)    0.924(0.024)    0.936(0.010)    0.954(0.015)
            60     0.952(0.018)    0.942(0.020)    0.942(0.009)    0.944(0.013)
            80     0.918(0.015)    0.934(0.017)    0.926(0.008)    0.914(0.011)
           100     0.936(0.014)    0.934(0.016)    0.930(0.007)    0.916(0.010)
mixNormal   20     0.926(0.054)    0.920(0.060)    0.938(0.026)    0.930(0.038)
            40     0.932(0.038)    0.934(0.044)    0.922(0.019)    0.954(0.027)
            60     0.924(0.031)    0.936(0.036)    0.930(0.015)    0.934(0.023)
            80     0.928(0.027)    0.928(0.031)    0.934(0.014)    0.946(0.020)
           100     0.930(0.025)    0.934(0.028)    0.932(0.012)    0.948(0.018)
t(3)        20     0.940(0.037)    0.940(0.041)    0.928(0.018)    0.954(0.026)
            40     0.944(0.026)    0.960(0.030)    0.946(0.013)    0.916(0.019)
            60     0.946(0.022)    0.968(0.025)    0.936(0.010)    0.936(0.016)
            80     0.940(0.019)    0.944(0.021)    0.946(0.009)    0.940(0.013)
           100     0.948(0.017)    0.944(0.019)    0.934(0.008)    0.914(0.012)
Cauchy      20     0.932(0.053)    0.944(0.060)    0.918(0.026)    0.936(0.038)
            40     0.926(0.037)    0.932(0.043)    0.922(0.018)    0.944(0.027)
            60     0.924(0.031)    0.942(0.036)    0.930(0.015)    0.926(0.022)
            80     0.938(0.027)    0.946(0.031)    0.934(0.013)    0.924(0.020)
           100     0.942(0.024)    0.952(0.028)    0.926(0.012)    0.928(0.018)
Table 6: The number of yearly data and allocation sizes (r = 1000).

Years      nk          rk      Years      nk          rk
1987    1,287,333     11       1998    5,227,051     45
1988    5,126,498     47       1999    5,360,018     45
1989    4,925,482     45       2000    5,481,303     45
1990    5,110,527     46       2001    4,873,031     42
1991    4,995,005     46       2002    5,093,462     45
1992    5,020,651     47       2003    6,375,689     56
1993    4,993,587     46       2004    6,987,729     59
1994    5,078,411     46       2005    6,992,838     58
1995    5,219,140     46       2006    7,003,802     57
1996    5,209,326     44       2007    7,275,288     58
1997    5,301,999     47       2008    2,319,121     19
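The allocation sizes rk in Table 6 are integers summing to r = 1000 across the yearly groups. As an illustration only (not the paper's code — the paper's optimal rule weights each group by its sum of gradient norms, per Theorem 2), here is a sketch of proportional allocation with largest-remainder rounding; the helper name `allocate` is our own, and the example weights simply reuse three yearly counts from Table 6:

```python
from math import floor

def allocate(r, weights):
    """Split a total subsample size r across groups in proportion to weights,
    rounding with the largest-remainder method so the sizes sum exactly to r."""
    total = sum(weights)
    raw = [r * w / total for w in weights]
    sizes = [floor(x) for x in raw]
    leftover = r - sum(sizes)
    # hand the remaining units to the largest fractional remainders
    order = sorted(range(len(raw)), key=lambda i: raw[i] - sizes[i], reverse=True)
    for i in order[:leftover]:
        sizes[i] += 1
    return sizes

# Hypothetical example: three groups with sizes resembling yearly counts
sizes = allocate(1000, [1_287_333, 5_126_498, 4_925_482])
```

Largest-remainder rounding guarantees the integer sizes add up to r exactly, which plain rounding does not.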
Table 7: The estimator and the length of the confidence interval for β̂L with different r and B for the airline data.

                        B = 40                         B = 100
r = 200    β1   -0.0524 (-0.0675, -0.0373)    -0.0458 (-0.0545, -0.0370)
           β2    0.9232 ( 0.9164,  0.9299)     0.9183 ( 0.9142,  0.9225)
           β3   -0.0242 (-0.0320, -0.0164)    -0.0221 (-0.0261, -0.0181)
r = 600    β1   -0.0450 (-0.0539, -0.0361)    -0.0479 (-0.0537, -0.0421)
           β2    0.9172 ( 0.9127,  0.9217)     0.9203 ( 0.9179,  0.9227)
           β3   -0.0268 (-0.0309, -0.0228)    -0.0264 (-0.0288, -0.0240)
r = 1000   β1   -0.0446 (-0.0509, -0.0383)    -0.0404 (-0.0445, -0.0363)
           β2    0.9192 ( 0.9163,  0.9220)     0.9205 ( 0.9184,  0.9226)
           β3   -0.0238 (-0.0269, -0.0208)    -0.0277 (-0.0297, -0.0257)
Fig. 1: The MSEs for different subsampling methods with K = 5 and n = 10^6 (Case I).
[Figure: four panels — ε ∼ N(0, 1), ε ∼ mixNormal, ε ∼ t(3), ε ∼ Cauchy — plotting MSE against r ∈ {200, 400, 600, 800, 1000} for the uniform (Unif-MSE) and optimal (Lopt-MSE) subsampling methods.]
Fig. 2: The MSEs for different subsampling methods with K = 10 and n = 10^6 (Case I).
[Figure: four panels — ε ∼ N(0, 1), ε ∼ mixNormal, ε ∼ t(3), ε ∼ Cauchy — plotting MSE against r ∈ {200, 400, 600, 800, 1000} for Unif-MSE and Lopt-MSE.]
Fig. 3: The MSEs for different subsampling methods with ε ∼ N(0, 1) (Case IV).
[Figure: two panels — n = 10^6 and n = 10^7 — plotting MSE against r ∈ {200, 400, 600, 800, 1000} for Unif-MSE and Lopt-MSE.]
Fig. 4: The EMSEs and AMSEs of θ̂L with different values of B and r = 1000 (Case I).
[Figure: four panels — ε ∼ N(0, 1), ε ∼ mixNormal, ε ∼ t(3), ε ∼ Cauchy — plotting EMSE and AMSE against B ∈ {20, 40, 60, 80, 100}.]
Fig. 5: The EMSEs and AMSEs of θ̂L with different values of B and r = 1000 (Case II).
[Figure: four panels — ε ∼ N(0, 1), ε ∼ mixNormal, ε ∼ t(3), ε ∼ Cauchy — plotting EMSE and AMSE against B ∈ {20, 40, 60, 80, 100}.]
Fig. 6: The EMSEs and AMSEs of θ̂L with different values of B and r = 1000 (Case III).
[Figure: four panels — ε ∼ N(0, 1), ε ∼ mixNormal, ε ∼ t(3), ε ∼ Cauchy — plotting EMSE and AMSE against B ∈ {20, 40, 60, 80, 100}.]
Fig. 7: The EMSEs and AMSEs of θ̂L with different values of B and r = 1000 (Case IV).
[Figure: four panels — ε ∼ N(0, 1), ε ∼ mixNormal, ε ∼ t(3), ε ∼ Cauchy — plotting EMSE and AMSE against B ∈ {20, 40, 60, 80, 100}.]
Fig. 8: The results of MSEs for the airline data.
[Figure: MSE against r ∈ {200, 400, 600, 800, 1000} for Unif-MSE and Lopt-MSE.]
K9E0T4oBgHgl3EQfigH5/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
KNFOT4oBgHgl3EQfyzQ_/content/tmp_files/2301.12929v1.pdf.txt ADDED
@@ -0,0 +1,2263 @@
1
+ Can Persistent Homology provide an efficient alternative for
2
+ Evaluation of Knowledge Graph Completion Methods?
3
+ Anson Bastos
4
5
+ IIT, Hyderabad
6
+ India
7
+ Kuldeep Singh
8
9
+ Zerotha Research and
10
+ Cerence GmbH
11
+ Germany
12
+ Abhishek Nadgeri
13
14
+ Zerotha Research and
15
+ RWTH Aachen
16
+ Germany
17
+ Johannes Hoffart
18
19
+ SAP
20
+ Germany
21
+ Toyotaro Suzumura
22
23
+ The University of Tokyo
24
+ Japan
25
+ Manish Singh
26
27
+ IIT Hyderabad
28
+ India
29
+ ABSTRACT
30
+ In this paper we present a novel method, Knowledge Persistence
31
+ (KP), for faster evaluation of Knowledge Graph (KG) completion
32
+ approaches. Current ranking-based evaluation is quadratic in the
33
+ size of the KG, leading to long evaluation times and consequently a
34
+ high carbon footprint. KP addresses this by representing the topol-
35
+ ogy of the KG completion methods through the lens of topological
36
+ data analysis, concretely using persistent homology. The character-
37
+ istics of persistent homology allow KP to evaluate the quality of
38
+ the KG completion looking only at a fraction of the data. Experi-
39
+ mental results on standard datasets show that the proposed metric
40
+ is highly correlated with ranking metrics (Hits@N, MR, MRR). Per-
41
+ formance evaluation shows that KP is computationally efficient:
42
+ In some cases, the evaluation time (validation+test) of a KG com-
43
+ pletion method has been reduced from 18 hours (using Hits@10)
44
+ to 27 seconds (using KP), and on average (across methods & data)
45
+ reduces the evaluation time (validation+test) by ≈ 99.96%.
46
+ ACM Reference Format:
47
+ Anson Bastos, Kuldeep Singh, Abhishek Nadgeri, Johannes Hoffart, Toyotaro
48
+ Suzumura, and Manish Singh. 2023. Can Persistent Homology provide
49
+ an efficient alternative for Evaluation of Knowledge Graph Completion
50
+ Methods?. In Proceedings of the Web Conference 2023 (WWW ’23), APRIL 30 -
51
+ MAY 4, 2023, Texas, USA. WWW, Texas, USA, 13 pages. https://doi.org/10.
52
+ XXXXX/YYYYY.3449917
53
1 INTRODUCTION
Publicly available Knowledge Graphs (KGs) find broad applicability in several downstream tasks such as entity linking, relation extraction, fact-checking, and question answering [22, 41]. These KGs are large graph databases used to express facts in the form of relations between real-world entities, storing these facts as triples (subject, relation, object). KGs must be continuously updated because new entities might emerge or facts about entities may be extended or updated. The Knowledge Graph Completion (KGC) task aims to fill the missing pieces of information into incomplete triples of a KG [5, 18, 22].

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
WWW '23, APRIL 30 - MAY 4, 2023, Texas, USA
© 2023 Copyright held by the owner/author(s).
ACM ISBN 978-Y-4500-YYYY-7/21/04.
https://doi.org/10.XXXXX/YYYYY.3449917
Several Knowledge Graph Embedding (KGE) approaches have been proposed to model entities and relations in vector space for missing link prediction in a KG [55]. KGE methods infer the connectivity patterns (symmetry, asymmetry, etc.) in the KGs by defining a scoring function to calculate the plausibility of a knowledge graph triple. While calculating the plausibility of a KG triple τ = (𝑒ℎ, 𝑟, 𝑒𝑡), the score predicted by the scoring function affirms the confidence of a model that entities 𝑒𝑡 and 𝑒ℎ are linked by 𝑟.
For evaluating KGE methods, ranking metrics have been widely used [22]. They are based on the following criterion: given a KG triple with a missing head or tail entity, what is the ability of the KGE method to rank candidate entities, averaged over triples in a held-out test set [28]? These ranking metrics are useful as they intend to gauge the behavior of the methods in real-world applications of KG completion. Since 2019, over 100 KGE articles have been published in various leading conferences and journals that use ranking metrics as the evaluation protocol¹.
Limitations of Ranking-based Evaluation: The key challenge while computing ranking metrics for model evaluation is the time taken to obtain them. Since most of the KGE models aim to rank all the negative triples that are not present in the KG [8, 9], computing these metrics takes time quadratic in the number of entities in the KG. Moreover, the problem gets aggravated in the case of hyper-relations [62], where more than two entities participate, leading to exponential computation time. For instance, Ali et al. [2] spent 24,804 GPU hours of computation time while performing a large-scale benchmarking of KGE methods.
There are two issues with high model evaluation time. Firstly, efficiency at evaluation time is not a widely adopted criterion for assessing KGE models alongside accuracy and related measures. There are efforts to make KGE methods efficient at training time [52, 54]. However, these methods also use ranking-based protocols, resulting in high evaluation time. Secondly, the need for significant computational resources for the KG completion task excludes a large group of researchers in universities/labs with restricted GPU availability. Such preliminary exclusion implicitly challenges the basic notion of various diversity and inclusion initiatives for making the Web and its related research accessible to a wider community. In the past, researchers have worked extensively towards efficient Web-related technologies such as Web Crawling [12], Web Indexing [25], RDF processing [17], etc. Hence, for the KG completion task, similar to other efficient Web-based research, there is a necessity to develop alternative evaluation protocols to reduce the computational complexity, a crucial research gap in the available KGE scientific literature.

Another critical issue with ranking metrics is that they are biased towards popular entities, and such popularity bias is not captured by current evaluation metrics [28]. Hence, we need a metric which is more efficient than popular ranking metrics and also omits such biases.

¹https://github.com/xinguoxia/KGE#papers

arXiv:2301.12929v1 [cs.LG] 30 Jan 2023
Motivation and Contribution: In this work, we focus on addressing the above-mentioned key research gaps and aim for the first study to make KGE evaluation more efficient. We introduce Knowledge Persistence (KP), a method for characterizing the topology of the learnt KG representations. It builds upon Topological Data Analysis [58], based on concepts from Persistent Homology (PH) [15], which has been proven beneficial for analyzing deep networks [29, 36]. PH is able to effectively capture the geometry of the manifold on which the representations reside whilst requiring a fraction of the data [15]. This property allows us to reduce the quadratic complexity of considering all the data points (KG triples in our case) for ranking. Another crucial fact that makes PH useful is its stability with respect to perturbations, making KP robust to noise [19] and mitigating the issues due to the open-world problem. Thus we use PH due to its effectiveness under limited resources and noise [50]. Concretely, the following are our key contributions:
(1) We propose KP, a novel approach along with its theoretical foundations, to estimate the performance of KGE models through the lens of topological data analysis. This allows us to drastically reduce the computation factor from the order of O(|E|²) to O(|E|). The code is here.
(2) We run extensive experiments on families of KGE methods (e.g., Translation, Rotation, Bi-Linear, Factorization, Neural Network methods) using standard benchmark datasets. The experiments show that KP correlates well with the standard ranking metrics. Hence, KP could be used for faster prototyping of KGE methods and paves the way for efficient evaluation methods in this domain.
In the remainder of the paper, related work is covered in Section 2. Section 3 briefly explains the concept of persistent homology. Section 4 describes the proposed method. Later, Section 5 shows the associated empirical results, and we conclude in Section 7.
2 RELATED WORK
Broadly, KG embeddings are classified into translation and semantic matching models [55]. Translation methods such as TransE [8], TransH [57], and TransR [26] use distance-based scoring functions, whereas semantic matching models (e.g., ComplEx [48], DistMult [60], RotatE [44]) use similarity-based scoring functions.
Kadlec et al. [23] first pointed out the limitations of KGE evaluation and its dependency on hyperparameter tuning. With an exhaustive evaluation (using ranking metrics), [45] showed issues with the scoring functions of KGE methods, whereas [31] studied the effect of the loss function on KGE performance. Jain et al. [20] studied whether KGE methods capture KG semantic properties. Work in [35] provides a new dataset that allows the study of calibration results for KGE models. Speranskaya et al. [43] used precision and recall rather than rankings to measure the quality of completion models. The authors proposed a new dataset containing triples such that their completion is both possible and impossible based on queries. However, the queries were built in a way that creates a tight dependency of the evaluation on such queries, as pointed out by [47]. Rim et al. [37] proposed a capability-based evaluation where the focus is to evaluate KGE methods on various dimensions such as relation symmetry, entity hierarchy, entity disambiguation, etc. Mohamed et al. [28] fixed the popularity bias of ranking metrics by introducing modified ranking metrics. The geometric perspective of KGE methods, and its correlation with task performance, was introduced by [40]. Berrendorf et al. [6] suggested the adjusted mean rank to improve the reciprocal rank, which is an ordinal scale; the authors do not consider the effect of the negative triples available for a given triple under evaluation. [47] proposes to balance the number of negatives per triple to improve ranking metrics; the authors suggested preparing training/testing splits while maintaining the topology. Work in [24] proposes efficient non-sampling techniques for KG embedding training, a few other initiatives improve the efficiency of KGE training time [52–54], and others improve the hyperparameter search efficiency of embedding models [49, 56, 63].

Overall, the literature is rich with evaluations of knowledge graph completion methods [4, 21, 38, 46]. However, to the best of our knowledge, extensive attempts have not been made to improve the efficiency of KG evaluation protocols, i.e., to reduce the run-time of widely-used ranking metrics for faster prototyping. We position our work orthogonal to existing attempts such as [40], [47], [28], and [37]. In contrast with these attempts, our approach provides a topological perspective on the learned KG embeddings and focuses on improving the efficiency of KGE evaluations.
3 PRELIMINARIES
We now briefly describe the concepts used in this paper.

Ranking metrics have been used for evaluating KG embedding methods since the inception of the KG completion task [8]. These metrics include the Mean Rank (MR), the Mean Reciprocal Rank (MRR), and the cut-off hit ratio Hits@N (N=1,3,10). MR reports the average predicted rank of all the labeled triples. MRR is the average of the inverse rank of the labeled triples. Hits@N evaluates the fraction of the labeled triples that are present in the top N predicted results.
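These ranking metrics can be computed directly from the predicted ranks of the labeled triples. The following is a minimal illustrative sketch (the rank values in the usage example are hypothetical, not taken from any experiment):

```python
def ranking_metrics(ranks, ns=(1, 3, 10)):
    """Compute MR, MRR and Hits@N from the predicted ranks of labeled triples."""
    n = len(ranks)
    mr = sum(ranks) / n                    # Mean Rank: average predicted rank
    mrr = sum(1.0 / r for r in ranks) / n  # Mean Reciprocal Rank: average inverse rank
    hits = {k: sum(r <= k for r in ranks) / n for k in ns}  # fraction in top N
    return mr, mrr, hits

# Hypothetical ranks of five test triples:
mr, mrr, hits = ranking_metrics([1, 2, 15, 4, 120])  # MR = 28.4, Hits@10 = 0.6
```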
Persistent Homology (PH) [15, 19] studies topological features such as components in 0 dimensions (e.g., a node), holes in 1 dimension (e.g., a void area bounded by triangle edges), and so on, spread over a scale. Thus, one need not choose a scale beforehand. The number (rank) of these topological features (homology group) in every dimension at a particular scale can be used for downstream applications. Consider the simplicial complex (e.g., a point is a 0-simplex, an edge is a 1-simplex, a triangle is a 2-simplex) 𝐶 with weights 𝑎0 ≤ 𝑎1 ≤ 𝑎2 ≤ . . . ≤ 𝑎𝑚−1, which could represent the edge weights, for example the triple scores from the KG embedding method in our case. One can then define a filtration process [15], which refers to generating a nested sequence of complexes 𝜙 ⊆ 𝐶1 ⊆ 𝐶2 ⊆ . . . ⊆ 𝐶𝑚 = 𝐶 in time/scale as the simplices below the threshold weights are added to the complex. The filtration process [15] results in the creation (birth) and destruction (death) of components, holes, etc. Thus each structure is associated with a birth-death pair (𝑎𝑖, 𝑎𝑗) ∈ 𝑅² with 𝑖 ≤ 𝑗. The persistence or lifetime of each component can then be given by 𝑎𝑗 − 𝑎𝑖. A persistence diagram (PD) summarizes the (birth, death) pair of each object on a 2D plot, with birth times on the x-axis and death times on the y-axis. The points near the diagonal are short-lived components and are generally considered noise (local topology), whereas the persistent objects (global topology) are treated as features. We consider both local and global topology to compare two PDs (i.e., the positive and negative triple graphs in our case).

Figure 1: Calculating the Knowledge Persistence (KP) score from the given KG and KG embedding method. The KG is sampled for positive (G+) and negative (G−) triples (step one), keeping the order O(|E|). The edge weights represent the scores obtained from the KG embedding method. In step two, the persistence diagram (PD) is computed using the filtration process explained in Figure 2. In the final step, a Sliced Wasserstein distance (SW) is obtained between the PDs of G+ and G− to get the KP score. In contrast, ranking metrics run the KGE methods over all the O(|E|²) triples, as explained in the bottom left part of the figure (red box).
4 PROBLEM STATEMENT AND METHOD

4.1 Problem Setup
We define a KG as a tuple 𝐾𝐺 = (E, R, T+), where E denotes the set of entities (vertices), R is the set of relations (edges), and T+ ⊆ E × R × E is the set of all triples. A triple τ = (𝑒ℎ, 𝑟, 𝑒𝑡) ∈ T+ indicates that, for the relation 𝑟 ∈ R, 𝑒ℎ is the head entity (origin of the relation) while 𝑒𝑡 is the tail entity. Since 𝐾𝐺 is a multigraph, 𝑒ℎ = 𝑒𝑡 may hold, and |{𝑟𝑒ℎ,𝑒𝑡}| ≥ 0 for any two entities. The KG completion task predicts the entity pairs ⟨𝑒𝑖, 𝑒𝑗⟩ in the KG that have a relation 𝑟𝑐 ∈ R between them.

4.2 Proposed Method
In this section we describe our approach for evaluating KG embedding methods using the theory of persistent homology (PH). This process is divided into three steps (Figure 1), namely: (i) graph construction, (ii) filtration, and (iii) sliced Wasserstein distance computation. The first step creates two graphs (one for positive triples, another for negative triples) by sampling O(|E|) triples, with the scores calculated by a KGE method as edge weights. The second step takes these graphs and, using a process called "filtration," converts them to an equivalent lower-dimensional representation. The last step calculates the distance between the graphs to provide the final metric score. We now detail the approach.
4.2.1 Graph Construction. We envision KGE from the topological lens while proposing an efficient solution for its evaluation. Previous works such as [40] proposed a KGE metric considering only the embedding space. However, we intend to preserve the topology (the graph structure and its topological features) along with the KG embedding features. We first construct graphs of positive and negative triples. We denote a graph as (V, E), where V is the set of 𝑁 nodes and E represents the edges between them. Consider a KG embedding method M that takes as input the triple τ = (ℎ, 𝑟, 𝑡) ∈ T and gives the score 𝑠τ of it being a right triple. We construct a weighted directed graph G+ from the positive triples τ ∈ T+ in the train set, with the entities as the nodes and the relations between them as the edges, having 𝑠τ as the edge weights. Here, 𝑠τ is the score calculated by the KGE method for a triple, and we propose to use it as the edge weight. Our idea is to capture the topology of the graph (G+) together with the representation learned by a KG embedding method. We sample on the order of O(|E|) triples, |E| being the number of entities, to keep the computational time linear. Similarly, we construct a negative graph G− by sampling the same number of unknown triples as positive samples. One question that may arise is whether KP is robust to sampling; we answer this theoretically in Theorem 4.4 and empirically in Section 6. Note that here we do not take all the negative triples in the graphs and consider only a fraction of what the ranking metrics need. This is a fundamental difference with ranking metrics: they use all the unlabeled triples as negatives for ranking, thus incurring a computational cost of O(|E|²).
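The graph construction step can be sketched as follows. This is a simplified illustration: `score` stands in for the trained KGE model's scoring function, and the rejection sampling of negatives assumes the KG is sparse (both are assumptions of this sketch, not part of any specific library API):

```python
import random

def build_graphs(triples, entities, relations, score, seed=0):
    """Sample O(|E|) positive triples and equally many unknown (negative)
    triples, attaching the KGE score of each triple as its edge weight."""
    rng = random.Random(seed)
    n = len(entities)  # keep both graphs linear in the number of entities
    pos = rng.sample(triples, min(n, len(triples)))
    known = set(triples)
    neg = []
    while len(neg) < len(pos):  # rejection-sample triples absent from the KG
        cand = (rng.choice(entities), rng.choice(relations), rng.choice(entities))
        if cand not in known:
            neg.append(cand)
    # weighted edges (head, tail, score) for the positive and negative graphs
    g_pos = [(h, t, score(h, r, t)) for h, r, t in pos]
    g_neg = [(h, t, score(h, r, t)) for h, r, t in neg]
    return g_pos, g_neg
```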
Figure 2: For a KGE method, the positive triple graph G+ is used as input (leftmost graph with edge weights) and the filtration process is applied on the edge weights (calculated by the KGE method) for the graph. The filtration starts with only nodes as the first step and, based on the edge weights, edges are added to the nodes. The persistence diagram is given on the right, with red dots indicating 0-dimensional homology (components) and blue dots indicating 1-dimensional homology (cycles). The persistence diagram generated from this filtration process is a condensed 2D representation of G+. A similar process is repeated for G−.

4.2.2 Filtration Process. Having constructed the graphs G+ and G−, we now need some definition of a distance between them
to define a metric. However, since the KGs could be large, with many entities and relations, directly comparing the graphs could be computationally challenging. Therefore, we turn to the theory of persistent homology (PH) to summarize the structures in the graphs in the form of persistence diagrams (PDs). Such a summary is obtained by a process known as filtration [64]. One can imagine a PD as a mapping of higher-dimensional data to a 2D plane that upholds the representation of the data points; we can then derive computational efficiency for distance comparisons between the 2D representations. Specifically, we compute the 0-dimensional topological features (i.e., connected nodes/components) for each graph (G− and G+) to keep the computation time linear. We also experimented with using the 1-dimensional features, without much empirical benefit.

Consider the positive triple graph G+ as input (cf. Figure 2). We need a scale (as pointed out in Section 3) for the filtration process. Once the filtration process starts, we initially have a graph structure containing only the nodes (entities) of G+ and no edges. For capturing topological features at various scales, we define a variable 𝑎 which varies from −∞ to +∞ and is compared with the edge weights (𝑠τ). A scale allows us to capture the topology at various timesteps. Thus, we use the edge weights obtained from the scores (𝑠τ) of the KGE methods for the filtration. As the filtration proceeds, graph structures (components) are generated/removed. At a given scale 𝑎, the graph structure (G+_sub)_𝑎 contains those edges (triples) for which 𝑠τ ≤ 𝑎. Formally, this is expressed as:

(G+_sub)_𝑎 = { (V, E+_𝑎) | E+_𝑎 ⊆ E, 𝑠τ ≤ 𝑎 ∀τ ∈ E+_𝑎 }

Alternatively, we add those edges for which the score of the triple is greater than or equal to the filtration value, i.e., 𝑠τ ≥ 𝑎, defined as:

(G+_super)_𝑎 = { (V, E+_𝑎) | E+_𝑎 ⊆ E, 𝑠τ ≥ 𝑎 ∀τ ∈ E+_𝑎 }

One can imagine that, for the filtration, the graph G+ is subdivided into (G+_sub)_𝑎 and (G+_super)_𝑎 as the filtration adds/deletes edges for capturing topological features. Hence, specific components in the subgraphs will appear and certain components will disappear at different scale levels (timesteps) 𝑎 = 1, 3, 5 and so on. Please note that Figure 2 explains the creation of the PD for (G+_sub)_𝑎; a similar process is repeated for (G+_super)_𝑎. This expansion/contraction process enables capturing the topology at different timesteps without worrying about defining an optimal scale (similar to a hyperparameter). The next step is the creation of the persistence diagrams of (G+_sub)_𝑎 and (G+_super)_𝑎, where the x-axis and y-axis denote the timesteps of appearance/disappearance of components. For creating the 2D representation, components of the graphs which appear (disappear) during the filtration process at 𝑎𝑥 (𝑎𝑦) are plotted at (𝑎𝑥, 𝑎𝑦). The persistence or lifetime of each component can then be given by 𝑎𝑦 − 𝑎𝑥. At the implementation level, one can view the PDs (∈ 𝑅^{𝑁×2}) of (G+_sub)_𝑎 and (G+_super)_𝑎 as tensors which are concatenated into one common tensor representing the positive triple graph G+. Hence, the final PD of G+ is a concatenation of the PDs of (G+_sub)_𝑎 and (G+_super)_𝑎. This final persistence diagram represents a summary of the local and global topological features of the graph G+. The benefits of a persistence diagram over considering the whole graph are: (1) a 2D summary of higher-dimensional graph-structured data is highly beneficial for large graphs in terms of computational efficiency; (2) the summary could contain fewer data points than the original graph while preserving the topological information. Similarly, the process is repeated for the negative triple graph G− to create its persistence diagram. The two newly created PDs are then used for calculating the proposed metric score.
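For 0-dimensional features, the sublevel filtration of an edge-weighted graph reduces to a union-find sweep over the edges sorted by score. The sketch below assumes non-negative scores so that all nodes can be taken as born at scale 0, and covers only the sublevel direction; in practice one would use a PH library (e.g., GUDHI) and repeat the sweep for the superlevel filtration:

```python
def persistence_0d(num_nodes, edges):
    """0-dimensional persistence diagram of the sublevel filtration of an
    edge-weighted graph: all nodes appear at scale 0, edges enter in order
    of increasing weight, and each merge of two components records a death
    at that edge's weight."""
    parent = list(range(num_nodes))

    def find(x):  # union-find root lookup with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    diagram = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:                  # the edge joins two components
            parent[ru] = rv
            diagram.append((0.0, w))  # one component dies at scale w
    # components that never merge persist forever
    survivors = len({find(i) for i in range(num_nodes)})
    diagram += [(0.0, float("inf"))] * survivors
    return diagram
```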
4.2.3 Sliced Wasserstein Distance Computation. To compare two PDs, generally the Wasserstein distance between them is computed [16]. As the Wasserstein distance could be computationally costly, we instead compute the sliced Wasserstein distance [13] between the PDs, which we empirically observe to be eight times faster on average. The sliced Wasserstein distance (𝑆𝑊) between measures 𝜇 and 𝜈 is:

𝑆𝑊𝑝(𝜇, 𝜈) = ( ∫_{𝑆^{𝑑−1}} 𝑊𝑝^𝑝(𝑅𝜇(·, 𝜃), 𝑅𝜈(·, 𝜃)) 𝑑𝜃 )^{1/𝑝}

where 𝑅𝜇(·, 𝜃) is the projection of 𝜇 along the direction 𝜃 and 𝑊𝑝 is the 𝑝-Wasserstein distance. Generally, a Monte Carlo average over 𝐿 samples is computed instead of the integral. The 𝑆𝑊 distance takes O(𝐿𝑁𝑑 + 𝐿𝑁 log(𝑁)) time, which can be improved to linear time O(𝑁𝑑) for 𝑆𝑊₂ (i.e., Euclidean distance), for which a closed-form solution exists [30]. Thus,

KP(G+, G−) = 𝑆𝑊(𝐷+, 𝐷−)    (1)

where 𝐷+ and 𝐷− are the persistence diagrams of G+ and G−, respectively. Since the metric is obtained by summarizing the knowledge graph using persistence diagrams, we term it Knowledge Persistence (KP). As KP correlates well with the ranking metrics (Sections 4.2.4 and 5), a higher KP signifies better performance of the KGE method.
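A Monte Carlo sketch of the sliced Wasserstein distance between two persistence diagrams of equal size is given below (real implementations additionally match points to the diagonal when the diagrams differ in size; that is omitted here for brevity):

```python
import math
import random

def sliced_wasserstein(pd1, pd2, n_slices=50, seed=0):
    """Monte Carlo sliced 2-Wasserstein distance between two equal-size
    persistence diagrams: project points onto random directions, sort the
    1-D projections and average the squared transport cost."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_slices):
        theta = rng.uniform(0.0, math.pi)  # random direction on the circle
        cx, cy = math.cos(theta), math.sin(theta)
        p1 = sorted(x * cx + y * cy for x, y in pd1)
        p2 = sorted(x * cx + y * cy for x, y in pd2)
        total += sum((a - b) ** 2 for a, b in zip(p1, p2)) / len(p1)
    return math.sqrt(total / n_slices)

# KP(G+, G-) is then simply sliced_wasserstein(D_pos, D_neg)
```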
4.2.4 Theoretical Justification. This section briefly states the theoretical results justifying the proposed method as an approximation of the ranking metrics. We begin the analysis by assuming two distributions: one for the positive graph's edge weights (scores) and the other for the negative graph's. We define a metric "PERM" (Figure 3) that is a proxy to the ranking metrics while being continuous (for the definition of integrals and derivatives), for ease of theoretical analysis. The proof sketches are given in the appendix.

Figure 3: Intuition of the metric PERM, which is designed to be a proxy to the ranking metrics for ease of theoretical analysis. For a given positive triple 𝜏 with score 𝑥𝜏, the expected rank 𝐸𝑅(𝜏) is defined as the area under the curve of the negative distribution from 𝑥𝜏 to ∞ (the shaded area above). PERM is then defined as the expectation of the expected rank under the positive distribution.

Definition 4.1 (Expected Ranking (ER)). Consider the positive triples to have the distribution 𝐷+ and the negative triples to have the distribution 𝐷−. For a positive triple with score 𝑎, its expected ranking (ER) is defined as

𝐸𝑅(𝑎) = ∫_{𝑥=𝑎}^{𝑥=∞} 𝐷−(𝑥) 𝑑𝑥

Definition 4.2 (PERM). Consider the positive triples to have the distribution 𝐷+ and the negative triples to have the distribution 𝐷−. The PERM metric is then defined as

𝑃𝐸𝑅𝑀 = ∫_{𝑥=−∞}^{𝑥=∞} 𝐷+(𝑥) 𝐸𝑅(𝑥) 𝑑𝑥
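Under a Gaussian assumption on the two score distributions (used here purely for illustration), PERM has a simple closed form: ER(a) is the upper tail of D−, so PERM = E_{a∼D+}[ER(a)] = P(X− > X+), the probability that a random negative outscores a random positive.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def perm_gaussian(mu_pos, sd_pos, mu_neg, sd_neg):
    """PERM for Gaussian score distributions: P(X- > X+) where
    X+ ~ N(mu_pos, sd_pos^2) and X- ~ N(mu_neg, sd_neg^2)."""
    return phi((mu_neg - mu_pos) / math.sqrt(sd_pos**2 + sd_neg**2))
```

With identical distributions PERM is 0.5; as the positive scores move above the negative ones, PERM falls toward 0, mirroring an improving (lower) mean rank.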
It is easy to see that PERM has a monotone increasing correspondence with the actual ranking metrics. That is, as more of the negative triples get a higher score than the positive triples, the distribution of the negative triples shifts further to the right of the positive distribution; hence, the area under the curve increases for a given triple (x = a). Having established a monotone increasing correspondence of PERM with the ranking metrics, we now need to show that there exists a one-one correspondence between PERM and KP. For closed-form solutions, we work with normalised distributions (this can be extended to other distributions using [39]) of the KGE scores under the following mild consideration: as the KGE method converges, the mean statistic (𝑚𝜈) of the scores of the positive triples consistently lies on one side of the half-plane formed by the mean statistic (𝑚𝜇) of the negative triples, irrespective of the data distribution.

Lemma 4.1. KP has a monotone increasing correspondence with the Proxy of the Expected Ranking Metrics (PERM) under the above-stated considerations as 𝑚𝜈 deviates from 𝑚𝜇.

The above lemma shows that there is a one-one correspondence between KP and PERM, and by definition PERM has a one-one correspondence with the ranking metrics. Therefore, the next theorem follows as a natural consequence:

Theorem 4.3. KP has a one-one correspondence with the ranking metrics under the above-stated considerations.

The above theorem states that, with high probability, there exists a correlation between KP and the ranking metrics under certain considerations; the proof details are in the appendix. In an ideal case, we seek a linear relationship between the proposed measure and the ranking metric. This would help interpret whether an increase/decrease in the measure causes a corresponding increase/decrease in the ranking metric we wish to simulate. Such interpretation becomes essential when the proposed metric has a different behavior from the existing metric. While the correlation could be high, for interpretability of the results we would also like the change in KP to be bounded for a change in the scores (ranking metrics). The theorem below gives a sense of this bound.

Theorem 4.4. Under the considerations of Theorem 4.3, the relative change in KP on addition of random noise to the scores is bounded by a function of the original and noise-induced covariance matrices as

ΔKP/KP ≤ max( 1 − |Σ𝜇1 Σ𝜇2⁻¹|^{3/2}, 1 − |Σ𝜈1 Σ𝜈2⁻¹|^{3/2} )

where Σ𝜇1 and Σ𝜈1 are the covariance matrices of the positive and negative triples' scores, respectively, and Σ𝜇2 and Σ𝜈2 are those of the corrupted scores.

Theorem 4.4 gives a bound on the change in KP while inducing noise in the KGE predictions. Ideally, the error/change would be 0, and as the noise is increased (and the ranking changed), the KP value gradually changes in a bounded manner, as desired.
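For intuition, the bound can be instantiated in the scalar case, where each covariance "matrix" is a single variance and the determinant |Σ1 Σ2⁻¹| reduces to a variance ratio. This is an illustrative reading of the bound, not the general matrix computation:

```python
def kp_change_bound(var_pos, var_pos_noisy, var_neg, var_neg_noisy):
    """Scalar instance of the Theorem 4.4 bound on the relative change
    in KP: the maximum, over the positive and negative score
    distributions, of 1 - (var_original / var_noisy) ** (3/2)."""
    b_pos = 1.0 - (var_pos / var_pos_noisy) ** 1.5
    b_neg = 1.0 - (var_neg / var_neg_noisy) ** 1.5
    return max(b_pos, b_neg)
```

With no added noise the variances are unchanged and the bound is 0; as noise inflates the variances, the admissible relative change in KP grows, matching the behavior described above.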
5 EXPERIMENTAL SETUP
For the de-facto KGC task (cf. Section 4.1), we use popular KG embedding methods from its various categories: (1) Translation: TransE [8], TransH [57], TransR [26]; (2) Bilinear, Rotation, and Factorization: RotatE [44], TuckER [3], and ComplEx [48]; (3) Neural Network based: ConvKB [32]. The method selection and evaluation choices are similar to [28, 37], which propose new metrics for KG embeddings. All methods run on a single P100 GPU machine for a maximum of 100 epochs each and are evaluated every 5 epochs. For training/testing the KG embedding methods we make use of the pykg2vec [61] library, and validation runs are executed 20 times on average. We use the standard/best hyperparameters for these datasets, as reported by the considered KGE methods [3, 8, 26, 44, 48, 57, 61].
5.1 Datasets
We use standard English KG completion datasets: WN18, WN18RR, FB15k-237, FB15k, and YAGO3-10 [2, 44]. The WN18 dataset is obtained from WordNet [27] and contains lexical relations between English words. WN18RR removes the inverse relations in the WN18 dataset. FB15k is obtained from the Freebase [7] knowledge graph, and FB15k-237 was created from FB15k by removing the inverse relations. The dataset details are in Table 1. For the scaling experiment, we rely on the large-scale YAGO3-10 dataset [2]; for brevity, the results for YAGO3-10 are in the appendix (cf. Figure 6 and Table 9).
5.2 Comparative Methods
Considering ours is the first work of its kind, we select some competitive baselines as below and explain why we chose them. For evaluation, we report the correlation [14] of KP and the baselines with the ranking metrics (Hits@N (N=1,3,10), MRR, and MR).

Conicity [40]: It computes the average cosine of the angle between an embedding and the mean embedding vector. In a sense, it gives the spread of a KG embedding method in space. We would like to observe whether, instead of topology, calculating the geometric properties of a KG embedding method could be an alternative to the ranking metrics.

Average Vector Length (AVL): This metric was also proposed by Sharma et al. [40] to study the geometry of KG embedding methods. It computes the average length of the embeddings.

Graph Kernel (GK): We use graph kernels to compare the two graphs (G+, G−) obtained for our approach. The rationale is to check whether we could get some distance metric that correlates with the ranking metrics without persistent homology. Hence, this baseline provides a direct comparison for the validity of persistent homology in our proposed method. As an implementation, we employ the widely used shortest path kernel [10] to compare how the paths (edge weights/scores) change between the two graphs. Since the method is computationally expensive, we sample nodes [11] and apply the kernel on the sampled graph, averaging over multiple runs.
Table 1: (Open-Source) Benchmark Datasets for Experiments.

Dataset    | Triples   | Entities | Relations
-----------|-----------|----------|----------
FB15K      | 592,213   | 14,951   | 1,345
FB15K-237  | 272,115   | 14,541   | 237
WN18       | 151,442   | 40,943   | 18
WN18RR     | 93,003    | 40,943   | 11
Yago3-10   | 1,089,040 | 123,182  | 37
+ 6
621
+ RESULTS AND DISCUSSION
622
+ We conduct our experiments in response to the following research
+ questions: RQ1: Is there a correlation between the proposed metric
+ and the ranking metrics for popular KG embedding methods? RQ2: Can
+ the proposed metric be used to perform early stopping during
+ training? RQ3: What is the computational efficiency of the proposed
+ metric w.r.t. the ranking metrics for KGE evaluation?
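For context on what KP is compared against: the ranking metrics require scoring every candidate entity for each test triple (the expensive step KP avoids) and then aggregating the resulting ranks. A minimal sketch of the aggregation, given precomputed filtered ranks (names are ours):

```python
def ranking_metrics(ranks, ks=(1, 3, 10)):
    """Standard link-prediction metrics from the rank of each test triple."""
    n = len(ranks)
    out = {"MR": sum(ranks) / n,                       # mean rank (lower is better)
           "MRR": sum(1.0 / r for r in ranks) / n}    # mean reciprocal rank
    for k in ks:
        out[f"Hits@{k}"] = sum(1 for r in ranks if r <= k) / n
    return out

# Four test triples ranked 1st, 2nd, 11th, and 4th among all candidates.
m = ranking_metrics([1, 2, 11, 4])
```

Hits@1 here is 0.25 and Hits@10 is 0.75; the cost lies not in this aggregation but in producing the ranks, which scales with the number of entities.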
+ KP for faster prototyping of KGE methods: Our core hypothesis in
+ this paper is that we can develop an efficient alternative (proxy) to
+ the ranking metrics. Hence, for a fair evaluation, we use the triples
+ in the test set for computing KP. Ideally, this should simulate the
+ evaluation of the ranking metrics on the same (test) set. If true,
+ there exists a high correlation between the two measures, namely KP
+ and the ranking metrics. Table 2 shows the linear correlations
+ between the ranking metrics and our method & the baselines. We report
+ the linear (Pearson’s) correlation because we would like a linear
+ relationship between the proposed measure and the ranking metric (for
+ brevity, other correlations are in appendix Tables 7, 8). This helps
+ interpret whether an increase/decrease in the measure causes a
+ corresponding increase/decrease in the ranking metric that we wish to
+ simulate. Specifically, we train all the KG embedding methods for a
+ predefined number of epochs and evaluate the finally obtained models
+ to get the ranking metrics and KP. The correlations are then computed
+ between KP and each of the ranking metrics. We observe that the
+ KP(test) configuration (triples are sampled from the test set)
+ achieves the highest correlation coefficient value among all the
+ existing geometric and kernel baseline methods in most cases. For
+ instance, on FB15K, KP(test) reports a high correlation value of
+ 0.786 with Hits@1, whereas the best baseline for this dataset (AVL)
+ has a corresponding correlation value of 0.339. Similarly, for
+ WN18RR, KP(test) has a correlation value of 0.482 with Hits@1,
+ compared to -0.272 for AVL. Conicity and AVL, which provide a
+ geometric perspective, mostly show low positive correlations with the
+ ranking metrics, whereas the Graph Kernel based method shows highly
+ negative correlations, making these methods unsuitable for direct
+ applicability. This indicates that the topology of the KG induced by
+ the learnt representations is a good predictor of the performance on
+ similar data distributions, with a high correlation with the ranking
+ metrics (answering RQ1).
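The correlations in Table 2 are computed across KG embedding methods: one (KP, ranking-metric) pair per trained method, then Pearson's r over the pairs. A self-contained sketch of the coefficient (the values below are illustrative, not from the paper):

```python
import math

def pearson_r(xs, ys):
    """Pearson's linear correlation coefficient between two paired lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfectly linear relation between a proxy metric and Hits@10 gives r = 1.
kp_scores = [0.2, 0.4, 0.6, 0.8]     # hypothetical KP values, one per method
hits10 = [0.30, 0.40, 0.50, 0.60]    # hypothetical Hits@10 values
r = pearson_r(kp_scores, hits10)
```

A linear relationship is exactly what makes the proxy interpretable: a rise in KP should translate into a proportional rise in the ranking metric.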
+ Furthermore, the results also report a configuration KP(train) in
+ which we compute KP on the triples of the train set and find the
+ correlation with the ranking metrics obtained from the test set.
+ Here, our rationale is to study whether the proposed metric can
+ capture the generalizability to unseen test (real-world) data that is
+ of a similar distribution as the training data. Initial results in
+ Table 2 are promising, with a high correlation of KP(train) with the
+ ranking metrics. Hence, it may enable the use of KP in settings
+ without test/validation data while using the available (possibly
+ limited) data for training, for example, in few-shot scenarios. We
+ leave this promising direction of research for future work.
+ 6.1 KP as a criterion for early stopping
+ Does KP hold correlation while early stopping? To know when to
+ stop the training process to prevent overfitting, we must be able to
+ estimate the variance of the model. This is generally done by
+ observing the validation/test set error. Thus, to use a method as a
+ criterion for early stopping, it should be able to predict this
+ generalization error. While Table 2 shows that KP(Train) can predict
+ the generalizability of methods at the last epoch, it remains to
+ empirically verify that KP also predicts the performance at every
+ interval during the training process. Hence, we study the
+ correlations of the proposed method with the ranking metrics for
+ individual KG embedding methods in the intra-method setting.
+ Specifically, for a given method, we obtain the KP score and the
+ ranking metrics on the test set and compute the correlations at every
+ evaluation interval. The results in Table 3 suggest that KP has a
+ decent correlation in the intra-method setting. This indicates that
+ KP could be used in place of the ranking metrics as a criterion for
+ early stopping if the score keeps persistently falling (answering RQ2).
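Using KP as an early-stopping criterion follows the usual patience scheme, with the proxy score standing in for the validation ranking metric. A small sketch (names and the patience policy are ours, not prescribed by the paper):

```python
def early_stop_by_proxy(proxy_scores, patience=3):
    """Scan per-interval proxy scores (higher is better). Return the epoch
    at which training stops (no improvement for `patience` evaluations)
    and the epoch of the best checkpoint seen so far."""
    best, best_epoch, bad = float("-inf"), 0, 0
    for epoch, score in enumerate(proxy_scores):
        if score > best:
            best, best_epoch, bad = score, epoch, 0   # improvement: reset patience
        else:
            bad += 1
            if bad >= patience:
                return epoch, best_epoch              # stop: score kept falling
    return len(proxy_scores) - 1, best_epoch

# Scores plateau after epoch 3; with patience 3 we stop at epoch 6
# and keep the checkpoint from epoch 3.
stop, best = early_stop_by_proxy([0.1, 0.2, 0.3, 0.35, 0.34, 0.33, 0.32])
```

Because each KP evaluation is cheap, the evaluation interval can be made much shorter than with ranking metrics at no meaningful cost.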
+ What is the relative error of early stopping between KP and the
+ ranking metrics? To further cross-validate our response to RQ2, we
+ now compute the absolute relative error between the ranking metrics
+ of the best models selected by KP and the expected ranking metrics.
+ Ideally, we would expect the performance of the model obtained using
+ this process on unseen test data (preferably of the same
+ distribution) to be close to the best achievable result, i.e., the
+ relative error should be small. This is important because if we were
+ to use any metric for faster prototyping, it should also be a good
+ criterion for model selection (selecting a model with low
+ generalization error) while being efficient. Table 4 shows that the
+ relative error is marginal, of the order of 10−2, in most cases (with
+ few exceptions), indicating that KP could be used for early
+ stopping. The deviation is higher for some methods, such as ConvKB,
+
+ Can Persistent Homology provide an efficient alternative
+ FB15K      | Hits1(↑) Hits3(↑) Hits10(↑) MR(↓)   MRR(↑)
+ Conicity   | -0.156   -0.170   -0.202    0.085   -0.183
+ AVL        |  0.339    0.325    0.261   -0.423    0.308
+ GK(train)  | -0.825   -0.852   -0.815    0.952   -0.843
+ GK(test)   | -0.285   -0.318   -0.247    0.629   -0.300
+ KP (Train) |  0.482    0.418    0.449   -0.072    0.433
+ KP (Test)  |  0.786    0.731    0.661   -0.669    0.721
+
+ FB15K237   | Hits1(↑) Hits3(↑) Hits10(↑) MR(↓)   MRR(↑)
+ Conicity   |  0.509    0.379    0.356   -0.352    0.424
+ AVL        | -0.527   -0.149   -0.158    0.188   -0.284
+ GK(train)  | -0.903   -0.955   -0.972    0.970   -0.965
+ GK(test)   | -0.031   -0.130   -0.123    0.101   -0.095
+ KP (Train) |  0.773    0.711    0.702   -0.714    0.745
+ KP (Test)  |  0.825    0.870    0.864   -0.861    0.871
+
+ WN18       | Hits1(↑) Hits3(↑) Hits10(↑) MR(↓)   MRR(↑)
+ Conicity   | -0.052   -0.096   -0.123    0.389   -0.096
+ AVL        |  0.805    0.825    0.856   -0.884    0.840
+ GK(train)  | -0.645   -0.648   -0.669    0.611   -0.663
+ GK(test)   | -0.579   -0.565   -0.569    0.412   -0.575
+ KP (Train) |  0.769    0.769    0.782   -0.682    0.780
+ KP (Test)  |  0.875    0.887    0.909   -0.884    0.899
+
+ WN18RR     | Hits1(↑) Hits3(↑) Hits10(↑) MR(↓)   MRR(↑)
+ Conicity   | -0.267   -0.471   -0.510    0.266   -0.448
+ AVL        | -0.272   -0.456   -0.488    0.303   -0.438
+ GK(train)  | -0.518   -0.808   -0.840    0.591   -0.779
+ GK(test)   | -0.276   -0.589   -0.658    0.470   -0.549
+ KP (Train) |  0.500    0.809    0.852   -0.755    0.777
+ KP (Test)  |  0.482    0.816    0.863   -0.683    0.776
+ Table 2: Pearson’s linear correlation (r) scores computed from the
+ metric scores with respect to the ranking metrics on the standard KG
+ embedding datasets. The KG methods are evaluated after training. Best
+ values were marked in green in the original.
+ Datasets   | FB15K237            | WN18RR
+ KG methods | r     ρ     τ       | r     ρ     τ
+ TransE     | 0.955 0.861 0.709   | 0.876 0.833 0.722
+ TransH     | 0.688 0.570 0.409   | 0.864 0.717 0.555
+ TransR     | 0.975 0.942 0.811   | 0.954 0.967 0.889
+ Complex    | 0.938 0.788 0.610   | 0.833 0.933 0.833
+ RotatE     | 0.896 0.735 0.579   | 0.774 0.983 0.944
+ TuckER     | 0.906 0.676 0.527   | 0.352 0.250 0.167
+ ConvKB     | 0.086 0.012 0.007   | 0.276 0.569 0.422
+ Table 3: Correlation scores computed between KP and the ranking
+ metric (Hits@10) on the standard KG embedding datasets, with the
+ methods evaluated at every interval as training progresses. Here, r:
+ Pearson correlation coefficient, ρ: Spearman’s correlation
+ coefficient, τ: Kendall’s Tau.
+ which had convergence issues. We infer from this behavior that if a
+ KG embedding method has not converged (to good results), the
+ correlation, and thus the early-stopping prediction, may suffer.
+ Despite a few outliers, these promising results should encourage the
+ community to research, develop, and use KGE benchmarking methods that
+ are also computationally efficient.
+ Datasets   | FB15K237              | WN18RR
+ KG methods | hits@1 hits@10 MRR    | hits@1 hits@10 MRR
+ TransE     | 0.006  0.006   0.007  | 0.000  0.007   0.004
+ TransH     | 0.045  0.015   0.019  | 0.130  0.018   0.023
+ TransR     | 0.074  0.045   0.053  | 0.242  0.062   0.016
+ Complex    | 0.001  0.002   0.003  | 0.317  0.021   0.028
+ RotatE     | 0.022  0.009   0.007  | 0.017  0.005   0.009
+ TuckER     | 0.008  0.006   0.002  | 0.293  0.022   0.101
+ ConvKB     | 0.000  0.043   0.043  | 0.659  0.453   0.569
+ Table 4: Early stopping using KP. The values depict the absolute
+ relative error between the metrics of the best models selected using
+ KP and the ranking metrics.
+ 6.2 Timing analysis and carbon footprint
+ We now study the time taken for running the evaluation (including
+ evaluation at intervals) of the same methods as in Section 6.1 on the
+ standard datasets. Table 5 shows the evaluation times (validation +
+ test) and the speedup for each method on the respective datasets. The
+ training time is the same for the ranking metrics and KP. In some
+ cases (ConvKB), KP achieves a speedup of up to 2576x on model
+ evaluation time, drastically reducing it from 18 hours to 27 seconds;
+ the latter is roughly comparable to the carbon footprint of making a
+ cup of coffee2. Furthermore, Figure 4 illustrates the carbon
+ footprints [33, 59] of the overall process (training + evaluation)
+ for the methods when using KP vs the ranking metrics. Because KP
+ drastically reduces the evaluation time, it also reduces the overall
+ carbon footprint. These promising results validate our attempt to
+ develop an alternative method for faster prototyping of KGE methods,
+ thus saving carbon footprint (answering RQ3).
+ 2https://tinyurl.com/4w2xmwry
+ Figure 4: A study of the carbon footprint on WN18RR when using KP vs
+ Hits@10. The x-axis shows the carbon footprint in g eq CO2.
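A carbon-footprint estimate of the kind plotted in Figure 4 multiplies energy consumed by the grid's carbon intensity. The sketch below is ours, not the paper's accounting method; the 250 W power draw and the 475 gCO2/kWh intensity (a commonly cited global grid average) are assumptions to be replaced with measured values:

```python
def carbon_grams(runtime_hours, avg_power_watts, grid_gco2_per_kwh=475.0):
    """Rough CO2-equivalent estimate: energy in kWh times the grid's
    carbon intensity in gCO2 per kWh."""
    energy_kwh = runtime_hours * avg_power_watts / 1000.0
    return energy_kwh * grid_gco2_per_kwh

# 18 hours of ranking-metric evaluation vs 27 seconds of KP,
# both at an assumed 250 W average draw.
ranking_g = carbon_grams(18.0, 250)
kp_g = carbon_grams(27 / 3600.0, 250)
```

The evaluation-time gap translates directly into the footprint gap, since training time is identical in both settings.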
+ 6.3 Ablation Studies
+ We systematically provide several studies to support our evaluation
+ and characterize different properties of KP.
+ Robustness to noise induced by sampling: An important property
+ that makes persistent homology worthwhile is its stability with
+ respect to perturbations of the filtration function. This means that
+ persistent homology is robust to noise and encodes the intrinsic
+ topological properties of the data [19]. However, in our application
+ of predicting the performance of KG embedding methods, one source of
+ noise is the sampling of the negative and positive triples. It could
+ cause perturbations in the graph topology due to the addition and
+ deletion of edges (cf. Figure 2). Therefore, we would like the
+ proposed metric to be stable with respect to these perturbations. To
+ understand the behavior of KP against this noise, we conduct a study
+ by incrementally adding samples to the graph and observing the mean
+ and standard deviation of the correlation at each stage. In the ideal
+ case, assuming the KG topology remains similar, the mean correlations
+ should lie in a narrow range with slight standard deviations. We
+ observe this effect in Figure 5, where we report the mean correlation
+ at various fractions of triples sampled, with the standard deviation
+ as error bands. Here, the mean correlation coefficients lie within a
+ range of 0.06 (0.04), and the average standard deviations are about
+ 0.02 (0.02) for the FB15K237 (WN18RR)
+
+ [Figure 4: horizontal bar chart of the carbon footprint (in g eq CO2)
+ of the overall KGE prototyping process using KP vs using ranking
+ metrics, for TransE, TransH, TransR, Complex, RotatE, ConvKB, and
+ TuckER.]
+ Metrics | Hits@10 (val+test)  | KP (val+test)       | Speedup ↑ (Avg)
+ Dataset | FB15K237  WN18RR    | FB15K237  WN18RR    | FB15K237  WN18RR
+ TransE  | 103.6     86.1      | 0.337     0.120     | x 322.8   x 754.1
+ TransH  | 37.1      21.2      | 0.333     0.099     | x 117.0   x 224.4
+ TransR  | 192.0     137.1     | 0.352     0.135     | x 572.0   x 1066.4
+ Complex | 136.1     151.4     | 0.340     0.142     | x 420.1   x 1121.7
+ RotatE  | 174.2     155.2     | 0.359     0.142     | x 509.5   x 1145.6
+ TuckER  | 94.8      22.1      | 0.332     0.098     | x 300.0   x 241.9
+ ConvKB  | 1106.0    138.1     | 0.451     0.139     | x 2576.6  x 1044.3
+ Table 5: Evaluation metric comparison w.r.t. computing time (in
+ minutes, for 100 epochs). Column 1 denotes popular KGE methods.
+ Depicted values denote the evaluation (validation + test) time for
+ computing a metric and the corresponding speedup using KP. KP
+ significantly reduces the evaluation time (marked green in the
+ original).
+ dataset. This shows that KP inherits the robustness of topological
+ data analysis techniques, enabling linear-time computation by
+ sampling from the graph for dense KGs while remaining robust.
+ Figure 5: Effect of sample size on the correlation coefficient
+ between KP and the ranking metrics on the FB15K237 (left) and WN18RR
+ (right) datasets. The correlations for the different sampling
+ fractions are comparable, and the standard deviation is small,
+ indicating the method’s robustness to changes in local topology
+ induced by sampling.
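The stability study amounts to recomputing the correlation on repeated random subsamples and reporting its mean and standard deviation. A simplified stand-in (our naming; the paper resamples triples from the graph rather than a flat paired list):

```python
import math
import random

def correlation_stability(scores, metric, frac, runs=5, seed=0):
    """Mean and standard deviation of Pearson's r over `runs` random
    subsamples of size frac*len(scores) drawn from a paired list."""
    rng = random.Random(seed)
    n = max(2, int(frac * len(scores)))
    rs = []
    for _ in range(runs):
        idx = rng.sample(range(len(scores)), n)
        xs = [scores[i] for i in idx]
        ys = [metric[i] for i in idx]
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        rs.append(cov / (sx * sy))
    mean = sum(rs) / runs
    std = math.sqrt(sum((r - mean) ** 2 for r in rs) / runs)
    return mean, std

# Perfectly linearly related pairs: every subsample yields r = 1,
# so the mean is 1 and the deviation is 0 at any sampling fraction.
mean, std = correlation_stability(list(range(10)),
                                  [2 * v + 1 for v in range(10)], frac=0.5)
```

Sweeping `frac` and plotting the mean with the standard deviation as an error band reproduces the shape of the study in Figure 5.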
+ Generalizability Study - Correlation with Stratified Ranking
+ Metric: Mohamed et al. [28] proposed a new stratified metric
+ (strat-metric) that, unlike the standard ranking metrics, can be
+ tuned to focus on unpopular entities using certain hyperparameters
+ (𝛽𝑒 ∈ (−1, 1), 𝛽𝑟 ∈ (−1, 1)). Special cases of these hyperparameters
+ give the micro and macro ranking metrics. The goal here is to study
+ whether our method can predict the strat-metric for the special case
+ of 𝛽𝑒 = 1, 𝛽𝑟 = 0, which estimates the performance on unpopular
+ (sparse) entities. We also aim to observe whether KP holds a
+ correlation with variants of the ranking metric concerning its
+ generalization ability. The results (cf. Table 6) show that KP has a
+ good correlation with each of the stratified ranking metrics, which
+ indicates that KP also takes into account the local
+ geometry/topology [1] of the sparse entities and relations.
+ 6.4 Summary of Results and Open Directions
+ To sum up, the following are the key observations gleaned from our
+ empirical studies: 1) KP shows a high correlation with the ranking
+ metrics (Table 2) and their stratified version (Table 6), paving the
+ way for the use of KP for faster prototyping of KGE methods. 2) KP holds
+ Datasets          | FB15K237              | WN18RR
+ Metrics           | r      ρ      τ       | r      ρ      τ
+ Strat-Hits@1 (↑)  | 0.965  0.857  0.714   | 0.513  0.482  0.411
+ Strat-Hits@3 (↑)  | 0.898  0.821  0.619   | 0.691  0.714  0.524
+ Strat-Hits@10 (↑) | 0.871  0.821  0.619   | 0.870  0.750  0.619
+ Strat-MR (↓)      | -0.813 -0.679 -0.524  | -0.701 -0.821 -0.619
+ Strat-MRR (↑)     | 0.806  0.679  0.524   | 0.658  0.714  0.524
+ Table 6: KP correlation with the stratified ranking metrics proposed
+ in [28].
+ a high correlation at every interval during the training process
+ (Table 3) with marginal relative error; hence, it could be used for
+ early stopping of a KGE method. 3) KP inherits key properties of
+ persistent homology, i.e., it is robust to the noise induced by
+ sampling. 4) The overall carbon footprint of the evaluation cycle is
+ drastically reduced when KP is preferred over the ranking metrics.
+ What’s Next? We show, with conclusive empirical evidence and
+ supporting theoretical foundations, that topological data analysis
+ based on persistent homology can act as a proxy for the ranking
+ metrics. However, this is the first step toward a more extensive
+ research agenda. We believe substantial collective work is needed in
+ the research community to develop strong foundations and solve
+ scaling issues (across embedding methods, datasets, KGs, etc.) before
+ persistent homology-based methods are widely adopted.
+ For example, there could be methods/datasets where the correlation
+ turns out to be a small positive value or even negative, in which
+ case we may not be able to use KP in its existing form to simulate
+ the ranking metrics for those methods/datasets. In such cases, a
+ suitable alteration may exist and deserves further exploration,
+ similar to how the stratified ranking metric [28] fixes issues
+ encountered in the standard ranking metrics. Furthermore, Theorem 4.4
+ would be key to understanding error bounds when interpreting limited
+ performance (e.g., when the correlation is a small positive value).
+ This does not, however, limit the use of KP for KGE methods, as it
+ captures and contrasts the topology of the positive and negative
+ sampled graphs learned by these methods, which is a useful metric by
+ itself. In this paper, the emphasis is on the need for evaluation and
+ benchmarking methods that are computationally efficient rather than
+ on providing an exhaustive one-method-fits-all metric. We believe
+ there is much scope for future research in this direction. Some
+ promising directions include 1) better sampling techniques (instead
+ of the random sampling used in this paper), 2) rigorous theoretical
+ analysis drawing the boundaries of the abilities/limitations across
+ settings (zero-shot, few-shot, etc.), and 3) using KP (and related
+ metrics) in continuous spaces, where it could be differentiable and
+ approximate the ranking metrics within the optimization process of
+ KGE methods.
+ 7 CONCLUSION
+ We propose Knowledge Persistence (KP), the first work that uses
+ techniques from topological data analysis as a predictor of the
+ ranking metrics to efficiently evaluate the performance of KG
+ embedding approaches. With theoretical and empirical evidence, our
+ work brings efficiency to center stage in the evaluation of KG
+ embedding methods, alongside the traditional way of reporting their
+ performance. Finally, with efficiency as a crucial criterion for
+ evaluation, we hope
+
+ KGE research becomes more inclusive and accessible to the broader
+ research community with limited computing resources.
+ Acknowledgment: This work was partly supported by JSPS KAKENHI Grant
+ Number JP21K21280.
+ REFERENCES
1280
+ [1] Henry Adams and Michael Moy. 2021. Topology Applied to Machine
1281
+ Learning: From Global to Local. Frontiers in Artificial Intelligence 4
1282
+ (2021), 54.
1283
+ [2] Mehdi Ali, Max Berrendorf, Charles Tapley Hoyt, Laurent Vermue,
1284
+ Mikhail Galkin, Sahand Sharifzadeh, Asja Fischer, Volker Tresp, and
1285
+ Jens Lehmann. 2021. Bringing light into the dark: A large-scale evalua-
1286
+ tion of knowledge graph embedding models under a unified framework.
1287
+ IEEE Transactions on Pattern Analysis and Machine Intelligence (2021).
1288
+ [3] Ivana Balažević, Carl Allen, and Timothy Hospedales. 2019. TuckER:
1289
+ Tensor Factorization for Knowledge Graph Completion. In Proceedings
1290
+ of the 2019 Conference on Empirical Methods in Natural Language Pro-
1291
+ cessing and the 9th International Joint Conference on Natural Language
1292
+ Processing (EMNLP-IJCNLP). 5185–5194.
1293
+ [4] Iti Bansal, Sudhanshu Tiwari, and Carlos R Rivero. 2020. The impact
1294
+ of negative triple generation strategies and anomalies on knowledge
1295
+ graph completion. In Proceedings of the 29th ACM International Con-
1296
+ ference on Information & Knowledge Management. 45–54.
1297
+ [5] Anson Bastos, Kuldeep Singh, Abhishek Nadgeri, Saeedeh Shekarpour,
1298
+ Isaiah Onando Mulang, and Johannes Hoffart. 2021. Hopfe: Knowl-
1299
+ edge graph representation learning using inverse hopf fibrations. In
1300
+ Proceedings of the 30th ACM International Conference on Information &
1301
+ Knowledge Management. 89–99.
1302
+ [6] Max Berrendorf, Evgeniy Faerman, Laurent Vermue, and Volker Tresp.
1303
+ 2020. Interpretable and Fair Comparison of Link Prediction or Entity
1304
+ Alignment Methods. In 2020 IEEE/WIC/ACM International Joint Con-
1305
+ ference on Web Intelligence and Intelligent Agent Technology (WI-IAT).
1306
+ IEEE, 371–374.
1307
+ [7] Kurt D. Bollacker, Robert P. Cook, and Patrick Tufts. 2007. Freebase: A
1308
+ Shared Database of Structured General Human Knowledge. In AAAI.
1309
+ [8] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston,
1310
+ and Oksana Yakhnenko. 2013. Translating embeddings for modeling
1311
+ multi-relational data. In NeurlPS. 1–9.
1312
+ [9] Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio.
1313
+ 2011. Learning structured embeddings of knowledge bases. In Twenty-
1314
+ fifth AAAI conference on artificial intelligence.
1315
+ [10] Karsten M Borgwardt and Hans-Peter Kriegel. 2005. Shortest-path
1316
+ kernels on graphs. In Fifth IEEE international conference on data mining
1317
+ (ICDM’05). IEEE, 8–pp.
1318
+ [11] Karsten M. Borgwardt, Tobias Petri, S. V. N. Vishwanathan, and Hans-
1319
+ Peter Kriegel. 2007. An Efficient Sampling Scheme For Comparison of
1320
+ Large Graphs. In Mining and Learning with Graphs, MLG.
1321
+ [12] Andrei Z Broder, Marc Najork, and Janet L Wiener. 2003. Efficient
1322
+ URL caching for world wide web crawling. In Proceedings of the 12th
1323
+ international conference on World Wide Web. 679–689.
1324
+ [13] Mathieu Carrière, Marco Cuturi, and Steve Oudot. 2017. Sliced Wasser-
1325
+ stein Kernel for Persistence Diagrams. In Proceedings of the 34th In-
1326
+ ternational Conference on Machine Learning (Proceedings of Machine
1327
+ Learning Research), Vol. 70. PMLR, 664–673.
1328
+ [14] Nian Shong Chok. 2010. Pearson’s versus Spearman’s and Kendall’s cor-
1329
+ relation coefficients for continuous data. Ph.D. Dissertation. University
1330
+ of Pittsburgh.
1331
+ [15] Herbert Edelsbrunner, David Letscher, and Afra Zomorodian. 2000.
1332
+ Topological persistence and simplification. In Proceedings 41st annual
1333
+ symposium on foundations of computer science. IEEE, 454–463.
1334
+ [16] Brittany Fasy, Yu Qin, Brian Summa, and Carola Wenk. 2020. Compar-
1335
+ ing Distance Metrics on Vectorized Persistence Summaries. In NeurIPS
1336
+ 2020 Workshop on Topological Data Analysis and Beyond.
1337
+ [17] Luis Galárraga, Katja Hose, and Ralf Schenkel. 2014. Partout: a dis-
1338
+ tributed engine for efficient RDF processing. In Proceedings of the 23rd
1339
+ International Conference on World Wide Web. 267–268.
1340
+ [18] Genet Asefa Gesese, Russa Biswas, Mehwish Alam, and Harald Sack.
1341
+ 2019. A survey on knowledge graph embeddings with literals: Which
1342
+ model links better literal-ly? Semantic Web Preprint (2019), 1–31.
1343
+ [19] Felix Hensel, Michael Moor, and Bastian Rieck. 2021. A survey of topo-
1344
+ logical machine learning methods. Frontiers in Artificial Intelligence 4
1345
+ (2021), 52.
1346
+ [20] Nitisha Jain, Jan-Christoph Kalo, Wolf-Tilo Balke, and Ralf Krestel.
1347
+ 2021. Do Embeddings Actually Capture Knowledge Graph Semantics?.
1348
+ In European Semantic Web Conference. Springer, 143–159.
1349
+ [21] Prachi Jain, Sushant Rathi, Soumen Chakrabarti, et al. 2020. Knowl-
1350
+ edge base completion: Baseline strikes back (again). arXiv preprint
1351
+ arXiv:2005.00804 (2020).
1352
+ [22] Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and Philip S
1353
+ Yu. 2021. A survey on knowledge graphs: Representation, acquisition
1354
+ and applications. EEE Transactions on Neural Networks and Learning
1355
+ Systems (2021).
1356
+ [23] Rudolf Kadlec, Ondřej Bajgar, and Jan Kleindienst. 2017. Knowledge
1357
+ Base Completion: Baselines Strike Back. In Proceedings of the 2nd
1358
+ Workshop on Representation Learning for NLP. 69–74.
1359
+ [24] Zelong Li, Jianchao Ji, Zuohui Fu, Yingqiang Ge, Shuyuan Xu, Chong
1360
+ Chen, and Yongfeng Zhang. 2021. Efficient non-sampling knowledge
1361
+ graph embedding. In Proceedings of the Web Conference 2021. 1727–
1362
+ 1736.
1363
+ [25] Lipyeow Lim, Min Wang, Sriram Padmanabhan, Jeffrey Scott Vitter,
1364
+ and Ramesh Agarwal. 2003. Dynamic maintenance of web indexes
1365
+ using landmarks. In Proceedings of the 12th international conference on
1366
+ World Wide Web. 102–111.
1367
+ [26] Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015.
1368
+ Learning entity and relation embeddings for knowledge graph com-
1369
+ pletion. In Proceedings of the AAAI Conference on Artificial Intelligence,
1370
+ Vol. 29.
1371
+ [27] George A Miller. 1995. WordNet: a lexical database for English. Com-
1372
+ mun. ACM 38, 11 (1995), 39–41.
1373
+ [28] Aisha Mohamed, Shameem Parambath, Zoi Kaoudi, and Ashraf Aboul-
1374
+ naga. 2020. Popularity agnostic evaluation of knowledge graph em-
1375
+ beddings. In Conference on Uncertainty in Artificial Intelligence. PMLR,
1376
+ 1059–1068.
1377
+ [29] Michael Moor, Max Horn, Bastian Rieck, and Karsten Borgwardt. 2020.
1378
+ Topological autoencoders. In International conference on machine learn-
1379
+ ing. PMLR, 7045–7054.
1380
+ [30] Kimia Nadjahi, Alain Durmus, Pierre E Jacob, Roland Badeau, and
1381
+ Umut Simsekli. 2021. Fast Approximation of the Sliced-Wasserstein
1382
+ Distance Using Concentration of Random Projections. Advances in
1383
+ Neural Information Processing Systems 34 (2021).
1384
+ [31] Mojtaba
1385
+ Nayyeri,
1386
+ Chengjin
1387
+ Xu,
1388
+ Yadollah
1389
+ Yaghoobzadeh,
1390
+ Hamed Shariat Yazdi, and Jens Lehmann. 2019.
1391
+ Toward Un-
1392
+ derstanding The Effect Of Loss function On Then Performance Of
1393
+ Knowledge Graph Embedding. arXiv preprint arXiv:1909.00519 (2019).
1394
+ [32] Tu Dinh Nguyen, Dat Quoc Nguyen, Dinh Phung, et al. 2018. A
1395
+ Novel Embedding Model for Knowledge Base Completion Based on
1396
+ Convolutional Neural Network. In NAACL. 327–333.
1397
+ [33] David Patterson, Joseph Gonzalez, Urs Hölzle, Quoc Le, Chen Liang,
1398
+ Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and
1399
+ Jeff Dean. 2022. The Carbon Footprint of Machine Learning Training
1400
+ Will Plateau, Then Shrink. arXiv preprint arXiv:2204.05149 (2022).
1401
+ [34] Xutan Peng, Guanyi Chen, Chenghua Lin, and Mark Stevenson. 2021.
1402
+ Highly Efficient Knowledge Graph Embedding Learning with Orthog-
1403
+ onal Procrustes Analysis. In NAACL. 2364–2375.
1404
+ [35] Pouya Pezeshkpour, Yifan Tian, and Sameer Singh. 2020. Revisit-
1405
+ ing evaluation of knowledge base completion models. In Automated
1406
+ Knowledge Base Construction.
1407
+ [36] Bastian Rieck, Matteo Togninalli, Christian Bock, Michael Moor, Max
1408
+ Horn, Thomas Gumbsch, and Karsten Borgwardt. 2018. Neural Per-
1409
+ sistence: A Complexity Measure for Deep Neural Networks Using
1410
+ Algebraic Topology. In International Conference on Learning Represen-
1411
+ tations.
1412
+ [37] Wiem Ben Rim, Carolin Lawrence, Kiril Gashteovski, Mathias Niepert,
1413
+ and Naoaki Okazaki. 2021. Behavioral Testing of Knowledge Graph
1414
+ Embedding Models for Link Prediction. In 3rd Conference on Automated
1415
+ Knowledge Base Construction.
1416
+ [38] Tara Safavi and Danai Koutra. 2020. CoDEx: A Comprehensive Knowl-
1417
+ edge Graph Completion Benchmark. In Proceedings of the 2020 Confer-
1418
+ ence on Empirical Methods in Natural Language Processing (EMNLP).
1419
+ 8328–8350.
1420
+ [39] Remi M Sakia. 1992. The Box-Cox transformation technique: a review.
1421
+ Journal of the Royal Statistical Society: Series D (The Statistician) 41, 2
1422
+ (1992), 169–178.
1423
+ [40] Aditya Sharma, Partha Talukdar, et al. 2018. Towards understanding
1424
+ the geometry of knowledge graph embeddings. In Proceedings of the
1425
+ 56th Annual Meeting of the Association for Computational Linguistics
1426
+ (Volume 1: Long Papers). 122–131.
1427
+
1428
+ WWW ’23, APRIL 30 - MAY 4, 2023, Texas, USA
1429
+ Bastos, et al.
1430
[41] Kuldeep Singh, Arun Sethupat Radhakrishna, Andreas Both, Saeedeh Shekarpour, Ioanna Lytra, Ricardo Usbeck, Akhilesh Vyas, Akmal Khikmatullaev, Dharmen Punjani, Christoph Lange, et al. 2018. Why reinvent the wheel: Let's build question answering systems together. In Proceedings of the 2018 World Wide Web Conference. 1247–1256.
[42] C. Spearman. 1907. Demonstration of Formulæ for True Measurement of Correlation. The American Journal of Psychology 18, 2 (1907), 161–169. http://www.jstor.org/stable/1412408
[43] Marina Speranskaya, Martin Schmitt, and Benjamin Roth. 2020. Ranking vs. Classifying: Measuring Knowledge Base Completion Quality. In Automated Knowledge Base Construction.
[44] Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2018. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. In International Conference on Learning Representations.
[45] Zhiqing Sun, Shikhar Vashishth, Soumya Sanyal, Partha Talukdar, and Yiming Yang. 2020. A Re-evaluation of Knowledge Graph Completion Methods. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 5516–5522.
[46] Pedro Tabacof and Luca Costabello. 2019. Probability Calibration for Knowledge Graph Embedding Models. In International Conference on Learning Representations.
[47] Sudhanshu Tiwari, Iti Bansal, and Carlos R. Rivero. 2021. Revisiting the evaluation protocol of knowledge graph completion methods for link prediction. In Proceedings of the Web Conference 2021. 809–820.
[48] Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International Conference on Machine Learning. PMLR, 2071–2080.
[49] Ke Tu, Jianxin Ma, Peng Cui, Jian Pei, and Wenwu Zhu. 2019. AutoNE: Hyperparameter optimization for massive network embedding. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 216–225.
[50] Renata Turkeš, Guido Montúfar, and Nina Otter. 2022. On the effectiveness of persistent homology. https://doi.org/10.48550/ARXIV.2206.10551
[51] C. Villani. 2009. Optimal transport. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 338 (2009).
[52] Haoyu Wang, Yaqing Wang, Defu Lian, and Jing Gao. 2021. A lightweight knowledge graph embedding framework for efficient inference and storage. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management. 1909–1918.
[53] Kai Wang, Yu Liu, Qian Ma, and Quan Z. Sheng. 2021. MulDE: Multi-teacher knowledge distillation for low-dimensional knowledge graph embeddings. In Proceedings of the Web Conference 2021. 1716–1726.
[54] Kai Wang, Yu Liu, and Quan Z. Sheng. 2022. Swift and Sure: Hardness-aware Contrastive Learning for Low-dimensional Knowledge Graph Embeddings. In Proceedings of the ACM Web Conference 2022. 838–849.
[55] Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering 29, 12 (2017), 2724–2743.
[56] Xin Wang, Shuyi Fan, Kun Kuang, and Wenwu Zhu. 2021. Explainable automated graph representation learning with hyperparameter importance. In International Conference on Machine Learning. PMLR, 10727–10737.
[57] Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 28.
[58] Larry Wasserman. 2018. Topological data analysis. Annual Review of Statistics and Its Application 5 (2018), 501–532.
[59] Carole-Jean Wu, Ramya Raghavendra, Udit Gupta, Bilge Acun, Newsha Ardalani, Kiwan Maeng, Gloria Chang, Fiona Aga, Jinshi Huang, Charles Bai, et al. 2022. Sustainable AI: Environmental implications, challenges and opportunities. Proceedings of Machine Learning and Systems 4 (2022), 795–813.
[60] Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In 3rd International Conference on Learning Representations, ICLR.
[61] Shih-Yuan Yu, Sujit Rokka Chhetri, Arquimedes Canedo, Palash Goyal, and Mohammad Abdullah Al Faruque. 2021. Pykg2vec: A Python Library for Knowledge Graph Embedding. J. Mach. Learn. Res. 22 (2021), 16–1.
[62] Yufeng Zhang, Weiqing Wang, Wei Chen, Jiajie Xu, An Liu, and Lei Zhao. 2021. Meta-Learning Based Hyper-Relation Feature Modeling for Out-of-Knowledge-Base Embedding. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management. 2637–2646.
[63] Yongqi Zhang, Zhanke Zhou, Quanming Yao, and Yong Li. 2022. KGTuner: Efficient Hyper-parameter Search for Knowledge Graph Learning. CoRR abs/2205.02460 (2022).
[64] Afra Zomorodian and Gunnar Carlsson. 2005. Computing persistent homology. Discrete & Computational Geometry 33, 2 (2005), 249–274.
8 APPENDIX
Figure 6: Study on the carbon footprint of the evaluation phase of the KGE methods on YAGO3-10 when using KP vs Hits@10. The x-axis shows the carbon footprint in g eq CO2 on a log scale.
Figure 7: Study on the carbon footprint of the evaluation phase of the KGE methods on Wikidata when using KP vs Hits@10. The x-axis shows the carbon footprint in g eq CO2 on a log scale.
8.1 Extended Evaluation
Effect of KP on Efficient KGE Methods Evaluation: The research community has recently proposed several KGE methods that improve training efficiency [34, 52, 54]. The aim of this experiment is to determine whether such efficient KGE methods further reduce their overall carbon footprint when evaluated with KP. To this end, we selected the state-of-the-art efficient KGE methods Procrustes [34] and HaLE [54]. Figure 8 illustrates that using KP for evaluation drastically reduces the carbon footprint of these already efficient KGE methods. For instance, the carbon footprint of HaLE is reduced from 110g of CO2 (using Hits@10) to 20g of CO2 (using KP).
Can Persistent Homology provide an efficient alternative
FB15K                  Hits1(↑)  Hits3(↑)  Hits10(↑)   MR(↓)  MRR(↑)
Conicity                  0.071    -0.071     0.000    -0.036  -0.214
AVL                       0.607     0.214     0.250    -0.679   0.179
Graph Kernel (Train)     -0.536    -0.321    -0.357     0.964  -0.393
Graph Kernel (Test)      -0.107     0.107     0.000     0.893   0.036
KP (Train)                0.214     0.536     0.750     0.000   0.607
KP (Test)                 0.964     0.750     0.750    -0.536   0.714

FB15K237               Hits1(↑)  Hits3(↑)  Hits10(↑)   MR(↓)  MRR(↑)
Conicity                  0.393     0.250    -0.071     0.036   0.250
AVL                      -0.107    -0.321    -0.429     0.393  -0.250
Graph Kernel (Train)     -0.929    -0.714    -0.607     0.643  -0.821
Graph Kernel (Test)      -0.429    -0.607    -0.679     0.464  -0.607
KP (Train)                0.893     0.750     0.643    -0.679   0.786
KP (Test)                 0.714     0.821     0.857    -0.750   0.857

WN18                   Hits1(↑)  Hits3(↑)  Hits10(↑)   MR(↓)  MRR(↑)
Conicity                  0.600     0.600     0.600    -0.143   0.600
AVL                       0.886     0.886     0.886    -0.771   0.886
Graph Kernel (Train)     -0.943    -0.943    -0.943     0.714  -0.943
Graph Kernel (Test)      -0.657    -0.657    -0.657     0.086  -0.657
KP (Train)                0.829     0.829     0.829    -0.600   0.829
KP (Test)                 0.943     0.943     0.943    -0.829   0.943

WN18RR                 Hits1(↑)  Hits3(↑)  Hits10(↑)   MR(↓)  MRR(↑)
Conicity                 -0.393    -0.607    -0.607     0.357  -0.607
AVL                       0.321    -0.143    -0.464     0.607  -0.143
Graph Kernel (Train)     -0.357    -0.786    -0.607     0.571  -0.786
Graph Kernel (Test)      -0.393    -0.714    -0.821     0.786  -0.714
KP (Train)                0.286     0.714     0.643    -0.750   0.714
KP (Test)                 0.286     0.714     0.643    -0.643   0.714

Table 7: Spearman's ranked correlation (ρ) scores computed from the metric scores with respect to the ranking metrics on the standard KG embedding datasets. The KG methods are evaluated after training.
FB15K                  Hits1(↑)  Hits3(↑)  Hits10(↑)   MR(↓)  MRR(↑)
Conicity                  0.048    -0.048    -0.048     0.048  -0.143
AVL                       0.429     0.143     0.143    -0.524   0.048
Graph Kernel (Train)     -0.429    -0.143    -0.143     0.905  -0.238
Graph Kernel (Test)      -0.143     0.143    -0.048     0.810   0.048
KP (Train)                0.143     0.429     0.619    -0.048   0.524
KP (Test)                 0.905     0.619     0.619    -0.429   0.524

FB15K237               Hits1(↑)  Hits3(↑)  Hits10(↑)   MR(↓)  MRR(↑)
Conicity                  0.238     0.143    -0.048    -0.048   0.143
AVL                      -0.143    -0.238    -0.429     0.333  -0.238
Graph Kernel (Train)     -0.810    -0.524    -0.333     0.429  -0.714
Graph Kernel (Test)      -0.429    -0.524    -0.524     0.429  -0.524
KP (Train)                0.714     0.619     0.429    -0.524   0.619
KP (Test)                 0.619     0.714     0.714    -0.619   0.714

WN18                   Hits1(↑)  Hits3(↑)  Hits10(↑)   MR(↓)  MRR(↑)
Conicity                  0.467     0.467     0.467    -0.333   0.467
AVL                       0.733     0.733     0.733    -0.600   0.733
Graph Kernel (Train)     -0.867    -0.867    -0.867     0.467  -0.867
Graph Kernel (Test)      -0.600    -0.600    -0.600     0.200  -0.600
KP (Train)                0.600     0.600     0.600    -0.467   0.600
KP (Test)                 0.867     0.867     0.867    -0.733   0.867

WN18RR                 Hits1(↑)  Hits3(↑)  Hits10(↑)   MR(↓)  MRR(↑)
Conicity                 -0.238    -0.333    -0.429     0.238  -0.333
AVL                       0.238    -0.048    -0.333     0.524  -0.048
Graph Kernel (Train)     -0.333    -0.619    -0.333     0.333  -0.619
Graph Kernel (Test)      -0.238    -0.524    -0.619     0.619  -0.524
KP (Train)                0.238     0.524     0.429    -0.619   0.524
KP (Test)                 0.238     0.524     0.429    -0.429   0.524

Table 8: Kendall's tau (τ) scores computed from the metric scores with respect to the ranking metrics on the standard KG embedding datasets. The KG methods are evaluated after training.
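Rank correlations of the kind reported in Tables 7 and 8 can be computed with standard tools. A minimal sketch (assuming SciPy is available; the per-model scores below are made up for illustration):

```python
# Correlate a proxy metric with a ranking metric across a handful of
# hypothetical KGE models, as in Tables 7-8 (Spearman's rho, Kendall's tau).
from scipy.stats import spearmanr, kendalltau

# Hypothetical per-model scores (one value per KGE method).
proxy_metric = [0.12, 0.45, 0.33, 0.80, 0.51]   # e.g. a KP-style score
hits_at_10   = [0.20, 0.55, 0.40, 0.90, 0.60]   # e.g. Hits@10

rho, _ = spearmanr(proxy_metric, hits_at_10)
tau, _ = kendalltau(proxy_metric, hits_at_10)
print(rho, tau)  # both 1.0 here: the two lists rank the five models identically
```

Both coefficients depend only on the rankings the two metrics induce over the models, which is why a correlation of 1.0 is reached as soon as the orderings agree, even though the raw values differ.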
Figure 8: Study on efficient KGE methods and their carbon footprint on WN18RR when using KP vs Hits@10. The x-axis shows the carbon footprint in g eq CO2.
Robustness and Efficiency on large KGs: This ablation study aims to gauge the correlation behavior of KP and the ranking metrics on a large-scale KG. For the experiment, we use the YAGO3-10 dataset. A key reason for selecting a YAGO-based dataset is that, besides being large-scale, it has rich semantics. The results in Table 9 illustrate that KP shows a stable and high correlation with the ranking metrics, confirming the robustness of KP. We show carbon footprint results for the YAGO dataset in Figure 6. Further, we also study the efficiency of KP on the Wikidata dataset in Figure 7, which reaffirms that KP maintains its efficiency on large-scale datasets.
Efficiency comparison of Sliced Wasserstein vs Wasserstein as distance metric in KP: In this study we empirically provide a rationale for using the sliced Wasserstein distance rather than the Wasserstein distance as the distance metric in KP. The results are in Table 10. We see that KP with the sliced Wasserstein distance provides a significant computational advantage over the Wasserstein distance, while retaining good performance as seen in the previous experiments. Thus an efficient approximation such as the sliced Wasserstein distance is needed in place of the Wasserstein distance in KP.
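The computational advantage of the sliced variant comes from reducing optimal transport to one-dimensional problems, which are solved by sorting. A minimal sketch (not the paper's implementation) with NumPy:

```python
# Sliced Wasserstein between two point clouds (e.g. persistence diagrams):
# average the 1-D Wasserstein distance over random projection directions.
# In 1-D, optimal transport between equal-size samples matches sorted values,
# so each slice costs only a sort instead of a full transport problem.
import numpy as np

def wasserstein_1d(u, v):
    # W1 between two equal-size 1-D samples via sorted matching.
    return np.mean(np.abs(np.sort(u) - np.sort(v)))

def sliced_wasserstein(X, Y, n_proj=50, seed=0):
    # X, Y: (n, d) point clouds.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)       # random unit direction
        total += wasserstein_1d(X @ theta, Y @ theta)
    return total / n_proj

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
d_same = sliced_wasserstein(X, X)            # identical clouds: distance 0
d_shift = sliced_wasserstein(X, X + 3.0)     # grows with the shift
print(d_same, d_shift)
```

Each slice costs O(n log n), versus solving a full transport problem per evaluation for the exact Wasserstein distance, which is consistent with the speedups reported in Table 10.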
Metrics   Hits@1(↑)  Hits@3(↑)  Hits@10(↑)   MR(↓)  MRR(↑)
r             0.657      0.594       0.414  -0.920   0.572
ρ             0.679      0.679       0.500  -0.714   0.643
τ             0.524      0.524       0.333  -0.524   0.429

Table 9: KP correlations on the YAGO dataset.
Method     KP(W) FB15K237   KP(W) WN18RR   KP(SW) FB15K237   KP(SW) WN18RR   Speedup FB15K237   Speedup WN18RR
TransE         1136.766          9.655           0.321            0.114          x 3540.3           x 84.6
TransH         2943.869          7.549           0.317            0.095          x 9278.5           x 79.8
TransR         1734.576          4.423           0.336            0.129          x 5168.3           x 34.4
Complex        1054.721         13.089           0.324            0.135          x 3255.3           x 97.0
RotatE          865.417         12.783           0.342            0.136          x 2531.1           x 94.3
TuckER         1021.649          3.840           0.316            0.098          x 3230.0           x 39.1
ConvKB          719.310          5.154           0.429            0.132          x 1675.7           x 39.0

All KP(W) and KP(SW) times are for the "val + test" split; the speedups are averages.

Table 10: Evaluation metric comparison with respect to computing time (in minutes, for 100 epochs). Column 1 denotes popular KGE methods. The values denote evaluation (validation + test) time for computing the metric and the corresponding speedup using KP(SW). KP(SW), which uses the sliced Wasserstein distance, significantly reduces the evaluation time in comparison with KP(W), which uses the Wasserstein distance.
8.2 Theoretical Proof Sketches
We work under the following considerations: as the KGE method converges, the mean statistic ($m_\nu$) of the scores of the positive triples consistently lies on one side of the half plane formed by the mean statistic ($m_\mu$) of the negative triples, irrespective of the data distribution. The detailed proofs follow.
Lemma 8.1. KP has a monotone increasing correspondence with the Proxy of the Expected Ranking Metrics (PERM) under the above stated considerations as $m_\nu$ deviates from $m_\mu$.
Proof Sketch. Considering the 0-dimensional PD as used by KP and a normal distribution for the edge weights of the graph, i.e., the scores of the triples (this can be extended to other distributions using techniques like [39]), we have univariate Gaussian measures [40] $\mu$ and $\nu$ for the positive and negative distributions respectively. Denote by $m_\mu$ and $m_\nu$ the means of the distributions $\mu$ and $\nu$ respectively, and by $\Sigma_\mu$, $\Sigma_\nu$ the respective covariance matrices. Then
\[
W_2^2(\mu,\nu) = \|m_\mu - m_\nu\|^2 + B(\Sigma_\mu, \Sigma_\nu)^2 \tag{2}
\]
where $B(\Sigma_\mu, \Sigma_\nu)^2 = \operatorname{tr}\big(\Sigma_\mu + \Sigma_\nu - 2(\Sigma_\mu^{1/2}\Sigma_\nu\Sigma_\mu^{1/2})^{1/2}\big)$.
Next we examine how changing the means (and variances) of the distributions changes PERM and KP. We can show that
\[
P = \int_{-\infty}^{\infty} D^+(x)\left(\int_{x}^{\infty} D^-(y)\,dy\right)dx
\]
\[
\frac{\partial P}{\partial m_\nu} = \int_{-\infty}^{\infty} D^+(x)\left(\int_{x}^{\infty} \frac{\partial D^-(y)}{\partial m_\nu}\,dy\right)dx \;\ge\; 0
\]
\[
\frac{\partial P}{\partial \Sigma_\nu} = \int_{-\infty}^{\infty} D^+(x)\left(\int_{x}^{\infty} \frac{\partial D^-(y)}{\partial \Sigma_\nu}\,dy\right)dx \;\le\; 0
\]
\[
\frac{\partial P}{\partial \Sigma_\mu} = \int_{-\infty}^{\infty} \frac{\partial D^+(x)}{\partial \Sigma_\mu}\left(\int_{x}^{\infty} D^-(y)\,dy\right)dx
\]
Since KP is the (sliced) Wasserstein distance between PDs, the respective gradients are
\[
\frac{\partial W_2^2(\mu,\nu)}{\partial m_\nu} = 2\,|m_\mu - m_\nu| \;\ge\; 0
\]
\[
\frac{\partial W_2^2(\mu,\nu)}{\partial \Sigma_\nu} = I - \Sigma_\mu^{1/2}\big(\Sigma_\mu^{1/2}\Sigma_\nu\Sigma_\mu^{1/2}\big)^{-1/2}\Sigma_\mu^{1/2}
\]
As the generating process of the scores changes, the gradient of PERM along the direction $(dm_\nu, d\sigma_\mu, d\sigma_\nu)$ can be shown to satisfy
\[
\left\langle (dm_\nu, d\sigma_\mu, d\sigma_\nu),\; \left(\frac{\partial \mathrm{PERM}}{\partial m_\nu}, \frac{\partial \mathrm{PERM}}{\partial \Sigma_\mu}, \frac{\partial \mathrm{PERM}}{\partial \Sigma_\nu}\right)\right\rangle \;\ge\; 0
\]
Similarly, the gradient of KP along the direction $(dm_\nu, d\sigma_\mu, d\sigma_\nu)$ satisfies
\[
\left\langle (dm_\nu, d\sigma_\mu, d\sigma_\nu),\; \left(\frac{\partial W_2^2(\mu,\nu)}{\partial m_\nu}, \frac{\partial W_2^2(\mu,\nu)}{\partial \Sigma_\mu}, \frac{\partial W_2^2(\mu,\nu)}{\partial \Sigma_\nu}\right)\right\rangle \;\ge\; 0
\]
Since both PERM and KP vary in the same manner as the distribution changes, the two have a one-one correspondence [42]. □
2141
+
2142
+ The above lemma shows that there is a one-one correspondence
2143
+ between KP and PERM and by definition PERM has a one-one cor-
2144
+ respondence with the ranking metrics. Therefore, the next theorem
2145
+ follows as a natural consequence
2146
+ Theorem 8.1. KP has a one-one correspondence with the Ranking
2147
+ Metrics under the above stated considerations
2148
Theorem 8.2. Under the considerations of Theorem 8.1, the relative change in KP on addition of random noise to the scores is bounded by a function of the original and noise-induced covariance matrices as
\[
\frac{\Delta \mathrm{KP}}{\mathrm{KP}} \le \max\left(1 - \left|\Sigma_{\mu_1}\Sigma_{\mu_2}^{-1}\right|^{\frac{3}{2}},\; 1 - \left|\Sigma_{\nu_1}\Sigma_{\nu_2}^{-1}\right|^{\frac{3}{2}}\right),
\]
where $\Sigma_{\mu_1}$ and $\Sigma_{\nu_1}$ are the covariance matrices of the positive and negative triples' scores respectively, and $\Sigma_{\mu_2}$ and $\Sigma_{\nu_2}$ are those of the corrupted scores.
Proof Sketch. Consider a zero-mean random noise to simulate the process of varying the distribution of the scores of the KGE method. Let $m_{\mu_1}$ and $m_{\nu_1}$ be the means of the positive and negative triples' scores of the original method and $\Sigma_{\mu_1}$, $\Sigma_{\nu_1}$ the respective covariance matrices. Let $m_{\mu_2}$ and $m_{\nu_2}$ be the means of the positive and negative triples' scores of the corrupted method and $\Sigma_{\mu_2}$, $\Sigma_{\nu_2}$ the respective covariance matrices. Considering the Kantorovich duality [51] and taking the difference between the two measures, we have
\[
\mathrm{KP}_1 - \mathrm{KP}_2 = \inf_{\gamma_1 \in \Pi(x,y)} \int \mathit{Distance}(x,y)\, d\gamma_1(x,y) \;-\; \inf_{\gamma_2 \in \Pi(x,y)} \int \mathit{Distance}(x,y)\, d\gamma_2(x,y)
\]
\[
\le \sup_{\Phi,\Psi} \int_x \Phi(x)\, d\mu_1(x) + \int_y \Psi(y)\, d\nu_1(y) - \int_x \Phi(x)\, d\mu_2(x) - \int_y \Psi(y)\, d\nu_2(y)
\]
\[
\le \sup_{\Phi,\Psi} \int_x \Phi(x)\,\big(d\mu_1(x) - d\mu_2(x)\big) + \int_y \Psi(y)\,\big(d\nu_1(y) - d\nu_2(y)\big)
\]
Now by definition of the measure $\mu_1$ we have
\[
\frac{\partial \mu_1}{\partial x} = -\mu_1 \Sigma_{\mu_1}^{-1}(x - m_{\mu_1})
\]
\[
d\mu_1(x_i) = -\big(\mu_1 \Sigma_{\mu_1}^{-1}(x - m_{\mu_1})\big)[i]\, dx_i
\]
\[
\therefore\; d\mu_1(x) = \det\!\big(\operatorname{diag}(-\mu_1 \Sigma_{\mu_1}^{-1}(x - m_{\mu_1}))\big)\, dx
\]
From the above results we can show the following:
\[
\mathrm{KP}_1 - \mathrm{KP}_2 \le \max\left(1 - \det(\Sigma_{\mu_1}\Sigma_{\mu_2}^{-1})^{\frac{n}{2}+1},\; 1 - \det(\Sigma_{\nu_1}\Sigma_{\nu_2}^{-1})^{\frac{n}{2}+1}\right) \mathrm{KP}_1
\]
\[
\therefore\; \frac{\Delta \mathrm{KP}}{\mathrm{KP}} \le \max\left(1 - \det(\Sigma_{\mu_1}\Sigma_{\mu_2}^{-1})^{\frac{n}{2}+1},\; 1 - \det(\Sigma_{\nu_1}\Sigma_{\nu_2}^{-1})^{\frac{n}{2}+1}\right)
\]
In our case, as we work in the univariate setting, $n = 1$ and thus we have
\[
\frac{\Delta \mathrm{KP}}{\mathrm{KP}} \le \max\left(1 - \det(\Sigma_{\mu_1}\Sigma_{\mu_2}^{-1})^{\frac{3}{2}},\; 1 - \det(\Sigma_{\nu_1}\Sigma_{\nu_2}^{-1})^{\frac{3}{2}}\right),
\]
as required. □
Theorem 8.2 shows that as noise is induced gradually, the KP value changes in a bounded manner, as desired.